I’m a DPhil student supervised by Shimon Whiteson, funded by a DeepMind Doctoral Scholarship, studying deep reinforcement learning. In the long term, I am interested in creating agents capable of discovering and leveraging novel models of their world at multiple levels of abstraction for sample-efficient learning. In the short term, I think this can be practically accomplished by learning sample-efficient algorithms via sample-inefficient learning algorithms, e.g., model-free meta-learning. Other research interests that I hope will help bring deep RL into the “real world” include memory architectures, multi-agent RL, and human interaction (especially applied to autonomous vehicles).
Previously, I completed my MS and BS in Computer Science at Brown University, researching human interaction with autonomous vehicles under Michael Littman, and I completed my pre-doc at Microsoft Research, studying long-term memory in reinforcement learning with Katja Hofmann. I’ve also worked in other areas, including planning, perception, and robotics, at Lyft, Adobe, and smaller startups.
For fun, I enjoy photography, writing philosophy, and skiing.
For more information: jakebeck.com