We are excited about 8 accepted papers with WhiRL members, and look forward to discussing our work at NeurIPS 2019 in Vancouver! Camera-ready versions of all papers will be available soon.
- “Generalized Off-Policy Actor-Critic” – Shangtong Zhang, Wendelin Boehmer, Shimon Whiteson (https://arxiv.org/abs/1903.11329)
- “DAC: The Double Actor-Critic Architecture for Learning Options” – Shangtong Zhang, Shimon Whiteson (https://arxiv.org/abs/1904.12691)
- “Fast Efficient Hyperparameter Tuning for Policy Gradient Methods” – Supratik Paul, Vitaly Kurin, Shimon Whiteson (https://arxiv.org/abs/1902.06583)
- “VIREL: A Variational Inference Framework for Reinforcement Learning” – Matthew Fellows, [...]
To be successful in real-world tasks, Reinforcement Learning (RL) needs to exploit the compositional, relational, and hierarchical structure of the world, and learn to transfer it to the task at hand. Recent advances in representation learning for language make it possible to build models that acquire world knowledge from text corpora and integrate this knowledge into downstream decision making problems. We thus argue that the time is right to investigate a tight integration of natural language understanding into RL [...]
WhiRL has four accepted papers at ICML this year! Camera-ready versions can be found here:
- “A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs” – Jingkai Mao, Jakob Foerster, Tim Rocktäschel, Maruan Al-Shedivat, Gregory Farquhar and Shimon Whiteson
- “Fingerprint Policy Optimisation for Robust Reinforcement Learning” – Supratik Paul, Michael A. Osborne and Shimon Whiteson
- “Fast Context Adaptation via Meta-Learning” – Luisa Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann and Shimon Whiteson
- “Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning” [...]
Tim Rocktäschel, Jakob Foerster and Greg Farquhar
Every year we get contacted by students who wish to work on short-term machine learning research projects with us. By now, we have supervised a good number of them, and we noticed that some of the advice we gave followed a few recurring principles. In this post, we share what we believe is good advice for a master’s thesis project or a summer research internship in machine learning. This post is by [...]
All five of our submissions for ICML 2018 have just been accepted:
- “DiCE: The Infinitely Differentiable Monte Carlo Estimator” – Jakob Foerster, Gregory Farquhar, Maruan Al-Shedivat, Tim Rocktäschel, Eric Xing, Shimon Whiteson
- “QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning” – Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, Shimon Whiteson
- “Fourier Policy Gradients” – Matthew Fellows, Kamil Ciosek, Shimon Whiteson
- “Deep Variational Reinforcement Learning for POMDPs” – Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, Shimon Whiteson
- “TACO: Learning Task Decomposition via Temporal Alignment for Control” – Kyriacos Shiarlis, [...]
Our lab member Kamil Ciosek has published a video of his AAAI-18 presentation of the recent paper “Expected Policy Gradients” (joint work with Shimon Whiteson).
We’re happy to announce that WhiRL members have two accepted papers at #AAMAS2018 this year:
- “Learning with Opponent-Learning Awareness” – Jakob N. Foerster, Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, Igor Mordatch (arXiv version: https://arxiv.org/abs/1709.04326)
- “Ordered Preference Elicitation Strategies for Supporting Multi-Objective Decision Making” – Luisa M Zintgraf, Diederik M Roijers, Sjoerd Linders, Catholijn M Jonker, Ann Nowé (https://arxiv.org/abs/1802.07606)