Multi-Task Reinforcement Learning


Reinforcement Learning (RL) algorithms have achieved superhuman performance on several tasks; notable successes include mastering Atari games, Go, Dota 2, and poker. Despite these successes, several important challenges remain. RL algorithms are notoriously data-hungry and can take a long time to train, and it is challenging to train a single RL agent that can perform several tasks at once. These shortcomings have limited the application of RL algorithms to real-life problems. Multi-task RL approaches can improve an RL agent's sample efficiency: the learning agent can use experience from all the tasks to improve performance across all of them. The hope with multi-task RL is to learn an agent that can perform n tasks without requiring n times the resources needed to train n single-task agents.

In this tutorial, we will look at some of the recent advances in multi-task RL. We will start by discussing the nuances of the relationship between the single-task and multi-task setups. For example, a multi-task RL problem can be modeled as a single-task RL problem in which all the task environments are treated as different parts of one large environment. We will see how this observation can be used to develop general components and strategies that convert any single-task RL algorithm into a multi-task agent, how combining these components with well-known single-task RL algorithms works in practice, and what practical challenges arise with these approaches. We will also discuss multi-task algorithms that treat the tasks as independent (yet related) tasks, rather than as parts of one single task, and study their strengths and limitations. We will then look at some recently proposed benchmarks in this domain and how existing multi-task RL agents perform on them. We will conclude the tutorial with a discussion of some potentially useful future directions for tackling multi-task RL.
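One common way to realize the "many tasks as one large environment" view is to wrap the task environments behind a single environment interface and append a task identifier (e.g. a one-hot vector) to every observation, so that any single-task RL algorithm can be trained on the combined environment unchanged. The sketch below illustrates this idea; the class names, the `ToyEnv` stand-in task, and the Gym-style `reset()`/`step()` interface are illustrative assumptions, not code from the tutorial.

```python
import random


class ToyEnv:
    """Hypothetical stand-in for one task: the observation is a
    single constant feature, and every episode ends after one step."""

    def __init__(self, value):
        self.value = value

    def reset(self):
        return [self.value]

    def step(self, action):
        # (observation, reward, done, info) in the Gym-style convention
        return [self.value], 1.0, True, {}


class MultiTaskEnv:
    """Illustrative wrapper that presents n task environments as one
    single-task environment by sampling a task per episode and
    appending a one-hot task ID to each observation."""

    def __init__(self, envs):
        self.envs = envs  # list of environments with reset()/step()
        self.task_id = 0

    def _augment(self, obs):
        # Append a one-hot task identifier so the policy can tell
        # which part of the "large" environment it is in.
        one_hot = [0.0] * len(self.envs)
        one_hot[self.task_id] = 1.0
        return list(obs) + one_hot

    def reset(self):
        # Sample a task uniformly at the start of each episode.
        self.task_id = random.randrange(len(self.envs))
        return self._augment(self.envs[self.task_id].reset())

    def step(self, action):
        obs, reward, done, info = self.envs[self.task_id].step(action)
        return self._augment(obs), reward, done, info


env = MultiTaskEnv([ToyEnv(0.0), ToyEnv(1.0)])
obs = env.reset()  # original observation plus a 2-dim one-hot task ID
```

A single-task algorithm trained on `MultiTaskEnv` implicitly learns a task-conditioned policy; the practical challenges discussed in the tutorial (e.g. interference between tasks) arise even though the interface looks like an ordinary single-task environment.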


Shagun Sodhani is a Research Engineer in the Facebook AI Research group. He is primarily interested in lifelong reinforcement learning: training AI systems that can interact with and learn from the physical world and consistently improve as they do so, without forgetting previous knowledge. He completed his MS at Mila, University of Montreal, where he was supervised by Dr. Yoshua Bengio and Dr. Jian Tang.

Open Data Science
One Broadway
Cambridge, MA 02142
