
Efficient Multi-Task and Transfer Reinforcement Learning with Parameter-Compositional Framework

by Lingfeng Sun, et al.
Horizon Robotics
UC Berkeley

In this work, we investigate the potential of improving multi-task training and of leveraging it for transfer in the reinforcement learning setting. We identify several challenges toward this goal and propose a transfer approach based on a parameter-compositional formulation. We first investigate ways to improve multi-task reinforcement learning training, which serves as the foundation for transfer, and then conduct a number of transfer experiments on various manipulation tasks. Experimental results demonstrate that the proposed approach improves performance in the multi-task training stage and further enables effective transfer in terms of both sample efficiency and final performance.
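In the parameter-compositional formulation that this work builds on (PaCo), policy parameters for each task are expressed as a linear combination of a shared set of base parameter vectors, with a small task-specific compositional vector selecting the mixture. The following is a minimal sketch of that idea; the names (`phi`, `w_tau`, the dimensions) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def compose_task_params(phi, w_tau):
    """Compose task-specific parameters as a linear combination of the
    K shared base parameter vectors stored as the columns of phi."""
    return phi @ w_tau

rng = np.random.default_rng(0)
n_params, n_bases, n_tasks = 8, 3, 5

# Shared parameter set: K base parameter vectors, learned jointly across tasks.
phi = rng.standard_normal((n_params, n_bases))
# One compositional vector per task; only these need to be learned for a new task.
W = rng.standard_normal((n_bases, n_tasks))

theta_task0 = compose_task_params(phi, W[:, 0])
print(theta_task0.shape)  # (8,)
```

Under this view, transferring to a new task can be as cheap as fitting a new compositional vector while reusing the shared bases, which is one source of the sample-efficiency gains the abstract refers to.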




PaCo: Parameter-Compositional Multi-Task Reinforcement Learning

The purpose of multi-task reinforcement learning (MTRL) is to train a si...

Multi-task Hierarchical Adversarial Inverse Reinforcement Learning

Multi-task Imitation Learning (MIL) aims to train a policy capable of pe...

Sample Complexity of Multi-task Reinforcement Learning

Transferring knowledge across a sequence of reinforcement-learning tasks...

Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP

Multi-task reinforcement learning is a rich paradigm where information f...

Attentive Multi-Task Deep Reinforcement Learning

Sharing knowledge between tasks is vital for efficient learning in a mul...

TempLe: Learning Template of Transitions for Sample Efficient Multi-task RL

Transferring knowledge among various environments is important to effici...

Exploration for Multi-task Reinforcement Learning with Deep Generative Models

Exploration in multi-task reinforcement learning is critical in training...