
TempLe: Learning Template of Transitions for Sample Efficient Multi-task RL

02/16/2020
by   Yanchao Sun, et al.

Transferring knowledge among various environments is important for learning multiple tasks efficiently online. Most existing methods directly reuse previously learned models or previously learned optimal policies to learn new tasks. However, these methods can be inefficient when the underlying models or optimal policies differ substantially across tasks. In this paper, we propose Template Learning (TempLe), the first PAC-MDP method for multi-task reinforcement learning that can be applied to tasks with varying state/action spaces. TempLe generates transition dynamics templates, abstractions of the transition dynamics across tasks, to gain sample efficiency by extracting similarities between tasks even when their underlying models or optimal policies have limited commonalities. We present two algorithms, for an "online" and a "finite-model" setting respectively. We prove that our proposed TempLe algorithms achieve much lower sample complexity than single-task learners or state-of-the-art multi-task methods. We show via systematically designed experiments that TempLe universally outperforms state-of-the-art multi-task methods (PAC-MDP or not) across a variety of settings and regimes.
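The core idea of a transition-dynamics template, as described above, is to group (state, action) pairs whose transition dynamics look alike across tasks so that their samples can be pooled. The sketch below illustrates that grouping step only; it is not the authors' algorithm, and the function name, greedy clustering rule, and L1 tolerance are illustrative assumptions.

```python
import numpy as np

def group_into_templates(transition_probs, tol=0.1):
    """Greedily assign each next-state distribution to a template.

    transition_probs: list of 1-D arrays, each the next-state
    distribution for one (task, state, action) pair.
    Two distributions share a template when the L1 distance to the
    template's representative vector is within `tol`.
    Returns (assignment, templates).
    """
    templates = []    # representative distribution per template
    assignment = []   # template index for each input distribution
    for p in transition_probs:
        for i, t in enumerate(templates):
            if np.abs(p - t).sum() <= tol:
                assignment.append(i)
                break
        else:
            # No close template found: start a new one.
            templates.append(p)
            assignment.append(len(templates) - 1)
    return assignment, templates

# Two (task, state, action) pairs with near-identical dynamics share a
# template and can pool their samples; a third pair gets its own.
probs = [np.array([0.9, 0.1]), np.array([0.88, 0.12]), np.array([0.2, 0.8])]
assignment, templates = group_into_templates(probs, tol=0.1)
# assignment -> [0, 0, 1]
```

Pooling samples within a template is what yields the sample-efficiency gain: a template estimated from several tasks' data converges faster than any single task's estimate.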


Related research:

Exploration for Multi-task Reinforcement Learning with Deep Generative Models (11/29/2016)
Exploration in multi-task reinforcement learning is critical in training...

Sample Complexity of Multi-task Reinforcement Learning (09/26/2013)
Transferring knowledge across a sequence of reinforcement-learning tasks...

Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP (07/14/2020)
Multi-task reinforcement learning is a rich paradigm where information f...

Efficient Multi-Task and Transfer Reinforcement Learning with Parameter-Compositional Framework (06/02/2023)
In this work, we investigate the potential of improving multi-task train...

Scaling Distributed Multi-task Reinforcement Learning with Experience Sharing (07/11/2023)
Recently, DARPA launched the ShELL program, which aims to explore how ex...

Sample-efficient Gear-ratio Optimization for Biomechanical Energy Harvester (04/01/2021)
The biomechanical energy harvester is expected to harvest the electric e...

Learning Shared Representations in Multi-task Reinforcement Learning (03/07/2016)
We investigate a paradigm in multi-task reinforcement learning (MT-RL) i...