
Provably Efficient Multi-Task Reinforcement Learning with Model Transfer

by Chicheng Zhang, et al.

We study multi-task reinforcement learning (RL) in tabular episodic Markov decision processes (MDPs). We formulate a heterogeneous multi-player RL problem, in which a group of players concurrently face similar but not necessarily identical MDPs, with a goal of improving their collective performance through inter-player information sharing. We design and analyze an algorithm based on the idea of model transfer, and provide gap-dependent and gap-independent upper and lower bounds that characterize the intrinsic complexity of the problem.
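The abstract does not spell out the algorithm, but the core model-transfer idea it names can be illustrated with a minimal sketch: each player estimates its own MDP from counts, and borrows data only from players whose empirical models look sufficiently similar. The function below is an illustrative toy, not the paper's algorithm; the L1 pooling threshold `tol` is a hypothetical stand-in for whatever dissimilarity measure the analysis actually uses.

```python
import numpy as np

def pooled_model_estimate(counts, target, tol):
    """Illustrative model transfer for tabular MDPs (not the paper's algorithm).

    counts: list of per-player arrays of shape (S, A, S) holding visit counts
            of (state, action, next_state) triples.
    target: index of the player whose transition model we want to estimate.
    tol:    hypothetical L1 threshold; a player's data is pooled only if its
            empirical model is within tol of the target's at every (s, a).
    """
    def empirical(c):
        totals = c.sum(axis=2, keepdims=True)
        # avoid division by zero for unvisited (s, a) pairs
        return np.divide(c, np.maximum(totals, 1))

    p_target = empirical(counts[target])
    pooled = np.zeros_like(counts[target], dtype=float)
    for c in counts:
        # transfer a player's counts only if its model looks similar everywhere
        if np.abs(empirical(c) - p_target).sum(axis=2).max() <= tol:
            pooled += c
    totals = pooled.sum(axis=2, keepdims=True)
    return np.divide(pooled, np.maximum(totals, 1))
```

With two players whose MDPs match and one whose MDP differs, only the similar players' counts are pooled, which is the intuition behind the gap-dependent guarantees: sharing helps when tasks are close, and dissimilar tasks are screened out.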



