Transfer from Multiple MDPs

08/31/2011
by Alessandro Lazaric, et al.

Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from the source tasks and include them in the training set used to solve a given target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process to the similarity between source and target tasks. Finally, we report illustrative experimental results on a continuous chain problem.
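To make the sample-transfer idea concrete, below is a minimal Python sketch, not the paper's actual algorithm: transitions from each source task are added to the target training set with a probability given by an assumed per-task similarity score, so more similar source tasks contribute more samples. The function name transfer_samples, the accept/reject rule, and the hand-picked similarity values are illustrative assumptions; in the paper, the transfer process is adapted on the basis of the similarity between source and target tasks rather than fixed by hand.

    import numpy as np

    # Each sample is a transition tuple (s, a, r, s') collected in one MDP.
    # similarity[j] is an assumed score in [0, 1] for source task j
    # (illustrative; the paper adapts transfer to task similarity).

    def transfer_samples(target_samples, source_samples, similarity, rng=None):
        """Build a training set for the target task: keep every target
        sample, and accept each sample from source task j with
        probability similarity[j]."""
        rng = rng or np.random.default_rng()
        training_set = list(target_samples)
        for sim, samples in zip(similarity, source_samples):
            for transition in samples:
                # More similar source tasks contribute more samples.
                if rng.random() < sim:
                    training_set.append(transition)
        return training_set

    # Toy usage on a 1-D chain: one source task close to the target
    # (similarity 0.9) and one far from it (similarity 0.2).
    target = [(0, 1, 0.0, 1), (1, 1, 1.0, 2)]
    sources = [
        [(0, 1, 0.1, 1), (1, 1, 0.9, 2)],   # near-identical dynamics/rewards
        [(0, 1, -1.0, 0), (1, 0, 0.0, 0)],  # very different task
    ]
    train = transfer_samples(target, sources, similarity=[0.9, 0.2])
    print(len(train), "samples in the augmented training set")

The augmented training set can then be fed to any batch RL algorithm for the target task; the weighting only controls how much each source task is trusted.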


Related research

05/28/2018 · Importance Weighted Transfer of Samples in Reinforcement Learning
We consider the transfer of experience samples (i.e., tuples < s, a, s',...

07/27/2022 · Structural Similarity for Improved Transfer in Reinforcement Learning
Transfer learning is an increasingly common approach for developing perf...

05/29/2022 · Provable Benefits of Representational Transfer in Reinforcement Learning
We study the problem of representational transfer in RL, where an agent ...

12/03/2020 · Scalable Transfer Evolutionary Optimization: Coping with Big Task Instances
In today's digital world, we are confronted with an explosion of data an...

05/26/2020 · Time-Variant Variational Transfer for Value Functions
In most transfer learning approaches to reinforcement learning (RL) the ...

03/08/2021 · A Taxonomy of Similarity Metrics for Markov Decision Processes
Although the notion of task similarity is potentially interesting in a w...

08/18/2019 · VUSFA: Variational Universal Successor Features Approximator to Improve Transfer DRL for Target Driven Visual Navigation
In this paper, we show how novel transfer reinforcement learning techniq...
