
Transfer from Multiple MDPs

08/31/2011
by Alessandro Lazaric, et al. (Inria, Politecnico di Milano)

Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up learning on a target task. A simple and effective approach is to transfer samples from the source tasks and include them in the training set used to solve the target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process based on the similarity between source and target tasks. Finally, we report illustrative experimental results in a continuous chain problem.
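The sample-transfer idea described in the abstract can be sketched as pooling transitions from several source MDPs into the target task's training set, with the amount transferred from each source controlled by an estimated source-target similarity. The sketch below is illustrative only: the function name, the similarity weights, and the proportional allocation rule are assumptions for demonstration, not the paper's actual algorithm.

```python
import random


def transfer_samples(source_tasks, similarities, budget, target_samples):
    """Pool target-task samples with samples drawn from source tasks.

    source_tasks   : list of lists of (s, a, r, s') transition tuples,
                     one list per source MDP
    similarities   : nonnegative similarity score per source task
                     (illustrative; how to estimate these is the hard part)
    budget         : total number of source transitions to transfer
    target_samples : transitions collected on the target task itself
    """
    total = sum(similarities)
    pooled = list(target_samples)
    for samples, sim in zip(source_tasks, similarities):
        # Transfer from each source in proportion to its similarity,
        # capped by how many samples that source actually has.
        n = int(budget * sim / total) if total > 0 else 0
        n = min(n, len(samples))
        pooled.extend(random.sample(samples, n))
    return pooled


# Toy usage: a similar source gets most of the budget, a dissimilar one little.
sources = [[("s0", "a", 0.0, "s1")] * 10, [("s0", "a", 1.0, "s2")] * 10]
train = transfer_samples(sources, similarities=[0.9, 0.1], budget=10,
                         target_samples=[("t0", "a", 0.5, "t1")])
```

The pooled set `train` would then feed a standard batch RL algorithm (e.g., fitted value iteration) for the target task; adapting the similarity weights from data is where the paper's theoretical analysis comes in.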


Related research

- Importance Weighted Transfer of Samples in Reinforcement Learning (05/28/2018)
- Structural Similarity for Improved Transfer in Reinforcement Learning (07/27/2022)
- Scalable Transfer Evolutionary Optimization: Coping with Big Task Instances (12/03/2020)
- Provable Benefits of Representational Transfer in Reinforcement Learning (05/29/2022)
- Transfer of Temporal Logic Formulas in Reinforcement Learning (09/10/2019)
- Time-Variant Variational Transfer for Value Functions (05/26/2020)
- A Taxonomy of Similarity Metrics for Markov Decision Processes (03/08/2021)