
Transfer from Multiple MDPs

by Alessandro Lazaric, et al.
Politecnico di Milano

Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from the source tasks and include them in the training set used to solve a given target task. In this paper, we investigate the theoretical properties of this transfer method and introduce novel algorithms that adapt the transfer process on the basis of the similarity between source and target tasks. Finally, we report illustrative experimental results on a continuous chain problem.
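The sample-transfer idea in the abstract can be sketched in a few lines: extra source samples are mixed into the target training set, with each source task's share of the transfer budget weighted by its estimated similarity to the target. This is a minimal illustration only; the function name, the similarity weights, and the proportional-allocation rule are assumptions for the sketch, not the algorithms analyzed in the paper.

```python
import numpy as np

def build_transfer_dataset(target_samples, source_tasks, similarities,
                           budget, seed=0):
    """Augment the target training set with source-task samples.

    target_samples: list of (s, a, r, s') tuples from the target task.
    source_tasks:   list of sample lists, one per source task.
    similarities:   nonnegative similarity estimates, one per source task
                    (illustrative; the paper's adaptive schemes estimate
                    source-target similarity rather than assume it given).
    budget:         total number of source samples to transfer.
    """
    rng = np.random.default_rng(seed)
    sims = np.asarray(similarities, dtype=float)
    weights = sims / sims.sum()  # share of the budget per source task
    dataset = list(target_samples)
    for task, w in zip(source_tasks, weights):
        n = min(int(round(w * budget)), len(task))
        idx = rng.choice(len(task), size=n, replace=False)
        dataset.extend(task[i] for i in idx)
    return dataset
```

A batch RL algorithm (e.g., fitted Q-iteration) would then be trained on the combined dataset; a more similar source task contributes more samples, while a dissimilar one contributes few, limiting negative transfer.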



