Learn Dynamic-Aware State Embedding for Transfer Learning

01/06/2021
by Kaige Yang, et al.

Transfer reinforcement learning aims to improve the sample efficiency of solving unseen new tasks by leveraging experience obtained from previous tasks. We consider the setting where all tasks (MDPs) share the same environment dynamics and differ only in their reward functions. In this setting, the MDP dynamics are a natural candidate for transfer and can be inferred with a uniformly random policy. However, trajectories generated by a uniformly random policy are not useful for policy improvement, which severely impairs sample efficiency. Instead, we observe that the binary MDP dynamic can be inferred from trajectories of any policy, avoiding the need for a uniformly random policy. Because the binary MDP dynamic captures the state structure shared across all tasks, we argue it is well suited for transfer. Building on this observation, we introduce a method that infers the binary MDP dynamic online and simultaneously uses it to guide state embedding learning; the learned embedding is then transferred to new tasks. We keep state embedding learning and policy learning separate, so the learned state embedding is task- and policy-agnostic, which makes it ideal for transfer learning. In addition, to facilitate exploration over the state space, we propose a novel intrinsic reward based on the inferred binary MDP dynamic. Our method can be used out of the box with model-free RL algorithms; we show two instances built on DQN and A2C. Empirical results from extensive experiments demonstrate the advantage of the proposed method across a variety of transfer learning tasks.
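The abstract does not spell out how the binary MDP dynamic is represented or how the intrinsic reward is computed. A minimal tabular sketch of the underlying idea, inferring a binary transition structure online from trajectories of any policy and rewarding transitions not yet observed, might look as follows; the class name BinaryDynamicTracker, the tabular state space, the novelty-style bonus, and the weight beta are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

class BinaryDynamicTracker:
    """Tabular sketch of an inferred binary MDP dynamic: D[s, s'] = 1
    once the transition s -> s' has been observed in any trajectory,
    regardless of which policy generated it."""

    def __init__(self, n_states: int):
        self.D = np.zeros((n_states, n_states), dtype=np.int8)

    def update(self, s: int, s_next: int) -> float:
        """Record an observed transition and return an intrinsic bonus.
        Here the bonus is a simple novelty signal (1.0 the first time a
        transition is seen, 0.0 afterwards); the paper's exact reward
        is not specified in the abstract."""
        bonus = 1.0 if self.D[s, s_next] == 0 else 0.0
        self.D[s, s_next] = 1
        return bonus


# Example of mixing the intrinsic bonus into the reward fed to a
# model-free learner such as DQN or A2C (beta is a hypothetical weight).
tracker = BinaryDynamicTracker(n_states=100)
beta = 0.1
s, s_next, env_reward = 3, 17, 0.0  # dummy transition
total_reward = env_reward + beta * tracker.update(s, s_next)
```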


Related research

09/05/2019  Learning Action-Transferable Policy with Action Embedding
Despite achieving great success on performance in various sequential dec...

09/21/2018  Target Transfer Q-Learning and Its Convergence Analysis
Q-learning is one of the most popular methods in Reinforcement Learning ...

09/14/2022  A Simple Approach for State-Action Abstraction using a Learned MDP Homomorphism
Animals are able to rapidly infer from limited experience when sets of s...

05/13/2021  Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
This work studies the statistical limits of uniform convergence for offl...

07/12/2014  Extreme State Aggregation Beyond MDPs
We consider a Reinforcement Learning setup where an agent interacts with...

02/08/2023  Investigating the role of model-based learning in exploration and transfer
State of the art reinforcement learning has enabled training agents on t...

11/24/2017  Identifying Reusable Macros for Efficient Exploration via Policy Compression
Reinforcement Learning agents often need to solve not a single task, but...
