Advantages and Limitations of using Successor Features for Transfer in Reinforcement Learning

07/31/2017
by   Lucas Lehnert, et al.

One question central to Reinforcement Learning is how to learn a feature representation that supports algorithm scaling and the re-use of information learned across different tasks. Successor Features approach this problem by learning a feature representation that satisfies a temporal constraint. We present an implementation of an approach that decouples the feature representation from the reward function, making it suitable for transferring knowledge between domains. We then assess the advantages and limitations of using Successor Features for transfer.
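The decoupling described in the abstract can be illustrated with a small sketch (a hypothetical toy MDP, not the paper's experimental setup). If rewards decompose as r(s) = φ(s)·w, the successor features ψ satisfy the temporal (Bellman-style) constraint ψ(s) = φ(s) + γ E[ψ(s')], so ψ can be reused unchanged while only the reward weights w change between tasks:

```python
import numpy as np

# Toy 4-state deterministic chain under a fixed "move right" policy,
# with the last state absorbing. phi(s) is a one-hot state feature.
n_states = 4
gamma = 0.9
phi = np.eye(n_states)                    # one-hot features, one row per state
P = np.zeros((n_states, n_states))        # policy's transition matrix
for s in range(n_states - 1):
    P[s, s + 1] = 1.0                     # move right
P[-1, -1] = 1.0                           # absorbing final state

# Successor features obey psi = phi + gamma * P @ psi,
# which in the tabular case has the closed form psi = (I - gamma P)^{-1} phi.
psi = np.linalg.solve(np.eye(n_states) - gamma * P, phi)

# Transfer: the same psi evaluates the policy under ANY reward weights w,
# since V(s) = psi(s) @ w. Only w changes between tasks.
w_task1 = np.array([0.0, 0.0, 0.0, 1.0])  # reward only in the last state
w_task2 = np.array([1.0, 0.0, 0.0, 0.0])  # reward only in the first state
v_task1 = psi @ w_task1
v_task2 = psi @ w_task2
```

Here `v_task1` and `v_task2` are value functions for two different reward functions, obtained from a single set of successor features; the limitation (also noted in the paper's title) is that ψ is tied to the policy and transition dynamics used to compute it.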


