Graph Backup: Data Efficient Backup Exploiting Markovian Transitions

05/31/2022
by Zhengyao Jiang, et al.

The successes of deep Reinforcement Learning (RL) are largely limited to settings with a large stream of online experiences; applying RL in the data-efficient setting, with only limited access to online interactions, remains challenging. A key to data-efficient RL is good value estimation, but current methods in this space fail to fully exploit the structure of the trajectory data gathered from the environment. In this paper, we treat the transition data of the MDP as a graph and define a novel backup operator, Graph Backup, which exploits this graph structure for better value estimation. Compared to multi-step backup methods such as n-step Q-Learning and TD(λ), Graph Backup can perform counterfactual credit assignment and gives stable value estimates for a state regardless of which trajectory the state is sampled from. Our method, when combined with popular value-based methods, provides improved performance over one-step and multi-step methods on a suite of data-efficient RL benchmarks including MiniGrid, MinAtar and Atari100K. We further analyse the reasons for this performance boost through a novel visualisation of the transition graphs of Atari games.
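To make the core idea concrete, the sketch below illustrates one way to pool transitions from many trajectories into a single graph and run a value backup over it, so that credit can flow along edges observed in other trajectories. This is a minimal tabular illustration under our own assumptions (hashable discrete states, a dictionary-valued Q table, a discount `gamma`, and the helper names `add_trajectory` and `graph_backup_sweep` are all hypothetical), not the authors' implementation of Graph Backup.

```python
# Hedged sketch: pool transitions from all trajectories into a graph keyed by
# (state, action), then back up values over that shared structure.
from collections import defaultdict

gamma = 0.99  # assumed discount factor for illustration

# transition_graph[(s, a)] -> list of (reward, next_state, done) edges,
# aggregated across *all* logged trajectories, so a state's value estimate
# no longer depends on which single trajectory it was sampled from.
transition_graph = defaultdict(list)

def add_trajectory(trajectory):
    """trajectory: list of (state, action, reward, next_state, done) tuples."""
    for s, a, r, s_next, done in trajectory:
        transition_graph[(s, a)].append((r, s_next, done))

def graph_backup_sweep(q_values, actions):
    """One synchronous tabular backup over the whole transition graph.

    q_values: dict mapping (state, action) -> float
    actions:  iterable of actions assumed available in every state
    """
    new_q = dict(q_values)
    for (s, a), edges in transition_graph.items():
        # Empirical average over every observed outcome of (s, a); edges
        # contributed by other trajectories also shape this estimate.
        backup = 0.0
        for r, s_next, done in edges:
            bootstrap = 0.0 if done else max(
                q_values.get((s_next, b), 0.0) for b in actions
            )
            backup += r + gamma * bootstrap
        new_q[(s, a)] = backup / len(edges)
    return new_q
```

In contrast, an n-step backup would only propagate value along the single trajectory a transition was drawn from; the graph view lets the same state reuse returns gathered on other visits.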
