Fast deep reinforcement learning using online adjustments from the past

10/18/2018
by Steven Hansen, et al.

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by planning over experience tuples from the replay buffer near the current state. EVA combines several recent ideas for integrating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and on Atari games.
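The sketch below illustrates the idea described in the abstract: retrieve replay tuples whose embeddings are close to the current state, trace n-step returns forward along the stored trajectories as a non-parametric value estimate, and blend that estimate with the parametric Q-value. It is a minimal illustration only, not the authors' implementation; all names (Transition, trace_value, eva_value, next_q, lam) are hypothetical, and the blending is shown for a single scalar Q-value rather than the full per-action case.

    # Minimal sketch of an EVA-style value adjustment (illustrative, not the paper's code).
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Transition:
        key: np.ndarray      # state embedding, used for content-based retrieval
        reward: float
        next_q: np.ndarray   # parametric Q-values at the next state, for bootstrapping
        terminal: bool

    def trace_value(trajectory, start, gamma=0.99, horizon=50):
        """Non-parametric estimate: an n-step backup traced forward along a stored
        trajectory, bootstrapping from the parametric network at the cut-off."""
        v, discount = 0.0, 1.0
        end = min(start + horizon, len(trajectory))
        for t in range(start, end):
            v += discount * trajectory[t].reward
            discount *= gamma
            if trajectory[t].terminal:
                return v
        return v + discount * trajectory[end - 1].next_q.max()

    def eva_value(q_param, state_key, buffer_trajectories, k=5, lam=0.5, gamma=0.99):
        """Blend the parametric Q-value with a value planned over the k replay
        tuples whose keys are closest to the current state."""
        # Content-based retrieval: nearest neighbours in embedding space.
        flat = [(ti, si, tr) for ti, traj in enumerate(buffer_trajectories)
                             for si, tr in enumerate(traj)]
        dists = [np.linalg.norm(tr.key - state_key) for _, _, tr in flat]
        nearest = np.argsort(dists)[:k]
        # Memory-based planning: average returns traced from each neighbour.
        q_np = np.mean([trace_value(buffer_trajectories[flat[i][0]], flat[i][1], gamma)
                        for i in nearest])
        # Ephemeral adjustment: shift the parametric value toward the planned estimate.
        return lam * q_param + (1.0 - lam) * q_np

Here lam is the mixing coefficient between the parametric and non-parametric estimates; in this sketch it is simply a fixed hyperparameter.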

Related research

07/14/2021
Mixing Human Demonstrations with Self-Exploration in Experience Replay for Deep Reinforcement Learning
We investigate the effect of using human demonstration data in the repla...

07/18/2016
Playing Atari Games with Deep Reinforcement Learning and Human Checkpoint Replay
This paper introduces a novel method for learning how to play the most d...

03/06/2017
Neural Episodic Control
Deep reinforcement learning methods attain super-human performance in a ...

12/09/2019
Learning Sparse Representations Incrementally in Deep Reinforcement Learning
Sparse representations have been shown to be useful in deep reinforcemen...

09/13/2023
Attention Loss Adjusted Prioritized Experience Replay
Prioritized Experience Replay (PER) is a technical means of deep reinfor...

06/12/2019
Search on the Replay Buffer: Bridging Planning and Reinforcement Learning
The history of learning for control has been an exciting back and forth ...

11/21/2019
Memory-Efficient Episodic Control Reinforcement Learning with Dynamic Online k-means
Recently, neuro-inspired episodic control (EC) methods have been develop...
