Memory-Efficient Episodic Control Reinforcement Learning with Dynamic Online k-means

by Andrea Agostinelli, et al.
Imperial College London

Recently, neuro-inspired episodic control (EC) methods have been developed to overcome the data inefficiency of standard deep reinforcement learning approaches. Using non- or semi-parametric models to estimate the value function, they learn rapidly by retrieving cached values from similar past states. In realistic scenarios with limited resources and noisy data, maintaining meaningful representations in memory is essential to speed up learning and avoid catastrophic forgetting. Unfortunately, EC methods have large space and time complexity. We investigate different solutions to these problems based on prioritising and ranking stored states, as well as online clustering techniques. We also propose a new dynamic online k-means algorithm that is both computationally efficient and yields significantly better performance at smaller memory sizes; we validate this approach on classic reinforcement learning environments and Atari games.
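To make the clustering idea concrete, here is a minimal sketch of a generic online (sequential) k-means update of the kind the abstract builds on: each incoming state moves its nearest centroid by a step that shrinks with the centroid's assignment count (MacQueen's rule). This is an illustrative sketch only, not the paper's dynamic variant; the function name and the fixed centroid budget `k` are assumptions.

```python
import numpy as np

def online_kmeans_update(centroids, counts, x):
    """Assign x to its nearest centroid and move that centroid toward x
    with step size 1/count (sequential k-means). Returns the index of
    the updated centroid. Generic sketch, not the paper's exact method."""
    dists = np.linalg.norm(centroids - x, axis=1)  # distance to each centroid
    j = int(np.argmin(dists))                      # nearest centroid
    counts[j] += 1
    centroids[j] += (x - centroids[j]) / counts[j] # running-mean update
    return j

# Usage: stream 2-D states into a fixed memory budget of k centroids.
rng = np.random.default_rng(0)
k = 4
centroids = rng.normal(size=(k, 2))
counts = np.ones(k)
for x in rng.normal(size=(500, 2)):
    online_kmeans_update(centroids, counts, x)
```

Because each update touches a single centroid, the memory footprint stays fixed at k entries regardless of how many states are observed, which is the property that makes clustering attractive for bounded episodic memories.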




Related Research

Neural Episodic Control

Deep reinforcement learning methods attain super-human performance in a ...

Fast deep reinforcement learning using online adjustments from the past

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep re...

Continuous Episodic Control

Non-parametric episodic memory can be used to quickly latch onto high-re...

Model-Based Episodic Memory Induces Dynamic Hybrid Controls

Episodic control enables sample efficiency in reinforcement learning by ...

Integrating Episodic Memory into a Reinforcement Learning Agent using Reservoir Sampling

Episodic memory is a psychology term which refers to the ability to reca...

Combating Reinforcement Learning's Sisyphean Curse with Intrinsic Fear

To use deep reinforcement learning in the wild, we might hope for an age...

Smooth Q-learning: Accelerate Convergence of Q-learning Using Similarity

An improvement of Q-learning is proposed in this paper. It is different ...
