Prioritized Sequence Experience Replay

05/25/2019
by Marc Brittain, et al.

Experience replay is widely used in deep reinforcement learning algorithms and allows agents to remember and learn from past experiences. In an effort to learn more efficiently, researchers proposed prioritized experience replay (PER), which samples important transitions more frequently. In this paper, we propose Prioritized Sequence Experience Replay (PSER), a framework for prioritizing sequences of experience in an attempt to both learn more efficiently and obtain better performance. We compare the performance of uniform, PER, and PSER sampling in DQN on the Atari 2600 benchmark and show that DQN with PSER substantially outperforms both PER and uniform sampling.
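To make the prioritization idea concrete, the sketch below shows a minimal replay buffer in which a transition's updated priority is also propagated, with decay, to the transitions that preceded it, so the steps leading up to an important event are sampled more often. This is an illustrative sketch of the general idea, not the paper's implementation; the hyperparameter names (alpha, decay, window) and the max-based propagation rule are assumptions for the example.

    import numpy as np

    class PrioritizedSequenceReplayBuffer:
        """Minimal sketch of a sequence-prioritized replay buffer.

        When a transition's priority is updated, a decayed share of that
        priority spills backward onto the transitions that came before it.
        Hyperparameters are illustrative, not taken from the paper.
        """

        def __init__(self, capacity, alpha=0.6, decay=0.65, window=5):
            self.capacity = capacity
            self.alpha = alpha      # priority exponent, as in standard PER
            self.decay = decay      # backward decay factor for the spill-over
            self.window = window    # how many preceding transitions receive spill-over
            self.transitions = []
            self.priorities = np.zeros(capacity, dtype=np.float64)
            self.pos = 0

        def add(self, transition):
            # New transitions get the current maximum priority so they are
            # sampled at least once, as in standard PER.
            max_prio = self.priorities.max() if self.transitions else 1.0
            if len(self.transitions) < self.capacity:
                self.transitions.append(transition)
            else:
                self.transitions[self.pos] = transition
            self.priorities[self.pos] = max_prio
            self.pos = (self.pos + 1) % self.capacity

        def sample(self, batch_size):
            # Sample indices with probability proportional to priority^alpha.
            prios = self.priorities[:len(self.transitions)] ** self.alpha
            probs = prios / prios.sum()
            idxs = np.random.choice(len(self.transitions), batch_size, p=probs)
            return idxs, [self.transitions[i] for i in idxs]

        def update_priority(self, idx, priority):
            # Update the sampled transition, then propagate a decayed share of
            # its priority to the transitions preceding it in the sequence.
            self.priorities[idx] = max(self.priorities[idx], priority)
            for k in range(1, self.window + 1):
                prev = (idx - k) % self.capacity
                if prev >= len(self.transitions):
                    break
                spill = priority * (self.decay ** k)
                self.priorities[prev] = max(self.priorities[prev], spill)

With uniform sampling the `sample` step would draw indices with equal probability; with plain PER only the sampled index would be updated in `update_priority`; the backward spill-over loop is what distinguishes sequence prioritization in this sketch.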


