Quantum deep Q learning with distributed prioritized experience replay

04/19/2023
by Samuel Yen-Chi Chen, et al.

This paper introduces the QDQN-DPER framework to enhance the efficiency of quantum reinforcement learning (QRL) in solving sequential decision tasks. The framework incorporates prioritized experience replay and asynchronous training into the training algorithm to reduce the high sampling complexity. Numerical simulations demonstrate that QDQN-DPER outperforms the baseline distributed quantum Q-learning with the same model architecture. The proposed framework holds potential for more complex tasks while maintaining training efficiency.
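The paper builds on classical prioritized experience replay (transitions are sampled in proportion to their TD error rather than uniformly, with importance-sampling weights correcting the bias). The abstract does not give the exact buffer design used in QDQN-DPER, so the sketch below shows only the standard proportional variant; the class name, hyperparameters (`alpha`, `beta`), and list-based storage are illustrative assumptions, and a production buffer would use a sum-tree for O(log N) sampling.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (a sketch,
    not the paper's implementation). Transitions are sampled with
    probability proportional to priority**alpha; importance-sampling
    weights correct the resulting bias."""

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.beta = beta        # importance-sampling correction strength
        self.buffer = []        # stored transitions
        self.priorities = []    # one priority per transition
        self.pos = 0            # next slot to overwrite (ring buffer)

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        n = len(self.buffer)
        # Importance-sampling weights, normalized by the max for stability.
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Called after a learning step with the fresh TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In an asynchronous setup like the one the abstract describes, actor processes would call `add` while a learner samples batches, applies the weights to its Q-loss, and feeds the new TD errors back through `update_priorities`.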


research
01/06/2021

Deep Reinforcement Learning with Quantum-inspired Experience Replay

In this paper, a novel training paradigm inspired by quantum computation...
research
10/19/2019

Reverse Experience Replay

This paper describes an improvement in Deep Q-learning called Reverse Ex...
research
03/02/2018

Distributed Prioritized Experience Replay

We propose a distributed architecture for deep reinforcement learning at...
research
01/12/2023

Asynchronous training of quantum reinforcement learning

The development of quantum machine learning (QML) has received a lot of ...
research
02/13/2020

XCS Classifier System with Experience Replay

XCS constitutes the most deeply investigated classifier system today. It...
research
09/13/2023

Efficient quantum recurrent reinforcement learning via quantum reservoir computing

Quantum reinforcement learning (QRL) has emerged as a framework to solve...
research
02/07/2023

Towards Robust Inductive Graph Incremental Learning via Experience Replay

Inductive node-wise graph incremental learning is a challenging task due...
