Discrete linear-complexity reinforcement learning in continuous action spaces for Q-learning algorithms

07/16/2018
by Peyman Tavallali, et al.

In this article, we sketch an algorithm that extends Q-learning to the continuous action space domain. Our method is based on a discretization of the action space. Unlike commonly used discretization methods, ours does not increase the dimensionality of the discretized problem exponentially: we show that the proposed method's complexity grows only linearly with the dimension of the action space once the discretization is employed. The variant of Q-learning presented in this work, labeled Finite Step Q-Learning (FSQ), can be deployed to both shallow and deep neural network architectures.
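To make the linear-versus-exponential contrast concrete, the following is a minimal Python sketch, not the authors' implementation: the per-dimension finite-step update, the direction set {-1, 0, +1}, and the step size are illustrative assumptions about how an action-space discretization can stay linear in the number of dimensions.

# Illustrative sketch only; not code from the paper. It contrasts a joint
# action grid (exponential in the action dimension d) with a per-dimension
# discretization (linear in d), one plausible reading of the abstract's
# linear-complexity claim.

def grid_action_count(bins_per_dim: int, action_dims: int) -> int:
    """Joint grid over all d dimensions: k^d discrete actions."""
    return bins_per_dim ** action_dims

def per_dim_action_count(bins_per_dim: int, action_dims: int) -> int:
    """One discrete choice set per dimension: k * d actions to evaluate."""
    return bins_per_dim * action_dims

def apply_finite_step(action, dim, direction, step=0.1, low=-1.0, high=1.0):
    """Move one action component by a finite step and clip to bounds.

    A hypothetical reading of "Finite Step": the step size, bounds, and
    direction set {-1, 0, +1} are assumptions, not the paper's values.
    """
    moved = list(action)
    moved[dim] = min(high, max(low, moved[dim] + direction * step))
    return moved

if __name__ == "__main__":
    k, d = 10, 6                        # 10 bins, 6 action dimensions
    print(grid_action_count(k, d))      # 1000000 joint actions
    print(per_dim_action_count(k, d))   # 60 per-dimension choices
    print(apply_finite_step([0.0] * d, dim=2, direction=+1))

For k = 10 bins and d = 6 action dimensions, a joint grid enumerates 10^6 = 1,000,000 discrete actions, while a per-dimension scheme exposes only 10 * 6 = 60 choices.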

Related research

12/19/2020 · Quantum reinforcement learning in continuous action space
Quantum mechanics has the potential to speed up machine learning algorit...

10/10/2018 · Parametrized Deep Q-Networks Learning: Reinforcement Learning with Discrete-Continuous Hybrid Action Space
Most existing deep reinforcement learning (DRL) frameworks consider eith...

10/17/2019 · Adaptive Discretization for Episodic Reinforcement Learning in Metric Spaces
We present an efficient algorithm for model-free episodic reinforcement ...

10/13/2017 · A simple data discretizer
Data discretization is an important step in the process of machine learn...

07/01/2020 · Adaptive Discretization for Model-Based Reinforcement Learning
We introduce the technique of adaptive discretization to design efficien...

02/02/2022 · Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization
Vector Quantization (VQ) is a method for discretizing latent representat...

03/09/2020 · Zooming for Efficient Model-Free Reinforcement Learning in Metric Spaces
Despite the wealth of research into provably efficient reinforcement lea...
