Normality-Guided Distributional Reinforcement Learning for Continuous Control

08/28/2022
by   Ju-Seung Byun, et al.

Learning a predictive model of the mean return, or value function, plays a critical role in many reinforcement learning algorithms. Distributional reinforcement learning (DRL) methods instead model the value distribution, which has been shown to improve performance in many settings. In this paper, we model the value distribution as approximately normal using the Markov chain central limit theorem. We analytically compute quantile bars to provide a new DRL target that is informed by the decrease in standard deviation that occurs over the course of an episode. In addition, we suggest an exploration strategy based on how closely the learned value distribution resembles the target normal distribution, making the value function more accurate and thereby enabling better policy improvement. The approach we outline is compatible with many DRL structures. We use proximal policy optimization as a testbed and show that both the normality-guided target and the exploration bonus produce performance improvements. We demonstrate that our method outperforms DRL baselines on a number of continuous control tasks.
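The paper's exact target construction is not reproduced here, but the core idea of analytically computing quantile values from a normal value distribution can be sketched. The snippet below is a minimal illustration, assuming the quantile midpoints tau_i = (2i + 1) / (2N) commonly used in quantile-based DRL (e.g. QR-DQN); the mean and standard deviation values are hypothetical placeholders, not results from the paper.

```python
from statistics import NormalDist

def normal_quantile_targets(mean, std, n_quantiles):
    """Analytic quantile values of N(mean, std^2) evaluated at the
    quantile midpoints tau_i = (2i + 1) / (2N). Under a normality
    assumption, these can serve as regression targets for a learned
    quantile representation of the value distribution."""
    dist = NormalDist(mu=mean, sigma=std)
    taus = [(2 * i + 1) / (2 * n_quantiles) for i in range(n_quantiles)]
    return [dist.inv_cdf(tau) for tau in taus]

# Hypothetical example: 5 quantile targets for a state whose return
# distribution is estimated as N(10.0, 2.0^2).
targets = normal_quantile_targets(10.0, 2.0, 5)
```

Because the normal distribution is symmetric, the middle quantile equals the mean, and a shrinking standard deviation late in an episode pulls all quantile targets toward it, which is the structure the normality-guided target exploits.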


Related research

10/27/2017 — Distributional Reinforcement Learning with Quantile Regression
In reinforcement learning an agent interacts with the environment by tak...

01/08/2020 — Sample-based Distributional Policy Gradient
Distributional reinforcement learning (DRL) is a recent reinforcement le...

03/23/2023 — Policy Evaluation in Distributional LQR
Distributional reinforcement learning (DRL) enhances the understanding o...

01/06/2023 — Centralized Cooperative Exploration Policy for Continuous Control Tasks
The deep reinforcement learning (DRL) algorithm works brilliantly on sol...

03/27/2020 — A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms
We present a distributional approach to theoretical analyses of reinforc...

05/13/2021 — Principled Exploration via Optimistic Bootstrapping and Backward Induction
One principled approach for provably efficient exploration is incorporat...

11/17/2020 — Leveraging the Variance of Return Sequences for Exploration Policy
This paper introduces a method for constructing an upper bound for explo...
