
Robust Reinforcement Learning with Distributional Risk-averse formulation

by Pierre Clavier et al.

Robust Reinforcement Learning aims to learn policies that remain effective under changes in the dynamics or rewards of the system. This is particularly important when the dynamics and rewards of the environment are estimated from data. In this paper, we approximate Robust Reinforcement Learning under Φ-divergence constraints with an approximate Risk-Averse formulation. We show that the classical Reinforcement Learning objective can be robustified by penalizing it with the standard deviation of the return. Two algorithms based on Distributional Reinforcement Learning, one for discrete and one for continuous action spaces, are proposed and tested in classical Gym environments to demonstrate their robustness.
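As an illustration only (not code from the paper), the standard-deviation penalization of a distributional value estimate can be sketched as follows. It assumes a quantile-based return distribution per action, in the spirit of quantile-regression distributional RL; the function names and the penalty weight λ are my own assumptions:

```python
import numpy as np

def risk_averse_value(quantiles: np.ndarray, lam: float = 0.5) -> float:
    """Std-penalized value of one action from its return-distribution quantiles.

    The robust surrogate replaces the expected return E[Z] by
    E[Z] - lam * Std[Z], trading expected performance for robustness.
    (lam and the quantile parameterization are illustrative assumptions.)
    """
    return float(np.mean(quantiles) - lam * np.std(quantiles))

def risk_averse_greedy(quantiles_per_action: np.ndarray, lam: float = 0.5) -> int:
    """Greedy action under the std-penalized objective.

    quantiles_per_action: array of shape (n_actions, n_quantiles).
    """
    scores = quantiles_per_action.mean(axis=1) - lam * quantiles_per_action.std(axis=1)
    return int(np.argmax(scores))

# Toy example: action 1 has a slightly higher mean return but is much riskier.
q = np.array([[ 1.0, 1.0, 1.0, 1.0],    # action 0: deterministic return
              [-2.0, 0.0, 2.0, 5.0]])   # action 1: volatile return
```

With `lam = 0` the greedy choice is the risk-neutral one (action 1); with a positive `lam` the penalty flips the choice to the safer action 0, which is the qualitative effect the abstract describes.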
