Ctrl-Z: Recovering from Instability in Reinforcement Learning

10/09/2019
by Vibhavari Dasagi, et al.

When learning behavior, training data is often generated by the learner itself; this can result in unstable training dynamics, a problem that is particularly consequential in safety-sensitive real-world control tasks such as robotics. In this work, we propose a principled, model-agnostic approach to mitigating unstable learning dynamics: we maintain a history of a reinforcement learning agent over the course of training and revert to the parameters of a previous agent whenever performance significantly decreases. We develop techniques for evaluating this performance through statistical hypothesis testing of continued improvement, and evaluate them on a standard suite of challenging benchmark tasks involving continuous control of simulated robots. We show improvements over state-of-the-art reinforcement learning algorithms in performance and robustness to hyperparameters, outperforming DDPG in 5 out of 6 evaluation environments and showing no decrease in performance with TD3, which is known to be relatively stable. In this way, our approach takes an important step towards increasing data efficiency and stability in training for real-world robotic applications.
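The checkpoint-and-revert idea described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration rather than the paper's actual implementation: the `CheckpointedTrainer` class, its parameter-list interface, and the use of a one-sided Welch t-statistic against a fixed critical value are all stand-ins for the paper's hypothesis tests of continued improvement.

```python
import math

class CheckpointedTrainer:
    """Keep a checkpoint of agent parameters and revert to it when a
    statistical test indicates that evaluation returns have
    significantly decreased (a sketch of the Ctrl-Z idea; the interface
    and the fixed critical value are illustrative assumptions)."""

    def __init__(self, params, critical_t=1.7):
        self.best_params = list(params)   # last accepted checkpoint
        self.best_returns = None          # returns recorded at that checkpoint
        self.critical_t = critical_t      # one-sided significance threshold

    @staticmethod
    def _welch_t(new, old):
        # One-sided Welch t-statistic for mean(new) < mean(old).
        m_new = sum(new) / len(new)
        m_old = sum(old) / len(old)
        v_new = sum((x - m_new) ** 2 for x in new) / (len(new) - 1)
        v_old = sum((x - m_old) ** 2 for x in old) / (len(old) - 1)
        se = math.sqrt(v_new / len(new) + v_old / len(old))
        return (m_old - m_new) / se if se > 0 else 0.0

    def update(self, params, returns):
        """Accept new params, or revert to the stored checkpoint when the
        new evaluation returns are significantly worse. Returns the
        parameters to continue training from and a reverted flag."""
        if self.best_returns is not None:
            t = self._welch_t(returns, self.best_returns)
            if t > self.critical_t:       # significant decrease: revert
                return list(self.best_params), True
        # Performance did not significantly decrease: update the checkpoint.
        self.best_params, self.best_returns = list(params), list(returns)
        return params, False
```

In this sketch, a significant drop in evaluation returns triggers a rollback to the last accepted parameters, while comparable or improving returns advance the checkpoint; a real agent would carry network weights and optimizer state rather than a flat parameter list.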
