Beyond Target Networks: Improving Deep Q-learning with Functional Regularization

06/04/2021
by Alexandre Piché, et al.

Target networks are at the core of recent successes in deep reinforcement learning. They stabilize training by using old parameters to estimate the Q-values, but this also limits the propagation of newly encountered rewards, which can ultimately slow down training. In this work, we propose an alternative training method based on functional regularization that does not have this deficiency. Unlike target networks, our method uses up-to-date parameters to estimate the target Q-values, thereby speeding up training while maintaining stability. Surprisingly, in some cases we can show that target networks are a special, restricted type of functional regularizer. Using this approach, we show empirical improvements in sample efficiency and performance across a range of Atari and simulated robotics environments.
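To make the contrast concrete, here is a minimal PyTorch sketch (not the authors' implementation) of the two training objectives described above: a standard TD loss that bootstraps from a frozen target network, and a functionally regularized TD loss that bootstraps from the up-to-date network while penalizing its deviation, in output space, from a lagging prior network. The names q_net, target_net, prior_net, and the regularization weight kappa are illustrative assumptions, not quantities taken from the paper.

```python
# Minimal sketch, assuming a discrete-action Q-network of shape [batch, n_actions]
# and batches of (state, action, reward, next_state, done) tensors.
import torch
import torch.nn as nn


def td_loss_target_network(q_net, target_net, batch, gamma=0.99):
    """Standard deep Q-learning loss with a frozen target network."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from *old* parameters: stable, but newly encountered
        # rewards only propagate once the target network is refreshed.
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return nn.functional.mse_loss(q_sa, target)


def td_loss_functional_regularization(q_net, prior_net, batch, gamma=0.99, kappa=1.0):
    """Functionally regularized loss: up-to-date bootstrap plus an output-space
    penalty toward a lagging prior network (an approximation of the idea, not
    the paper's exact objective)."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from *up-to-date* parameters: rewards propagate immediately.
        target = r + gamma * (1.0 - done) * q_net(s_next).max(dim=1).values
        prior_next = prior_net(s_next).max(dim=1).values
    td = nn.functional.mse_loss(q_sa, target)
    # Functional regularizer: keep the online Q-function close, in value space,
    # to the lagging prior network, which plays the stabilizing role that the
    # target network plays in the standard loss.
    reg = nn.functional.mse_loss(q_net(s_next).max(dim=1).values, prior_next)
    return td + kappa * reg
```

In this sketch, prior_net would be refreshed periodically from q_net, just as a target network is, but it only shapes the regularizer; the bootstrapped target itself always uses the current parameters.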

