Breaking the Deadly Triad with a Target Network

01/21/2021
by Shangtong Zhang et al.

The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously. In this paper, we investigate the target network as a tool for breaking the deadly triad, providing theoretical support for the conventional wisdom that a target network stabilizes training. We first propose and analyze a novel target network update rule that augments the commonly used Polyak-averaging style update with two projections. We then apply the target network and ridge regularization in several divergent algorithms and show their convergence to regularized TD fixed points. These algorithms are off-policy with linear function approximation and bootstrapping, spanning both policy evaluation and control, as well as both discounted and average-reward settings. In particular, we provide the first convergent linear Q-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization.
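As a minimal sketch of the kind of update the abstract describes: below is the standard Polyak-averaging target update, augmented with a projection onto an l2 ball. The `radius` parameter and the single-ball projection are illustrative assumptions, not the paper's actual two-projection rule; they only indicate where such a projection would sit in the update.

```python
import numpy as np

def polyak_update(theta, theta_target, tau=0.01, radius=None):
    """One Polyak-averaging step for a target-network parameter vector.

    theta        -- current (online) parameters
    theta_target -- target-network parameters to be updated
    tau          -- averaging rate; small tau means a slowly moving target
    radius       -- if given, project the result onto the l2 ball of this
                    radius (a stand-in for the paper's projection steps)
    """
    new_target = (1.0 - tau) * theta_target + tau * theta
    if radius is not None:
        norm = np.linalg.norm(new_target)
        if norm > radius:
            # Scale back onto the ball's boundary.
            new_target = new_target * (radius / norm)
    return new_target
```

With `tau = 1` this collapses to a hard copy of the online parameters; small `tau` gives the slowly tracking target that the conventional-wisdom stabilization argument relies on.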


