An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation

05/25/2022
by Shuyu Yin, et al.

Gradient descent and its variants are the standard tools for training neural networks. However, in deep Q-learning with neural network approximation, a type of reinforcement learning, true gradient descent on the Bellman residual minimization problem, known as Residual Gradient (RG), is rarely used; instead, Temporal Difference (TD), an incomplete gradient descent method, prevails. In this work, we perform extensive experiments showing that TD outperforms RG: when training drives the Bellman residual error to a small value, the solution found by TD yields a better policy and is more robust to perturbations of the neural network parameters. Our experiments further reveal a key difference between reinforcement learning and supervised learning: in reinforcement learning, a small Bellman residual error can still correspond to a bad policy, whereas in supervised learning the test loss is a reliable index of performance. We also verify empirically that the gradient term omitted by TD is a key reason why RG performs badly. Our work shows that the performance of a deep Q-learning solution is closely tied to its training dynamics, and how an incomplete gradient descent method can find a good policy is an interesting question for future study.
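To make the TD/RG distinction concrete, the sketch below (a minimal illustration assuming PyTorch; `q_net` and the transition tensors are hypothetical names, not the paper's code) shows that the only difference between the two losses is whether the Bellman target is detached from the computation graph: RG backpropagates through both Q(s, a) and the target's Q(s', a'), while TD drops that second gradient term.

```python
import torch

def bellman_losses(q_net, s, a, r, s_next, done, gamma=0.99):
    """Compute the RG (full-gradient) and TD (semi-gradient) losses on a batch.

    s, s_next: float tensors of states, shape (batch, state_dim)
    a: long tensor of actions, shape (batch,)
    r, done: float tensors, shape (batch,)
    """
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
    q_next = q_net(s_next).max(dim=1).values               # max_a' Q(s', a')
    target = r + gamma * (1.0 - done) * q_next             # Bellman target

    # Residual Gradient: true gradient descent on the squared Bellman
    # residual, so gradients flow through BOTH Q(s, a) and Q(s', a').
    rg_loss = ((q_sa - target) ** 2).mean()

    # Temporal Difference: the target is detached, dropping the gradient
    # term through Q(s', a') -- the "incomplete" gradient the paper studies.
    td_loss = ((q_sa - target.detach()) ** 2).mean()
    return rg_loss, td_loss
```

Calling `rg_loss.backward()` versus `td_loss.backward()` then yields the full and the incomplete gradient, respectively; the detached term is the "missing term" referred to above.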


