- Learning with Random Learning Rates
  Hyperparameter tuning is a bothersome step in the training of deep learn...
- Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
  We propose a stochastic variant of the classical Polyak step-size (Polya...
- Adaptive Gradient Method with Resilience and Momentum
  Several variants of stochastic gradient descent (SGD) have been proposed...
- Deep Reinforcement Learning using Cyclical Learning Rates
  Deep Reinforcement Learning (DRL) methods often rely on the meticulous t...
- Second-order Information in First-order Optimization Methods
  In this paper, we try to uncover the second-order essence of several fir...
- Disentangling Adaptive Gradient Methods from Learning Rates
  We investigate several confounding factors in the evaluation of optimiza...
- On the Variance of the Adaptive Learning Rate and Beyond
  The learning rate warmup heuristic achieves remarkable success in stabil...
TDprop: Does Jacobi Preconditioning Help Temporal Difference Learning?
We investigate whether Jacobi preconditioning, which accounts for the bootstrap term in temporal difference (TD) learning, can boost the performance of adaptive optimizers. Our method, TDprop, computes a per-parameter learning rate based on the diagonal preconditioning of the TD update rule, and we show how this applies to both n-step returns and TD(λ). Our theoretical findings show that, surprisingly, including this additional preconditioning information performs comparably to ordinary semi-gradient TD when the optimal learning rate is found for both via a hyperparameter search. In Deep RL experiments using Expected SARSA, TDprop meets or exceeds the performance of Adam in all tested games under near-optimal learning rates, but a well-tuned SGD can yield similar improvements, matching our theory. Our findings suggest that Jacobi preconditioning may improve upon typical adaptive optimization methods in Deep RL, but despite incorporating additional information from the TD bootstrap term, it may not always be better than SGD.
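To make the idea of diagonal (Jacobi) preconditioning of a TD update concrete, here is a minimal sketch for semi-gradient TD(0) with linear function approximation. The abstract does not specify TDprop's exact diagonal, so the per-parameter scaling below uses an exponential moving average of squared semi-gradient terms as an assumed, Adam-style stand-in; the function name and all hyperparameters are illustrative, not from the paper.

```python
import numpy as np

def tdprop_like_update(w, diag, phi_s, phi_next, r, gamma=0.99,
                       lr=0.1, beta=0.9, eps=1e-8):
    """One Jacobi-preconditioned semi-gradient TD(0) step (illustrative only)."""
    # TD error for linear values v(s) = w . phi(s).
    td_error = r + gamma * (phi_next @ w) - (phi_s @ w)
    grad = -td_error * phi_s  # semi-gradient direction (bootstrap term untouched)
    # Per-parameter curvature proxy: EMA of squared semi-gradient terms.
    # (Assumption for illustration; the paper's diagonal may differ.)
    diag = beta * diag + (1.0 - beta) * grad ** 2
    # Jacobi preconditioning: rescale each coordinate independently.
    w = w - lr * grad / (np.sqrt(diag) + eps)
    return w, diag

# Toy check: a single state with a self-loop and reward 1 under gamma = 0.9,
# whose true value is 1 / (1 - 0.9) = 10.
w, diag = np.zeros(1), np.zeros(1)
phi = np.array([1.0])
for _ in range(500):
    w, diag = tdprop_like_update(w, diag, phi, phi, r=1.0, gamma=0.9)
print(round(float(w[0]), 1))  # hovers near 10
```

Because each coordinate is divided by its own running scale, the effective step size adapts per parameter, which is the sense in which the method "computes a per-parameter learning rate" from the diagonal of a preconditioner.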