Weight Prediction Boosts the Convergence of AdamW

02/01/2023
by Lei Guan, et al.

In this paper, we introduce weight prediction into the AdamW optimizer to boost its convergence when training deep neural network (DNN) models. In particular, ahead of each mini-batch training step, we predict the future weights according to the update rule of AdamW and then use these predicted weights for both the forward pass and backward propagation. In this way, the AdamW optimizer always utilizes the gradients w.r.t. the future weights, rather than the current weights, to update the DNN parameters, which leads to better convergence. Our proposal is simple and straightforward to implement yet effective in boosting the convergence of DNN training. We performed extensive experimental evaluations on image classification and language modeling tasks to verify its effectiveness. The experimental results validate that our proposal boosts the convergence of AdamW and achieves better accuracy than AdamW when training DNN models.
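The loop the abstract describes (predict future weights from the AdamW update rule, run forward/backward at the predicted weights, then update the current weights with that gradient) can be sketched in a few lines. The following is a minimal NumPy illustration on a toy least-squares problem, not the paper's implementation: the toy loss, the single-step prediction, the hyperparameter values, and the helper names (grad, adamw_step, predict_weights) are our own assumptions made for the sake of a self-contained example.

import numpy as np

# Toy problem (illustrative only): minimize 0.5 * ||A w - b||^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
b = rng.normal(size=20)

def grad(w):
    # Gradient of the toy least-squares loss at w.
    return A.T @ (A @ w - b)

def adamw_step(w, g, m, v, t, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8, wd=1e-2):
    # One AdamW update with decoupled weight decay; returns new weights and moments.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
    return w, m, v

def predict_weights(w, m, v, t, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8, wd=1e-2):
    # Predict the next weights by replaying the AdamW update rule from the
    # cached moment estimates, without computing a new gradient.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)

w = np.zeros(10); m = np.zeros(10); v = np.zeros(10)
for t in range(1, 201):
    # Step 1: predict the future weights (skipped at t=1, when no moments are cached).
    w_pred = w if t == 1 else predict_weights(w, m, v, t - 1)
    # Step 2: forward/backward pass evaluated at the predicted weights.
    g = grad(w_pred)
    # Step 3: update the *current* weights with the gradient taken at the predicted weights.
    w, m, v = adamw_step(w, g, m, v, t)

print("final loss:", 0.5 * np.sum((A @ w - b) ** 2))

Because the prediction reuses the cached first- and second-moment estimates, it adds no extra gradient evaluation per mini-batch. How many steps ahead to predict and how to handle the first iterations are design choices discussed in the paper; the single-step, skip-the-first-iteration scheme above is only a simplification.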


