Stable Anderson Acceleration for Deep Learning

10/26/2021
by Massimiliano Lupo Pasini et al.

Anderson acceleration (AA) is an extrapolation technique designed to speed up fixed-point iterations such as those arising in the iterative training of deep learning (DL) models. Training DL models requires large datasets processed in randomly sampled batches, which tend to introduce stochastic oscillations into the fixed-point iteration, with amplitude roughly inversely proportional to the batch size. These oscillations reduce, and occasionally eliminate, the positive effect of AA. To restore AA's advantage, we combine it with an adaptive moving average procedure that smooths the oscillations and yields a more regular sequence of gradient descent updates. By monitoring the relative standard deviation between consecutive iterates, we also introduce a criterion to automatically assess whether the moving average is needed. We applied the method to the following DL instantiations: (i) multi-layer perceptrons (MLPs) trained on the open-source graduate admissions dataset for regression, (ii) physics-informed neural networks (PINNs) trained on source data to solve 2D and 100D Burgers' partial differential equations (PDEs), and (iii) ResNet50 trained on the open-source ImageNet1k dataset for image classification. Numerical results obtained using up to 1,536 NVIDIA V100 GPUs on the OLCF supercomputer Summit showed the stabilizing effect of the moving average on AA for all the problems above.
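To make the procedure concrete, the following is a minimal NumPy sketch of the three ingredients the abstract describes: a windowed Anderson extrapolation step, a plain moving average over recent iterates, and a relative-standard-deviation check that decides when smoothing should be applied. This is an illustrative reconstruction, not the authors' implementation; the window sizes, damping factor `beta`, regularization `reg`, and tolerance `tol` are assumptions chosen for readability.

```python
import numpy as np

def anderson_step(x_hist, f_hist, beta=1.0, reg=1e-10):
    """One Anderson-acceleration step over a window of stored iterates.

    x_hist: list of recent iterates x_k (1-D arrays), oldest first.
    f_hist: list of the fixed-point map values f(x_k), e.g. the result
            of one gradient-descent update applied to x_k.
    Solves min ||sum_k alpha_k (f(x_k) - x_k)|| s.t. sum_k alpha_k = 1,
    then mixes iterates and map values with the optimal weights.
    """
    # Residuals r_k = f(x_k) - x_k, stacked as columns of R (shape d x m).
    R = np.stack([f - x for x, f in zip(x_hist, f_hist)], axis=1)
    m = R.shape[1]
    G = R.T @ R + reg * np.eye(m)           # Gram matrix, lightly regularized
    alpha = np.linalg.solve(G, np.ones(m))  # Lagrange-multiplier solution
    alpha /= alpha.sum()                    # enforce sum(alpha) = 1
    x_bar = sum(a * x for a, x in zip(alpha, x_hist))
    f_bar = sum(a * f for a, f in zip(alpha, f_hist))
    return (1.0 - beta) * x_bar + beta * f_bar  # damped mixing

def moving_average(x_hist, window=5):
    """Plain moving average over the last `window` iterates."""
    tail = x_hist[-window:]
    return sum(tail) / len(tail)

def smoothing_needed(x_hist, tol=1e-2, eps=1e-12):
    """Heuristic trigger (an assumption, not the paper's exact criterion):
    relative standard deviation across recent consecutive iterates;
    large values indicate batch-induced stochastic oscillations."""
    stack = np.stack(x_hist[-5:])
    rel_std = stack.std(axis=0) / (np.abs(stack).mean(axis=0) + eps)
    return rel_std.mean() > tol
```

In a training loop, one would monitor `smoothing_needed` on the recent weight iterates and feed either the raw sequence or its moving average into `anderson_step`; these hypothetical helpers only compress that logic into a few lines.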


