Momentum Residual Neural Networks

02/15/2021
by Michael E. Sander, et al.

The training of deep residual neural networks (ResNets) with backpropagation has a memory cost that increases linearly with respect to the depth of the network. A simple way to circumvent this issue is to use reversible architectures. In this paper, we propose to change the forward rule of a ResNet by adding a momentum term. The resulting networks, momentum residual neural networks (MomentumNets), are invertible. Unlike previous invertible architectures, they can be used as a drop-in replacement for any existing ResNet block. We show that MomentumNets can be interpreted in the infinitesimal step size regime as second-order ordinary differential equations (ODEs) and exactly characterize how adding momentum progressively increases the representation capabilities of MomentumNets. Our analysis reveals that MomentumNets can learn any linear mapping up to a multiplicative factor, while ResNets cannot. In a learning to optimize setting, where convergence to a fixed point is required, we show theoretically and empirically that our method succeeds while existing invertible architectures fail. We show on CIFAR and ImageNet that MomentumNets have the same accuracy as ResNets, while having a much smaller memory footprint, and show that pre-trained MomentumNets are promising for fine-tuning models.
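The momentum forward rule described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `f` stands in for an arbitrary residual branch, and `gamma` for the momentum term; the update `v ← γv + (1−γ)f(x), x ← x + v` is assumed, which makes each step exactly invertible so activations need not be stored.

```python
import numpy as np

def f(x):
    # Stand-in for a residual branch; any map works for this sketch.
    return np.tanh(x)

gamma = 0.9  # momentum term (hypothetical value)

def forward(x, v):
    """One momentum-residual step: v <- gamma*v + (1-gamma)*f(x); x <- x + v."""
    v = gamma * v + (1 - gamma) * f(x)
    x = x + v
    return x, v

def inverse(x, v):
    """Exact inversion of one step, recomputing f(x) from the recovered x."""
    x = x - v
    v = (v - (1 - gamma) * f(x)) / gamma
    return x, v

# Round-trip check: inverting a forward step recovers the inputs exactly,
# which is what lets activations be recomputed instead of stored.
x0 = np.array([0.5, -1.0])
v0 = np.zeros_like(x0)
x1, v1 = forward(x0, v0)
xr, vr = inverse(x1, v1)
assert np.allclose(xr, x0) and np.allclose(vr, v0)
```

Because the inverse only needs `(x, v)` after the step, backpropagation can reconstruct intermediate activations on the fly, trading a little extra compute for the memory savings the abstract describes.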


Related research

08/12/2021 · m-RevNet: Deep Reversible Neural Networks with Momentum
In recent years, the connections between deep residual networks and firs...

05/29/2022 · Do Residual Neural Networks discretize Neural Ordinary Differential Equations?
Neural Ordinary Differential Equations (Neural ODEs) are the continuous ...

06/10/2020 · Interpolation between Residual and Non-Residual Networks
Although ordinary differential equations (ODEs) provide insights for des...

10/13/2021 · How Does Momentum Benefit Deep Neural Networks Architecture Design? A Few Case Studies
We present and review an algorithmic and theoretical framework for impro...

08/07/2016 · Residual CNDS
Convolutional Neural networks nowadays are of tremendous importance for ...

03/22/2022 · Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum
Most convergence guarantees for stochastic gradient descent with momentu...

09/12/2017 · Reversible Architectures for Arbitrarily Deep Residual Neural Networks
Recently, deep residual networks have been successfully applied in many ...
