Algorithm-Dependent Generalization Bounds for Overparameterized Deep Residual Networks

10/07/2019
by   Spencer Frei, et al.

The skip-connections used in residual networks have become a standard architecture choice in deep learning due to the improved training stability and generalization performance they provide, although theoretical understanding of this improvement has been limited. In this work, we analyze overparameterized deep residual networks trained by gradient descent following random initialization, and demonstrate that (i) the class of networks learned by gradient descent constitutes a small subset of the entire neural network function class, and (ii) this subclass of networks is sufficiently large to guarantee small training error. By showing (i), we demonstrate that deep residual networks trained with gradient descent have a small generalization gap between training and test error, and together with (ii) this guarantees that the test error will be small. Our optimization and generalization guarantees require overparameterization that is only logarithmic in the depth of the network, whereas all known generalization bounds for deep non-residual networks require overparameterization that is at least polynomial in the depth. This provides an explanation for why residual networks are preferable to non-residual ones.
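
To make the setting concrete, below is a minimal NumPy sketch of the kind of deep residual network discussed here: each layer adds a scaled ReLU update to its own input through a skip-connection, starting from random Gaussian initialization. The width, depth, initialization variances, and residual scaling factor tau are illustrative assumptions, not the paper's exact construction, and the sketch shows only the forward pass, not the gradient-descent analysis.

```python
import numpy as np

def init_residual_net(d, m, L, rng):
    """Random Gaussian initialization (the paper studies gradient descent
    from such a start). Variances here are illustrative assumptions."""
    W_in = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))            # input layer
    W = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, m)) for _ in range(L)]
    w_out = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m,))             # output layer
    return W_in, W, w_out

def forward(x, W_in, W, w_out, tau):
    """Residual forward pass: h <- h + tau * relu(W_l h) at each layer.
    The identity term is the skip-connection; tau is a small scaling
    that keeps very deep networks stable."""
    h = np.maximum(W_in @ x, 0.0)
    for W_l in W:
        h = h + tau * np.maximum(W_l @ h, 0.0)   # skip-connection: identity + update
    return w_out @ h

rng = np.random.default_rng(0)
d, m, L = 10, 256, 50      # input dim, width, depth (all illustrative)
tau = 1.0 / L              # scaling shrinks with depth; a common convention
params = init_residual_net(d, m, L, rng)
x = rng.normal(size=d)
x /= np.linalg.norm(x)     # unit-norm input
print(forward(x, *params, tau))
```

Because each layer only perturbs the identity map by a tau-scaled update, the hidden representation stays close to its initialization at every depth, which is the intuition behind why the overparameterization needed here grows only logarithmically with depth.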
