Size-free generalization bounds for convolutional neural networks

05/29/2019
by Philip M. Long et al.

We prove bounds on the generalization error of convolutional networks. The bounds are in terms of the training loss, the number of parameters, the Lipschitz constant of the loss, and the distance from the weights to the initial weights. They are independent of the number of pixels in the input and of the height and width of the hidden feature maps; to our knowledge, they are the first bounds for deep convolutional networks with this property. We present experiments on CIFAR-10 and a scaled-down variant in which we vary the hyperparameters of a deep convolutional network and compare our bounds with the generalization gaps observed in practice.
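To make the dependence concrete, a bound of this type has, schematically, the form (this is a paraphrase of the quantities named in the abstract, not the paper's exact theorem)

    generalization gap ≲ Õ( λ · β · √W / √n ),

where n is the number of training examples, W the number of parameters, λ the Lipschitz constant of the loss, and β the distance from the learned weights to their initialization. Notably, no factor depends on the input resolution or on the size of the hidden feature maps.

The two network-dependent quantities, W and β, are straightforward to measure. Below is a minimal sketch, assuming a PyTorch model; the architecture and the names model and init_state are illustrative, not code from the paper.

    import copy
    import torch.nn as nn

    # A small convolutional network; any nn.Module works the same way.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )

    # Snapshot the initial weights before training; deepcopy is needed
    # because state_dict() returns references to the live tensors.
    init_state = copy.deepcopy(model.state_dict())

    # ... train the model here ...

    # W: total number of parameters (the schematic bound scales with sqrt(W)).
    num_params = sum(p.numel() for p in model.parameters())

    # beta: Euclidean distance from the trained weights to initialization.
    dist_sq = sum(
        (p.detach() - init_state[name]).pow(2).sum()
        for name, p in model.named_parameters()
    )
    beta = dist_sq.sqrt().item()

    print(f"W = {num_params}, distance from init = {beta:.4f}")

Nothing in this computation touches the input size: the same W and β are obtained whether the network is applied to 8x8 or 224x224 images, which is the sense in which the bounds are "size-free".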

Related research

- 10/03/2022: Multipod Convolutional Network
  In this paper, we introduce a convolutional network which we call MultiP...
- 12/06/2013: Understanding Deep Architectures using a Recursive Convolutional Network
  A key challenge in designing convolutional network models is sizing them...
- 05/23/2019: Some limitations of norm based generalization bounds in deep neural networks
  Deep convolutional neural networks have been shown to be able to fit a l...
- 01/19/2016: Understanding Deep Convolutional Networks
  Deep convolutional networks provide state of the art classifications and...
- 03/07/2019: Limiting Network Size within Finite Bounds for Optimization
  Largest theoretical contribution to Neural Networks comes from VC Dimens...
- 12/01/2020: Problems of representation of electrocardiograms in convolutional neural networks
  Using electrocardiograms as an example, we demonstrate the characteristi...
- 12/04/2019: Fantastic Generalization Measures and Where to Find Them
  Generalization of deep networks has been of great interest in recent yea...