Size-free generalization bounds for convolutional neural networks

05/29/2019
by   Philip M. Long, et al.

We prove bounds on the generalization error of convolutional networks. The bounds are in terms of the training loss, the number of parameters, the Lipschitz constant of the loss, and the distance from the weights to the initial weights. They are independent of the number of pixels in the input and of the height and width of hidden feature maps; to our knowledge, they are the first bounds for deep convolutional networks with this property. We present experiments on CIFAR-10 and a scaled-down variant, varying the hyperparameters of a deep convolutional network and comparing our bounds with empirical generalization gaps.

