Generalization Bounds for Neural Networks: Kernels, Symmetry, and Sample Compression

11/05/2018
by Christopher Snyder, et al.

Though Deep Neural Networks (DNNs) are widely celebrated for their practical performance, they exhibit many intriguing phenomena related to depth that are difficult to explain both theoretically and intuitively. Understanding how the weights of a deep network coordinate across layers to form a useful learner has proven somewhat intractable, in part because of the repeated composition of nonlinearities induced by depth. We present a reparameterization of DNNs as a linear function of a particular feature map that is locally independent of the weights. This feature map transforms depth dependencies into simple tensor products and maps each input to a discrete subset of the feature space. Then, in analogy with logistic regression, we propose a max-margin assumption that enables us to present a so-called sample compression representation of the neural network in terms of the discrete activation states of neurons induced by s "support vectors". We show how the number of support vectors relates to learning guarantees for neural networks through sample compression bounds, yielding a sample complexity of O(ns/ε) for networks with n neurons. Additionally, this number of support vectors depends monotonically on width, depth, and label noise for simple networks trained on the MNIST dataset.
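To make the "linear in a feature map" viewpoint concrete, the following minimal sketch (our own illustration, not the paper's construction) shows that a two-layer ReLU network, once the discrete activation state of its hidden neurons is fixed, reduces to an ordinary linear function of the input whose coefficients combine the layer weights in a product structure. All dimensions and weights below are arbitrary placeholders.

```python
import numpy as np

# Sketch: for a fixed input x, a ReLU network's output is determined by
# (a) the layer weights and (b) the discrete on/off pattern of its neurons.
# Freezing that pattern exposes an ordinary linear map, which is the spirit
# of viewing the network as <weights, feature map(x)>.

rng = np.random.default_rng(0)

d, n_hidden = 5, 8                      # input dimension, hidden width (placeholders)
W1 = rng.normal(size=(n_hidden, d))     # first-layer weights
w2 = rng.normal(size=n_hidden)          # second-layer weights

def forward(x):
    pre = W1 @ x
    act = (pre > 0).astype(float)       # discrete activation state of the neurons
    return w2 @ (act * pre), act        # ReLU output and the pattern that produced it

x = rng.normal(size=d)
out, act = forward(x)

# With the activation pattern frozen, the network equals a plain linear
# function of x whose coefficient vector multiplies the layer weights together
# through the pattern (a simple product across layers).
effective_linear = (w2 * act) @ W1      # shape (d,)
assert np.isclose(out, effective_linear @ x)
print("activation state:", act.astype(int))
print("output:", out)
```

In this toy setting the coefficient vector changes only when the activation state changes, so each input is effectively mapped to a discrete region of feature space, consistent with the abstract's description.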

