Beyond the storage capacity: data driven satisfiability transition

05/20/2020
by Pietro Rotondo, et al.

Data structure has a dramatic impact on the properties of neural networks, yet its significance in established theoretical frameworks is poorly understood. Here we compute the Vapnik-Chervonenkis entropy of a kernel machine operating on data grouped into equally labelled subsets. At variance with the unstructured scenario, the entropy is non-monotonic in the size of the training set and displays an additional critical point besides the storage capacity. Remarkably, the same behavior occurs in margin classifiers even with randomly labelled data, as is elucidated by identifying the synaptic volume encoding the transition. These findings reveal aspects of expressivity lying beyond the condensed description provided by the storage capacity, and they indicate the path towards more realistic bounds for the generalization error of neural networks.
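The satisfiability transition at issue can be probed numerically: a labelled dataset is learnable by a linear classifier exactly when some weight vector satisfies every margin constraint, which is a linear-programming feasibility problem. The Python sketch below is illustrative only, not the paper's computation; the Gaussian-cluster model of "equally labelled subsets", the unit-margin convention, and all function names are assumptions made here. For unstructured i.i.d. data (k = 1) it recovers Cover's classical result that the separable fraction drops from 1 to 0 near the storage capacity alpha_c = 2.

    import numpy as np
    from scipy.optimize import linprog

    def is_separable(X, y):
        # Feasibility LP: does a weight vector w exist with
        # y_i * (w . x_i) >= 1 for every sample i?
        n, d = X.shape
        A_ub = -(y[:, None] * X)   # constraints rewritten as A_ub @ w <= b_ub
        b_ub = -np.ones(n)
        res = linprog(np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                      bounds=(None, None), method="highs")
        return res.success         # True iff a feasible w was found

    def fraction_separable(d, alpha, k=1, noise=0.3, trials=50, seed=0):
        # Fraction of random instances at load alpha = (number of subsets)/d
        # that are linearly separable. Each subset is k noisy copies of one
        # Gaussian center, all sharing a single random label; k = 1 is the
        # unstructured case. The cluster model is an assumption, not the
        # paper's construction.
        rng = np.random.default_rng(seed)
        p = int(alpha * d)
        hits = 0
        for _ in range(trials):
            centers = rng.standard_normal((p, d))
            labels = rng.choice([-1.0, 1.0], size=p)
            X = (centers[:, None, :]
                 + noise * rng.standard_normal((p, k, d))).reshape(p * k, d)
            y = np.repeat(labels, k)
            hits += is_separable(X, y)
        return hits / trials

    # For k = 1 the separable fraction falls from ~1 to ~0 around alpha = 2;
    # repeating the sweep with k > 1 probes how grouped labels move the
    # transition.
    for alpha in (1.0, 1.5, 2.0, 2.5, 3.0):
        print(alpha, fraction_separable(d=40, alpha=alpha, k=1))

Sweeping alpha with k > 1 is one way to see how grouped labels shift the transition; the additional critical point described above is derived analytically in the paper, not by this kind of brute-force scan.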


Related research

08/15/2020 - The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Modern deep learning models employ considerably more parameters than req...

01/12/2022 - On neural network kernels and the storage capacity problem
In this short note, we reify the connection between work on the storage ...

07/05/2023 - Absorbing Phase Transitions in Artificial Deep Neural Networks
Theoretical understanding of the behavior of infinitely-wide neural netw...

11/12/2021 - Can neural networks predict dynamics they have never seen?
Neural networks have proven to be remarkably successful for a wide range...

08/02/2021 - Generalization Properties of Stochastic Optimizers via Trajectory Analysis
Despite the ubiquitous use of stochastic optimization algorithms in mach...

08/18/2006 - Parametrical Neural Networks and Some Other Similar Architectures
A review of works on associative neural networks accomplished during las...

03/10/2022 - Transition to Linearity of Wide Neural Networks is an Emerging Property of Assembling Weak Models
Wide neural networks with linear output layer have been shown to be near...
