Rademacher complexity and spin glasses: A link between the replica and statistical theories of learning

by Alia Abbara et al.

Statistical learning theory provides bounds on the generalization gap, in particular via the Vapnik-Chervonenkis dimension and the Rademacher complexity. An alternative approach, mainly studied in the statistical physics literature, is the study of generalization in simple synthetic-data models. Here we discuss the connections between these approaches and focus on the link between the Rademacher complexity in statistical learning and the theories of generalization for typical-case synthetic models from statistical physics, involving quantities known as the Gardner capacity and the ground-state energy. We show that in these models the Rademacher complexity is closely related to the ground-state energy computed by replica theories. Using this connection, one may reinterpret many results of the literature as rigorous Rademacher bounds in a variety of models in the high-dimensional statistics limit. Somewhat surprisingly, we also show that statistical learning theory provides predictions for the behavior of the ground-state energies in some full replica symmetry breaking models.
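To make the central quantity concrete, the sketch below estimates the empirical Rademacher complexity by Monte Carlo for the simple class of unit-norm linear classifiers on Gaussian synthetic data (this example class and the sampling setup are illustrative assumptions, not taken from the paper). For this class the supremum over the unit ball has a closed form by Cauchy-Schwarz, so only the average over random sign vectors needs to be sampled.

```python
import numpy as np

def empirical_rademacher(X, n_sigma=200, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    unit-norm linear classifiers on data X of shape (n, d):

        R_n = E_sigma sup_{||w|| <= 1} (1/n) sum_i sigma_i <w, x_i>,

    where sigma_i are i.i.d. uniform +/-1 signs. By Cauchy-Schwarz the
    supremum equals || (1/n) sum_i sigma_i x_i ||_2, so each sign draw
    reduces to one vector norm.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    sigma = rng.choice([-1.0, 1.0], size=(n_sigma, n))  # random sign vectors
    sups = np.linalg.norm(sigma @ X / n, axis=1)        # sup per sign draw
    return sups.mean()

# Gaussian synthetic data: the estimate concentrates around sqrt(d/n),
# the familiar parametric rate for this class.
rng = np.random.default_rng(1)
n, d = 2000, 50
X = rng.standard_normal((n, d))
est = empirical_rademacher(X)
```

In the high-dimensional limit studied in the paper (n and d large at fixed ratio), this same average over random labels is what the replica computation of the ground-state energy gives access to.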

