- How isotropic kernels learn simple invariants
  We investigate how the training curve of isotropic kernel methods depends...
- Disentangling feature and lazy learning in deep neural networks: an empirical study
  Two distinct limits for deep learning as the net width h→∞ have been pro...
- Asymptotic learning curves of kernel methods: empirical data vs. Teacher-Student paradigm
  How many training data are needed to learn a supervised task? It is ofte...
- Scaling description of generalization with number of parameters in deep learning
  We provide a description for the evolution of the generalization perform...
- A jamming transition from under- to over-parametrization affects loss landscape and generalization
  We argue that in fully-connected networks a phase transition delimits th...
- The jamming transition as a paradigm to understand the loss landscape of deep neural networks
  Deep learning has been immensely successful at a variety of tasks, rangi...

Stefano Spigler
