
- When Do Neural Networks Outperform Kernel Methods?
For a certain scaling of the initialization of stochastic gradient desce...
- The generalization error of random features regression: Precise asymptotics and double descent curve
Deep learning methods operate in regimes that defy the traditional stati...
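A minimal Python sketch of the random features regression model referenced in the entry above: a fixed random first layer W, ReLU features applied to Wx, and a second layer fit by ridge regression. The activation, the synthetic linear target, and the values of n, d, N, and the penalty lam are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d, N, lam = 500, 100, 300, 1e-3   # samples, input dim, random features, ridge penalty (illustrative)

# Synthetic data from a simple linear target plus noise (assumption for illustration only).
X = rng.standard_normal((n, d)) / np.sqrt(d)
beta = rng.standard_normal(d)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Fixed random first layer; only the second-layer coefficients are trained.
W = rng.standard_normal((N, d)) / np.sqrt(d)
Z = np.maximum(X @ W.T, 0.0)          # ReLU random features, shape (n, N)

# Ridge regression on the random features.
a = np.linalg.solve(Z.T @ Z + lam * np.eye(N), Z.T @ y)
print("train MSE:", np.mean((Z @ a - y) ** 2))

Sweeping the number of features N across the sample size n in a sketch like this is one way to observe the double descent behaviour that the paper characterizes precisely.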
- Limitations of Lazy Training of Two-layers Neural Networks
We study the supervised learning problem under either of the following t...
- Linearized two-layers neural networks in high dimension
We consider the problem of learning an unknown function f_⋆ on the d-dime...
- Proximal algorithms for constrained composite optimization, with applications to solving low-rank SDPs
We study a family of (potentially non-convex) constrained optimization p...
- Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit
We consider learning two layer neural networks using stochastic gradient...
- TAP free energy, spin glasses, and variational inference
We consider the Sherrington-Kirkpatrick model of spin glasses with ferro...
- A Mean Field View of the Landscape of Two-Layers Neural Networks
Multi-layer neural networks are among the most powerful models in machin...
- The landscape of the spiked tensor model
We consider the problem of estimating a large rank-one tensor u^{⊗ k} ∈ (ℝ...
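A minimal Python sketch of the spiked tensor model referenced in the entry above, for k = 3: observe a rank-one spike plus Gaussian noise and estimate the spike direction by tensor power iteration. The signal-to-noise ratio and the choice of power iteration as the estimator are illustrative assumptions, not the paper's analysis.

import numpy as np

rng = np.random.default_rng(0)
n, snr = 50, 5.0                      # dimension and signal strength (illustrative)

# Planted unit vector u and observation T = snr * u^{⊗3} + Gaussian noise.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
G = rng.standard_normal((n, n, n)) / np.sqrt(n)
T = snr * np.einsum("i,j,k->ijk", u, u, u) + G

# Tensor power iteration: v <- T(., v, v), renormalized at each step.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(100):
    v = np.einsum("ijk,j,k->i", T, v, v)
    v /= np.linalg.norm(v)

print("overlap |<u, v>|:", abs(float(u @ v)))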
- Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality
A number of statistical estimation problems can be addressed by semidefi...
- The Landscape of Empirical Risk for Non-convex Losses
Most high-dimensional estimation and prediction methods propose to minim...