
The Gaussian equivalence of generative models for learning with two-layer neural networks
Understanding the impact of data structure on learning in neural network...

Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification
We analyze in a closed form the learning dynamics of stochastic gradient...

The role of regularization in classification of high-dimensional noisy Gaussian mixture
We consider a high-dimensional mixture of two Gaussians in the noisy reg...

Passed & Spurious: analysing descent algorithms and local minima in spiked matrix-tensor model
In this work we analyse quantitatively the interplay between the loss la...

Generalisation error in learning with random features and the hidden manifold model
We study generalised linear regression and classification for a syntheti...

Marvels and Pitfalls of the Langevin Algorithm in Noisy High-dimensional Inference
Gradient-descent-based algorithms and their stochastic versions have wid...

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Deep neural networks can achieve remarkable generalization performances ...

TRAMP: Compositional Inference with TRee Approximate Message Passing
We introduce tramp, standing for TRee Approximate Message Passing, a pyt...

Approximate Survey Propagation for Statistical Inference
The approximate message passing algorithm has enjoyed considerable attention in ...

Principled Training of Neural Networks with Direct Feedback Alignment
The backpropagation algorithm has long been the canonical training metho...

Generalisation dynamics of online learning in over-parameterised neural networks
Deep neural networks achieve stellar generalisation on a variety of prob...

Rademacher complexity and spin glasses: A link between the replica and statistical theories of learning
Statistical learning theory provides bounds of the generalization gap, u...

Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup
Deep neural networks achieve stellar generalisation even when they have ...

Light-in-the-loop: using a photonics co-processor for scalable training of neural networks
As neural networks grow larger and more complex and data-hungry, trainin...

Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or: How to Prove Kabashima's Replica Formula)
There has been a recent surge of interest in the study of asymptotic rec...

Approximate message-passing for convex optimization with non-separable penalties
We introduce an iterative optimization scheme for convex objectives cons...

Modelling the influence of data structure on learning in neural networks
The lack of crisp mathematical models that capture the structure of real...

Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization
We consider a commonly studied supervised classification of a synthetic ...

The committee machine: Computational to statistical gaps in learning a two-layers neural network
Heuristic tools from statistical physics have been used in the past to l...

Exact asymptotics for phase retrieval and compressed sensing with random generative priors
We consider the problem of compressed sensing and of (real-valued) phase...

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Despite being the workhorse of deep learning, the backpropagation algori...

Entropy and mutual information in models of deep neural networks
We examine a class of deep learning models with a tractable method to co...

Fundamental limits of detection in the spiked Wigner model
We study the fundamental limits of detecting the presence of an additive...

Asymptotic errors for convex penalized linear regression beyond Gaussian matrices
We consider the problem of learning a coefficient vector x_0 in R^N from...

Who is Afraid of Big Bad Minima? Analysis of Gradient-Flow in a Spiked Matrix-Tensor Model
Gradient-based algorithms are effective for many machine learning tasks,...

On the Universality of Noiseless Linear Estimation with Respect to the Measurement Matrix
In a noiseless linear estimation problem, one aims to reconstruct a vect...

Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models
We consider generalized linear models (GLMs) where an unknown n-dimensio...

Streaming Bayesian inference: theoretical limits and mini-batch approximate message-passing
In statistical learning for real-world large-scale data problems, one mu...

A Deterministic and Generalized Framework for Unsupervised Learning with Restricted Boltzmann Machines
Restricted Boltzmann machines (RBMs) are energy-based neural networks wh...

Multi-Layer Generalized Linear Estimation
We consider the problem of reconstructing a signal from multi-layered (p...

Phase transitions and optimal algorithms in high-dimensional Gaussian mixture clustering
We consider the problem of Gaussian mixture clustering in the high-dimen...

Inferring Sparsity: Compressed Sensing using Generalized Restricted Boltzmann Machines
In this work, we consider compressed sensing reconstruction from M measu...

Fast Randomized Semi-Supervised Clustering
We consider the problem of clustering partially labeled data from a mini...

Statistical physics of inference: Thresholds and algorithms
Many questions of fundamental interest in today's science can be formulat...

Matrix Completion from Fewer Entries: Spectral Detectability and Rank Estimation
The completion of low-rank matrices from few entries is a task with many...

Training Restricted Boltzmann Machines via the Thouless-Anderson-Palmer Free Energy
Restricted Boltzmann machines are undirected neural networks which have ...

Phase Transitions in Sparse PCA
We study optimal estimation for sparse principal component analysis when...

Approximate Message Passing with Restricted Boltzmann Machine Priors
Approximate Message Passing (AMP) has been shown to be an excellent stat...

Phase transitions and sample complexity in Bayes-optimal matrix factorization
We analyse the matrix factorization problem. Given a noisy measurement o...

Intensity-only optical compressive imaging using a multiply scattering material and a double phase retrieval approach
In this paper, the problem of compressive imaging is addressed using nat...

Spectral redemption: clustering sparse networks
Spectral algorithms are classic approaches to clustering and community d...

Model Selection for Degree-corrected Block Models
The proliferation of models for networks raises challenging problems of ...

Comparative Study for Inference of Hidden Classes in Stochastic Block Models
Inference of hidden classes in the stochastic block model is a classical pro...

The Mutual Information in Random Linear Estimation Beyond i.i.d. Matrices
There has been definite progress recently in proving the variational sin...

Estimation in the Spiked Wigner Model: A Short Proof of the Replica Formula
We consider the problem of estimating the rank-one perturbation of a Wig...

Rank-one matrix estimation: analysis of algorithmic and information theoretic limits by the spatial coupling method
Factorizing low-rank matrices is a problem with many applications in mac...

The spiked matrix model with generative priors
Using a low-dimensional parametrization of signals is a generic and powe...

High-temperature Expansions and Message Passing Algorithms
Improved mean-field techniques are a central theme of statistical physics ...

Kernel computations from large-scale random features obtained by Optical Processing Units
Approximating kernel functions with random features (RFs) has been a succ...

Blind calibration for compressed sensing: State evolution and an online algorithm
Compressed sensing allows one to acquire compressible signals with a small ...
Florent Krzakala
Florent Krzakala is a professor at Sorbonne Université and a researcher at Ecole Normale Superieure
in Paris. His research interests include Statistical Physics, Machine Learning, Statistics, Signal Processing, Computer Science and Computational Optics.
He leads the SPHINX team at Ecole Normale in Paris, holds the CFM-ENS Data Science chair and a PRAIRIE Institute chair, and is a fellow and member of ELLIS. He is also the founder and scientific advisor of the startup LightOn.