
Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit
To theoretically understand the behavior of trained deep neural networks...

Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization
Many supervised machine learning methods are naturally cast as optimizat...

Faster Wasserstein Distance Estimation with the Sinkhorn Divergence
The squared Wasserstein distance is a natural quantity to compare probab...

Statistical and Topological Properties of Sliced Probability Divergences
The idea of slicing divergences has been proven to be successful when co...

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) ...

Sparse Optimization on Measures with Overparameterized Gradient Descent
Minimizing a convex function of a measure with a sparsity-inducing penal...

HexaShrink, an exact scalable framework for hexahedral meshes with attributes and discontinuities: multiresolution rendering and storage of geoscience models
With huge data acquisition progresses realized in the past decades and a...

A Note on Lazy Training in Supervised Differentiable Programming
In a series of recent theoretical works, it has been shown that strongly...

Sample Complexity of Sinkhorn Divergences
Optimal transport (OT) and maximum mean discrepancies (MMD) are now rout...

On the Global Convergence of Gradient Descent for Overparameterized Models using Optimal Transport
Many tasks in machine learning and signal processing can be solved by mi...

Quantum Optimal Transport for Tensor Field Processing
This article introduces a new notion of optimal transport (OT) between t...
Lenaïc Chizat