Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss

02/11/2020
by Lenaïc Chizat, et al.

Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions. In the presence of hidden low-dimensional structure, the resulting margin is independent of the ambient dimension, which leads to strong generalization bounds. In contrast, training only the output layer implicitly solves a kernel support vector machine, which a priori does not enjoy such adaptivity. Our analysis of training is non-quantitative in terms of running time, but we prove computational guarantees in simplified settings by showing equivalences with online mirror descent. Finally, numerical experiments suggest that our analysis describes the practical behavior of two-layer neural networks with ReLU activation well and confirm the statistical benefits of this implicit bias.
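To give a rough sense of the object the abstract refers to, the limit of training both layers can be written schematically as a max-margin problem over a variation-norm (non-Hilbertian) function space, often denoted F_1 in this literature, whereas training only the output layer corresponds to the same problem over the associated RKHS F_2; the exact definitions and normalizations are those of the paper, and the display below is only an informal sketch:

\max_{\|f\|_{\mathcal{F}_1} \le 1} \ \min_{1 \le i \le n} \ y_i f(x_i)
\qquad \text{versus} \qquad
\max_{\|f\|_{\mathcal{F}_2} \le 1} \ \min_{1 \le i \le n} \ y_i f(x_i),

where the first problem captures the implicit bias of training both layers and the second reduces to a kernel support vector machine, the "lazy" regime mentioned above.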

Related research

06/13/2019 · Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
Recent works on implicit regularization have shown that gradient descent...

11/10/2022 · Regression as Classification: Influence of Task Formulation on Neural Network Features
Neural networks can be trained to solve regression problems by using gra...

12/19/2018 · A Note on Lazy Training in Supervised Differentiable Programming
In a series of recent theoretical works, it has been shown that strongly...

10/12/2018 · On the Margin Theory of Feedforward Neural Networks
Past works have shown that, somewhat surprisingly, over-parametrization ...

10/07/2022 · The Asymmetric Maximum Margin Bias of Quasi-Homogeneous Neural Networks
In this work, we explore the maximum-margin bias of quasi-homogeneous ne...

10/06/2021 · On Margin Maximization in Linear and ReLU Networks
The implicit bias of neural networks has been extensively studied in rec...

05/17/2023 · Understanding the Initial Condensation of Convolutional Neural Networks
Previous research has shown that fully-connected networks with small ini...
