On Sparsity in Overparametrised Shallow ReLU Networks

06/18/2020
by   Jaume de Dios, et al.

The analysis of neural network training beyond the linearisation regime remains an outstanding open question, even in the simplest setup of a single hidden layer. The limit of infinitely wide networks provides an appealing route forward through the mean-field perspective, but a key challenge is to bring learning guarantees back to the finite-neuron setting, where practical algorithms operate. Towards closing this gap, in this work we focus on shallow neural networks and study the ability of different regularisation strategies to capture solutions requiring only a finite number of neurons, even in the infinitely wide regime. Specifically, we consider (i) a form of implicit regularisation obtained by injecting noise into training targets [Blanc et al. 19], and (ii) the variation-norm regularisation [Bach 17], which is compatible with the mean-field scaling. Under mild assumptions on the activation function (satisfied for instance by ReLUs), we establish that both schemes are minimised by functions having only a finite number of neurons, irrespective of the amount of overparametrisation. We study the consequences of this property and describe the settings where one form of regularisation is favourable over the other.
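For concreteness, scheme (ii) can be sketched as follows. This is an illustrative formulation in the spirit of the mean-field/variation-norm setup of [Bach 17], not the exact statement of the paper; the measure μ over neuron parameters Θ, the loss ℓ, and the regularisation weight λ are notation introduced here for the sketch.

% Mean-field (infinite-width) shallow ReLU network: a signed measure \mu over
% neuron parameters \theta = (w, b) replaces a finite list of neurons.
f_\mu(x) \;=\; \int_{\Theta} \sigma\big(\langle w, x\rangle + b\big)\, d\mu(\theta),
\qquad \sigma(t) = \max(t, 0).

% Variation-norm regularised risk: the total-variation norm |\mu|(\Theta)
% acts as an \ell_1-type penalty over the continuum of neurons.
\min_{\mu}\ \ \frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\mu(x_i),\, y_i\big)
\;+\; \lambda\, |\mu|(\Theta).

% Sparsity property (informal): under mild assumptions on \sigma, the problem
% admits minimisers that are finite sums of atoms,
%   \mu^\star = \sum_{j=1}^{m} c_j\, \delta_{\theta_j}, \quad m < \infty,
% i.e. functions realised by networks with finitely many neurons, however
% overparametrised the original model is.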


