Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions

06/27/2020
by Stefano Sarao Mannelli, et al.

We study the dynamics of optimization and the generalization properties of one-hidden-layer neural networks with quadratic activation function in the over-parametrized regime, where the layer width m is larger than the input dimension d. We consider a teacher-student scenario in which the teacher has the same structure as the student but a hidden layer of smaller width m^* < m. We describe how the empirical loss landscape is affected by the number n of data samples and by the width m^* of the teacher network. In particular, we determine how the probability that there are no spurious minima in the empirical loss depends on n, d, and m^*, thereby establishing conditions under which the neural network can in principle recover the teacher. We also show that, under the same conditions, gradient descent dynamics on the empirical loss converges and leads to small generalization error, i.e., it enables recovery in practice. Finally, we characterize the convergence rate in time of gradient descent in the limit of a large number of samples. These results are confirmed by numerical experiments.
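As an illustration of the setting, the following is a minimal sketch of the teacher-student setup with quadratic activations, trained by full-batch gradient descent on the empirical square loss. The widths, sample size, learning rate, unit second-layer weights, and 1/m output scaling used here are illustrative assumptions, not the conventions or experimental values of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's experimental values):
# input dim d, teacher width m* < m, student width m > d, sample size n
d, m_star, m, n = 20, 3, 40, 2000

def forward(W, X):
    # One-hidden-layer network with quadratic activation and unit second-layer
    # weights (an assumption): f(x) = (1/width) * sum_j (w_j . x)^2
    return np.mean((X @ W.T) ** 2, axis=1)

# Teacher of width m* generates the labels
W_teacher = rng.standard_normal((m_star, d))
X_train = rng.standard_normal((n, d))
y_train = forward(W_teacher, X_train)

# Over-parametrized student trained by full-batch gradient descent
W = 0.1 * rng.standard_normal((m, d))
lr = 1e-2
for step in range(5000):
    err = forward(W, X_train) - y_train          # residuals on the training set
    # Gradient of the empirical loss (1/2n) sum_i (f(x_i) - y_i)^2 w.r.t. W,
    # using df/dw_j = (2/m) (w_j . x) x
    grad = (2.0 / m) * ((err[:, None] * (X_train @ W.T)).T @ X_train) / n
    W -= lr * grad

# Generalization error: compare student and teacher on fresh samples
X_test = rng.standard_normal((10000, d))
gen_err = np.mean((forward(W, X_test) - forward(W_teacher, X_test)) ** 2)
print(f"generalization (test MSE): {gen_err:.3e}")

In the regime sketched above (m > d with enough samples), one would expect the printed test error to become small, while with too few samples or a poorly chosen learning rate the descent may stall or diverge; the paper makes the sample-size conditions precise.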

Related research

06/18/2019
Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup
Deep neural networks achieve stellar generalisation even when they have ...

12/03/2019
Stationary Points of Shallow Neural Networks with Quadratic Activation Function
We consider the problem of learning shallow neural networks with quadrat...

11/08/2022
Finite Sample Identification of Wide Shallow Neural Networks with Biases
Artificial neural networks are functions depending on a finite number of...

08/07/2023
The Copycat Perceptron: Smashing Barriers Through Collective Learning
We characterize the equilibrium properties of a model of y coupled binar...

10/04/2020
Understanding How Over-Parametrization Leads to Acceleration: A case of learning a single teacher neuron
Over-parametrization has become a popular technique in deep learning. It...

02/15/2023
Spatially heterogeneous learning by a deep student machine
Despite the spectacular successes, deep neural networks (DNN) with a hug...

07/16/2017
Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
In this paper we study the problem of learning a shallow artificial neur...
