A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network

02/04/2021
by Mo Zhou, et al.

While over-parameterization is widely believed to be crucial to the success of optimization for neural networks, most existing theories of over-parameterization do not fully explain why: they either work in the Neural Tangent Kernel regime, where neurons do not move much, or require an enormous number of neurons. In practice, when the data is generated by a teacher neural network, even mildly over-parameterized neural networks can achieve zero loss and recover the directions of the teacher neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural networks. We show that, as long as the loss is already lower than a threshold (polynomial in the relevant parameters), all student neurons in an over-parameterized two-layer neural network will converge to one of the teacher neurons, and the loss will go to zero. Our result holds for any number of student neurons as long as it is at least the number of teacher neurons, and our convergence rate is independent of the number of student neurons. A key component of our analysis is a new characterization of the local optimization landscape: we show that the gradient satisfies a special case of the Łojasiewicz property, which differs from the local strong convexity or PL conditions used in previous work.
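For context, the Polyak-Łojasiewicz (PL) condition and the more general Łojasiewicz inequality can be contrasted as follows (a standard formulation; the specific exponent established in the paper is not stated in this abstract):

    PL condition:           \|\nabla L(\theta)\|^2 \ge 2\mu \, (L(\theta) - L^*)
    Łojasiewicz inequality: \|\nabla L(\theta)\| \ge c \, (L(\theta) - L^*)^{\beta}, \quad \beta \in (0, 1]

The PL condition is the special case \beta = 1/2; a larger exponent gives a weaker lower bound on the gradient near the optimum, which is why it calls for a different local convergence argument than strong convexity or PL.

The teacher-student setting can be illustrated with a minimal sketch, assuming ReLU activation, unit second-layer weights, plain gradient descent on the squared loss, and the specific sizes chosen below; none of these details are fixed by the abstract above.

import numpy as np

rng = np.random.default_rng(0)
d, m_teacher, m_student, n = 10, 4, 6, 2000   # mild over-parameterization: m_student >= m_teacher

# Teacher network: fixed unit-norm ReLU neurons that generate the labels.
W_star = rng.standard_normal((m_teacher, d))
W_star /= np.linalg.norm(W_star, axis=1, keepdims=True)
X = rng.standard_normal((n, d))
y = np.maximum(X @ W_star.T, 0.0).sum(axis=1)

# Student network: m_student >= m_teacher ReLU neurons trained on 0.5 * mean squared loss.
W = 0.5 * rng.standard_normal((m_student, d))
lr = 0.05
for step in range(3001):
    pre = X @ W.T                                    # pre-activations, shape (n, m_student)
    resid = np.maximum(pre, 0.0).sum(axis=1) - y     # residual of the student prediction
    grad = ((resid[:, None] * (pre > 0)).T @ X) / n  # gradient of the loss w.r.t. each student neuron
    W -= lr * grad
    if step % 1000 == 0:
        print(step, 0.5 * np.mean(resid ** 2))

# With mild over-parameterization the loss should drop toward 0, and each student neuron's
# direction W[j] / ||W[j]|| should align with the direction of some teacher neuron W_star[k].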

research 05/30/2022
Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods
While deep learning has outperformed other methods for various tasks, th...

research 10/04/2020
Understanding How Over-Parametrization Leads to Acceleration: A case of learning a single teacher neuron
Over-parametrization has become a popular technique in deep learning. It...

research 04/09/2020
Orthogonal Over-Parameterized Training
The inductive bias of a neural network is largely determined by the arch...

research 07/23/2019
Trainability and Data-dependent Initialization of Over-parameterized ReLU Neural Networks
A neural network is said to be over-specified if its representational po...

research 06/01/2020
The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks
We study the effects of mild over-parameterization on the optimization l...

research 01/19/2023
Convergence beyond the over-parameterized regime using Rayleigh quotients
In this paper, we present a new strategy to prove the convergence of dee...

research 09/30/2019
Over-parameterization as a Catalyst for Better Generalization of Deep ReLU network
To analyze deep ReLU network, we adopt a student-teacher setting in whic...
