
Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions
We study the dynamics of optimization and the generalization properties ...

Understanding How Over-Parametrization Leads to Acceleration: A Case of Learning a Single Teacher Neuron
Overparametrization has become a popular technique in deep learning. It...

Orthogonal Over-Parameterized Training
The inductive bias of a neural network is largely determined by the arch...

The Effects of Mild Overparameterization on the Optimization Landscape of Shallow ReLU Neural Networks
We study the effects of mild overparameterization on the optimization l...

Optimal Rate of Convergence for Deep Neural Network Classifiers under the Teacher-Student Setting
Classifiers built with neural networks handle large-scale, high-dimension...

Overparameterization as a Catalyst for Better Generalization of Deep ReLU network
To analyze deep ReLU networks, we adopt a student-teacher setting in whic...

Trainability and Data-dependent Initialization of Overparameterized ReLU Neural Networks
A neural network is said to be overspecified if its representational po...
A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Networks
While over-parameterization is widely believed to be crucial to the success of optimizing neural networks, most existing theories of over-parameterization do not fully explain why: they either work in the Neural Tangent Kernel regime, where neurons barely move, or require an enormous number of neurons. In practice, when the data is generated by a teacher neural network, even mildly over-parameterized networks can achieve zero loss and recover the directions of the teacher neurons. In this paper we develop a local convergence theory for mildly over-parameterized two-layer neural networks. We show that as long as the loss is already below a threshold (polynomial in the relevant parameters), every student neuron in the over-parameterized two-layer network converges to one of the teacher neurons, and the loss goes to zero. Our result holds for any number of student neurons that is at least the number of teacher neurons, and our convergence rate is independent of the number of student neurons. A key component of our analysis is a new characterization of the local optimization landscape: we show the gradient satisfies a special case of the Łojasiewicz property, which differs from the local strong convexity or PL conditions used in previous work.
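The teacher–student setting the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's construction: all dimensions, the learning rate, and the step count are assumptions chosen for a toy run. Labels come from a fixed teacher two-layer ReLU network with k neurons, and a mildly over-parameterized student (m > k neurons, same architecture) is trained with plain gradient descent on the squared loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
d, k, m = 5, 2, 6   # input dim, teacher neurons, student neurons (m >= k)
n = 200             # training samples

# Teacher network: f*(x) = sum_j relu(w*_j . x), unit-norm teacher directions.
W_teacher = rng.standard_normal((k, d))
W_teacher /= np.linalg.norm(W_teacher, axis=1, keepdims=True)

X = rng.standard_normal((n, d))
y = np.maximum(X @ W_teacher.T, 0).sum(axis=1)

# Mildly over-parameterized student with the same architecture.
W = 0.5 * rng.standard_normal((m, d))

def loss_and_grad(W):
    pre = X @ W.T                    # (n, m) pre-activations
    act = np.maximum(pre, 0)         # ReLU
    resid = act.sum(axis=1) - y      # residual per sample
    loss = 0.5 * np.mean(resid ** 2)
    # dL/dW_j = mean_i resid_i * 1[pre_ij > 0] * x_i
    grad = ((resid[:, None] * (pre > 0)).T @ X) / n
    return loss, grad

lr = 0.05
loss0, _ = loss_and_grad(W)
for _ in range(3000):
    loss, grad = loss_and_grad(W)
    W -= lr * grad

print(f"initial loss {loss0:.4f}, final loss {loss:.6f}")
```

On such toy instances the student loss typically drops far below its initial value, consistent with the abstract's claim that mildly over-parameterized students can fit teacher-generated data; the paper's actual contribution is the local convergence guarantee once the loss is below the stated threshold, which this sketch does not prove.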