
Implicit Regularization of Normalization Methods

by Xiaoxia Wu, et al.

Normalization methods such as batch normalization are commonly used in overparametrized models like neural networks. Here, we study the weight normalization (WN) method (Salimans & Kingma, 2016) and a variant called reparametrized projected gradient descent (rPGD) for overparametrized least squares regression and some more general loss functions. WN and rPGD reparametrize the weights with a scale g and a unit vector, making the objective function non-convex. We show that this non-convex formulation has beneficial regularization effects compared with gradient descent on the original objective: these methods adaptively regularize the weights and converge at an exponential rate to the minimum ℓ_2 norm solution (or close to it), even for initializations far from zero. Gradient descent, in contrast, converges to the min norm solution only when started at zero, and is more sensitive to initialization. Some of our proof techniques differ from those in related work; for instance, we find explicit invariants along the gradient flow paths. We verify our results experimentally and suggest that a similar phenomenon may hold for nonlinear problems such as matrix sensing.
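The effect described in the abstract can be sketched numerically. The following is our own minimal illustration, not the paper's code: plain gradient descent on the weight-normalization reparametrization w = g·v/‖v‖ of an overparametrized least-squares problem, started away from zero, with the result compared against the minimum ℓ_2 norm interpolator. All variable names, dimensions, and hyperparameters here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 40                      # overparametrized: more weights than data points
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum l2-norm interpolating solution, for reference
w_mn = X.T @ np.linalg.solve(X @ X.T, y)

# Weight normalization: w = g * v / ||v||, trained by gradient descent
v = rng.standard_normal(d)         # initialization far from zero
g = 1.0
lr = 1e-3
for _ in range(200_000):
    vn = np.linalg.norm(v)
    w = g * v / vn
    grad_w = X.T @ (X @ w - y)     # gradient of 0.5 * ||X w - y||^2 w.r.t. w
    grad_g = (v / vn) @ grad_w     # chain rule: dL/dg
    grad_v = (g / vn) * (grad_w - ((v @ grad_w) / vn**2) * v)  # dL/dv
    g -= lr * grad_g
    v -= lr * grad_v

w_wn = g * v / np.linalg.norm(v)
print("residual:", np.linalg.norm(X @ w_wn - y))
print("distance to min-norm solution:", np.linalg.norm(w_wn - w_mn))
```

Note that grad_v is orthogonal to v, so ‖v‖ is (approximately, in discrete steps) conserved along the trajectory, an instance of the kind of invariant the abstract mentions.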


