
On the training dynamics of deep networks with L_2 regularization

by Aitor Lewkowycz et al.

We study the role of L_2 regularization in deep learning, and uncover simple relations between the performance of the model, the L_2 coefficient, the learning rate, and the number of training steps. These empirical relations hold when the network is overparameterized. They can be used to predict the optimal regularization parameter of a given model. In addition, based on these observations we propose a dynamical schedule for the regularization parameter that improves performance and speeds up training. We test these proposals in modern image classification settings. Finally, we show that these empirical relations can be understood theoretically in the context of infinitely wide networks. We derive the gradient flow dynamics of such networks, and compare the role of L_2 regularization in this context with that of linear models.
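The interplay the abstract describes can be made concrete with a toy experiment. The sketch below (all names, data, and constants are illustrative, not taken from the paper) trains a small linear model by gradient descent with an explicit L_2 penalty `lam * ||w||^2 / 2` added to the loss, so the gradient picks up an extra `lam * w` term; varying `lam` shows the familiar shrinkage effect that the paper's scaling relations quantify for deep networks.

```python
import numpy as np

def train_l2(lam, lr=0.1, steps=500, seed=0):
    """Gradient descent on least squares with an L_2 penalty.

    Loss: ||X w - y||^2 / (2 n) + lam * ||w||^2 / 2,
    so the gradient is X^T (X w - y) / n + lam * w.
    All sizes and hyperparameters here are illustrative.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 5))
    true_w = rng.normal(size=5)
    y = X @ true_w
    w = np.zeros(5)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        w -= lr * grad
    return w

# Increasing lam shrinks the learned weights toward zero,
# trading training fit for a smaller-norm solution.
w_unreg = train_l2(lam=0.0)
w_reg = train_l2(lam=1.0)
```

With `lam=0` the iterates converge to the least-squares solution, while a larger `lam` biases them toward the origin; the paper's contribution is to relate the best choice of this coefficient to the learning rate and the number of training steps in the overparameterized regime.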


Related papers:

- The large learning rate phase of deep learning: the catapult mechanism
- DL-Reg: A Deep Learning Regularization Technique using Linear Regression
- Towards Efficient and Data Agnostic Image Classification Training Pipeline for Embedded Systems
- Why neural networks find simple solutions: the many regularizers of geometric complexity
- Implicit Regularization in Deep Learning: A View from Function Space
- Discovering Nonlinear Relations with Minimum Predictive Information Regularization