Implicit bias of deep linear networks in the large learning rate phase

11/25/2020
by Wei Huang, et al.

Correctly choosing a learning rate (scheme) for gradient-based optimization is vital in deep learning, since different learning rates can lead to considerable differences in optimization and generalization. As recently found by Lewkowycz et al. <cit.>, there is a large-learning-rate phase, named the catapult phase, in which the loss grows at the early stage of training and optimization eventually converges to a flatter minimum with better generalization. While this phenomenon holds for deep neural networks trained with the mean squared loss, it is an open question whether the logistic (cross-entropy) loss also exhibits a catapult phase and enjoys better generalization. This work answers this question by studying deep linear networks with the logistic loss. We find that the large learning rate phase is closely related to the separability of the data: non-separable data gives rise to the catapult phase, and thus a flatter minimum can be reached in this learning rate phase. We demonstrate empirically that this interpretation carries over to real settings on the MNIST and CIFAR10 datasets, where the optimal performance is often found in this large learning rate phase.
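For intuition, below is a minimal sketch (not the authors' code) of the setting the abstract describes: a deep linear network trained by full-batch gradient descent with the logistic loss on non-separable synthetic data, run at several step sizes so the early-training loss behaviour can be compared. The data, layer widths, step sizes, and function names are all illustrative assumptions; exact catapult thresholds depend on the data and the initialization.

```python
# Minimal sketch, not the paper's implementation: deep linear network,
# logistic loss, full-batch gradient descent, several step sizes.
# A large enough step size typically produces an early rise in the loss
# before convergence; too large a step diverges.
import numpy as np

rng = np.random.default_rng(0)

# Non-separable 2-D data: two heavily overlapping Gaussian blobs, labels {0, 1}.
n = 200
X = np.vstack([rng.normal(-0.3, 1.0, size=(n // 2, 2)),
               rng.normal(+0.3, 1.0, size=(n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def forward(params, X):
    """Deep linear network: a plain product of weight matrices, no nonlinearity."""
    h = X
    for W in params:
        h = h @ W
    return h[:, 0]  # logits, shape (n,)

def logistic_loss(params, X, y):
    z = forward(params, X)
    # Numerically stable binary cross-entropy with logits.
    return np.mean(np.maximum(z, 0.0) - z * y + np.log1p(np.exp(-np.abs(z))))

def gradients(params, X, y):
    """Backpropagation through the chain of linear layers for the logistic loss."""
    acts = [X]
    for W in params:
        acts.append(acts[-1] @ W)
    z = np.clip(acts[-1][:, 0], -30.0, 30.0)                  # clip for a stable sigmoid
    delta = ((1.0 / (1.0 + np.exp(-z)) - y) / len(y))[:, None]  # dL/dlogits
    grads = [None] * len(params)
    for i in reversed(range(len(params))):
        grads[i] = acts[i].T @ delta
        delta = delta @ params[i].T
    return grads

def train(lr, steps=300, widths=(2, 8, 8, 1), seed=1):
    init_rng = np.random.default_rng(seed)  # same initialization for every step size
    params = [init_rng.normal(0.0, widths[i] ** -0.5, size=(widths[i], widths[i + 1]))
              for i in range(len(widths) - 1)]
    losses = []
    for _ in range(steps):
        losses.append(logistic_loss(params, X, y))
        grads = gradients(params, X, y)
        params = [W - lr * g for W, g in zip(params, grads)]
    return losses

for lr in (0.5, 5.0, 20.0):  # small to large step sizes (illustrative values)
    losses = train(lr)
    print(f"lr={lr:5.1f}  initial={losses[0]:.4f}  peak={max(losses):.4f}  final={losses[-1]:.4f}")
```

Comparing the printed peak loss against the initial loss for each step size gives a rough picture of whether the run exhibits the early loss increase associated with the catapult phase on this toy non-separable data.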


Related research

03/04/2020 - The large learning rate phase of deep learning: the catapult mechanism
The choice of initial learning rate can have a profound effect on the pe...

02/17/2021 - Training Aware Sigmoidal Optimizer
Proper optimization of deep neural networks is an open research question...

05/15/2020 - Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
Learning rate schedule can significantly affect generalization performan...

06/05/2018 - On layer-level control of DNN training and its impact on generalization
The generalization ability of a neural network depends on the optimizati...

01/18/2023 - Catapult Dynamics and Phase Transitions in Quadratic Nets
Neural networks trained with gradient descent can undergo non-trivial ph...

05/27/2021 - Training With Data Dependent Dynamic Learning Rates
Recently many first and second order variants of SGD have been proposed ...

10/18/2019 - Scheduling the Learning Rate via Hypergradients: New Insights and a New Algorithm
We study the problem of fitting task-specific learning rate schedules fr...
