
Implicit bias of deep linear networks in the large learning rate phase

11/25/2020
by Wei Huang, et al.

Correctly choosing a learning rate (scheme) for gradient-based optimization is vital in deep learning, since different learning rates may lead to considerable differences in optimization and generalization. As recently found by Lewkowycz et al. <cit.>, there is a large-stepsize learning rate phase, named the catapult phase, in which the loss grows during the early stage of training and optimization eventually converges to a flatter minimum with better generalization. While this phenomenon has been established for deep neural networks with mean squared loss, it remains an open question whether logistic (cross-entropy) loss also exhibits a catapult phase and enjoys better generalization. This work answers the question by studying deep linear networks with logistic loss. We find that the large learning rate phase is closely related to the separability of the data: non-separable data gives rise to the catapult phase, so that a flatter minimum can be reached in this learning rate phase. We demonstrate empirically that this interpretation carries over to realistic settings on the MNIST and CIFAR10 datasets, where the optimal performance is often found in this large learning rate phase.
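
A rough illustration of the phenomenon described in the abstract (not the authors' code): the sketch below trains a small deep linear network with logistic loss on synthetic non-separable binary data at one small and one large step size, and reports whether the loss grows during the first training steps before settling down, which is the signature of the catapult phase. The data model, network sizes, and the two learning rates are illustrative assumptions; the step sizes that actually trigger a catapult (or cause divergence) depend on the data and the architecture.

# Minimal sketch in NumPy; all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic non-separable binary data: two overlapping Gaussian clusters.
n, d = 200, 10
X = np.concatenate([rng.normal(+0.3, 1.0, (n // 2, d)),
                    rng.normal(-0.3, 1.0, (n // 2, d))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

def train(lr, depth=3, width=64, steps=500):
    # Deep linear network: f(x) = W_L ... W_1 x, no nonlinearities.
    sizes = [d] + [width] * (depth - 1) + [1]
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(m), (m, k))
          for m, k in zip(sizes[:-1], sizes[1:])]
    losses = []
    for _ in range(steps):
        # Forward pass, keeping intermediate activations for backprop.
        acts = [X]
        for W in Ws:
            acts.append(acts[-1] @ W)
        logits = acts[-1].ravel()
        margins = y * logits
        # Logistic (cross-entropy) loss, computed in a numerically stable way.
        losses.append(np.mean(np.logaddexp(0.0, -margins)))
        # Backward pass: gradient of the loss w.r.t. the logits, then each layer.
        grad_out = (-y / (1.0 + np.exp(np.clip(margins, -30, 30))) / n)[:, None]
        grads = []
        for i in reversed(range(len(Ws))):
            grads.append(acts[i].T @ grad_out)   # gradient w.r.t. Ws[i]
            grad_out = grad_out @ Ws[i].T        # gradient w.r.t. acts[i]
        for W, g in zip(Ws, reversed(grads)):
            W -= lr * g                          # plain gradient descent step
    return losses

# Compare a small and a large step size; values are illustrative only.
for lr in (0.5, 50.0):
    losses = train(lr)
    print(f"lr={lr:>5}: initial loss {losses[0]:.3f}, "
          f"max over first 20 steps {max(losses[:20]):.3f}, "
          f"final loss {losses[-1]:.3f}")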


03/04/2020
The large learning rate phase of deep learning: the catapult mechanism
The choice of initial learning rate can have a profound effect on the pe...

05/15/2020
Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems
Learning rate schedule can significantly affect generalization performan...

02/21/2020
The Break-Even Point on Optimization Trajectories of Deep Neural Networks
The early phase of training of deep neural networks is critical for thei...

06/05/2018
On layer-level control of DNN training and its impact on generalization
The generalization ability of a neural network depends on the optimizati...

02/09/2022
Optimal learning rate schedules in high-dimensional non-convex optimization problems
Learning rate schedules are ubiquitously used to speed up and improve op...

05/27/2021
Training With Data Dependent Dynamic Learning Rates
Recently many first and second order variants of SGD have been proposed ...

10/18/2019
Scheduling the Learning Rate via Hypergradients: New Insights and a New Algorithm
We study the problem of fitting task-specific learning rate schedules fr...