Implicit bias of deep linear networks in the large learning rate phase

11/25/2020
by Wei Huang, et al.

Correctly choosing a learning rate (scheme) for gradient-based optimization is vital in deep learning, since different learning rates may lead to considerable differences in optimization and generalization. As recently found by Lewkowycz et al. <cit.>, there is a large-stepsize learning rate phase, named the catapult phase, in which the loss grows at the early stage of training and optimization eventually converges to a flatter minimum with better generalization. While this phenomenon holds for deep neural networks with mean squared loss, it is an open question whether logistic (cross-entropy) loss also exhibits a catapult phase and enjoys better generalization ability. This work answers this question by studying deep linear networks with logistic loss. We find that the large learning rate phase is closely related to the separability of the data: non-separable data gives rise to the catapult phase, so a flatter minimum can be reached in this learning rate phase. We demonstrate empirically that this interpretation carries over to real settings on the MNIST and CIFAR10 datasets, where the optimal performance is often found in this large learning rate phase.
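The sketch below is not the authors' code; it is a minimal illustration of the setup the abstract describes: a two-layer deep linear network (no nonlinearity) trained with logistic loss on toy non-separable 1-D data, run once with a small learning rate and once with a large one so the early-training loss trajectory can be compared. All names (`W1`, `W2`, `train`), the network width, and the two learning rates are illustrative assumptions; whether a given large rate actually lands in the catapult window depends on the data, width, and initialization.

```python
# Minimal sketch (assumed setup, not the paper's code): deep linear network
# f(x) = W2 W1 x with logistic loss, trained by full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy non-separable data: two overlapping class-conditional Gaussians.
n = 200
x = np.concatenate([rng.normal(-0.5, 1.0, n), rng.normal(0.5, 1.0, n)])[:, None]
y = np.concatenate([-np.ones(n), np.ones(n)])        # labels in {-1, +1}

def logistic_loss(f, y):
    # Numerically stable log(1 + exp(-y f)) averaged over the data.
    return np.logaddexp(0.0, -y * f).mean()

def train(lr, steps=300, width=32):
    # Two linear layers, no activation: a depth-2 linear network.
    W1 = rng.normal(0.0, 1.0, size=(width, 1))
    W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(1, width))
    losses = []
    for _ in range(steps):
        h = x @ W1.T                      # (2n, width) hidden pre-activations
        f = (h @ W2.T).ravel()            # (2n,) network outputs
        losses.append(logistic_loss(f, y))
        # dL/df = -y * sigmoid(-y f) / N, written in a stable form.
        g = -y * np.exp(-np.logaddexp(0.0, y * f)) / len(y)
        grad_out = g[:, None]                         # (2n, 1)
        grad_W2 = grad_out.T @ h                      # (1, width)
        grad_W1 = (grad_out @ W2).T @ x               # (width, 1)
        W1 -= lr * grad_W1
        W2 -= lr * grad_W2
    return losses

# Small vs. large learning rate: in the large-rate regime the loss can grow
# during the first few steps before settling, as in the catapult picture.
for lr in (0.05, 2.0):
    losses = train(lr)
    print(f"lr={lr}: first losses {np.round(losses[:5], 3)}, final {losses[-1]:.3f}")
```

Inspecting the printed loss trajectories (or plotting them) is enough to see the qualitative difference the abstract refers to: monotone decrease at the small rate versus an initial rise followed by convergence when the rate is pushed into the large learning rate phase.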
