Predicting Grokking Long Before it Happens: A look into the loss landscape of models which grok

This paper focuses on predicting the occurrence of grokking in neural networks, a phenomenon in which perfect generalization emerges long after signs of overfitting or memorization are observed. It has been reported that grokking appears only under certain hyper-parameter settings, which makes identifying the hyper-parameters that lead to it critical. However, since grokking occurs only after a large number of epochs, searching for those hyper-parameters is time-consuming. In this paper, we propose a low-cost method for predicting grokking without training for a large number of epochs. In essence, we show that by studying the learning curve of the first few epochs, one can predict whether grokking will occur later on. Specifically, if certain oscillations appear in the early epochs, one can expect grokking to follow when the model is trained for a much longer period. To detect the presence of such oscillations, we propose using the spectral signature of the learning curve, obtained by applying the Fourier transform and quantifying the amplitude of its low-frequency components. We also present additional experiments aimed at explaining the cause of these oscillations and characterizing the loss landscape.
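The detection recipe described in the abstract lends itself to a short sketch. The snippet below is a minimal illustration, assuming only NumPy and a one-dimensional array of per-epoch training losses; the linear detrend, the 10% frequency cutoff, and the function name `low_frequency_signature` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def low_frequency_signature(loss_curve, cutoff_fraction=0.1):
    """Fraction of spectral amplitude in the low-frequency band of a
    learning curve. Hypothetical sketch of the abstract's idea: Fourier
    transform the early-epoch losses and measure how much energy sits
    in slow oscillations."""
    loss = np.asarray(loss_curve, dtype=float)

    # Remove the overall downward trend so the spectrum reflects
    # oscillations rather than the monotone decrease of the loss.
    epochs = np.arange(len(loss))
    trend = np.polyval(np.polyfit(epochs, loss, deg=1), epochs)
    detrended = loss - trend

    # Real-valued FFT; drop the DC component at index 0.
    spectrum = np.abs(np.fft.rfft(detrended))[1:]

    # Share of amplitude in the lowest `cutoff_fraction` of frequencies;
    # large values suggest pronounced slow oscillations.
    k = max(1, int(cutoff_fraction * len(spectrum)))
    return spectrum[:k].sum() / spectrum.sum()
```

Under these assumptions, a loss curve with pronounced slow oscillations in its first few epochs yields a higher signature than a smooth one; a threshold calibrated on a handful of reference runs could then flag which hyper-parameter settings are worth training for many more epochs.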
