Gradient Starvation: A Learning Proclivity in Neural Networks

11/18/2020
by Mohammad Pezeshki, et al.

We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
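The regularizer described above works by penalizing the magnitude of the network's outputs so that no single feature's logit contribution can dominate and starve the gradients of other features. A minimal sketch of this idea on a linear model follows; the synthetic data, the choice of penalty coefficient, and the logistic model are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Sketch of a logit-norm penalty in the spirit of the paper's decoupling
# regularizer: loss = cross-entropy + (lam / 2) * mean(logits^2).
# Penalizing the logits directly caps how large any one feature's
# contribution can grow, leaving gradient signal for other features.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(float)  # labels in {0, 1}

def train(lam, steps=500, lr=0.1):
    """Gradient descent on logistic loss plus a logit-norm penalty."""
    w = np.zeros(2)
    for _ in range(steps):
        logits = X @ w
        p = 1.0 / (1.0 + np.exp(-logits))        # sigmoid probabilities
        grad_ce = X.T @ (p - y) / len(y)          # cross-entropy gradient
        grad_pen = lam * X.T @ logits / len(y)    # gradient of (lam/2)*mean(logits^2)
        w -= lr * (grad_ce + grad_pen)
    return w

w_plain = train(lam=0.0)  # unregularized cross-entropy
w_reg = train(lam=1.0)    # with the logit-norm penalty

# The penalty keeps logits bounded, so the regularized weight vector
# has a smaller norm than the unregularized one.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))
```

With plain cross-entropy on separable data the logits (and weights) keep growing, which is exactly the regime where the dominant feature saturates the loss; the penalty term halts that growth, which is the mechanism the abstract credits for decoupled feature learning.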
