Don't blame Dataset Shift! Shortcut Learning due to Gradients and Cross Entropy

08/24/2023
by Aahlad Puli, et al.

Common explanations for shortcut learning assume that the shortcut improves prediction under the training distribution but not in the test distribution. Thus, models trained via the typical gradient-based optimization of cross-entropy, which we call default-ERM, utilize the shortcut. However, even when the stable feature determines the label in the training distribution and the shortcut does not provide any additional information, like in perception tasks, default-ERM still exhibits shortcut learning. Why are such solutions preferred when the loss for default-ERM can be driven to zero using the stable feature alone? By studying a linear perception task, we show that default-ERM's preference for maximizing the margin leads to models that depend more on the shortcut than the stable feature, even without overparameterization. This insight suggests that default-ERM's implicit inductive bias towards max-margin is unsuitable for perception tasks. Instead, we develop an inductive bias toward uniform margins and show that this bias guarantees dependence only on the perfect stable feature in the linear perception task. We develop loss functions that encourage uniform-margin solutions, called margin control (MARG-CTRL). MARG-CTRL mitigates shortcut learning on a variety of vision and language tasks, showing that better inductive biases can remove the need for expensive two-stage shortcut-mitigating methods in perception tasks.
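To make the uniform-margin idea concrete, here is a minimal sketch of one way a margin-controlling loss could be written for binary classification in PyTorch. The specific penalty below, which pulls every example's margin toward a shared target value, is an illustrative assumption rather than one of the exact MARG-CTRL losses proposed in the paper; the function name and hyperparameters are hypothetical.

import torch
import torch.nn.functional as F

def margin_control_loss(logits, labels, target_margin=1.0, reg_weight=0.5):
    """Illustrative uniform-margin loss (hypothetical; not the paper's exact MARG-CTRL).

    logits: (batch,) or (batch, 1) raw scores f(x); labels: (batch,) in {0, 1}.
    """
    # Signed margin y * f(x) with y in {-1, +1}.
    signed_margin = (2.0 * labels.float() - 1.0) * logits.squeeze(-1)
    # Standard logistic loss, which on its own biases gradient descent
    # toward max-margin solutions on separable data.
    logistic = F.softplus(-signed_margin).mean()
    # Penalty pulling all margins toward a common target, discouraging
    # solutions that inflate some margins by leaning on the shortcut feature.
    uniformity = ((signed_margin - target_margin) ** 2).mean()
    return logistic + reg_weight * uniformity

A training loop would call this in place of the usual binary cross-entropy with logits; a multiclass analogue could apply the same penalty to the gap between the true-class logit and the largest competing logit.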
