Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks

10/04/2022
by Sravanti Addepalli, et al.

Deep Neural Networks (DNNs) are known to be brittle even under minor shifts from the training distribution. While one line of work has demonstrated that the Simplicity Bias (SB) of DNNs - the bias towards learning only the simplest features - is a key reason for this brittleness, another recent line of work has surprisingly found that diverse/complex features are indeed learned by the backbone, and that the brittleness arises because the linear classification head relies primarily on the simplest features. To bridge the gap between these two lines of work, we first hypothesize and verify that while SB may not altogether preclude the learning of complex features, it amplifies simpler features over complex ones: simple features are replicated several times in the learned representations, while complex features might not be replicated at all. We term this phenomenon the Feature Replication Hypothesis. Coupled with the Implicit Bias of SGD to converge to maximum-margin solutions in the feature space, it leads models to rely mostly on the simple features for classification. To mitigate this bias, we propose a Feature Reconstruction Regularizer (FRR) that ensures the learned features can be reconstructed back from the logits. Using FRR in linear-layer training (FRR-L) encourages the use of more diverse features for classification. We further propose to finetune the full network while freezing the weights of the linear layer trained using FRR-L, refining the learned features to make them more suitable for classification. Using this simple solution, we demonstrate up to 15% gains in OOD accuracy on semi-synthetic datasets with extreme distribution shifts. Moreover, we demonstrate noteworthy gains over existing SOTA methods on the standard OOD benchmark DomainBed as well.
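To make the two-stage recipe concrete, here is a minimal PyTorch sketch of the FRR idea: a linear head trained with a reconstruction penalty that forces features to remain recoverable from the logits. The linear decoder, the MSE penalty, and the names used below (FRRLinearHead, frr_loss, lambda_frr) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FRRLinearHead(nn.Module):
    """Linear classifier with a Feature Reconstruction Regularizer (FRR).

    Sketch under assumptions: the decoder mapping logits back to features
    is taken to be linear; the paper's parameterization may differ.
    """

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)  # features -> logits
        self.decoder = nn.Linear(num_classes, feat_dim)     # logits -> features

    def forward(self, features: torch.Tensor):
        logits = self.classifier(features)
        recon = self.decoder(logits)  # features reconstructed from the logits
        return logits, recon

def frr_loss(logits, recon, features, labels, lambda_frr: float = 1.0):
    """Cross-entropy plus a penalty keeping features recoverable from logits."""
    ce = F.cross_entropy(logits, labels)
    rec = F.mse_loss(recon, features)
    return ce + lambda_frr * rec

# Stage 1 (FRR-L): train only the head on features from a frozen backbone.
feat_dim, num_classes = 512, 10        # illustrative sizes
head = FRRLinearHead(feat_dim, num_classes)
features = torch.randn(32, feat_dim)   # stand-in for frozen-backbone features
labels = torch.randint(0, num_classes, (32,))
logits, recon = head(features)
loss = frr_loss(logits, recon, features, labels, lambda_frr=1.0)
loss.backward()

# Stage 2: freeze the FRR-L linear layer and finetune the backbone,
# refining the features while the diverse-feature head stays fixed.
for p in head.classifier.parameters():
    p.requires_grad = False
```

In this reading, the reconstruction term penalizes logits that discard feature directions, which is what nudges the head toward using more diverse features rather than only the replicated simple ones.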

Related research

06/13/2020 · The Pitfalls of Simplicity Bias in Neural Networks
Several works have proposed Simplicity Bias (SB)—the tendency of standar...

05/30/2023 · Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias
Neural networks trained with (stochastic) gradient descent have an induc...

11/21/2022 · Neural networks trained with SGD learn distributions of increasing complexity
The ability of deep neural networks to generalise well even when they in...

01/30/2023 · An adversarial feature learning strategy for debiasing neural networks
Simplicity bias is the concerning tendency of deep networks to over-depe...

02/07/2023 · Delving Deep into Simplicity Bias for Long-Tailed Image Recognition
Simplicity Bias (SB) is a phenomenon that deep neural networks tend to r...

02/01/2023 · Simplicity Bias in 1-Hidden Layer Neural Networks
Recent works have demonstrated that neural networks exhibit extreme simp...

12/13/2022 · Simplicity Bias Leads to Amplified Performance Disparities
The simple idea that not all things are equally difficult has surprising...
