MaskTune: Mitigating Spurious Correlations by Forcing to Explore

09/30/2022
by   Saeid Asgari Taghanaki, et al.

A fundamental challenge of over-parameterized deep learning models is learning meaningful data representations that yield good performance on a downstream task without overfitting to spurious input features. This work proposes MaskTune, a masking strategy that prevents over-reliance on spurious (or a limited number of) features. MaskTune forces a trained model to explore new features during a single-epoch fine-tuning stage by masking the features it discovered earlier. Unlike earlier approaches for mitigating shortcut learning, MaskTune requires no supervision, such as annotations of spurious features or labels for subgroup samples in a dataset. Our empirical results on biased MNIST, CelebA, Waterbirds, and ImageNet-9L show that MaskTune is effective on tasks that often suffer from spurious correlations. Finally, we show that MaskTune outperforms or matches competing methods on the selective classification (classification with rejection option) task. Code for MaskTune is available at https://github.com/aliasgharkhani/Masktune.
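The core loop described in the abstract can be illustrated with a minimal sketch: score each input feature by a saliency measure, mask the most salient (potentially spurious) features, and fine-tune for a single epoch on the masked inputs. This toy example uses a logistic-regression scorer with input-gradient saliency (`|x * w|`); the paper itself operates on images with model-derived saliency maps, so the model, saliency method, and masking fraction here are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def saliency_mask(x, w, top_frac=0.1):
    """Zero out the top_frac most salient input features.

    Saliency here is |x * w|, the input-gradient magnitude of a
    linear scorer -- a stand-in for the image saliency maps used
    in the paper.
    """
    sal = np.abs(x * w)
    k = max(1, int(top_frac * x.size))
    top = np.argsort(sal)[::-1][:k]   # indices of most salient features
    mask = np.ones_like(x)
    mask[top] = 0.0                   # hide previously discovered features
    return mask

def masktune(X, y, w, lr=0.1, top_frac=0.1):
    """Single-epoch fine-tuning on masked inputs (SGD, logistic loss)."""
    for xi, yi in zip(X, y):
        xm = xi * saliency_mask(xi, w, top_frac)
        p = 1.0 / (1.0 + np.exp(-(w @ xm)))
        w = w - lr * (p - yi) * xm    # masked features get zero gradient
    return w
```

Because the masked features contribute zero gradient, their weights are frozen during the fine-tuning epoch, so any accuracy gained must come from the remaining, previously under-used features.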

Related research

- Towards Mitigating Spurious Correlations in the Wild: A Benchmark and a more Realistic Dataset (06/21/2023)
- Avoiding spurious correlations via logit correction (12/02/2022)
- Swin MAE: Masked Autoencoders for Small Datasets (12/28/2022)
- Discover and Cure: Concept-aware Mitigation of Spurious Correlation (05/01/2023)
- Universal Representation Learning from Multiple Domains for Few-shot Classification (03/25/2021)
- Entropy Penalty: Towards Generalization Beyond the IID Assumption (10/01/2019)
- A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others (12/09/2022)
