Discover and Cure: Concept-aware Mitigation of Spurious Correlation

05/01/2023
by Shirley Wu, et al.

Deep neural networks often rely on spurious correlations to make predictions, which hinders generalization beyond the training environments. For instance, a model that associates cats with bed backgrounds can fail to detect cats in environments without beds. Mitigating spurious correlations is crucial for building trustworthy models, yet existing methods lack the transparency to offer insight into the mitigation process. In this work, we propose an interpretable framework, Discover and Cure (DISC), to tackle the issue. Using human-interpretable concepts, DISC iteratively 1) discovers concepts that are unstable across different environments as spurious attributes, then 2) intervenes on the training data using the discovered concepts to reduce the spurious correlation. Across systematic experiments, DISC provides superior generalization ability and interpretability compared with existing approaches. Specifically, it outperforms state-of-the-art methods on an object recognition task and a skin-lesion classification task by 7.5%. Additionally, we offer theoretical analysis and guarantees to understand the benefits of models trained with DISC. Code and data are available at https://github.com/Wuyxin/DISC.
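The two-step loop described above can be illustrated with a minimal toy sketch. This is not the authors' implementation: the helper names (`concept_instability`, `disc_step`), the instability score (variance of the label rate given a concept across environments), and the intervention (duplicating samples that break the spurious correlation) are simplified stand-ins for the concept-aware discovery and data intervention DISC performs.

```python
def concept_instability(data, concept):
    """Variance across environments of P(label=1 | concept present).
    High variance marks the concept as unstable, i.e. spurious."""
    rates = []
    for env in sorted({d["env"] for d in data}):
        hits = [d for d in data if d["env"] == env and concept in d["concepts"]]
        if hits:
            rates.append(sum(d["label"] for d in hits) / len(hits))
    mean = sum(rates) / len(rates)
    return sum((r - mean) ** 2 for r in rates) / len(rates)

def disc_step(data, concepts):
    """One DISC-style iteration: 1) discover the most unstable concept,
    2) intervene by upweighting samples that break its correlation."""
    spurious = max(concepts, key=lambda c: concept_instability(data, c))
    counter = [d for d in data
               if (spurious in d["concepts"]) != bool(d["label"])]
    return spurious, data + counter

# Toy data: "bed" co-occurs with cats (label=1) only in environment 0,
# so it should be flagged as the spurious concept.
toy_data = [
    {"env": 0, "concepts": {"bed", "fur"}, "label": 1},
    {"env": 0, "concepts": {"bed"}, "label": 1},
    {"env": 0, "concepts": {"grass"}, "label": 0},
    {"env": 1, "concepts": {"bed"}, "label": 0},
    {"env": 1, "concepts": {"fur", "grass"}, "label": 1},
    {"env": 1, "concepts": {"grass"}, "label": 0},
]
```

Calling `disc_step(toy_data, ["bed", "fur", "grass"])` identifies `"bed"` as the unstable concept ("fur" predicts cats in both environments, so its label rate has zero variance) and appends the two correlation-breaking samples, loosely mirroring how DISC rebalances training data around a discovered spurious attribute.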


