Invariant Learning via Diffusion Dreamed Distribution Shifts

11/18/2022
by Priyatham Kattakinda, et al.

Though the background is an important signal for image classification, over-reliance on it can lead to incorrect predictions when spurious correlations between foreground and background are broken at test time. Training on a dataset in which these correlations are unbiased would lead to more robust models. In this paper, we propose such a dataset, called Diffusion Dreamed Distribution Shifts (D3S). D3S consists of synthetic images generated through Stable Diffusion using text prompts and image guides obtained by pasting a sample foreground image onto a background template image. Using this scalable approach, we generate 120K images of objects from all 1000 ImageNet classes in 10 diverse backgrounds. Owing to the photorealism of the diffusion model, our images are much closer to natural images than previous synthetic datasets. D3S contains a validation set of more than 17K images whose labels are human-verified in an MTurk study. Using the validation set, we evaluate several popular DNN image classifiers and find that classification performance generally suffers on our background-diverse images. Next, we leverage the foreground and background labels in D3S to learn a foreground (background) representation that is invariant to changes in background (foreground) by penalizing the mutual information between the foreground (background) features and the background (foreground) labels. Linear classifiers trained to predict the foreground (background) labels from the foreground (background) features achieve a high accuracy of 82.9%, while classifiers that predict these labels from the opposite features have a much lower accuracy of 2.4%, suggesting that the foreground and background features are well disentangled. We further test the efficacy of these representations by training classifiers on a task with strong spurious correlations.
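The disentanglement objective above penalizes the mutual information between one stream's features and the other stream's labels. The toy sketch below illustrates the quantity being penalized with a plug-in estimate of mutual information between discretized features and labels; this is an illustration only, not the paper's estimator (the paper would need a differentiable MI bound to train with), and the function name and setup are our own.

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats for two discrete sequences.

    In the D3S setup, xs would play the role of (discretized) foreground
    features and ys the background labels: a well-disentangled foreground
    representation should drive this quantity toward zero.
    """
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts: c*n / (px*py)
        mi += (c / n) * log(c * n / (px[x] * py[y]))
    return mi

# Features independent of the labels: MI is zero (invariance achieved).
print(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
# Features that fully encode the labels: MI equals the label entropy, log 2.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))
```

During training, a (differentiable) analogue of this penalty would be added to the classification loss for each stream, pushing foreground features to carry no information about background labels and vice versa.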


