Better May Not Be Fairer: Can Data Augmentation Mitigate Subgroup Degradation?

12/16/2022
by   Ming-Chang Chiu, et al.

It is no secret that deep learning models exhibit undesirable behaviors such as learning spurious correlations instead of the correct relationships between input/output pairs. Prior work on robustness studies datasets that mix low-level features to quantify how spurious correlations affect predictions, rather than considering natural semantic factors, due to limited access to realistic datasets for comprehensive evaluation. To bridge this gap, in this paper we first investigate how natural background colors act as spurious features in image classification tasks by manually splitting the test sets of CIFAR10 and CIFAR100 into subgroups based on the background color of each image. We name our datasets CIFAR10-B and CIFAR100-B. We find that while standard CNNs achieve human-level accuracy, subgroup performance is inconsistent, and the phenomenon persists even after data augmentation (DA). To alleviate this issue, we propose FlowAug, a semantic DA method that leverages the decoupled semantic representations captured by a pre-trained generative flow. Experimental results show that FlowAug achieves more consistent results across subgroups than other types of DA methods on CIFAR10 and CIFAR100, along with better generalization performance. Furthermore, we propose a generic metric for studying model robustness to spurious correlations: a macro average of the weighted standard deviations of subgroup performance across classes. Per our metric, FlowAug demonstrates less reliance on spurious correlations. Although this metric is proposed to study our curated datasets, it applies to any dataset that has subgroups or subclasses. Lastly, aside from less dependence on spurious correlations and better generalization on in-distribution test sets, we also show superior out-of-distribution results on CIFAR10.1 and competitive performance on CIFAR10-C and CIFAR100-C.
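As a rough illustration of the proposed metric, the sketch below computes a weighted standard deviation of subgroup accuracies for each class (weighting each subgroup by its sample count, which is one plausible reading — the abstract does not specify the weighting scheme) and then macro-averages those per-class deviations. The function name and data layout are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def macro_weighted_std(subgroup_acc, subgroup_sizes):
    """Macro average of per-class weighted standard deviations
    of subgroup accuracies (hypothetical reading of the metric).

    subgroup_acc:   dict mapping class -> list of subgroup accuracies
    subgroup_sizes: dict mapping class -> list of subgroup sample counts
    """
    per_class_std = []
    for c in subgroup_acc:
        acc = np.asarray(subgroup_acc[c], dtype=float)
        w = np.asarray(subgroup_sizes[c], dtype=float)
        w = w / w.sum()                       # normalize subgroup weights
        mean = np.sum(w * acc)                # weighted mean accuracy
        var = np.sum(w * (acc - mean) ** 2)   # weighted variance
        per_class_std.append(np.sqrt(var))    # weighted std for this class
    # macro average: each class contributes equally
    return float(np.mean(per_class_std))

# Toy example: class "a" is consistent across subgroups, class "b" is not,
# so the metric flags the disparity.
score = macro_weighted_std(
    {"a": [0.9, 0.9], "b": [1.0, 0.0]},
    {"a": [50, 50], "b": [50, 50]},
)
```

Under this reading, a model with identical accuracy on every subgroup scores 0, and larger values indicate stronger reliance on subgroup-correlated (spurious) features.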


research
04/07/2022

The Effects of Regularization and Data Augmentation are Class Dependent

Regularization is a fundamental technique to prevent over-fitting and to...
research
06/28/2021

Data augmentation for deep learning based accelerated MRI reconstruction with limited data

Deep neural networks have emerged as very successful tools for image res...
research
02/16/2022

A Data-Augmentation Is Worth A Thousand Samples: Exact Quantification From Analytical Augmented Sample Moments

Data-Augmentation (DA) is known to improve performance across tasks and ...
research
10/04/2022

Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation

There exist features that are related to the label in the same way acros...
research
03/08/2021

Consistency Regularization for Adversarial Robustness

Adversarial training (AT) is currently one of the most successful method...
research
03/15/2022

Adversarial Counterfactual Augmentation: Application in Alzheimer's Disease Classification

Data augmentation has been widely used in deep learning to reduce over-f...
research
03/16/2022

Structurally Diverse Sampling Reduces Spurious Correlations in Semantic Parsing Datasets

A rapidly growing body of research has demonstrated the inability of NLP...
