Outlier-Aware Training for Improving Group Accuracy Disparities

10/27/2022
by   Li-Kuang Chen, et al.

Methods that address spurious correlations, such as Just Train Twice (JTT, arXiv:2107.09044v2), reweight a subset of the training set to maximize worst-group accuracy. However, the reweighted subset may contain unlearnable examples that hamper the model's learning. We propose mitigating this by detecting outliers in the training set and removing them before reweighting. Our experiments show that our method achieves competitive or better accuracy than JTT and can detect and remove annotation errors from the subset that JTT reweights.
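The pipeline the abstract describes can be summarized as: run a first-stage ERM model, collect its error set, filter outliers out of that set, and upweight what remains. The sketch below illustrates this under assumptions not stated in the abstract: the function name is hypothetical, and outliers are scored by a simple z-scored distance to the class centroid rather than whatever detector the paper actually uses.

```python
import numpy as np

def outlier_aware_jtt_weights(X, y, errors, upweight=5.0, outlier_frac=0.1):
    """Hypothetical sketch of outlier-aware JTT-style reweighting.

    X       : (n, d) feature matrix
    y       : (n,) integer labels
    errors  : (n,) boolean mask of examples the first-stage ERM model misclassified
    Returns per-example training weights and a keep mask.
    """
    weights = np.ones(len(y))
    keep = np.ones(len(y), dtype=bool)
    err_idx = np.flatnonzero(errors)

    # Score each error-set example by z-scored distance to its class centroid
    # (a stand-in for the paper's outlier detector, chosen for simplicity).
    scores = np.empty(len(err_idx))
    for i, j in enumerate(err_idx):
        cls = X[y == y[j]]
        mu, sd = cls.mean(axis=0), cls.std(axis=0) + 1e-8
        scores[i] = np.linalg.norm((X[j] - mu) / sd)

    # Drop the most extreme fraction of the error set before reweighting.
    n_drop = int(outlier_frac * len(err_idx))
    if n_drop > 0:
        drop = err_idx[np.argsort(scores)[-n_drop:]]
        keep[drop] = False

    # JTT-style upweighting of the (filtered) error set.
    weights[err_idx] = upweight
    weights[~keep] = 0.0
    return weights, keep
```

On a toy dataset where one misclassified point is a gross outlier (e.g. an annotation error far from its class), that point receives weight 0 while the remaining error-set points receive the upweight, which is the intended behavior of removing outliers before the second training stage.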


research
01/09/2018

Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks

Deep neural networks are vulnerable to adversarial examples. Prior defen...
research
12/20/2022

Careful Data Curation Stabilizes In-context Learning

In-context learning (ICL) enables large language models (LLMs) to perfor...
research
12/12/2022

You Only Need a Good Embeddings Extractor to Fix Spurious Correlations

Spurious correlations in training data often lead to robustness issues s...
research
05/22/2023

Relabel Minimal Training Subset to Flip a Prediction

Yang et al. (2023) discovered that removing a mere 1% can often lead to the f...
research
11/25/2013

Are all training examples equally valuable?

When learning a new concept, not all training examples may prove equally...
research
07/14/2011

Label-Specific Training Set Construction from Web Resource for Image Annotation

Recently many research efforts have been devoted to image annotation by ...
research
10/27/2021

Simple data balancing achieves competitive worst-group-accuracy

We study the problem of learning classifiers that perform well across (k...
