Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases

05/30/2023
by Yuval Reif, et al.

NLP models often rely on superficial cues known as dataset biases to achieve impressive performance, and can fail on examples where these biases do not hold. Recent work has sought to develop robust, unbiased models by filtering biased examples from training sets. In this work, we argue that such filtering can obscure the true capabilities of models to overcome biases, which may never be fully removed from the dataset. We suggest that, to drive the development of models robust to subtle biases, dataset biases should instead be amplified in the training set. We introduce an evaluation framework defined by a bias-amplified training set and an anti-biased test set, both automatically extracted from existing datasets. Experiments across three notions of bias, four datasets, and two models show that our framework is substantially more challenging for models than the original data splits, and even more challenging than hand-crafted challenge sets. Our evaluation framework can use any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations. To this end, we publicly release our code and data.
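To make the framework concrete, here is a minimal sketch of how such a split could be extracted automatically. It is not the authors' released code: it assumes a shallow bag-of-words logistic regression stands in for a biased model, and uses its held-out confidence as the bias signal (one of several possible notions of bias). The function name split_by_bias and the 0.9 threshold are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def split_by_bias(texts, labels, threshold=0.9):
    """Partition a dataset using a shallow model as a bias proxy."""
    # Encode labels as 0..k-1 so predict_proba columns line up with them.
    classes, y = np.unique(labels, return_inverse=True)
    X = CountVectorizer(max_features=5000).fit_transform(texts)
    # Out-of-fold probabilities: each example is scored by a model
    # that never saw it during training.
    probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, y,
        cv=5, method="predict_proba",
    )
    pred = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    # "Biased" examples: the shallow model is confidently correct,
    # i.e., superficial cues alone suffice to solve them.
    biased = (pred == y) & (conf >= threshold)
    # "Anti-biased" examples: the shallow model gets them wrong.
    anti_biased = pred != y
    train_idx = np.flatnonzero(biased)      # bias-amplified training set
    test_idx = np.flatnonzero(anti_biased)  # anti-biased test set
    return train_idx, test_idx
```

Scoring with out-of-fold predictions keeps the bias proxy from simply memorizing labels; any other bias model (e.g., a hypothesis-only classifier for NLI) could be swapped in the same way.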

