Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation

04/12/2021
by Chong Zhang, et al.

Robustness and counterfactual bias are usually evaluated on a test dataset. However, are these evaluations themselves robust? If the test dataset is perturbed slightly, will the evaluation results remain the same? In this paper, we propose a "double perturbation" framework to uncover model weaknesses beyond the test dataset. The framework first perturbs the test dataset to construct abundant natural sentences similar to the test data, and then diagnoses the prediction change with respect to a single-word substitution. We apply this framework to study two perturbation-based approaches that are used to analyze models' robustness and counterfactual bias in English. (1) For robustness, we focus on synonym substitutions and identify vulnerable examples where the prediction can be altered. Our proposed attack attains high success rates (96.0%-99.8%) in finding vulnerable examples on both original and robustly trained CNNs and Transformers. (2) For counterfactual bias, we focus on substituting demographic tokens (e.g., gender, race) and measure the shift of the expected prediction among the constructed sentences. Our method is able to reveal hidden model biases not directly shown in the test dataset. Our code is available at https://github.com/chong-z/nlp-second-order-attack.
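
To make the two steps of the framework concrete, below is a minimal Python sketch: a first perturbation that constructs natural neighbor sentences via synonym substitution, and a second perturbation that checks how a single-word substitution changes the prediction, either flipping a label (robustness) or shifting the expected score under demographic swaps (counterfactual bias). The `model` scoring function, the `SYNONYMS` lexicon, and the `DEMOGRAPHIC_PAIRS` mapping are hypothetical stand-ins for illustration, not the authors' implementation (see the linked repository for that).

```python
# Hedged sketch of the double-perturbation idea; `model`, SYNONYMS, and
# DEMOGRAPHIC_PAIRS are hypothetical stand-ins, not the paper's code.
from itertools import product
from statistics import mean

SYNONYMS = {"movie": ["film", "picture"], "great": ["excellent", "superb"]}
DEMOGRAPHIC_PAIRS = {"he": "she", "his": "her", "him": "her"}

def neighbors(tokens, lexicon=SYNONYMS):
    """First perturbation: enumerate natural variants of a sentence by
    substituting synonyms (the real framework constrains these to stay
    fluent and similar to the test data)."""
    options = [[t] + lexicon.get(t, []) for t in tokens]
    for combo in product(*options):
        yield list(combo)

def find_vulnerable(tokens, model, lexicon=SYNONYMS):
    """Second perturbation (robustness): search the neighborhood for a
    sentence whose predicted label flips under one more synonym swap."""
    base_label = model(tokens) >= 0.5
    for variant in neighbors(tokens, lexicon):
        for i, tok in enumerate(variant):
            for syn in lexicon.get(tok, []):
                swapped = variant[:i] + [syn] + variant[i + 1:]
                if (model(swapped) >= 0.5) != base_label:
                    return variant, swapped  # a vulnerable example
    return None

def counterfactual_shift(tokens, model, pairs=DEMOGRAPHIC_PAIRS):
    """Second perturbation (bias): average change in the model's score
    when demographic tokens are swapped, over all constructed neighbors."""
    shifts = []
    for variant in neighbors(tokens):
        flipped = [pairs.get(t, t) for t in variant]
        if flipped != variant:
            shifts.append(model(flipped) - model(variant))
    return mean(shifts) if shifts else 0.0

# Toy usage with a stand-in classifier that scores token lists in [0, 1].
toy_model = lambda toks: 0.9 if "excellent" in toks else 0.4
print(find_vulnerable("a great movie".split(), toy_model))
print(counterfactual_shift("he liked his movie".split(), toy_model))
```

On a real classifier, the synonym lexicon would be a constrained substitution set and the search would be pruned rather than exhaustive, since the constructed neighborhood grows exponentially with sentence length.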

Related research

10/23/2020
Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures
Existing NLP datasets contain various biases, and models tend to quickly...

02/15/2023
Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation
Distribution shifts are a major source of failure of deployed machine le...

05/19/2023
Bias Beyond English: Counterfactual Tests for Bias in Sentiment Analysis in Four Languages
Sentiment analysis (SA) systems are used in many products and hundreds o...

05/03/2022
SemAttack: Natural Textual Attacks via Different Semantic Spaces
Recent studies show that pre-trained language models (LMs) are vulnerabl...

07/03/2022
Counterfactually Measuring and Eliminating Social Bias in Vision-Language Pre-training Models
Vision-Language Pre-training (VLP) models have achieved state-of-the-art...

09/16/2021
Balancing out Bias: Achieving Fairness Through Training Reweighting
Bias in natural language processing arises primarily from models learnin...

05/25/2023
Counterfactual Probing for the Influence of Affect and Specificity on Intergroup Bias
While existing work on studying bias in NLP focuses on negative or pejora...
