NLVR2 Visual Bias Analysis

by Alane Suhr, et al.
Cornell University

NLVR2 (Suhr et al., 2019) was designed to be robust to language bias through a data collection process that resulted in each natural language sentence appearing with both true and false labels. The process did not provide a similar level of control over visual bias. This technical report analyzes the potential for visual bias in NLVR2. We show that some amount of visual bias likely exists. Finally, we identify a subset of the test data that allows testing model performance in a way that is robust to such potential biases. We show that the performance of existing models (Li et al., 2019; Tan and Bansal, 2019) is relatively robust to this potential bias. We propose adding evaluation on this subset of the data to the NLVR2 evaluation protocol, and update the official release to include it. A notebook including an implementation of the code used to replicate this analysis is available at
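One way to construct a bias-robust evaluation subset, by analogy with NLVR2's sentence-level balancing, is to keep only examples whose image input appears with both a true and a false label, so a model that ignores the sentence cannot exceed chance. The sketch below is illustrative only and is not the paper's exact procedure; the `balanced_subset` helper and the tuple layout of `examples` are assumptions for this example.

```python
from collections import defaultdict

def balanced_subset(examples):
    """Keep examples whose image (pair) appears with both labels.

    `examples` is a list of (sentence, image_pair_id, label) tuples.
    An image-only model cannot beat chance on the returned subset,
    since every retained image pair occurs with label True and False.
    """
    labels_by_pair = defaultdict(set)
    for _, pair_id, label in examples:
        labels_by_pair[pair_id].add(label)
    return [ex for ex in examples if labels_by_pair[ex[1]] == {True, False}]

# Toy data: pair "p1" appears with both labels, "p2" only with True.
data = [
    ("there are two dogs", "p1", True),
    ("there is one dog", "p1", False),
    ("a cat is visible", "p2", True),
]
print(balanced_subset(data))  # only the two "p1" examples survive
```

On the retained subset, per-image label frequencies are uniform by construction, which is the property that makes accuracy on it insensitive to visual-prior shortcuts.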




