Investigating Biases in Textual Entailment Datasets

06/23/2019
by Shawn Tan, et al.

The ability to understand logical relationships between sentences is an important part of language understanding. To aid progress on this task, researchers have collected datasets for training machine learning systems and evaluating current ones. However, as in the crowdsourced Visual Question Answering (VQA) task, some biases inevitably occur in the data. In our experiments, we find that performing classification on just the hypotheses of the SNLI dataset yields an accuracy of 64%. We find a similar bias in the MultiNLI dataset, discuss its implications, and propose a simple method to reduce the biases in the datasets.
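To make the hypothesis-only baseline concrete, the sketch below trains a classifier that never sees the premise. This is a minimal illustration, not the authors' exact setup: it assumes SNLI is loaded via the Hugging Face datasets library, and the choice of TF-IDF features with logistic regression is an assumption for demonstration purposes.

# Minimal sketch of a hypothesis-only baseline on SNLI.
# Assumptions: Hugging Face `datasets` for data loading, and a
# TF-IDF + logistic regression classifier (illustrative only;
# the paper's model may differ).
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

snli = load_dataset("snli")

# Drop examples without a gold label (encoded as -1 in SNLI).
train = snli["train"].filter(lambda ex: ex["label"] != -1)
test = snli["test"].filter(lambda ex: ex["label"] != -1)

# Featurize the hypotheses only; the premise is never shown to the model.
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train["hypothesis"])
X_test = vectorizer.transform(test["hypothesis"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train["label"])

print("hypothesis-only accuracy:", accuracy_score(test["label"], clf.predict(X_test)))

Since entailment is a 3-way task, chance performance is roughly 33%; an accuracy well above that, as the abstract reports, means annotation artifacts in the hypotheses alone are predictive of the label.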

Related research

Visual Entailment Task for Visually-Grounded Language Learning (11/26/2018)
Visual Entailment: A Novel Task for Fine-Grained Image Understanding (01/20/2019)
Explicit Bias Discovery in Visual Question Answering Models (11/19/2018)
Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles (11/07/2020)
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases (09/09/2019)
Comparative Analysis of Neural QA models on SQuAD (06/18/2018)
How Transferable are Reasoning Patterns in VQA? (04/08/2021)
