Improving Generalization by Incorporating Coverage in Natural Language Inference

09/19/2019
by Nafise Sadat Moosavi, et al.

The task of natural language inference (NLI) is to identify the relation between a given premise and hypothesis. While recent NLI models achieve very high performance on individual datasets, they fail to generalize across similar datasets, indicating that they are solving NLI datasets rather than the underlying task. To improve generalization, we propose to extend the input representations with an abstract view of the relation between the hypothesis and the premise, i.e., how well the individual words, or word n-grams, of the hypothesis are covered by the premise. Our experiments show that using this coverage information considerably improves generalization across different NLI datasets without requiring any external knowledge or additional data. Finally, we show that coverage information is beneficial not only across different datasets of the same task: the resulting generalization also improves performance across datasets that belong to similar, but not identical, tasks.
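The coverage signal described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, whitespace tokenization, and set-overlap scoring are illustrative assumptions, and the actual model incorporates coverage into learned input representations rather than a single scalar:

```python
def ngrams(tokens, n):
    # Set of contiguous n-grams of the token sequence.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def coverage(premise, hypothesis, n=1):
    # Fraction of hypothesis n-grams that also occur in the premise.
    # Illustrative only: whitespace tokenization and exact match,
    # whereas the paper feeds richer coverage features to the model.
    p = ngrams(premise.lower().split(), n)
    h = ngrams(hypothesis.lower().split(), n)
    if not h:
        return 0.0
    return len(h & p) / len(h)
```

For example, a hypothesis fully contained in the premise scores 1.0, while a hypothesis with novel words scores lower, giving the model an abstract cue about lexical overlap independent of the specific dataset.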


Related research

Posing Fair Generalization Tasks for Natural Language Inference (11/03/2019)
Deep learning models for semantics are generally evaluated using natural...

Neural Network Models for Natural Language Inference Fail to Capture the Semantics of Inference (10/23/2018)
Neural network models have been very successful for natural language inf...

On the Transferability of Winning Tickets in Non-Natural Image Datasets (05/11/2020)
We study the generalization properties of pruned neural networks that ar...

Testing the Generalization Power of Neural Network Models Across NLI Benchmarks (10/23/2018)
Neural network models have been very successful for natural language inf...

Improving Natural Language Inference Using External Knowledge in the Science Questions Domain (09/15/2018)
Natural Language Inference (NLI) is fundamental to many Natural Language...

There is Strength in Numbers: Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training (04/16/2020)
Natural Language Inference (NLI) datasets contain annotation artefacts r...

Misleading Failures of Partial-input Baselines (05/14/2019)
Recent work establishes dataset difficulty and removes annotation artifa...
