Breaking NLI Systems with Sentences that Require Simple Lexical Inferences

05/06/2018
by Max Glockner, et al.

We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge. The new examples are simpler than the SNLI test set, containing sentences that differ by at most one word from sentences in the training set. Yet, the performance on the new test set is substantially worse across systems trained on SNLI, demonstrating that these systems are limited in their generalization ability, failing to capture many simple inferences.
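To illustrate the kind of example the abstract describes, here is a minimal sketch (not the authors' actual pipeline) of building an NLI test pair by swapping a single word in a training premise for a lexically related word. The substitution table, sentences, and labels below are illustrative assumptions, not data from the paper.

```python
# Sketch: derive (premise, hypothesis, label) NLI pairs that differ from a
# training sentence by exactly one word. The table below is a hypothetical
# stand-in for lexical knowledge (synonyms entail; co-hyponyms contradict).
SUBSTITUTIONS = {
    "guitar": ("violin", "contradiction"),  # mutually exclusive co-hyponyms
    "couch": ("sofa", "entailment"),        # synonyms
}

def make_test_pairs(premise):
    """Return (premise, hypothesis, label) tuples differing by one word."""
    pairs = []
    tokens = premise.split()
    for i, word in enumerate(tokens):
        if word in SUBSTITUTIONS:
            replacement, label = SUBSTITUTIONS[word]
            hypothesis = tokens[:i] + [replacement] + tokens[i + 1:]
            pairs.append((premise, " ".join(hypothesis), label))
    return pairs

pairs = make_test_pairs("A man is playing a guitar on the couch")
# Each pair keeps the premise intact and changes a single word, so solving
# it requires only the lexical inference between the two swapped words.
```

A model that relies on surface overlap rather than lexical knowledge will tend to predict "entailment" for both hypotheses here, which is the failure mode the test set is designed to expose.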


