HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning

04/27/2019
by Hitomi Yanaka, et al.

Large crowdsourced datasets are widely used for training and evaluating neural models on natural language inference (NLI). Despite these efforts, neural models have difficulty capturing logical inferences, including those licensed by phrase replacements, so-called monotonicity reasoning. Since no large dataset has been developed for monotonicity reasoning, it remains unclear whether the main obstacle is the size of existing datasets or the model architectures themselves. To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena. We add it to the training data for state-of-the-art neural models and evaluate them on test sets for monotonicity phenomena. The results show that our data augmentation improves overall accuracy. We also find that the improvement is larger on monotonicity inferences with lexical replacements than on downward inferences with disjunction and modification. This suggests that some types of inferences can be improved by our data augmentation while others are immune to it.
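The core idea behind monotonicity reasoning can be sketched with a few toy NLI pairs (these are illustrative examples, not drawn from HELP itself): in an upward-entailing context, replacing a phrase with a more general one preserves entailment, while under a downward-entailing operator such as "no", the direction flips and only the replacement with a more specific phrase is licensed.

```python
# Illustrative monotonicity-reasoning NLI pairs (hypothetical examples,
# not taken from the HELP dataset).
examples = [
    # Upward-entailing context: "dog" is a subtype of "animal",
    # so generalizing the phrase preserves truth.
    {"premise": "A dog is barking.",
     "hypothesis": "An animal is barking.",
     "label": "entailment"},
    # Downward-entailing context: under the quantifier "no",
    # the inference direction reverses, licensing the more
    # specific phrase instead.
    {"premise": "No animal is barking.",
     "hypothesis": "No dog is barking.",
     "label": "entailment"},
    # The reverse replacement under "no" is not licensed.
    {"premise": "No dog is barking.",
     "hypothesis": "No animal is barking.",
     "label": "neutral"},
]

for pair in examples:
    print(f'{pair["premise"]} => {pair["hypothesis"]} [{pair["label"]}]')
```

A model that has only seen upward inferences during training may wrongly predict entailment for the third pair, which is the kind of shortcoming HELP is designed to probe.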


