
HELP: A Dataset for Identifying Shortcomings of Neural Models in Monotonicity Reasoning

by Hitomi Yanaka, et al.
University of Groningen
Tohoku University
Ochanomizu University

Large crowdsourced datasets are widely used for training and evaluating neural models on natural language inference (NLI). Despite these efforts, neural models still struggle to capture logical inferences, including those licensed by phrase replacements, so-called monotonicity reasoning. Since no large dataset has been developed for monotonicity reasoning, it remains unclear whether the main obstacle is the size of existing datasets or the model architectures themselves. To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena. We add it to the training data of state-of-the-art neural models and evaluate them on test sets for monotonicity phenomena. The results show that our data augmentation improves overall accuracy. We also find that the improvement is larger for monotonicity inferences involving lexical replacements than for downward inferences involving disjunction and modification. This suggests that some types of inference can be improved by our data augmentation while others are immune to it.
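To make the notion of monotonicity reasoning concrete, the following minimal sketch (a toy illustration, not the paper's actual HELP generation pipeline; the lexicon and function names are hypothetical) shows how entailed hypotheses arise from phrase replacement: an upward-entailing context licenses replacing a word with a more general one, while a downward-entailing context (e.g. under "no") licenses replacing it with a more specific one.

```python
# Toy illustration of monotonicity reasoning (hypothetical lexicon,
# not the HELP dataset's generation procedure).
HYPERNYM = {"dog": "animal"}   # more general term
HYPONYM = {"animal": "dog"}    # more specific term

def monotone_entailment(premise: str, word: str, downward: bool) -> str:
    """Return a hypothesis entailed by `premise` via monotonicity:
    replace `word` with a hyponym in a downward-entailing context,
    or with a hypernym in an upward-entailing one."""
    replacement = HYPONYM[word] if downward else HYPERNYM[word]
    return premise.replace(word, replacement)

# Upward-entailing context: "Some dogs ran" entails "Some animals ran".
print(monotone_entailment("Some dogs ran", "dog", downward=False))
# Downward-entailing context: "No animals ran" entails "No dogs ran".
print(monotone_entailment("No animals ran", "animal", downward=True))
```

The downward direction, often triggered by negation, quantifiers like "no", or conditionals, is exactly the case the abstract reports as harder for neural models even after augmentation.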


