LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI

12/04/2021
by   Ishan Tarunesh, et al.

Natural Language Inference (NLI) is considered a representative task for testing natural language understanding (NLU). In this work, we propose an extensible framework to collectively yet categorically test the diverse logical reasoning capabilities required for NLI (and, by extension, NLU). Motivated by behavioral testing, we create a large semi-synthetic test bench (363 templates, 363k examples) and an associated framework that offers the following utilities: 1) individually testing and analyzing reasoning capabilities along 17 reasoning dimensions (including pragmatic reasoning); 2) designing experiments to study cross-capability information content (leave one out or bring one in); and 3) controlling for artifacts and biases, thanks to the synthetic nature of the data. The inherited power of automated test-case instantiation from free-form natural language templates (using CheckList), together with a well-defined taxonomy of capabilities, enables us to extend to (cognitively) harder test cases while varying the complexity of the natural language. Through our analysis of state-of-the-art NLI systems, we observe that our benchmark is indeed hard (and non-trivial even after training on additional resources). Some capabilities stand out as harder than others. Further fine-grained analysis and fine-tuning experiments reveal more insights about these capabilities and the models, supporting and extending previous observations. Finally, we perform a user study to investigate whether behavioral information can be used to generalize much better for some models than for others.
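The core idea of instantiating labeled NLI examples from free-form templates can be sketched in a few lines. The following is a minimal illustration of the general technique, not the paper's actual templates or CheckList's API; the template text, slot names, and the `|||`/`=>` separators are hypothetical conventions invented here.

```python
# Minimal sketch of template-based NLI test-case generation, in the spirit
# of CheckList-style instantiation. Template and fillers are hypothetical.
from itertools import product
import re

def instantiate(template: str, fillers: dict) -> list:
    """Fill every {slot} in `template` with each combination of filler values,
    replacing all occurrences of a slot consistently."""
    # Unique slot names, in order of first appearance.
    slots = list(dict.fromkeys(re.findall(r"\{(\w+)\}", template)))
    examples = []
    for combo in product(*(fillers[s] for s in slots)):
        text = template
        for slot, value in zip(slots, combo):
            text = text.replace("{" + slot + "}", value)
        examples.append(text)
    return examples

# A toy numerical-reasoning template: premise ||| hypothesis => gold label.
examples = instantiate(
    "{name} has {n} apples. ||| {name} has more than two apples. => entailment",
    {"name": ["Alice", "Bob"], "n": ["three", "four"]},
)
print(len(examples))  # 2 names x 2 numbers = 4 instantiations
```

Because each template carries a fixed gold label and controlled slot vocabularies, generation scales multiplicatively with the filler lists while keeping artifacts under the designer's control, which is the property the framework exploits.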

Related research:

- 07/15/2021: Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task
- 02/09/2021: Statistically Profiling Biases in Natural Language Reasoning Datasets and Models
- 04/07/2023: Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4
- 06/18/2019: Hyperintensional Reasoning based on Natural Language Knowledge Base
- 09/30/2020: Measuring Systematic Generalization in Neural Proof Generation with Transformers
- 11/10/2022: Can Transformers Reason in Fragments of Natural Language?
