TaxiNLI: Taking a Ride up the NLU Hill

09/30/2020
by Pratik Joshi, et al.

Pre-trained Transformer-based neural architectures have consistently achieved state-of-the-art performance on the Natural Language Inference (NLI) task. Since NLI examples encompass a variety of linguistic, logical, and reasoning phenomena, it remains unclear which specific concepts trained systems learn and where they can generalize strongly. To investigate this question, we propose a taxonomic hierarchy of categories relevant to the NLI task. We introduce TaxiNLI, a new dataset of 10k examples from the MNLI dataset (Williams et al., 2018) annotated with these taxonomic labels. Through various experiments on TaxiNLI, we observe that while SOTA neural models achieve near-perfect accuracy on certain taxonomic categories (a large jump over previous models), other categories remain difficult. Our work adds to the growing body of literature that exposes gaps in current NLI systems and datasets through a systematic presentation and analysis of reasoning categories.
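
To make the per-category analysis concrete, here is a minimal sketch of how accuracy can be broken down by taxonomic category to surface which reasoning types a trained NLI model still gets wrong. The file name and column names are hypothetical illustrations, not the paper's released format: each row is assumed to pair an MNLI example's gold NLI label and a model's prediction with its TaxiNLI category annotation.

```python
import pandas as pd

# Hypothetical flat export of TaxiNLI annotations plus model predictions.
# Assumed columns: pair_id, gold_label, predicted_label, category.
df = pd.read_csv("taxinli_annotations.csv")

# Accuracy per taxonomic category, sorted ascending so the categories
# where the model struggles most appear first.
per_category = (
    df.assign(correct=df["gold_label"] == df["predicted_label"])
      .groupby("category")["correct"]
      .mean()
      .sort_values()
)
print(per_category)
```

Low values in this breakdown correspond to the "difficult" categories the abstract refers to, while values near 1.0 mark the categories where SOTA models have made the largest jump over previous systems.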


