Probing Natural Language Inference Models through Semantic Fragments

09/16/2019
by Kyle Richardson, et al.

Do state-of-the-art models for language understanding already have, or can they easily learn, abilities such as boolean coordination, quantification, conditionals, comparatives, and monotonicity reasoning (i.e., reasoning about word substitutions in sentential contexts)? While such phenomena are involved in natural language inference (NLI) and go beyond basic linguistic understanding, it is unclear to what extent they are captured in existing NLI benchmarks and effectively learned by models. To investigate this, we propose the use of semantic fragments, systematically generated datasets that each target a different semantic phenomenon, for probing, and efficiently improving, such capabilities of linguistic models. This approach to creating challenge datasets allows direct control over the semantic diversity and complexity of the targeted linguistic phenomena, and results in a more precise characterization of a model's linguistic behavior. Our experiments, using a library of 8 such semantic fragments, reveal two remarkable findings: (a) State-of-the-art models, including BERT, that are pre-trained on existing NLI benchmark datasets perform poorly on these new fragments, even though the phenomena probed here are central to the NLI task. (b) On the other hand, with only a few minutes of additional fine-tuning, using a carefully selected learning rate and a novel variation of "inoculation", a BERT-based model can master all of these logic and monotonicity fragments while retaining its performance on established NLI benchmarks.
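
To make the idea of a systematically generated fragment concrete, the sketch below produces toy NLI triples for a boolean-coordination fragment. It is a minimal illustration under assumed conventions, not the authors' actual generation grammar: the name list, verb templates, and the boolean_coordination_example helper are hypothetical stand-ins, and the fragments in the paper are built from richer formal grammars with verified labels.

```python
import random

# Three-way NLI labels used throughout the sketch.
ENTAILMENT, CONTRADICTION, NEUTRAL = "entailment", "contradiction", "neutral"

# Hypothetical toy lexicon; the real fragments use richer, curated vocabularies.
PEOPLE = ["John", "Mary", "Sue", "Tom"]
VERBS = [("sang", "sing"), ("danced", "dance"), ("slept", "sleep")]  # (past, base)


def boolean_coordination_example(rng: random.Random) -> dict:
    """Build one premise/hypothesis/label triple probing 'and' coordination."""
    a, b, c = rng.sample(PEOPLE, 3)
    past, base = rng.choice(VERBS)
    premise = f"{a} and {b} {past}."

    kind = rng.choice(["conjunct", "negated_conjunct", "unmentioned"])
    if kind == "conjunct":
        # A single conjunct follows from the coordinated premise.
        hypothesis, label = f"{a} {past}.", ENTAILMENT
    elif kind == "negated_conjunct":
        # Denying a conjunct contradicts the premise.
        hypothesis, label = f"{b} did not {base}.", CONTRADICTION
    else:
        # A person not mentioned in the premise gives a neutral pair.
        hypothesis, label = f"{c} {past}.", NEUTRAL

    return {"premise": premise, "hypothesis": hypothesis, "label": label}


if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(5):
        print(boolean_coordination_example(rng))
```

Scaling up the templates and sampling many such triples per phenomenon is what gives the direct control over semantic diversity and complexity described in the abstract; the same generated data can then serve both as a probe and, via inoculation-style fine-tuning, as additional training material.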


Related research

04/13/2022 · Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding
In the age of large transformer language models, linguistic evaluation p...

06/24/2022 · Unified BERT for Few-shot Natural Language Understanding
Even as pre-trained language models share a semantic encoder, natural la...

08/26/2019 · Does BERT agree? Evaluating knowledge of structure dependence through agreement relations
Learning representations that accurately model semantics is an important...

01/27/2023 · Can We Use Probing to Better Understand Fine-tuning and Knowledge Distillation of the BERT NLU?
In this article, we use probing to investigate phenomena that occur duri...

12/15/2021 · Decomposing Natural Logic Inferences in Neural NLI
In the interest of interpreting neural NLI models and their reasoning st...

05/17/2021 · Supporting Context Monotonicity Abstractions in Neural NLI Models
Natural language contexts display logical regularities with respect to s...

09/30/2020 · TaxiNLI: Taking a Ride up the NLU Hill
Pre-trained Transformer-based neural architectures have consistently ach...
