ConjNLI: Natural Language Inference Over Conjunctive Sentences

10/20/2020
by Swarnadeep Saha, et al.

Reasoning about conjuncts in conjunctive sentences is important for a deeper understanding of conjunctions in English, and of how their usage and semantics differ from conjunctive and disjunctive boolean logic. Existing NLI stress tests do not consider non-boolean usages of conjunctions and rely on templates for testing such model knowledge. Hence, we introduce ConjNLI, a challenge stress test for natural language inference over conjunctive sentences, where the premise differs from the hypothesis by conjuncts removed, added, or replaced. These sentences contain single and multiple instances of coordinating conjunctions ("and", "or", "but", "nor"), often co-occurring with quantifiers and negations, and require diverse boolean and non-boolean inferences over conjuncts. We find that large-scale pre-trained language models like RoBERTa do not understand conjunctive semantics well and resort to shallow heuristics to make inferences over such sentences. As initial solutions, we first present an iterative adversarial fine-tuning method that uses synthetically created training data based on boolean and non-boolean heuristics. We also propose a direct model improvement by making RoBERTa aware of predicate semantic roles. While we observe some performance gains, ConjNLI remains challenging for current methods, thus encouraging interesting future work toward a better understanding of conjunctions. Our data and code are publicly available at: https://github.com/swarnaHub/ConjNLI
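To make the premise-hypothesis construction concrete, here is a minimal illustrative sketch (not the authors' actual pipeline; the helper name and inputs are hypothetical) of the "conjunct removed" perturbation: from a premise coordinating two subjects with "and", keep a single conjunct to form each hypothesis. Under a boolean, distributive reading of "and" each hypothesis is entailed; non-boolean usages, such as collective predicates ("John and Mary met in Rome"), break this heuristic, which is exactly what ConjNLI probes.

```python
def conjunct_hypotheses(subject_conjuncts, predicate):
    """Build a premise coordinating subjects with 'and', plus
    hypotheses that each keep a single conjunct (illustrative only;
    real conjunct boundary detection requires parsing)."""
    premise = " and ".join(subject_conjuncts) + " " + predicate
    hypotheses = [c + " " + predicate for c in subject_conjuncts]
    return premise, hypotheses

premise, hyps = conjunct_hypotheses(["John", "Mary"], "went to Rome.")
print(premise)  # John and Mary went to Rome.
print(hyps)     # ['John went to Rome.', 'Mary went to Rome.']
```

For a distributive predicate like "went to Rome", each generated hypothesis is a valid entailment; swapping in a collective predicate like "met in Rome" yields hypotheses that are not entailed, so a model relying on the shallow "dropping a conjunct preserves truth" heuristic would label them incorrectly.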


