Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms

05/22/2023
by Joe Stacey, et al.

State-of-the-art neural models can now reach human performance levels across various natural language understanding tasks. However, despite this impressive performance, models are known to learn from annotation artefacts at the expense of the underlying task. While interpretability methods can identify influential features for each prediction, there are no guarantees that these features are responsible for the model decisions. Instead, we introduce a model-agnostic logical framework to determine the specific information in an input responsible for each model decision. This method creates interpretable Natural Language Inference (NLI) models that maintain their predictive power. We achieve this by generating facts that decompose complex NLI observations into individual logical atoms. Our model makes predictions for each atom and uses logical rules to decide the class of the observation based on the predictions for each atom. We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and BERT baseline. Our method performs best on the most challenging examples, achieving a new state-of-the-art for the ANLI round 3 test set. We outperform every baseline in a reduced-data setting, and despite using no annotations for the generated facts, our model predictions for individual facts align with human expectations.
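The aggregation step described above — predicting an NLI label for each generated fact and combining those predictions with logical rules — can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the rule set (any contradicted atom makes the observation a contradiction, all atoms entailed makes it an entailment, otherwise neutral) and the function names are assumptions for illustration.

```python
from typing import Callable, List

# The three standard NLI classes.
LABELS = ("entailment", "neutral", "contradiction")


def aggregate_atom_predictions(atom_labels: List[str]) -> str:
    """Combine per-atom NLI predictions into one observation-level label.

    Hypothetical rule set (an assumption, not necessarily the paper's
    exact rules):
      - any contradicted atom  -> the observation is a contradiction
      - all atoms entailed     -> the observation is entailed
      - otherwise              -> neutral
    """
    if any(label == "contradiction" for label in atom_labels):
        return "contradiction"
    if all(label == "entailment" for label in atom_labels):
        return "entailment"
    return "neutral"


def classify_observation(premise: str,
                         atoms: List[str],
                         atom_model: Callable[[str, str], str]) -> str:
    """Predict a label for each generated fact, then apply the rules.

    `atom_model` stands in for the underlying neural classifier
    (e.g. a fine-tuned DeBERTa) scoring one (premise, atom) pair.
    """
    atom_labels = [atom_model(premise, atom) for atom in atoms]
    return aggregate_atom_predictions(atom_labels)
```

Because the observation-level decision is a deterministic function of the per-atom labels, each prediction can be traced back to the specific facts responsible for it, which is the interpretability property the abstract highlights.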


