LIREx: Augmenting Language Inference with Relevant Explanation

12/16/2020
by Xinyan Zhao, et al.

Natural language explanations (NLEs) are a special form of data annotation in which annotators identify rationales (the most significant text tokens) when assigning labels to data instances and then write out natural-language explanations for those labels based on the rationales. NLEs have been shown to capture human reasoning better, but they have not proven as beneficial for natural language inference (NLI). In this paper, we analyze two primary flaws in the way NLEs are currently used to train explanation generators for language inference tasks: the generators do not account for the variability inherent in human explanations of labels, and current generation models produce spurious explanations. To overcome these limitations, we propose a novel framework, LIREx, which incorporates a rationale-enabled explanation generator and an instance selector that selects only relevant, plausible NLEs to augment NLI models. Evaluated on the standardized SNLI data set, LIREx achieves an accuracy of 91.87%, an improvement of 0.32 over the baseline, matching the best-reported performance on the data set. It also performs significantly better than previous studies when transferred to the out-of-domain MultiNLI data set. Qualitative analysis shows that LIREx generates flexible, faithful, and relevant NLEs, making the model more robust to spurious explanations. The code is available at https://github.com/zhaoxy92/LIREx.
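
The abstract describes LIREx as a pipeline: a rationale-enabled explanation generator proposes candidate NLEs, an instance selector keeps only the relevant and plausible ones, and the surviving NLEs augment the input to the NLI model. The sketch below illustrates that data flow only; the class names, templated explanations, lexical-overlap relevance score, and threshold are illustrative assumptions, not the authors' implementation (the real components are trained neural models; see the linked repository).

```python
# Minimal structural sketch of the LIREx-style pipeline described above.
# All names and scoring logic here are hypothetical placeholders.
import re
from dataclasses import dataclass
from typing import List, Set


def tokens(text: str) -> Set[str]:
    """Lowercase word tokens, used by the toy relevance score below."""
    return set(re.findall(r"[a-z]+", text.lower()))


@dataclass
class NLIExample:
    premise: str
    hypothesis: str


class RationaleExplanationGenerator:
    """Generates candidate NLEs conditioned on rationale tokens."""

    def generate(self, ex: NLIExample, rationale: List[str]) -> List[str]:
        # A trained generator would condition a language model on the
        # premise, hypothesis, and rationale; here we only template text.
        return [f"The premise mentions {tok}, which relates to the hypothesis."
                for tok in rationale]


class InstanceSelector:
    """Keeps only NLEs that look plausible and relevant to the instance."""

    def select(self, ex: NLIExample, candidates: List[str]) -> List[str]:
        context = tokens(ex.premise + " " + ex.hypothesis)

        def relevance(nle: str) -> float:
            words = tokens(nle)
            return len(words & context) / max(len(words), 1)

        # Arbitrary toy threshold; the real selector is a learned model.
        return [c for c in candidates if relevance(c) > 0.1]


def augmented_nli_input(ex: NLIExample, rationale: List[str]) -> str:
    """Builds the explanation-augmented input an NLI model would consume."""
    generator, selector = RationaleExplanationGenerator(), InstanceSelector()
    explanations = selector.select(ex, generator.generate(ex, rationale))
    return " [SEP] ".join([ex.premise, ex.hypothesis, *explanations])


if __name__ == "__main__":
    example = NLIExample("A man is playing a guitar on stage.",
                         "A person performs music.")
    print(augmented_nli_input(example, rationale=["guitar", "stage"]))
```

The point of the selector stage is that not every generated explanation helps: only candidates judged relevant to the specific premise-hypothesis pair are passed on, which is what makes the downstream NLI model more robust to spurious explanations.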
