Generating Token-Level Explanations for Natural Language Inference

04/24/2019
by James Thorne, et al.

The task of Natural Language Inference (NLI) is widely modeled as supervised sentence pair classification. While there has been substantial recent work on generating explanations for the predictions of classifiers on a single piece of text, there have been no attempts to generate explanations for classifiers operating on pairs of sentences. In this paper, we show that it is possible to generate token-level explanations for NLI without training data explicitly annotated for this purpose. We use a simple LSTM architecture and evaluate both LIME and Anchor explanations for this task. We compare these to a Multiple Instance Learning (MIL) method that uses thresholded attention to make token-level predictions. The approach we present is a novel extension of zero-shot single-sentence tagging to sentence pairs for NLI. We conduct our experiments on the well-studied SNLI dataset, which was recently augmented with manual annotation of the tokens that explain the entailment relation. We find that our white-box MIL-based method, while orders of magnitude faster, does not reach the same accuracy as the black-box methods.
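The core idea of the MIL method described above is that sentence-level labels supervise a "bag" of tokens, and thresholding the learned attention weights yields token-level (instance) predictions without any token-level training labels. A minimal illustrative sketch of that thresholding step (the attention weights and threshold value here are hypothetical, not the paper's actual parameters):

```python
def token_explanations(attention, threshold=0.1):
    """Select indices of tokens whose attention weight exceeds a threshold.

    In the Multiple Instance Learning view, only the sentence-level label
    is supervised; thresholding the attention distribution converts the
    model's soft token weighting into hard token-level explanation
    predictions in a zero-shot fashion.
    """
    return [i for i, a in enumerate(attention) if a > threshold]

# Hypothetical attention weights over a 5-token hypothesis:
weights = [0.02, 0.45, 0.05, 0.40, 0.08]
print(token_explanations(weights, threshold=0.1))  # tokens 1 and 3 selected
```

In practice the attention distribution would come from the trained LSTM classifier, and the threshold is a tunable hyperparameter trading precision against recall of explanation tokens.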


