Learning to Faithfully Rationalize by Construction

04/30/2020
by Sarthak Jain et al.

In many settings it is important for one to be able to understand why a model made a particular prediction. In NLP this often entails extracting snippets of an input text "responsible for" corresponding model output; when such a snippet comprises tokens that indeed informed the model's prediction, it is a faithful explanation. In some settings, faithfulness may be critical to ensure transparency. Lei et al. (2016) proposed a model to produce faithful rationales for neural text classification by defining independent snippet extraction and prediction modules. However, the discrete selection over input tokens performed by this method complicates training, leading to high variance and requiring careful hyperparameter tuning. We propose a simpler variant of this approach that provides faithful explanations by construction. In our scheme, named FRESH, arbitrary feature importance scores (e.g., gradients from a trained model) are used to induce binary labels over token inputs, which an extractor can be trained to predict. An independent classifier module is then trained exclusively on snippets provided by the extractor; these snippets thus constitute faithful explanations, even if the classifier is arbitrarily complex. In both automatic and manual evaluations we find that variants of this simple framework yield predictive performance superior to "end-to-end" approaches, while being more general and easier to train. Code is available at https://github.com/successar/FRESH.
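The decoupling the abstract describes can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the stub `importance_scores` stands in for any real saliency method (e.g., gradients from a trained support model), `binarize_top_k` is one possible way to induce binary rationale labels, and the supervised training of the extractor and classifier modules is elided.

```python
# Hypothetical sketch of the FRESH pipeline (names are illustrative, not from the repo).
# Stage 1: score tokens with any feature-importance method (stubbed here).
# Stage 2: binarize scores into rationale labels, e.g., by taking the top-k tokens.
# Stage 3: an extractor is trained on those labels; a classifier is then trained
#          and evaluated on the extracted snippets only, so its explanations are
#          faithful by construction.

import numpy as np

def importance_scores(tokens):
    """Stand-in for a real saliency method, e.g. gradients from a trained model."""
    rng = np.random.default_rng(0)
    return rng.random(len(tokens))

def binarize_top_k(scores, k):
    """Label the k highest-scoring tokens as the rationale (1), the rest as 0."""
    labels = np.zeros_like(scores, dtype=int)
    labels[np.argsort(scores)[-k:]] = 1
    return labels

def extract_snippet(tokens, labels):
    """Keep only the tokens marked as rationale; this is all the classifier sees."""
    return [t for t, keep in zip(tokens, labels) if keep]

tokens = "the movie was surprisingly good despite slow pacing".split()
scores = importance_scores(tokens)         # Stage 1: arbitrary importance scores
labels = binarize_top_k(scores, k=3)       # Stage 2: supervision for the extractor
snippet = extract_snippet(tokens, labels)  # Stage 3: input to the independent classifier
print(snippet)
```

Because the classifier never observes tokens outside the snippet, the snippet is a faithful explanation of its prediction regardless of how complex the classifier itself is.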


Related research

08/29/2018 - Rule induction for global explanation of trained models
Understanding the behavior of a trained network and finding explanations...

02/26/2019 - Attention is not Explanation
Attention mechanisms have seen wide adoption in neural NLP models. In ad...

06/01/2021 - Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals
Feature importance (FI) estimates are a popular form of explanation, and...

06/15/2021 - SSMix: Saliency-Based Span Mixup for Text Classification
Data augmentation with mixup has shown to be effective on various comput...

05/20/2018 - Abstractive Text Classification Using Sequence-to-convolution Neural Networks
We propose a new deep neural network model and its training scheme for t...

04/16/2021 - Variable Instance-Level Explainability for Text Classification
Despite the high accuracy of pretrained transformer networks in text cla...

10/14/2021 - The Irrationality of Neural Rationale Models
Neural rationale models are popular for interpretable predictions of NLP...
