
On Guaranteed Optimal Robust Explanations for NLP Models

05/08/2021
by   Emanuele La Malfa, et al.

We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a user-defined cost function, such as the length of the explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence on hard instances. We show how our method can be configured with different perturbation sets in the embedding space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from the SST, Twitter and IMDB datasets, demonstrating the effectiveness of the derived explanations.
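The abstract's core notion — a minimal subset of kept words whose fixing guarantees prediction invariance under any perturbation of the remaining words — can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's actual method: the bag-of-words "model", the vocabulary-swap perturbation set (a crude stand-in for a bounded ball in embedding space), and the deletion-based search, which yields only a subset-minimal explanation, whereas the paper's hitting-set and maximum-universal-subset solvers compute cost-optimal ones.

```python
from itertools import product

# Toy stand-in for an NLP classifier: positive iff strictly more
# positive than negative words appear (the paper targets neural networks).
POS, NEG = {"great", "good"}, {"bad", "awful"}
VOCAB = POS | NEG | {"movie", "film"}

def predict(words):
    return int(sum(w in POS for w in words) > sum(w in NEG for w in words))

def robust(words, keep):
    """Is the prediction invariant when every left-out word is replaced
    by any vocabulary word? (Stand-in for a bounded embedding-space
    perturbation of the left-out words.)"""
    target = predict(words)
    free = [i for i in range(len(words)) if i not in keep]
    for combo in product(VOCAB, repeat=len(free)):
        perturbed = list(words)
        for i, w in zip(free, combo):
            perturbed[i] = w
        if predict(perturbed) != target:
            return False
    return True

def minimal_explanation(words):
    """Deletion-based search for a subset-minimal robust explanation:
    try dropping each word; keep it only if dropping breaks robustness."""
    keep = set(range(len(words)))
    for i in range(len(words)):
        if robust(words, keep - {i}):
            keep.discard(i)
    return keep

# "movie" can be dropped (any replacement keeps the prediction positive),
# but "great" and "good" cannot, so the explanation is word indices {0, 1}.
print(sorted(minimal_explanation(["great", "good", "movie"])))  # [0, 1]
```

The deletion-based loop makes only a linear number of robustness queries, but each query here enumerates the full perturbation set; the paper's algorithms instead delegate such checks to efficient entailment/verification oracles and add pruning to converge on hard instances.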

