On Guaranteed Optimal Robust Explanations for NLP Models

05/08/2021
by Emanuele La Malfa, et al.

We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality w.r.t. a user-defined cost function, such as the length of the explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence on hard instances. We show how our method can be configured with different perturbation sets in the embedded space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from the SST, Twitter and IMDB datasets, demonstrating the effectiveness of the derived explanations.
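For intuition, the implicit hitting-set approach can be pictured as a loop that alternates between proposing a minimum-cost candidate explanation and asking a verifier to refute it: each refutation is a set of left-out words whose bounded perturbation flips the prediction, and any valid explanation must intersect ("hit") every such set. The Python sketch below illustrates this duality; it is not the authors' implementation. The oracle is_robust and the brute-force hitting-set solver are hypothetical stand-ins (a real system would use, e.g., a neural network verifier and a MaxSAT/ILP solver).

# A minimal sketch of an implicit hitting-set loop, NOT the paper's code.
# Assumed oracle: is_robust(fixed) returns (True, None) if the prediction
# is invariant under bounded embedding perturbations of every word NOT in
# `fixed`, else (False, cex), where cex is a set of word indices whose
# joint perturbation flips the prediction.

from itertools import combinations

def min_hitting_set(sets, universe, cost):
    # Brute-force minimum-cost hitting set; tractable only for short texts.
    best, best_cost = set(universe), sum(cost[i] for i in universe)
    for k in range(len(universe) + 1):
        for cand in combinations(sorted(universe), k):
            if all(set(cand) & s for s in sets):
                c = sum(cost[i] for i in cand)
                if c < best_cost:
                    best, best_cost = set(cand), c
    return best

def optimal_robust_explanation(words, is_robust, cost=None):
    # Uniform cost recovers the shortest explanation.
    cost = cost or {i: 1 for i in range(len(words))}
    universe = set(range(len(words)))
    counterexamples = []  # word sets whose perturbation flips the prediction
    while True:
        # Any valid explanation must hit every counterexample seen so far.
        candidate = min_hitting_set(counterexamples, universe, cost)
        robust, cex = is_robust(candidate)
        if robust:
            # Robust and minimum-cost over all refutations found, hence
            # optimal whenever the oracle is sound and complete.
            return [words[i] for i in sorted(candidate)]
        counterexamples.append(set(cex))

Under this reading, include/exclude constraints on biased terms, as in the bias-detection use case above, could be encoded by forcing or forbidding the corresponding indices in the hitting-set search.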

research
11/25/2022

Testing the effectiveness of saliency-based explainability in NLP using randomized survey-based experiments

As the applications of Natural Language Processing (NLP) in sensitive ar...
research
04/19/2022

A survey on improving NLP models with human explanations

Training a model with access to human explanations can improve data effi...
research
03/21/2023

Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples)

We build on a recently proposed method for stepwise explaining solutions...
research
06/09/2021

On Sample Based Explanation Methods for NLP: Efficiency, Faithfulness, and Semantic Evaluation

In the recent advances of natural language processing, the scale of the ...
research
04/09/2021

Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks

Explaining neural network models is important for increasing their trust...
research
06/22/2021

On the Diversity and Limits of Human Explanations

A growing effort in NLP aims to build datasets of human explanations. Ho...
research
09/08/2022

ReX: A Framework for Generating Local Explanations to Recurrent Neural Networks

We propose a general framework to adapt various local explanation techni...
