Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions

05/14/2020
by Xiaochuang Han, et al.

Modern deep learning models for NLP are notoriously opaque. This has motivated the development of methods for interpreting such models, e.g., via gradient-based saliency maps or the visualization of attention weights. Such approaches aim to provide explanations for a particular model prediction by highlighting important words in the corresponding input text. While this might be useful for tasks where decisions are explicitly influenced by individual tokens in the input, we suspect that such highlighting is not suitable for tasks where model decisions should be driven by more complex reasoning. In this work, we investigate the use of influence functions for NLP, providing an alternative approach to interpreting neural text classifiers. Influence functions explain the decisions of a model by identifying influential training examples. Despite the promise of this approach, influence functions have not yet been extensively evaluated in the context of NLP, a gap addressed by this work. We conduct a comparison between influence functions and common word-saliency methods on representative tasks. As suspected, we find that influence functions are particularly useful for natural language inference, a task in which 'saliency maps' may not have a clear interpretation. Furthermore, we develop a new quantitative measure based on influence functions that can reveal artifacts in training data.
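
For readers unfamiliar with the technique, the snippet below is a minimal, hypothetical sketch of gradient-based instance attribution in PyTorch: each training example is scored by how its loss gradient aligns with the test example's loss gradient. The toy model and data are assumptions for illustration only, and the inverse Hessian in the full influence-function formulation (Koh and Liang, 2017) is crudely approximated by the identity; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy data standing in for featurized text: 8 training points,
# 1 test point, binary labels. (Illustration only; not the paper's setup.)
X_train, y_train = torch.randn(8, 4), torch.randint(0, 2, (8,))
x_test, y_test = torch.randn(1, 4), torch.tensor([1])

model = nn.Linear(4, 2)          # stand-in for a trained text classifier
loss_fn = nn.CrossEntropyLoss()

def grad_vector(x, y):
    """Flattened gradient of the loss at (x, y) w.r.t. all model parameters."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

g_test = grad_vector(x_test, y_test)

# Influence-style score per training example: -g_test^T H^{-1} g_train,
# with the inverse Hessian H^{-1} approximated by the identity in this sketch.
scores = torch.stack([
    -torch.dot(g_test, grad_vector(X_train[i:i + 1], y_train[i:i + 1]))
    for i in range(len(X_train))
])

# Training examples whose gradients most strongly affect this test prediction.
print("most influential train indices:",
      scores.abs().argsort(descending=True)[:3].tolist())
```

In the full method, the identity approximation above would be replaced by an inverse-Hessian-vector product (typically estimated with LiSSA-style stochastic approximation), which is what distinguishes influence functions from plain gradient similarity.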

Related research

10/13/2022: Constructing Natural Language Explanations via Saliency Map Verbalization
Saliency maps can explain a neural model's prediction by identifying imp...

10/12/2020: Gradient-based Analysis of NLP Models is Manipulable
Gradient-based analysis methods, such as saliency map visualizations and...

08/12/2018: Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference
Deep learning models have achieved remarkable success in natural languag...

09/19/2019: AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
Neural NLP models are increasingly accurate but are imperfect and opaque...

05/26/2023: Theoretical and Practical Perspectives on what Influence Functions Do
Influence functions (IF) have been seen as a technique for explaining mo...

10/07/2021: Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates
Among the most critical limitations of deep learning NLP models are thei...

08/13/2022: Interpreting BERT-based Text Similarity via Activation and Saliency Maps
Recently, there has been growing interest in the ability of Transformer-...
