Evaluating neural network explanation methods using hybrid documents and morphological agreement prediction

01/19/2018
by Nina Poerner, et al.

We propose two novel paradigms for evaluating neural network explanations in NLP: the first works on hybrid documents, the second exploits morphosyntactic agreement. Neither paradigm requires manual annotations; instead, a relevance ground truth is generated automatically. In our experiments, successful explanations for Long Short-Term Memory networks (LSTMs) were produced by a decomposition of memory cells (Murdoch & Szlam, 2017), while for convolutional neural networks, a gradient-based method (Denil et al., 2014) worked well. We also introduce LIMSSE, a substring-based extension of LIME (Ribeiro et al., 2016), which produces the most successful explanations in the hybrid document experiment.
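As a concrete illustration of the hybrid-document paradigm, here is a minimal sketch of the evaluation loop under stated assumptions: a generic text classifier `model` with a `predict` method, and an explanation function `explain` that returns one relevance score per input token. All helper names (`make_hybrid`, `pointing_game_score`, the `.tokens`/`.label` attributes) are hypothetical and not taken from the paper's released code.

import numpy as np

def make_hybrid(doc_a, doc_b, rng):
    """Concatenate fragments of two documents with different gold labels.
    Returns the hybrid token list and, per token, its source label
    (the gold label of the document the token came from)."""
    cut_a = rng.integers(1, len(doc_a.tokens))  # keep a prefix of doc_a
    cut_b = rng.integers(1, len(doc_b.tokens))  # keep a suffix of doc_b
    tokens = doc_a.tokens[:cut_a] + doc_b.tokens[cut_b:]
    origins = [doc_a.label] * cut_a + [doc_b.label] * (len(doc_b.tokens) - cut_b)
    return tokens, origins

def pointing_game_score(model, explain, hybrids):
    """Fraction of hybrid documents in which the single most relevant
    token (according to `explain`) originates from a document whose
    gold label matches the class the model predicts for the hybrid."""
    hits = 0
    for tokens, origins in hybrids:
        predicted = model.predict(tokens)              # predicted class id
        relevance = explain(model, tokens, predicted)  # one score per token
        hits += int(origins[int(np.argmax(relevance))] == predicted)
    return hits / len(hybrids)

The same loop evaluates any of the compared explanation methods (memory-cell decomposition, gradient-based relevance, LIMSSE); only the `explain` callable changes.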

Related research

08/28/2018 · Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?
Character-level features are currently used in different neural network-...

03/01/2019 · Aggregating explainability methods for neural networks stabilizes explanations
Despite a growing literature on explaining neural networks, no consensus...

11/06/2017 · Learning Solving Procedure for Artificial Neural Network
It is expected that progress toward true artificial intelligence will be...

09/19/2019 · Highlighting Bias with Explainable Neural-Symbolic Visual Reasoning
Many high-performance models suffer from a lack of interpretability. The...

09/11/2018 · Response Characterization for Auditing Cell Dynamics in Long Short-term Memory Networks
In this paper, we introduce a novel method to interpret recurrent neural...

03/16/2020 · Towards Ground Truth Evaluation of Visual Explanations
Several methods have been proposed to explain the decisions of neural ne...

06/30/2022 · Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations
We propose a margin-based loss for vision-language model pretraining tha...
