Understanding Neural Networks through Representation Erasure

12/24/2016
by Jiwei Li, et al.

While neural networks have been successfully applied to many natural language processing tasks, this success comes at the cost of interpretability. In this paper, we propose a general methodology for analyzing and interpreting a neural model's decisions by observing the effect on the model of erasing various parts of the representation, such as input word-vector dimensions, intermediate hidden units, or input words. We present several approaches to analyzing the effects of such erasure, from computing the relative difference in evaluation metrics to using reinforcement learning to erase the minimum set of input words needed to flip a neural model's decision. In a comprehensive analysis of multiple NLP tasks, including linguistic feature classification, sentence-level sentiment analysis, and document-level sentiment aspect prediction, we show that the proposed methodology not only offers clear explanations of neural model decisions but also provides a way to conduct error analysis on neural models.
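
To make the erasure idea concrete, here is a minimal sketch in Python of word-level erasure on a toy bag-of-embeddings classifier; the classifier, its random weights, and the example sentence are hypothetical stand-ins, not the paper's models or data. It scores each word by the relative change in the log-likelihood of a chosen label when that word is removed, in the spirit of the paper's relative-difference measure:

import numpy as np

# Toy setup: random word embeddings and a random 2-class softmax classifier.
# These are illustrative placeholders, not trained parameters.
rng = np.random.default_rng(0)
VOCAB = ["the", "movie", "was", "utterly", "wonderful"]
EMB = {w: rng.normal(size=8) for w in VOCAB}   # 8-dim toy embeddings
W = rng.normal(size=(2, 8))                    # toy softmax weights

def log_likelihood(words, label):
    """Log-probability the toy model assigns to `label` for a sentence."""
    h = np.mean([EMB[w] for w in words], axis=0)       # bag of embeddings
    logits = W @ h
    return logits[label] - np.log(np.exp(logits).sum())

def word_importance(words, label):
    """Relative drop in log-likelihood when each word is erased in turn
    (a sketch of the paper's relative-difference measure)."""
    base = log_likelihood(words, label)
    scores = {}
    for i, w in enumerate(words):
        erased = words[:i] + words[i + 1:]             # erase one word
        scores[w] = (base - log_likelihood(erased, label)) / abs(base)
    return scores

print(word_importance(["the", "movie", "was", "utterly", "wonderful"], label=1))

The same loop generalizes directly to erasing word-vector dimensions (zeroing one coordinate of every embedding) or intermediate hidden units; the reinforcement-learning variant described in the abstract instead searches for the smallest set of words whose joint erasure flips the model's decision.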


Related research

11/06/2019
SentiLR: Linguistic Knowledge Enhanced Language Representation for Sentiment Analysis
Most of the existing pre-trained language representation models neglect ...

09/03/2017
Investigating how well contextual features are captured by bi-directional recurrent neural network models
Learning algorithms for natural language processing (NLP) tasks traditio...

04/14/2021
Distributed Word Representation in Tsetlin Machine
Tsetlin Machine (TM) is an interpretable pattern recognition algorithm b...

10/30/2017
Understanding Hidden Memories of Recurrent Neural Networks
Recurrent neural networks (RNNs) have been successfully applied to vario...

10/21/2019
Human-Like Decision Making: Document-level Aspect Sentiment Classification via Hierarchical Reinforcement Learning
Recently, neural networks have shown promising results on Document-level...

05/24/2022
Interpretation Quality Score for Measuring the Quality of interpretability methods
Machine learning (ML) models have been applied to a wide range of natura...

04/17/2020
How recurrent networks implement contextual processing in sentiment analysis
Neural networks have a remarkable capacity for contextual processing–usi...
