Understanding Hidden Memories of Recurrent Neural Networks

10/30/2017
by Yao Ming, et al.

Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and have achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements to their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response, and visualize the co-clustering results as memory chips and word clouds to provide more structured knowledge of an RNN's hidden states. We also propose a glyph-based sequence visualization based on aggregated information to analyze the behavior of an RNN's hidden states at the sentence level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts.
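
To make the core idea concrete, here is a minimal sketch, not the authors' code, of the unit-word co-clustering pipeline the abstract describes. It estimates each hidden unit's "expected response" to each word (taken here as the mean change in the unit's activation when that word is read) and then co-clusters the resulting unit-by-word matrix. The toy GRU, random corpus, hyperparameters, and the use of scikit-learn's SpectralCoclustering in place of the paper's own co-clustering algorithm are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import SpectralCoclustering

vocab_size, hidden_size = 200, 32
rng = np.random.default_rng(0)
torch.manual_seed(0)

# Toy untrained GRU standing in for a trained NLP model (assumption).
embed = nn.Embedding(vocab_size, 16)
rnn = nn.GRU(16, hidden_size, batch_first=True)

# Toy corpus: 50 random "sentences" of 20 token ids each (assumption).
corpus = [rng.integers(0, vocab_size, size=20) for _ in range(50)]

# Accumulate E[h_t - h_{t-1} | word w] for every (unit, word) pair:
# one plausible reading of "expected response to input texts".
sums = np.zeros((hidden_size, vocab_size))
counts = np.zeros(vocab_size)
with torch.no_grad():
    for sent in corpus:
        x = torch.tensor(sent).unsqueeze(0)   # (1, T)
        h, _ = rnn(embed(x))                  # (1, T, hidden)
        h = h.squeeze(0).numpy()
        # Per-step change in hidden state, with h_0 measured against zero.
        deltas = np.diff(np.vstack([np.zeros((1, hidden_size)), h]), axis=0)
        for t, w in enumerate(sent):
            sums[:, w] += deltas[t]
            counts[w] += 1

response = sums / np.maximum(counts, 1)       # expected-response matrix

# Co-cluster hidden units (rows) and words (columns); magnitudes keep
# the spectral normalization well-behaved on this signed matrix.
model = SpectralCoclustering(n_clusters=5, random_state=0)
model.fit(np.abs(response) + 1e-9)

unit_groups = model.row_labels_     # -> the paper's "memory chip" layout
word_groups = model.column_labels_  # -> per-cluster word clouds
print(unit_groups[:10], word_groups[:10])
```

In a real analysis the response matrix would come from a trained model run over a genuine corpus; the co-clustering step then groups units that respond similarly with the words that drive them, which is the structure the memory-chip and word-cloud views visualize.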

Related research

02/07/2017 · Comparative Study of CNN and RNN for Natural Language Processing
Deep neural networks (DNN) have revolutionized the field of natural lang...

03/02/2023 · DeepSeer: Interactive RNN Explanation and Debugging via State Abstraction
Recurrent Neural Networks (RNNs) have been widely used in Natural Langua...

08/01/2019 · Visualizing RNN States with Predictive Semantic Encodings
Recurrent Neural Networks are an effective and prevalent tool used to mo...

05/05/2022 · Implicit N-grams Induced by Recurrence
Although self-attention based models such as Transformers have achieved ...

08/05/2018 · LISA: Explaining Recurrent Neural Network Judgments via Layer-wIse Semantic Accumulation and Example to Pattern Transformation
Recurrent neural networks (RNNs) are temporal networks and cumulative in...

06/03/2016 · Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations
We propose zoneout, a novel method for regularizing RNNs. At each timest...

12/24/2016 · Understanding Neural Networks through Representation Erasure
While neural networks have been successfully applied to many natural lan...
