DeepSeer: Interactive RNN Explanation and Debugging via State Abstraction

03/02/2023
by Zhijie Wang, et al.

Recurrent Neural Networks (RNNs) have been widely used in Natural Language Processing (NLP) tasks given their superior performance in processing sequential data. However, RNNs are challenging to interpret and debug due to their inherent complexity and lack of transparency. While many explainable AI (XAI) techniques have been proposed for RNNs, most of them only support local explanations rather than global explanations. In this paper, we present DeepSeer, an interactive system that provides both global and local explanations of RNN behavior in multiple tightly coordinated views for model understanding and debugging. The core of DeepSeer is a state abstraction method that bundles semantically similar hidden states in an RNN model and abstracts the model as a finite state machine. Users can explore the global model behavior by inspecting text patterns associated with each state and the transitions between states. Users can also dive into individual predictions by inspecting the state trace and intermediate prediction results of a given input. A between-subjects user study with 28 participants shows that, compared with a popular XAI technique, LIME, participants using DeepSeer made deeper and more comprehensive assessments of RNN model behavior, identified the root causes of incorrect predictions more accurately, and came up with more actionable plans to improve model performance.
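The abstract describes the core idea: bundle semantically similar hidden states and treat the RNN as a finite state machine. Below is a minimal sketch of that idea in Python, assuming per-token hidden states have already been extracted from a trained RNN; the choice of k-means and all function names here are illustrative assumptions, not DeepSeer's actual implementation.

```python
# A minimal sketch of hidden-state abstraction, not DeepSeer's exact method.
# Assumes hidden states were already extracted from a trained RNN.
import numpy as np
from sklearn.cluster import KMeans

def abstract_states(hidden_state_seqs, n_states=10):
    """Cluster per-token hidden states and build an abstract FSM.

    hidden_state_seqs: list of arrays, one per input sequence,
        each of shape (seq_len, hidden_dim).
    Returns the fitted clusterer and a row-normalized transition matrix.
    """
    # Pool all hidden states and group semantically similar ones
    # (k-means is an assumed clustering choice for illustration).
    all_states = np.vstack(hidden_state_seqs)
    clusterer = KMeans(n_clusters=n_states, n_init=10).fit(all_states)

    # Count transitions between abstract states along each sequence.
    transitions = np.zeros((n_states, n_states))
    for seq in hidden_state_seqs:
        labels = clusterer.predict(seq)
        for src, dst in zip(labels[:-1], labels[1:]):
            transitions[src, dst] += 1

    # Normalize rows into transition probabilities, guarding empty rows.
    row_sums = transitions.sum(axis=1, keepdims=True)
    probs = np.divide(transitions, row_sums,
                      out=np.zeros_like(transitions), where=row_sums > 0)
    return clusterer, probs

def trace(clusterer, hidden_state_seq):
    """Abstract-state trace for one input: one state label per token."""
    return clusterer.predict(hidden_state_seq).tolist()
```

In this sketch, the transition matrix plays the role of the global view (how the model moves between abstract states across a corpus), while `trace` mirrors the local view (the state sequence a single input follows), matching the global/local split the abstract describes.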


