Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics

06/25/2019
by Niru Maheswaranathan, et al.

Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it: to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism is present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.
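The analysis pipeline described in the abstract (find approximate fixed points of the recurrent dynamics, then linearize around them) can be sketched in a few lines. The snippet below is a minimal illustration for a vanilla RNN with placeholder random weights, not the paper's trained networks or released code: it minimizes a speed function q(h) = 0.5 * ||F(h) - h||^2 with the input clamped to zero, then inspects the eigenvalues of the Jacobian at each recovered fixed point, where eigenvalues near 1 mark the slow, integrator-like modes characteristic of a line attractor.

```python
import numpy as np
from scipy.optimize import minimize

# Vanilla RNN dynamics h_{t+1} = tanh(W h_t + U x_t + b). The weights here
# are random stand-ins; in the paper they come from a network trained on
# sentiment classification.
rng = np.random.default_rng(0)
n = 64                                         # hidden state dimension (illustrative)
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
b = np.zeros(n)

def step(h):
    """One step of the recurrent dynamics with the input clamped to zero."""
    return np.tanh(W @ h + b)                  # input term U @ x omitted since x = 0

def q(h):
    """Speed function: 0.5 * ||F(h) - h||^2, exactly zero at a fixed point."""
    dh = step(h) - h
    return 0.5 * dh @ dh

# Find candidate fixed points by minimizing q from many initial states.
# (The paper seeds these from states visited while processing real reviews;
# random seeds are used here for brevity.)
fixed_points = []
for _ in range(20):
    h0 = rng.normal(0, 0.5, n)
    res = minimize(q, h0, method="L-BFGS-B", tol=1e-12)
    if q(res.x) < 1e-10:
        fixed_points.append(res.x)

# Linearize around each fixed point: J = dF/dh = diag(1 - tanh^2(W h + b)) W.
# A line attractor appears as one eigenvalue near 1 (a slow mode along the
# line) at each fixed point, with the rest well inside the unit circle.
for h_star in fixed_points:
    J = np.diag(1.0 - np.tanh(W @ h_star + b) ** 2) @ W
    eigvals = np.linalg.eigvals(J)
    print("largest |eigenvalue|:", np.abs(eigvals).max())
```

The same procedure extends to gated architectures such as LSTMs and GRUs by treating the full state-update map as F and differentiating it numerically or with autodiff.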


Related research

10/28/2020 · The geometry of integration in text classification RNNs
Despite the widespread application of recurrent neural networks (RNNs) a...

11/01/2021 · Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems
Recurrent neural networks (RNNs) are powerful models for processing time...

07/19/2019 · Universality and individuality in neural dynamics across large populations of recurrent networks
Task-based modeling with recurrent neural networks (RNNs) has emerged as...

04/17/2020 · How recurrent networks implement contextual processing in sentiment analysis
Neural networks have a remarkable capacity for contextual processing–usi...

12/07/2022 · Expressive architectures enhance interpretability of dynamics-based neural population models
Artificial neural networks that can recover latent dynamics from recorde...

05/05/2022 · Implicit N-grams Induced by Recurrence
Although self-attention based models such as Transformers have achieved ...

08/23/2023 · Characterising representation dynamics in recurrent neural networks for object recognition
Recurrent neural networks (RNNs) have yielded promising results for both...
