Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems

11/01/2021
by   Jimmy T. H. Smith, et al.

Recurrent neural networks (RNNs) are powerful models for processing time-series data, but it remains challenging to understand how they function. Improving this understanding is of substantial interest to both the machine learning and neuroscience communities. The framework of reverse engineering a trained RNN by linearizing around its fixed points has provided insight, but the approach has significant challenges. These include difficulty choosing which fixed point to expand around when studying RNN dynamics and error accumulation when reconstructing the nonlinear dynamics with the linearized dynamics. We present a new model that overcomes these limitations by co-training an RNN with a novel switching linear dynamical system (SLDS) formulation. A first-order Taylor series expansion of the co-trained RNN and an auxiliary function trained to pick out the RNN's fixed points govern the SLDS dynamics. The results are a trained SLDS variant that closely approximates the RNN, an auxiliary function that can produce a fixed point for each point in state-space, and a trained nonlinear RNN whose dynamics have been regularized such that its first-order terms perform the computation, if possible. This model removes the post-training fixed point optimization and allows us to unambiguously study the learned dynamics of the SLDS at any point in state-space. It also generalizes SLDS models to continuous manifolds of switching points while sharing parameters across switches. We validate the utility of the model on two synthetic tasks relevant to previous work reverse engineering RNNs. We then show that our model can be used as a drop-in component in more complex architectures, such as LFADS, and apply this LFADS hybrid to analyze single-trial spiking activity from the motor system of a non-human primate.
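The fixed-point linearization that the SLDS formulation builds on can be illustrated with a minimal sketch. This toy example is an assumption for illustration only: it uses a hand-picked two-unit tanh RNN and finds a fixed point by forward iteration, whereas the paper's model instead trains an auxiliary function to produce a fixed point for each state-space location.

```python
import numpy as np

# Toy autonomous RNN: h_{t+1} = F(h_t) = tanh(W h_t + b).
# W is chosen with spectral radius < 1 so the map is contractive
# and simple iteration converges to a fixed point.
W = np.array([[0.5, -0.3],
              [0.2,  0.4]])
b = np.array([0.1, -0.2])

def F(h):
    return np.tanh(W @ h + b)

# Find a fixed point h* satisfying F(h*) = h* by forward iteration.
h = np.zeros(2)
for _ in range(1000):
    h = F(h)
h_star = h

# Jacobian of F at h*: J = diag(1 - tanh^2(W h* + b)) @ W.
pre = W @ h_star + b
J = np.diag(1.0 - np.tanh(pre) ** 2) @ W

# First-order Taylor expansion around the fixed point:
#   F(h) ~ h* + J (h - h*)
def F_lin(h):
    return h_star + J @ (h - h_star)

# Near h*, the linearized dynamics closely track the nonlinear RNN;
# the residual is second order in the size of the perturbation.
rng = np.random.default_rng(0)
h0 = h_star + 0.01 * rng.standard_normal(2)
err = np.linalg.norm(F(h0) - F_lin(h0))
print(err)
```

An SLDS in the spirit of the paper would apply this expansion around a different fixed point in each region of state-space, switching between the resulting linear systems while sharing the underlying RNN parameters.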


