Expressive architectures enhance interpretability of dynamics-based neural population models

12/07/2022
by Andrew R. Sedler, et al.

Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering three latent chaotic attractors from simulated neural datasets. We found that SAEs with widely-used recurrent neural network (RNN)-based dynamics were unable to infer accurate rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed point structure. We attribute this finding to the fact that NODEs allow use of multi-layer perceptrons (MLPs) of arbitrary capacity to model the vector field. Decoupling the expressivity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-dimensional dynamics where RNN cells fail. The suboptimal interpretability of widely-used RNN-based dynamics may motivate substitution of alternative architectures, such as NODEs, that enable learning of accurate dynamics in low-dimensional latent spaces.
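The decoupling the abstract describes can be illustrated with a minimal sketch: a NODE's vector field is an MLP whose hidden width sets model capacity, while the latent state dimensionality is fixed independently. The sizes, weights, and Euler integrator below are illustrative assumptions, not the authors' implementation (which would fit these parameters to neural data and typically use an adaptive ODE solver).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 3-D latent state (e.g., a chaotic attractor)
# evolved by an MLP vector field. HIDDEN can grow to add capacity
# without changing LATENT_DIM -- the decoupling described above.
LATENT_DIM = 3
HIDDEN = 64

# Randomly initialized MLP weights (illustration only; in a trained
# SAE these would be learned from data).
W1 = rng.normal(scale=0.1, size=(HIDDEN, LATENT_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(LATENT_DIM, HIDDEN))
b2 = np.zeros(LATENT_DIM)

def vector_field(z):
    """One-hidden-layer MLP f(z) giving dz/dt."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

def integrate(z0, dt=0.01, steps=100):
    """Fixed-step Euler integration of the NODE (a simple stand-in
    for an adaptive solver)."""
    traj = [z0]
    z = z0
    for _ in range(steps):
        z = z + dt * vector_field(z)
        traj.append(z)
    return np.stack(traj)

traj = integrate(np.ones(LATENT_DIM))
print(traj.shape)  # (101, 3): the trajectory stays in the 3-D latent space
```

An RNN cell, by contrast, ties its state size to its parameter count, so matching the true low latent dimensionality also caps the expressivity of the dynamics it can represent.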


