Enabling hyperparameter optimization in sequential autoencoders for spiking neural data

08/21/2019
by Mohammad Reza Keshtkaran, et al.

Continuing advances in neural interfaces have enabled simultaneous monitoring of spiking activity from hundreds to thousands of neurons. To interpret these large-scale data, several methods have been proposed to infer latent dynamical structure from high-dimensional datasets. One recent line of work uses recurrent neural networks in a sequential autoencoder (SAE) framework to uncover dynamics. SAEs are an appealing option for modeling nonlinear dynamical systems, and enable a precise link between neural activity and behavior on a single-trial basis. However, the very large parameter count and complexity of SAEs relative to other models have caused concern that SAEs may only perform well on very large training sets. We hypothesized that with a method to systematically optimize hyperparameters (HPs), SAEs might perform well even in cases of limited training data. Such a breakthrough would greatly extend their applicability. However, we find that SAEs applied to spiking neural data are prone to a particular form of overfitting that cannot be detected using standard validation metrics, which prevents standard HP searches. We develop and test two potential solutions: an alternate validation method ("sample validation") and a novel regularization method ("coordinated dropout"). These innovations prevent overfitting quite effectively, and allow us to test whether SAEs can achieve good performance on limited data through large-scale HP optimization. When applied to data from motor cortex recorded while monkeys made reaches in various directions, large-scale HP optimization allowed SAEs to better maintain performance for small dataset sizes. Our results should greatly extend the applicability of SAEs in extracting latent dynamics from sparse, multidimensional data, such as neural population spiking activity.
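
To make the masking idea concrete, below is a minimal NumPy sketch of a coordinated-dropout step, assuming a Poisson-distributed spike-count array per batch; the function name, the `keep_prob` value, and the array shapes are illustrative, not the authors' implementation. The key point is that the elements zeroed at the input are exactly the elements that contribute to the reconstruction loss, so the network can never satisfy the objective by copying an observed spike count straight through to its own reconstruction. Sample validation applies the same element-wise idea to model selection: a fixed random subset of elements is excluded from the training loss and used only to compute the validation metric.

```python
import numpy as np

def coordinated_dropout(spikes, keep_prob=0.7, rng=None):
    """Sketch of one coordinated-dropout step (names/shapes are illustrative).

    Returns the masked network input and a boolean mask selecting the
    elements on which the reconstruction loss should be evaluated.
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(spikes.shape) < keep_prob              # True -> visible at input
    masked_input = np.where(keep, spikes, 0.0) / keep_prob   # standard dropout rescaling
    loss_mask = ~keep                                        # loss only on dropped elements
    return masked_input, loss_mask

# Usage: mask each batch before the forward pass, then weight the
# per-element likelihood (e.g., Poisson NLL) by loss_mask when reducing
# to a scalar training loss.
spikes = np.random.poisson(0.5, size=(16, 100, 50))          # (trials, time bins, neurons)
x_in, loss_mask = coordinated_dropout(spikes)
```

Because every element serves as a training target only on steps where it was hidden at the input, the pass-through solution that standard validation metrics fail to detect is no longer available to the model.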
