The Computational Structure of Spike Trains

12/30/2009
by Robert Haslinger et al.

Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
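The causal state splitting reconstruction (CSSR) idea described above can be sketched in a few lines: estimate the empirical next-spike probability conditioned on each recent history of the binary spike train, then merge histories whose predictive distributions are statistically indistinguishable into the same causal state. The sketch below is a simplified illustration, not the authors' implementation; the function name, the greedy merging, and the tolerance threshold `tol` (standing in for CSSR's proper hypothesis tests) are assumptions for demonstration.

```python
from collections import defaultdict

def estimate_causal_states(spikes, max_hist=3, tol=0.1):
    """Group spike-train histories into approximate causal states.

    Histories (suffixes of length max_hist) whose empirical next-spike
    probabilities agree within `tol` are merged into one state -- a
    simplified stand-in for the statistical tests CSSR actually uses.
    """
    # history -> [count of next-symbol 0, count of next-symbol 1]
    counts = defaultdict(lambda: [0, 0])
    for t in range(max_hist, len(spikes)):
        hist = tuple(spikes[t - max_hist:t])
        counts[hist][spikes[t]] += 1

    # empirical next-spike probability for each observed history
    probs = {h: c[1] / (c[0] + c[1]) for h, c in counts.items()}

    # greedily merge histories with similar predictive distributions
    states = []  # each state: ([histories], representative probability)
    for h, p in sorted(probs.items()):
        for st in states:
            if abs(st[1] - p) < tol:
                st[0].append(h)
                break
        else:
            states.append(([h], p))
    return states
```

For example, a neuron with an absolute refractory period (no spike immediately after a spike, otherwise spiking with probability 0.5) has exactly two causal states: "just spiked" and "did not just spike". Run on a long simulated train from that process, the sketch recovers two states, one grouping all histories ending in a spike and one grouping the rest.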



