Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex

10/31/2015
by Jeff Hawkins, et al.

Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.
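The abstract compresses the paper's core mechanism, so a minimal sketch may help make it concrete: each neuron carries many independent dendritic segments, and a segment acts as a coincidence detector that becomes active (an NMDA-spike analogue that depolarizes the cell) when enough of its synapses see currently active cells. The Python below is an illustrative toy under stated assumptions; all names and parameter values (N_CELLS, SYNAPSES_PER_SEG, THETA, the one-shot "learning" step) are made up for illustration and are not the paper's implementation or Numenta's NuPIC API.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 2048          # cells in the network
SYNAPSES_PER_SEG = 40   # synapses sampled per dendritic segment
THETA = 15              # segment activation threshold (NMDA-spike analogue)

class Neuron:
    """Toy HTM-style neuron: a set of distal dendritic segments."""

    def __init__(self, n_segments=20):
        # Each segment stores the presynaptic cells it samples from.
        self.segments = [
            rng.choice(N_CELLS, SYNAPSES_PER_SEG, replace=False)
            for _ in range(n_segments)
        ]

    def depolarized(self, active_cells):
        """True if any segment sees >= THETA active synapses.

        A depolarized neuron is 'predictive': it does not spike yet, but
        it will fire before its neighbors if its classic receptive-field
        (proximal) input arrives on the next time step.
        """
        active = np.zeros(N_CELLS, dtype=bool)
        active[np.asarray(active_cells)] = True
        return any(active[seg].sum() >= THETA for seg in self.segments)

# Sparse distributed code: 40 of 2048 cells active (~2%).
pattern = rng.choice(N_CELLS, 40, replace=False)

neuron = Neuron()
# Simplified one-shot learning: dedicate one segment to this pattern.
neuron.segments[0] = pattern.copy()

# Corrupt the pattern: keep 24 of 40 cells, replace the rest with noise.
noisy = np.concatenate([pattern[:24], rng.choice(N_CELLS, 16, replace=False)])

print(neuron.depolarized(pattern))  # True: all 40 synapses match segment 0
print(neuron.depolarized(noisy))    # True: 24 matches still clear THETA = 15
```

Because activity is sparse, a random segment's expected overlap with a 40-cell pattern is under one synapse, so a threshold of 15 essentially never fires by chance, yet it tolerates 40% corruption of a learned pattern. This is the intuition behind the noise robustness claimed in the abstract, and it suggests why capacity scales with synapse count: each additional segment, with its own synapses, adds roughly one more independently recognizable pattern.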

research · 09/03/2015
A compact aVLSI conductance-based silicon neuron
We present an analogue Very Large Scale Integration (aVLSI) implementati...

research · 01/05/2016
How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites
We propose a formal mathematical model for sparse representations and ac...

research · 05/31/2023
Neuron to Graph: Interpreting Language Model Neurons at Scale
Advances in Large Language Models (LLMs) have led to remarkable capabili...

research · 10/22/2018
A general learning system based on neuron bursting and tonic firing
This paper proposes a framework for the biological learning mechanism as...

research · 10/24/2016
STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons
By recording multiple cells simultaneously, electrophysiologists have fo...

research · 08/31/2021
Astrocytes mediate analogous memory in a multi-layer neuron-astrocytic network
Modeling the neuronal processes underlying short-term working memory rem...

research · 03/27/2023
Exposing the Functionalities of Neurons for Gated Recurrent Unit Based Sequence-to-Sequence Model
The goal of this paper is to report certain scientific discoveries about...
