A Radically New Theory of how the Brain Represents and Computes with Probabilities

01/26/2017
by Gerard Rinkus, et al.

The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous probabilistic population coding (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully (densely) distributed codes, i.e., all (or most) units participate in every code; 3) graded synapses; 4) rate coding; 5) units with innate unimodal tuning functions (TFs); 6) intrinsically noisy units; and 7) noise/correlation treated as harmful. We present a radically different theory that assumes: 1) binary units; 2) only a small subset of units, i.e., a sparse distributed code (SDC) (cell assembly, ensemble), constituting any individual code; 3) binary synapses; 4) signaling that formally requires only single (first) spikes; 5) units that initially have completely flat TFs (all weights zero); and 6) units that are not inherently noisy, but rather 7) noise as a resource generated and used to cause similar inputs to map to similar codes, controlling a tradeoff between storage capacity and embedding the input-space statistics in the pattern of intersections over stored codes, indirectly yielding correlation patterns. The theory, Sparsey, was introduced 20 years ago as a canonical cortical circuit/algorithm model but was not elaborated as an alternative to PPC theories. Here, we show that the active SDC simultaneously represents both the most similar/likely input and the coarsely ranked distribution over all stored inputs (hypotheses). Crucially, Sparsey's code selection algorithm (CSA), used for both learning and inference, achieves this with a single pass over the weights for each successive item of a sequence, thus performing spatiotemporal pattern learning/inference in a number of steps that remains constant as the number of stored items increases. We also discuss our approach as a radically new implementation of graphical probability modeling.
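The noise-as-resource idea and the fixed-time CSA can be made concrete with a small sketch. The Python snippet below is only an illustration under simplifying assumptions, not the paper's actual algorithm or code: the field sizes Q, K, N, the familiarity measure G, and the beta noise schedule are invented here to show the general principle, i.e., a winner-take-all choice in each module, with the amount of noise in that choice scaled by input novelty, computed in a single pass over binary weights.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes (illustrative only): a coding field of Q
    # winner-take-all modules with K binary units each, driven by an
    # N-unit binary input field. An SDC is one winner per module,
    # i.e., exactly Q active units.
    Q, K, N = 10, 12, 60

    # Binary input-to-coding weights, all zero initially
    # ("completely flat TFs").
    W = np.zeros((Q, K, N), dtype=np.uint8)

    def csa(x, learn=True):
        """One simplified code-selection pass over a binary input x.

        A sketch of the general principle (novelty-scaled noise), not
        the paper's exact CSA equations. Runs in time independent of
        how many inputs are already stored: a single pass over the
        fixed-size weight array.
        """
        u = (W @ x) / max(int(x.sum()), 1)   # normalized input sums, shape (Q, K)
        G = u.max(axis=1).mean()             # global familiarity in [0, 1]
        beta = 12.0 * G                      # assumed schedule: familiar -> low noise
        p = np.exp(beta * (u - u.max(axis=1, keepdims=True)))
        p /= p.sum(axis=1, keepdims=True)    # per-module winner distributions
        code = np.array([rng.choice(K, p=p[q]) for q in range(Q)])
        if learn:                            # one-shot, binary Hebbian learning
            for q, k in enumerate(code):
                W[q, k] |= x
        return code

    def overlap(code_a, code_b):
        """Fraction of modules in which two SDCs share the same winner."""
        return float((code_a == code_b).mean())

    x1 = (rng.random(N) < 0.2).astype(np.uint8)
    x2 = (rng.random(N) < 0.2).astype(np.uint8)
    c1, c2 = csa(x1), csa(x2)                # store two inputs

    x1_noisy = x1.copy()
    x1_noisy[rng.choice(N, 3, replace=False)] ^= 1   # flip 3 bits
    c = csa(x1_noisy, learn=False)           # infer from a degraded input

    # The single active code implicitly ranks all stored hypotheses:
    # its intersection with each stored code grades their likelihoods.
    print("overlap with x1's code:", overlap(c, c1))
    print("overlap with x2's code:", overlap(c, c2))

In this toy run, the code retrieved for the degraded input overlaps far more with x1's stored code than with x2's, which is the sense in which a single active SDC coarsely ranks all stored hypotheses at once.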

research
10/19/2020

Efficient Similarity-Preserving Unsupervised Learning using Modular Sparse Distributed Codes and Novelty-Contingent Noise

There is increasing realization in neuroscience that information is repr...
research
07/15/2017

Quantum Computation via Sparse Distributed Representation

Quantum superposition says that any physical system simultaneously exist...
research
11/12/2016

Sparsey: Event Recognition via Deep Hierarchical Sparse Distributed Codes

Visual cortex's hierarchical, multi-level organization is captured in ma...
research
01/13/2018

A Capacity-Achieving PIR Protocol for Distributed Storage Using an Arbitrary Linear Code

We propose a private information retrieval (PIR) protocol for distribute...
research
06/10/2022

On some properties of random and pseudorandom codes

We describe some pseudorandom properties of binary linear codes achievin...
research
08/11/2019

Population rate coding in recurrent neuronal networks with undetermined-type neurons

Neural coding is a key problem in neuroscience, which can promote people...
research
09/18/2019

Generation mechanism of cell assembly to store information about hand recognition

A specific memory is stored in a cell assembly that is activated during ...
