Online Memorization of Random Firing Sequences by a Recurrent Neural Network

01/09/2020
by Patrick Murer, et al.

This paper studies the capability of a recurrent neural network model to memorize random dynamical firing patterns with a simple local learning rule. Two modes of learning/memorization are considered: the first is strictly online, with a single pass through the data; the second uses multiple passes. In both modes, learning is strictly local (quasi-Hebbian): at any given time step, only the weights between the neurons firing (or supposed to be firing) at the previous time step and those firing (or supposed to be firing) at the present time step are modified. The main result of the paper is an upper bound on the probability that single-pass memorization is not perfect. It follows that the memorization capacity in this mode asymptotically scales like that of the classical Hopfield model (which, in contrast, memorizes static patterns). Multiple-pass memorization, however, is shown to achieve a higher capacity, with a nonvanishing number of bits per connection/synapse. These mathematical findings may be helpful for understanding the roles of short-term memory and long-term memory in neuroscience.
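The quasi-Hebbian sequence-learning idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact model: here neurons take ±1 states, patterns are dense, and the single-pass update is a plain outer product between the states at consecutive time steps.

```python
import random

random.seed(0)
N, T = 200, 5  # neurons, sequence length (T much smaller than N)

# A random firing sequence: T patterns of N binary (+/-1) states.
seq = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(T)]

# Single online pass: after observing time step t, strengthen W[i][j]
# in proportion to (state of neuron i at t) * (state of neuron j at t-1),
# so the network learns to map each pattern onto its successor.
W = [[0.0] * N for _ in range(N)]
for t in range(1, T):
    prev, cur = seq[t - 1], seq[t]
    for i in range(N):
        for j in range(N):
            W[i][j] += cur[i] * prev[j] / N

# Recall: seed the network with the first pattern and let it run freely,
# thresholding the summed input at each neuron.
x = seq[0]
perfect = True
for t in range(1, T):
    x = [1 if sum(W[i][j] * x[j] for j in range(N)) >= 0 else -1
         for i in range(N)]
    perfect = perfect and (x == seq[t])

print(perfect)  # True: crosstalk noise scales like sqrt(T/N), tiny here
```

Running the training loop over the same sequence several times (the paper's second mode) further suppresses crosstalk between stored transitions, which is consistent with the higher capacity reported for multiple-pass memorization.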


