Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example

05/03/2020
by Jason Z. Kim, et al.

The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems perform analogous functions despite flexible organization and unstructured sensory input. Recent efforts have made progress in modeling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we drive an RNN with examples of translated, linearly transformed, or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal that changes value for each example. When trained to replicate the Lorenz inputs, the network learns to evolve autonomously about a Lorenz-shaped manifold. Additionally, by changing the control signal, it learns to continuously interpolate and extrapolate the translation, transformation, and bifurcation of this representation far beyond the training data. Finally, we provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations. Together, our results provide a simple but powerful mechanism by which an RNN can learn to manipulate internal representations of complex information, allowing for the principled study and precise design of RNNs.
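The setup described in the abstract maps naturally onto a standard echo-state-network (reservoir computing) pipeline. The sketch below is a minimal illustration under that assumption; it is not the authors' implementation. All hyperparameters, the Euler integrator, and the specific choice of translated attractors labeled by a constant control channel `c` are illustrative. A reservoir is driven by translated Lorenz trajectories plus a control signal, a linear readout is trained by ridge regression to replicate the input one step ahead, and the closed-loop network is then run at control values it never saw during training.

```python
# Minimal sketch of the training scheme described above, assuming an
# echo-state-network-style reservoir with an added control-signal channel.
# Hyperparameters and names here are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def lorenz(n_steps, dt=0.01, shift=0.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Integrate the Lorenz system (simple Euler) and translate x by `shift`."""
    xyz = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for t in range(n_steps):
        x, y, z = xyz
        xyz = xyz + dt * np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])
        out[t] = xyz
    out[:, 0] += shift          # translated copy of the attractor
    return out * 0.05           # scale into the tanh-friendly range

# Reservoir dynamics: r[t+1] = tanh(A r[t] + B u[t] + C c), c = control signal.
N = 500
A = rng.normal(0, 1, (N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius 0.9
B = rng.uniform(-1, 1, (N, 3))                    # input weights
C = rng.uniform(-1, 1, N)                         # control weights

def run(u_seq, c):
    """Drive the reservoir with an input sequence at fixed control value c."""
    r = np.zeros(N)
    states = np.empty((len(u_seq), N))
    for t, u in enumerate(u_seq):
        r = np.tanh(A @ r + B @ u + C * c)
        states[t] = r
    return states

# Training examples: translated Lorenz trajectories, each labeled by a
# constant control value c that encodes the translation.
T, washout = 4000, 500
R, U = [], []
for c, shift in [(-1.0, -5.0), (0.0, 0.0), (1.0, 5.0)]:
    u = lorenz(T, shift=shift)
    states = run(u[:-1], c)
    R.append(states[washout:])
    U.append(u[1 + washout:])
R, U = np.vstack(R), np.vstack(U)

# Ridge regression: train W_out so that W_out r[t] ~ u[t+1] (replicate input).
lam = 1e-6
W_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ U).T

def autonomous(c, n_steps=2000):
    """Closed loop: feed the readout back as input at control value c."""
    r = np.zeros(N)
    u = lorenz(washout)[-1]     # warm start near the attractor
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        r = np.tanh(A @ r + B @ u + C * c)
        u = W_out @ r
        traj[t] = u
    return traj

# Untrained control values probe interpolation (c = 0.5) and
# extrapolation (c = 2.0) of the translated attractor.
print(autonomous(0.5)[:5])
```

In this sketch, interpolation and extrapolation amount to evaluating `autonomous` at control values between or beyond the trained ones and checking how far the resulting attractor is translated; the paper's linear-transformation and bifurcation experiments would use the same recipe with differently transformed training trajectories.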


Related research

Learning Continuous Chaotic Attractors with a Reservoir Computer (10/16/2021)
Neural systems are well known for their ability to learn and store infor...

Learning the Enigma with Recurrent Neural Networks (08/24/2017)
Recurrent neural networks (RNNs) represent the state of the art in trans...

Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks (06/14/2018)
Recurrent neural networks have gained widespread use in modeling sequenc...

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks (02/27/2019)
We investigate the internal representations that a recurrent neural netw...

Capacity and Trainability in Recurrent Neural Networks (11/29/2016)
Two potential bottlenecks on the expressiveness of recurrent neural netw...

Equilibrated Recurrent Neural Network: Neuronal Time-Delayed Self-Feedback Improves Accuracy and Stability (03/02/2019)
We propose a novel Equilibrated Recurrent Neural Network (ERNN) to comb...

Deep Differential System Stability – Learning advanced computations from examples (06/11/2020)
Can advanced mathematical computations be learned from examples? Using t...
