Learning Continuous Chaotic Attractors with a Reservoir Computer

10/16/2021
by Lindsay M. Smith et al.

Neural systems are well known for their ability to learn and store information as memories. Even more impressive is their ability to abstract these memories to create complex internal representations, enabling advanced functions such as the spatial manipulation of mental representations. While recurrent neural networks (RNNs) can represent complex information, the exact mechanisms by which dynamical neural systems perform abstraction are still not well understood, hindering the development of more advanced functions. Here, we train a 1000-neuron RNN, a reservoir computer (RC), to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories, and we explain the abstraction mechanism with new theory. When trained on isolated and shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors, as quantified by an extra Lyapunov exponent equal to zero. We propose a theoretical mechanism for this abstraction that combines ideas from differentiable generalized synchronization and feedback dynamics. Our results quantify abstraction in simple neural systems, enabling us to design artificial RNNs for abstraction and leading us toward a neural basis of abstraction.
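The setup described in the abstract lends itself to a short illustration. Below is a minimal sketch of the standard reservoir-computer (echo state network) pipeline: drive a fixed random recurrent network with Lorenz-attractor data, fit only a linear readout by ridge regression, then close the loop so the network runs autonomously. This is not the authors' code; the reservoir size (1000 neurons) follows the abstract, but the spectral radius, leak rate, sparsity, ridge penalty, and integration step are assumed values, and the paper's key ingredient, training on several spatially shifted copies of the attractor, is omitted for brevity.

```python
# Minimal echo-state-network sketch for learning the Lorenz attractor.
# Illustrative only: all hyperparameters below are assumptions, not the
# values used in the paper. The paper additionally trains on several
# translated copies of the attractor; a single copy is used here.
import numpy as np

rng = np.random.default_rng(0)

# --- Lorenz training data (standard parameters, Euler integration) ---
def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[t] = x
    return traj

data = lorenz_trajectory(12000)[2000:]      # drop the initial transient
u_train, y_train = data[:-1], data[1:]      # one-step-ahead targets

# --- Reservoir: 1000 leaky tanh units with sparse random coupling ---
N, leak, spec_radius = 1000, 0.3, 1.2       # assumed hyperparameters
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.02)
W *= spec_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.1, 0.1, size=(N, 3))

def run_reservoir(inputs):
    r = np.zeros(N)
    states = np.empty((len(inputs), N))
    for t, u in enumerate(inputs):
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u)
        states[t] = r
    return states

# --- Ridge-regression readout, fit after a short washout period ---
R = run_reservoir(u_train)
washout, ridge = 200, 1e-6
Rw, Yw = R[washout:], y_train[washout:]
W_out = Yw.T @ Rw @ np.linalg.inv(Rw.T @ Rw + ridge * np.eye(N))

# --- Closed loop: feed the readout back as input, run autonomously ---
r, u = R[-1], y_train[-1]
pred = np.empty((3000, 3))
for t in range(3000):
    r = (1 - leak) * r + leak * np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    pred[t] = u
print("autonomous trajectory spans:", pred.min(axis=0), pred.max(axis=0))
```

In the paper's experiment, the training data would additionally contain isolated, spatially shifted copies of the attractor, and the learned continuum of attractors would then be diagnosed by an extra zero Lyapunov exponent of the closed-loop system; that analysis is not shown in this sketch.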


research
02/27/2019

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

We investigate the internal representations that a recurrent neural netw...
research
05/03/2020

Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example

The ability to store and manipulate information is a hallmark of computa...
research
05/17/2021

Evolutionary Training and Abstraction Yields Algorithmic Generalization of Neural Computers

A key feature of intelligent behaviour is the ability to learn abstract ...
research
04/24/2023

Constraining Chaos: Enforcing dynamical invariants in the training of recurrent neural networks

Drawing on ergodic theory, we introduce a novel training method for mach...
research
03/13/2014

Controlling Recurrent Neural Networks by Conceptors

The human brain is a dynamical system whose extremely complex sensor-dri...
research
04/20/2021

Phase Transition Adaptation

Artificial Recurrent Neural Networks are a powerful information processi...
research
05/09/2023

Seeing double with a multifunctional reservoir computer

Multifunctional biological neural networks exploit multistability in ord...
