Reservoirs learn to learn

09/16/2019
by   Anand Subramoney, et al.

We consider reservoirs in the form of liquid state machines, i.e., recurrently connected networks of spiking neurons with randomly chosen weights. So far, only the weights of a linear readout have been adapted for a specific task. We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly. After all, the weights of recurrent connections in the brain are also not assumed to be randomly chosen. Rather, these weights were probably optimized during evolution, development, and prior learning experiences for specific task domains. In order to examine the benefits of choosing the recurrent weights within a liquid with a purpose, we applied the Learning-to-Learn (L2L) paradigm to our model: We optimized the weights of the recurrent connections -- and hence the dynamics of the liquid state machine -- for a large family of potential learning tasks, which the network might have to learn later through modification of the weights of readout neurons. We found that this two-tiered process substantially improves the learning speed of liquid state machines for specific tasks. In fact, learning speed increases further if one does not train the weights of the linear readouts at all, and relies instead on the internal dynamics and fading memory of the network for remembering salient information that it could extract from preceding examples for the current learning task. This second type of learning has recently been proposed to underlie fast learning in the prefrontal cortex and motor cortex, and hence it is of interest to explore its performance in models as well. Since liquid state machines share many properties with other types of reservoirs, our results raise the question of whether L2L conveys similar benefits to these other reservoirs.
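The baseline that the abstract starts from -- a fixed, randomly connected recurrent network whose only trained parameters are the weights of a linear readout -- can be sketched compactly. The snippet below is a minimal illustration, not the authors' model: it substitutes a rate-based echo-state-style reservoir for the spiking liquid state machine, and all dimensions, scalings, and the delayed-copy task are assumptions chosen for brevity. The key structural point it demonstrates is that `W_in` and `W_rec` stay fixed while only `w_out` is fit to the task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper uses spiking neurons, while a
# rate-based reservoir is used here as a simpler stand-in.
n_in, n_res, n_steps = 1, 200, 500

# Fixed random input and recurrent weights (never trained in the baseline).
W_in = rng.normal(0.0, 1.0, (n_res, n_in))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))
# Scale the spectral radius below 1 so the reservoir has fading memory.
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_rec @ x + W_in @ u_t)
        states[t] = x
    return states

# Example task: reproduce a delayed copy of the input, a standard
# fading-memory benchmark for reservoirs.
u = rng.uniform(-1.0, 1.0, (n_steps, n_in))
delay = 3
target = np.roll(u[:, 0], delay)

X = run_reservoir(u)[delay:]   # drop the wrapped-around initial steps
y = target[delay:]

# Linear readout fit by ridge regression -- the only trained weights
# in the classical reservoir-computing setup.
ridge = 1e-4
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ w_out
mse = np.mean((pred - y) ** 2)
```

In the L2L setting described above, an outer optimization loop would additionally tune `W_rec` (and `W_in`) across a whole family of such tasks, so that this inner readout fit, or even the reservoir dynamics alone, solves each new task faster.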


