Sign and Relevance Learning

10/14/2021
by Sama Daryanavard, et al.

Standard models of biologically realistic or biologically inspired reinforcement learning employ a global error signal, which limits them to shallow networks. Deep networks could offer drastically superior performance, but feeding the error signal backwards through such a network is not biologically realistic, as it requires symmetric weights between the top-down and bottom-up pathways. Instead, we present a network that combines local learning with global modulation: neuromodulation controls the amount of plasticity change in the whole network, while only the sign of the error is backpropagated through it. The neuromodulation can be understood as a rectified error, or relevance, signal, while the sign of the error decides between long-term potentiation and long-term depression. We demonstrate the performance of this paradigm on a real robotic task.
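The rule sketched in the abstract separates the error into two factors: its sign, which travels top-down and selects potentiation versus depression, and its magnitude, which acts as a single global "relevance" signal scaling all plasticity. The following is a minimal toy reconstruction of that idea, not the authors' implementation: the network sizes, learning rates, and the regression task (learning y = sum(x)) are illustrative assumptions, and taking the sign of the transposed forward weights stands in for a sign-only top-down pathway (since only signs are used, exact weight symmetry is not needed).

```python
import numpy as np

# Toy sketch of sign-and-relevance learning (assumed setup, not the paper's code):
# only the SIGN of the error is sent top-down, while the rectified error |e|
# ("relevance") globally gates how much every weight changes.

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # bottom-up weights, layer 1
W2 = rng.normal(0.0, 0.5, (1, n_hid))      # bottom-up weights, layer 2

def forward(x):
    h = np.tanh(W1 @ x)
    return h, W2 @ h

def step(x, target, lr=0.05, lr_hidden=0.005):
    global W1, W2
    h, y = forward(x)
    err = target - y
    relevance = float(np.abs(err).mean())   # global modulation: rectified error
    s = np.sign(err)                        # only the sign travels top-down
    # Only the sign of the top-down signal is used, so the feedback pathway
    # need not be numerically symmetric with the forward weights.
    delta_h = np.sign(W2.T @ s) * (1.0 - h**2)
    W2 += lr * relevance * np.outer(s, h)               # sign picks LTP vs LTD,
    W1 += lr_hidden * relevance * np.outer(delta_h, x)  # relevance scales the change

X_eval = rng.normal(size=(50, n_in))        # fixed evaluation inputs

def mean_abs_err():
    return float(np.mean([abs(x.sum() - forward(x)[1].item()) for x in X_eval]))

before = mean_abs_err()
for _ in range(3000):
    x = rng.normal(size=n_in)
    step(x, np.array([x.sum()]))
after = mean_abs_err()
print(after < before)
```

Note that for a scalar output, relevance times sign equals the raw error, so the output layer performs an ordinary delta-rule update; the biological appeal lies in the hidden layers, where magnitude information arrives only through the diffuse neuromodulatory signal.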



Related research

09/07/2023  Prime and Modulate Learning: Generation of forward models with signed back-propagation and environmental cues
Deep neural networks employing error back-propagation for learning can s...

11/05/2018  A Biologically Plausible Learning Rule for Deep Learning in the Brain
Researchers have proposed that deep learning, which is providing importa...

06/06/2020  Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias
Equilibrium Propagation (EP) is a biologically-inspired algorithm for co...

07/23/2020  Sign-curing local Hamiltonians: termwise versus global stoquasticity and the use of Clifford transformations
We elucidate the distinction between global and termwise stoquasticity f...

05/23/2019  Population-based Global Optimisation Methods for Learning Long-term Dependencies with RNNs
Despite recent innovations in network architectures and loss functions, ...

09/12/2016  A Threshold-based Scheme for Reinforcement Learning in Neural Networks
A generic and scalable Reinforcement Learning scheme for Artificial Neur...

10/19/2020  Every Hidden Unit Maximizing Output Weights Maximizes The Global Reward
For a network of stochastic units trained on a reinforcement learning ta...
