Learning with Delayed Synaptic Plasticity

03/22/2019
by Anil Yaman, et al.

The plasticity property of biological neural networks allows them to perform learning and optimize their behavior by changing their inner configuration. Inspired by biology, plasticity can be modeled in artificial neural networks using Hebbian learning rules, i.e. rules that update synapses based on neuron activations and a reinforcement signal received from the environment. However, the distal reward problem arises when reinforcement signals are not available immediately after each network output, making it difficult to associate the reward with the neuron activations that contributed to receiving it. In this work, we extend Hebbian plasticity rules to allow learning in distal reward cases. We propose the use of neuron activation traces (NATs), which provide additional data storage in each synapse and keep track of neuron activations while the network performs a task over an episode. Delayed reinforcement signals are provided after each episode, based on the performance of the network relative to its performance during the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules, which perform synaptic updates based on NATs and the delayed reinforcement signals. We compare DSP with an analogous hill-climbing algorithm that does not incorporate the domain knowledge introduced by NATs.
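The mechanism described above can be sketched in a few lines of NumPy. Note that this is a simplified illustration under stated assumptions, not the paper's method: here the NAT is taken to be the episode-averaged co-activation of pre- and post-synaptic neurons, and the delayed update is a fixed reward-modulated Hebbian rule, whereas in the paper the DSP rule itself is evolved by a genetic algorithm. All names (`run_episode`, `dsp_update`, `eta`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 2
W = rng.normal(0.0, 0.1, (n_in, n_out))  # synaptic weights
eta = 0.05                               # learning rate (illustrative)

def run_episode(W, steps=10):
    """Run one episode, accumulating a neuron activation trace (NAT)
    per synapse: the co-activation of pre- and post-synaptic neurons,
    averaged over the episode (a simplified stand-in for the paper's
    NAT statistics)."""
    nat = np.zeros_like(W)
    for _ in range(steps):
        x = rng.random(n_in)             # pre-synaptic activations
        y = np.tanh(x @ W)               # post-synaptic activations
        nat += np.outer(x, y)            # accumulate per-synapse trace
    return nat / steps

def dsp_update(W, nat, reward):
    """Delayed synaptic plasticity: the Hebbian-style update is applied
    only after the episode ends, scaled by the delayed reinforcement
    signal rather than by an immediate per-output reward."""
    return W + eta * reward * nat

nat = run_episode(W)
reward = 1.0  # e.g., performance improved relative to the previous episode
W_new = dsp_update(W, nat, reward)
```

The key point the sketch captures is the timing: no weight changes occur during the episode; the traces stored at each synapse are combined with the single, delayed reinforcement signal only after the episode completes.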
