Gradient-Free Neural Network Training via Synaptic-Level Reinforcement Learning

05/29/2021
by   Aman Bhargava, et al.

An ongoing challenge in neural information processing is: how do neurons adjust their connectivity to improve task performance over time (i.e., actualize learning)? It is widely believed that there is a consistent, synaptic-level learning mechanism in specific brain regions that actualizes learning. However, the exact nature of this mechanism remains unclear. Here we propose an algorithm based on reinforcement learning (RL) to generate and apply a simple synaptic-level learning policy for multi-layer perceptron (MLP) models. In this algorithm, the action space for each MLP synapse consists of a small increase, decrease, or null action on the synapse weight, and the state for each synapse consists of the last two actions and reward signals. A binary reward signal indicates improvement or deterioration in task performance. The static policy produces superior training relative to the adaptive policy and is agnostic to activation function, network shape, and task. Trained MLPs yield character recognition performance comparable to identically shaped networks trained with gradient descent. Character recognition tests with 0 hidden units yielded an average validation accuracy of 88.28%, comparable to the same MLP trained with gradient descent. Tests with 32 hidden units yielded an average validation accuracy of 88.45%, within 1.11±0.79% of the same MLP trained with gradient descent. The method's robustness and lack of reliance on gradient computations open the door to new techniques for training difficult-to-differentiate artificial neural networks such as spiking neural networks (SNNs) and recurrent neural networks (RNNs). Further, the method's simplicity provides a unique opportunity for further development of local rule-driven multi-agent connectionist models for machine intelligence, analogous to cellular automata.
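The synaptic-level scheme described above can be sketched in code. The following is a minimal illustrative implementation, not the authors' actual method: the tabular static policy (repeat a rewarded action, otherwise act randomly), the 0-hidden-unit softmax MLP, the step size `DELTA`, and all function names are assumptions for the sketch. Each synapse carries a state of its last two (action, reward) pairs, and a single binary reward (did the loss improve?) is broadcast to every synapse after each update.

```python
import numpy as np

# Per-synapse action space: increase, null, or decrease the weight,
# each scaled by a small step size (values here are illustrative).
ACTIONS = np.array([+1, 0, -1])
DELTA = 0.01

rng = np.random.default_rng(0)

def init_mlp(n_in, n_out):
    # 0-hidden-unit MLP (a single weight matrix) for brevity.
    return rng.normal(0.0, 0.1, size=(n_in, n_out))

def loss(W, X, y):
    # Softmax cross-entropy; used only to compute the binary reward.
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def static_policy(states, rng):
    # Illustrative static rule (an assumption, not the paper's learned
    # policy): repeat the last action if it was rewarded, else act randomly.
    rand = rng.choice(ACTIONS, size=states.shape[:2])
    repeat = (states[..., 0, 1] == 1) & (states[..., 0, 0] != 0)
    return np.where(repeat, states[..., 0, 0], rand)

def train(W, X, y, steps=200):
    # Per-synapse state: the last two (action, reward) pairs,
    # shape (n_in, n_out, 2, 2).
    states = np.zeros(W.shape + (2, 2), dtype=int)
    prev_loss = loss(W, X, y)
    for _ in range(steps):
        acts = static_policy(states, rng)
        W = W + DELTA * acts
        new_loss = loss(W, X, y)
        # Binary reward: 1 if task performance improved, broadcast to all synapses.
        r = 1 if new_loss < prev_loss else 0
        states[..., 1, :] = states[..., 0, :]   # shift history
        states[..., 0, 0] = acts
        states[..., 0, 1] = r
        prev_loss = new_loss
    return W
```

No gradient is ever computed: the loss is queried only as a black-box scalar to produce the reward bit, which is what makes the approach applicable to non-differentiable networks such as SNNs.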


research 06/25/2022
Learning to learn online with neuromodulated synaptic plasticity in spiking neural networks
We propose that in order to harness our understanding of neuroscience to...

research 11/21/2020
On-Chip Error-triggered Learning of Multi-layer Memristive Spiking Neural Networks
Recent breakthroughs in neuromorphic computing show that local forms of ...

research 05/12/2020
Training spiking neural networks using reinforcement learning
Neurons in the brain communicate with each other through discrete action...

research 10/15/2019
Reinforcement learning with spiking coagents
Neuroscientific theory suggests that dopaminergic neurons broadcast glob...

research 11/17/2019
Hebbian Synaptic Modifications in Spiking Neurons that Learn
In this paper, we derive a new model of synaptic plasticity, based on re...

research 12/05/2018
Neuromodulated Learning in Deep Neural Networks
In the brain, learning signals change over time and synaptic location, a...

research 10/19/2020
Every Hidden Unit Maximizing Output Weights Maximizes The Global Reward
For a network of stochastic units trained on a reinforcement learning ta...
