Training Dynamical Binary Neural Networks with Equilibrium Propagation

03/16/2021
by Jérémie Laydevant, et al.

Equilibrium Propagation (EP) is an algorithm intrinsically suited to the training of physical networks, thanks to the local weight updates given by the internal dynamics of the system. However, building such hardware requires making the algorithm compatible with existing neuromorphic CMOS technologies, which generally exploit digital communication between neurons and offer a limited amount of local memory. In this work, we demonstrate that EP can train dynamical networks with binary activations and weights. We first train systems with binary weights and full-precision activations, achieving an accuracy equivalent to that of full-precision models trained by standard EP on MNIST, and losing only 1.9% accuracy on CIFAR-10 with an equal architecture. We then extend our method to the training of models with binary activations and weights on MNIST, achieving an accuracy within 1% of the full-precision reference for fully connected architectures and reaching the full-precision accuracy for convolutional architectures. Our extension of EP to binary networks opens new solutions for on-chip learning and provides a compact framework for training BNNs end-to-end with the same circuitry as used for inference.
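At its core, EP alternates two relaxation phases: a free phase, in which the network settles to a fixed point with only the input clamped, and a nudged phase, in which the output units are weakly pushed toward the target. Each weight is then updated purely locally from the difference of pre- and post-synaptic activity correlations between the two phases, Δw_ij ∝ (1/β)(ρ(s_i^β)ρ(s_j^β) − ρ(s_i^0)ρ(s_j^0)). The sketch below illustrates this two-phase rule combined with weight binarization; the toy architecture, the latent full-precision weights used for accumulation, the simplified dynamics, and all hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

# Minimal sketch of Equilibrium Propagation with binary weights
# (an assumed toy setup, not the paper's exact method).
# Latent full-precision weights W1, W2 accumulate updates; their signs
# are the binary weights actually used by the dynamics.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))   # latent full-precision weights
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

def rho(s):
    """Hard-sigmoid activation, as commonly used in EP's energy function."""
    return np.clip(s, 0.0, 1.0)

def relax(x, y, beta, W1b, W2b, steps=60, dt=0.2):
    """Settle units to a fixed point of simplified EP dynamics.
    beta = 0 is the free phase; beta > 0 nudges outputs toward target y."""
    h = np.zeros(n_hid)
    o = np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x @ W1b + rho(o) @ W2b.T)   # symmetric couplings
        do = -o + rho(rho(h) @ W2b) + beta * (y - o)
        h, o = h + dt * dh, o + dt * do
    return h, o

def ep_step(x, y, lr=0.05, beta=0.5):
    """One EP update: free phase, nudged phase, local contrastive rule."""
    global W1, W2
    # Binarize weights to {-1, +1} (exact zeros are vanishingly unlikely).
    W1b, W2b = np.sign(W1), np.sign(W2)
    h0, o0 = relax(x, y, beta=0.0, W1b=W1b, W2b=W2b)    # free phase
    hb, ob = relax(x, y, beta=beta, W1b=W1b, W2b=W2b)   # nudged phase
    # Local rule: difference of pre*post correlations between the phases.
    W1 += (lr / beta) * (np.outer(x, rho(hb)) - np.outer(x, rho(h0)))
    W2 += (lr / beta) * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0)))

# Toy usage: a single update on one input/target pair.
x = rng.random(n_in)
y = np.eye(n_out)[0]
ep_step(x, y)
```

Binarizing with sign() in the dynamics while accumulating updates in latent full-precision weights follows a standard BNN training recipe and is an assumption of this sketch. The point it illustrates is that both relaxation phases and the update itself use only locally available quantities, which is what makes EP attractive for on-chip learning.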

Related research

08/08/2018 · Training Compact Neural Networks with Binary Weights and Low Precision Activations
05/22/2023 · Training an Ising Machine with Equilibrium Propagation
03/17/2020 · Efficient Bitwidth Search for Practical Mixed Precision Neural Network
10/21/2017 · Learning Discrete Weights Using the Local Reparameterization Trick
05/25/2017 · Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework
11/30/2019 · A binary-activation, multi-level weight RNN and training algorithm for processing-in-memory inference with eNVM
09/13/2018 · High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training
