Energy Efficient Learning with Low Resolution Stochastic Domain Wall Synapse Based Deep Neural Networks

11/14/2021
by Walid A. Misba, et al.

We demonstrate that extremely low resolution quantized (nominally 5-state) synapses with large stochastic variations in Domain Wall (DW) position can be energy efficient while achieving testing accuracies comparable to Deep Neural Networks (DNNs) of similar size that use floating-point precision synaptic weights. Specifically, voltage-controlled DW devices exhibit stochastic behavior, as modeled rigorously with micromagnetic simulations, and can encode only a limited number of states; however, they can be extremely energy efficient during both training and inference. We show that by implementing suitable modifications to the learning algorithms, we can address the stochastic behavior as well as mitigate the effect of the low resolution to achieve high testing accuracies. In this study, we propose both in-situ and ex-situ training algorithms, based on a modification of the algorithm proposed by Hubara et al. [1], which works well with quantized synaptic weights. We train several 5-layer DNNs on the MNIST dataset using 2-, 3-, and 5-state DW devices as synapses. For in-situ training, a separate high-precision memory unit preserves and accumulates the weight gradients, which are then quantized to program the low-precision DW devices. Moreover, a sizeable noise tolerance margin is used during training to absorb the intrinsic programming noise. For ex-situ training, a precursor DNN is first trained based on the characterized DW device model and a noise tolerance margin, similar to the in-situ case. Remarkably, for in-situ training the energy dissipated to program the devices is only 13 pJ per inference, given that the training is performed over the entire MNIST dataset for 10 epochs.
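The in-situ scheme described above (high-precision gradient accumulation, quantization to a few device states, and a noise tolerance margin against stochastic programming) can be sketched compactly. The following Python/NumPy sketch is an illustration under stated assumptions, not the authors' implementation: the nearest-level quantizer, the Gaussian programming-noise model, the margin-gated write rule, and every name and value in it (N_STATES, SIGMA_DW, MARGIN, the 784x128 layer shape) are hypothetical.

```python
import numpy as np

# Hypothetical parameters; the state count follows the abstract (2-, 3-,
# or 5-state synapses), while the noise level and margin are assumptions.
N_STATES = 5
LEVELS = np.linspace(-1.0, 1.0, N_STATES)   # nominal device states
MARGIN = 0.5 * (LEVELS[1] - LEVELS[0])      # assumed noise tolerance margin
SIGMA_DW = 0.05                             # assumed std. dev. of DW programming noise

def quantize(w):
    """Snap clipped full-precision weights to the nearest device level."""
    w = np.clip(np.asarray(w, dtype=float), LEVELS[0], LEVELS[-1])
    return LEVELS[np.abs(w[..., None] - LEVELS).argmin(axis=-1)]

def program_device(target):
    """Programming is stochastic: the stored state deviates from the target."""
    return target + np.random.normal(0.0, SIGMA_DW, size=np.shape(target))

def in_situ_update(w_shadow, w_device, grad, lr=0.01):
    """One training step: gradients accumulate in the high-precision shadow
    copy; a device is reprogrammed only if its stored (noisy) state lies
    outside the tolerance margin around the new quantized target."""
    w_shadow = np.clip(w_shadow - lr * grad, LEVELS[0], LEVELS[-1])
    target = quantize(w_shadow)
    needs_write = np.abs(w_device - target) > MARGIN
    w_device = np.where(needs_write, program_device(target), w_device)
    return w_shadow, w_device

# Usage: one weight matrix, one update step.
w_shadow = np.random.uniform(-1.0, 1.0, size=(784, 128))
w_device = program_device(quantize(w_shadow))   # initial noisy programming
grad = np.random.normal(size=w_shadow.shape)    # stand-in for a backprop gradient
w_shadow, w_device = in_situ_update(w_shadow, w_device, grad)
```

In this reading, the margin serves double duty: it keeps a device state perturbed by programming noise from triggering a corrective rewrite, and it skips writes whose quantized target has not meaningfully moved, which is consistent with the very low per-inference programming energy quoted above.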


Related research:

- 09/12/2023: Quantized Non-Volatile Nanomagnetic Synapse based Autoencoder for Efficient Unsupervised Network Anomaly Detection. "In the autoencoder based anomaly detection paradigm, implementing the au..."
- 11/09/2017: Stochastic Deep Learning in Memristive Networks. "We study the performance of stochastically trained deep neural networks ..."
- 10/14/2019: Variation-aware Binarized Memristive Networks. "The quantization of weights to binary states in Deep Neural Networks (DN..."
- 02/21/2022: Variation Aware Training of Hybrid Precision Neural Networks with 28nm HKMG FeFET Based Synaptic Core. "This work proposes a hybrid-precision neural network training framework ..."
- 01/13/2021: Energy-Efficient Distributed Learning Algorithms for Coarsely Quantized Signals. "In this work, we present an energy-efficient distributed learning framew..."
- 03/04/2020: Plasticity-Enhanced Domain-Wall MTJ Neural Networks for Energy-Efficient Online Learning. "Machine learning implements backpropagation via abundant training sample..."
- 10/29/2019: E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving. "Convolutional neural networks (CNNs) have been increasingly deployed to ..."
