In-situ Stochastic Training of MTJ Crossbar based Neural Networks

06/24/2018
by Ankit Mondal, et al.

Owing to their high device density, scalability, and non-volatility, Magnetic Tunnel Junction (MTJ)-based crossbars have garnered significant interest for implementing the weights of artificial neural networks. Because MTJs have only two stable states, obtaining optimal binary weights in software carries a high overhead. We illustrate that the inherent parallelism of the crossbar structure makes it highly suitable for in-situ training, wherein the network is taught directly on the hardware. This leads to significantly smaller training overhead, since the training time is independent of the size of the network, while also circumventing the effects of alternate current paths in the crossbar and accounting for manufacturing variations in the devices. We show how the stochastic switching characteristics of MTJs can be leveraged to perform probabilistic weight updates using the gradient descent algorithm. We describe how the update operations can be performed on crossbars both with and without access transistors, and we simulate both to demonstrate the effectiveness of our techniques. The results reveal that stochastically trained MTJ-crossbar neural networks achieve a classification accuracy nearly the same as that of real-valued-weight networks trained in software, while exhibiting immunity to device variations.
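To make the core idea concrete, here is a minimal NumPy sketch of one stochastic update step on a crossbar of binary weights. The function name `stochastic_mtj_update` and the switching-probability model `p = clip(eta * |grad|, 0, 1)` are illustrative assumptions, not the paper's exact scheme; in hardware the switching probability would instead be set by the programming pulse width and amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_mtj_update(W, grad, eta=0.1):
    """One in-situ stochastic update of a binary MTJ weight array.

    W    : array of binary weights in {-1, +1} (the two MTJ states)
    grad : loss gradient w.r.t. W, as computed by gradient descent
    eta  : learning rate scaling the switching probability

    Assumption (hypothetical, not from the paper): a programming pulse
    switches an MTJ with probability p = clip(eta * |grad|, 0, 1).
    """
    # Desired state of each weight: opposite the gradient's sign.
    target = -np.sign(grad)
    # Switching probability grows with the gradient magnitude.
    p_switch = np.clip(eta * np.abs(grad), 0.0, 1.0)
    # An MTJ can only flip if it is not already in the target state,
    # and then it flips only with probability p_switch (stochastic switching).
    flips = (W != target) & (rng.random(W.shape) < p_switch)
    return np.where(flips, target, W)

# Toy usage: one update step on a 4x3 crossbar of binary weights.
W = rng.choice([-1.0, 1.0], size=(4, 3))
grad = rng.normal(size=(4, 3))
W = stochastic_mtj_update(W, grad)
```

Because each MTJ responds to its own programming pulse, all weights in the crossbar can be updated in parallel in this fashion, which is why the training time is independent of the network size.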


