Algorithm for Training Neural Networks on Resistive Device Arrays

09/17/2019
by Tayfun Gokmen, et al.

Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads using stochastic gradient descent (SGD) and the backpropagation (BP) algorithm. The training accuracy on this imminent analog hardware, however, strongly depends on the switching characteristics of the cross-point elements. One of the key requirements is that these resistive devices must change conductance in a symmetric fashion when subjected to positive or negative pulse stimuli. Here, we present a new training algorithm, the so-called "Tiki-Taka" algorithm, that eliminates this stringent symmetry requirement. We show that device asymmetry introduces an unintentional implicit cost term into the SGD algorithm, whereas in the "Tiki-Taka" algorithm a coupled dynamical system simultaneously minimizes the original objective function of the neural network and the unintentional cost term due to device asymmetry in a self-consistent fashion. We tested the validity of this new algorithm on a range of network architectures, including fully connected, convolutional, and LSTM networks. Simulation results on these networks show that the "Tiki-Taka" algorithm with non-symmetric (non-ideal) device switching characteristics achieves the same accuracy as the conventional SGD algorithm with symmetric (ideal) device switching characteristics. Moreover, all the operations performed on the arrays remain parallel, so the implementation cost of this new algorithm on array architectures is minimal, and it maintains the aforementioned power and speed benefits. These algorithmic improvements are crucial to relax the material specifications and to realize technologically viable resistive crossbar arrays that outperform digital accelerators on similar training tasks.
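The coupled dynamical system described above can be illustrated with a toy sketch. The sketch below is not the paper's exact formulation: the soft-bounds device model, the explicit `leak` term (standing in for an asymmetric device's drift toward its symmetry point), and all hyperparameter values are illustrative assumptions. It captures the core idea of gradients accumulating on an auxiliary array A whose symmetry point is at zero, with A's content slowly transferred to the main array C that holds the network weights.

```python
import numpy as np

def device_update(w, dw, w_max=1.0):
    # Toy soft-bounds device model (an assumption, not the paper's exact
    # model): up pulses shrink near +w_max and down pulses shrink near
    # -w_max, so the up/down response is symmetric only at w = 0.
    scale = np.where(dw > 0, 1.0 - w / w_max, 1.0 + w / w_max)
    return w + dw * scale

def tiki_taka_step(A, C, grad, lr=0.05, transfer=0.1, leak=0.02):
    # Gradients are accumulated on the auxiliary array A; `leak` is a
    # stand-in for the asymmetric device's pull toward its symmetry point.
    A = device_update(A, -lr * grad) - leak * A
    # A's content is slowly transferred into the main array C, which
    # holds the weights used in the forward pass.
    C = device_update(C, transfer * A)
    return A, C

# Tiny demo: learn y = 0.5 * x with a single "device" weight C.
rng = np.random.default_rng(0)
A = np.float64(0.0)
C = np.float64(0.0)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)
    grad = (C * x - 0.5 * x) * x   # dL/dC for L = 0.5 * (C*x - y)^2
    A, C = tiki_taka_step(A, C, grad)
print(float(C))   # C settles near the target weight 0.5
```

Even though every individual pulse on A and C is asymmetric, the coupled A/C dynamics behave like a damped oscillator around the solution: C can only stop changing when A averages to zero, and A can only average to zero when the gradient vanishes, which is the self-consistent minimization the abstract describes.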

