Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

05/22/2017
by Tayfun Gokmen, et al.

In a previous work we detailed the requirements for obtaining a maximal performance benefit by implementing fully connected deep neural networks (DNNs) as arrays of resistive devices for deep learning. Here we extend this concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to RPU arrays so that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays affect the training accuracy of CNNs. We present noise and bound management techniques that mitigate these problems without introducing any additional complexity into the analog circuits; they can be handled entirely by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be applied selectively to some of the layers in a CNN. We show that the combination of all these techniques enables a successful application of the RPU concept to training CNNs. The techniques discussed here are general and can be applied beyond CNN architectures, making the RPU approach applicable to a large class of neural network architectures.
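The two ideas in the abstract, unrolling a convolution into a single matrix-matrix product so an analog crossbar can compute it in parallel, and digitally rescaling the inputs so that fixed analog noise and output bounds matter less, can be sketched as below. This is a minimal illustrative model, not the paper's implementation: the function names (`im2col`, `noisy_matmul`, `managed_matmul`), the Gaussian noise model, and the particular noise and bound values are assumptions chosen for the example.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll kh x kw patches of a (C, H, W) input into columns,
    so a convolutional layer becomes one matrix-matrix product."""
    C, H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((C * kh * kw, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols

def noisy_matmul(W, x, noise_std=0.06, bound=12.0):
    """Toy model of an analog RPU vector-matrix product:
    additive Gaussian noise plus saturation at +/- bound."""
    y = W @ x
    y = y + noise_std * np.random.randn(*y.shape)
    return np.clip(y, -bound, bound)

def managed_matmul(W, x, noise_std=0.06, bound=12.0):
    """Illustrative noise management in the digital periphery:
    rescale each input column to unit max before the analog product,
    then undo the scaling digitally, so the fixed hardware noise is
    small relative to the signal."""
    scale = np.max(np.abs(x), axis=0, keepdims=True)
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero
    y = noisy_matmul(W, x / scale, noise_std, bound)
    return y * scale
```

For inputs whose magnitudes are small compared with the hardware noise floor, the rescaled product recovers noticeably more accurate results than the raw analog product, which is the effect the noise management technique exploits.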
