TxSim: Modeling Training of Deep Neural Networks on Resistive Crossbar Systems

02/25/2020
by Sourjya Roy, et al.

Resistive crossbars have attracted significant interest in the design of Deep Neural Network (DNN) accelerators due to their ability to natively execute massively parallel vector-matrix multiplications within dense memory arrays. However, crossbar-based computations face a major challenge from a variety of device- and circuit-level non-idealities, which manifest as errors in the vector-matrix multiplications and eventually degrade DNN accuracy. Addressing this challenge requires tools that can model the functional impact of non-idealities on DNN training and inference. Existing efforts towards this goal are either limited to inference or are too slow to be used for large-scale DNN training. We propose TxSim, a fast and customizable modeling framework to functionally evaluate DNN training on crossbar-based hardware considering the impact of non-idealities. The key features of TxSim that differentiate it from prior efforts are: (i) it comprehensively models non-idealities during all training operations (forward propagation, backward propagation, and weight update), and (ii) it achieves computational efficiency by mapping crossbar evaluations to well-optimized BLAS routines and incorporates speedup techniques to further reduce simulation time with minimal impact on accuracy. TxSim achieves orders-of-magnitude improvement in simulation speed over prior works, and thereby makes it feasible to evaluate training of large-scale DNNs on crossbars. Our experiments using TxSim reveal that the accuracy degradation in DNN training due to non-idealities can be substantial, motivating further research in mitigation techniques. We also analyze the impact of various device- and circuit-level parameters and the associated non-idealities to provide key insights that can guide the design of crossbar-based DNN training accelerators.
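To make the abstract's core idea concrete, here is a minimal sketch of how a non-ideal crossbar vector-matrix multiplication can be modeled while still executing as ordinary BLAS-backed matrix multiplies. This is an illustrative assumption, not TxSim's actual implementation: the function names, the differential conductance-pair mapping, the conductance range, the level count, and the multiplicative read-noise model are all hypothetical choices made for this example.

```python
import numpy as np


def weights_to_conductance(W, g_min=1e-6, g_max=1e-4, levels=32):
    """Map a weight matrix onto a differential pair of crossbar arrays
    (one for positive, one for negative weights), quantized to a finite
    number of programmable conductance levels. All values are assumptions
    for illustration, not parameters from the TxSim paper."""
    w_max = np.max(np.abs(W)) + 1e-12
    scale = (g_max - g_min) / w_max  # weight -> conductance scaling
    g_pos = g_min + np.clip(W, 0, None) * scale
    g_neg = g_min + np.clip(-W, 0, None) * scale
    # Quantize to the finite number of programmable levels.
    step = (g_max - g_min) / (levels - 1)
    g_pos = g_min + np.round((g_pos - g_min) / step) * step
    g_neg = g_min + np.round((g_neg - g_min) / step) * step
    return g_pos, g_neg, scale


def noisy_crossbar_mvm(x, g_pos, g_neg, scale, read_noise=0.02, rng=None):
    """Evaluate the vector-matrix product on the differential conductance
    pair with multiplicative read noise. The two matmuls below are exactly
    the dense operations that map onto optimized BLAS GEMV/GEMM routines."""
    rng = np.random.default_rng() if rng is None else rng
    gp = g_pos * (1.0 + read_noise * rng.standard_normal(g_pos.shape))
    gn = g_neg * (1.0 + read_noise * rng.standard_normal(g_neg.shape))
    # Output current is the difference of the two arrays' readouts; the
    # g_min offset cancels in the subtraction, so rescaling recovers x @ W.
    return (x @ gp - x @ gn) / scale
```

With `read_noise=0` and many quantization levels, the result converges to the ideal `x @ W`; lowering `levels` or raising `read_noise` reproduces the kind of accuracy degradation the abstract describes.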

Related research

08/31/2018 · Rx-Caffe: Framework for evaluating and training Deep Neural Networks on Resistive Crossbars
Deep Neural Networks (DNNs) are widely used to perform machine learning ...

03/13/2020 · DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training
DNN+NeuroSim is an integrated framework to benchmark compute-in-memory (...

05/03/2022 · MemSE: Fast MSE Prediction for Noisy Memristor-Based DNN Accelerators
Memristors enable the computation of matrix-vector multiplications (MVM)...

02/16/2023 · Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
Analog in-memory computing (AIMC) – a promising approach for energy-effi...

01/18/2022 · Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures
The impact of device and circuit-level effects in mixed-signal Resistive...

07/20/2022 · SumMerge: an efficient algorithm and implementation for weight repetition-aware DNN inference
Deep Neural Network (DNN) inference efficiency is a key concern across t...

03/23/2016 · Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices
In recent years, deep neural networks (DNN) have demonstrated significan...
