Rx-Caffe: Framework for evaluating and training Deep Neural Networks on Resistive Crossbars

08/31/2018
by Shubham Jain, et al.

Deep Neural Networks (DNNs) are widely used to perform machine learning tasks in speech, image, and natural language processing. The high computation and storage demands of DNNs have led to a need for energy-efficient implementations. Resistive crossbar systems have emerged as promising candidates due to their ability to compactly and efficiently realize the primitive DNN operation, viz., vector-matrix multiplication. However, in practice, the functionality of resistive crossbars may deviate considerably from the ideal abstraction due to device- and circuit-level non-idealities such as driver resistance, sensing resistance, sneak paths, and interconnect parasitics. Although DNNs are somewhat tolerant to errors in their computations, it is still essential to evaluate the impact of errors introduced by crossbar non-idealities on DNN accuracy. Unfortunately, device- and circuit-level models are infeasible to use in the context of large-scale DNNs with 2.6-15.5 billion synaptic connections. In this work, we present a fast and accurate simulation framework to enable training and evaluation of large-scale DNNs on resistive crossbar based hardware fabrics. We propose a Fast Crossbar Model (FCM) that accurately captures the errors arising due to non-idealities while being five orders of magnitude faster than circuit simulation. We develop Rx-Caffe, an enhanced version of the popular Caffe machine learning software framework, to train and evaluate DNNs on crossbars. We use Rx-Caffe to evaluate large-scale image recognition DNNs designed for the ImageNet dataset. Our experiments reveal that crossbar non-idealities can significantly degrade DNN accuracy by 9.6%. To the best of our knowledge, this work is the first evaluation of the accuracy of large-scale DNNs on resistive crossbars, and it highlights the need for further efforts to address the impact of non-idealities.
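The core operation described above, a vector-matrix multiplication realized as input voltages driving a grid of programmable conductances, can be sketched numerically. The following is a minimal illustrative sketch, not the paper's Fast Crossbar Model: it compares an ideal crossbar product against a crude first-order non-ideal one in which assumed driver and sense resistances sit in series with each cell, lowering its effective conductance.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's Fast Crossbar Model (FCM).
# An ideal resistive crossbar computes a vector-matrix product in one step:
# output currents I = V @ G, where V holds the input voltages applied to the
# rows and G holds the programmed cell conductances (one column per output).

rng = np.random.default_rng(0)
rows, cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))   # cell conductances (siemens)
V = rng.uniform(0.05, 0.2, size=rows)            # input voltages (volts)

I_ideal = V @ G   # ideal vector-matrix multiplication

# First-order non-ideality (assumed, hypothetical values): a driver resistance
# and a sense resistance in series with each cell reduce its effective
# conductance, so every output current comes out smaller than ideal.
R_driver = 1e3    # ohms, assumed
R_sense = 1e3     # ohms, assumed
G_eff = 1.0 / (1.0 / G + R_driver + R_sense)

I_nonideal = V @ G_eff

# Relative error per output column: the kind of deviation that, accumulated
# across layers, degrades end-to-end DNN accuracy.
rel_error = np.abs(I_nonideal - I_ideal) / np.abs(I_ideal)
print(rel_error)
```

This sketch only captures series-resistance loss; effects such as sneak paths and interconnect parasitics couple the cells and require the kind of fast modeling the abstract describes.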

