DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training

03/13/2020
by Xiaochen Peng, et al.

DNN+NeuroSim is an integrated framework for benchmarking compute-in-memory (CIM) accelerators for deep neural networks, with hierarchical design options spanning the device, circuit, and algorithm levels. A Python wrapper is developed to interface NeuroSim with the popular machine learning platform PyTorch, supporting flexible network structures. The framework provides automatic algorithm-to-hardware mapping and evaluates chip-level area, energy efficiency, and throughput for training or inference, as well as training/inference accuracy under hardware constraints. Our prior work (DNN+NeuroSim V1.1) estimated the impact of synaptic device reliability and analog-to-digital converter (ADC) quantization loss on the accuracy and hardware performance of inference engines. In this work, we further investigate the impact of non-ideal properties of analog emerging non-volatile memory (eNVM) devices on on-chip training. By introducing the nonlinearity, asymmetry, and device-to-device and cycle-to-cycle variation of the weight update into the Python wrapper, and adding peripheral circuits for error and weight-gradient computation to the NeuroSim core, we benchmark CIM accelerators based on state-of-the-art SRAM and eNVM devices for VGG-8 on the CIFAR-10 dataset, revealing the crucial specifications of synaptic devices for on-chip training. The proposed DNN+NeuroSim V2.0 framework is available on GitHub.
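To illustrate the kind of weight-update non-idealities the abstract describes, the sketch below applies the exponential conductance model commonly used for eNVM synaptic devices, with separate nonlinearity factors for potentiation (LTP) and depression (LTD) to capture asymmetry, plus device-to-device and cycle-to-cycle variation. This is a minimal PyTorch sketch under assumed normalized conductances; the function `nonlinear_update` and all parameter names are hypothetical, not the framework's actual API.

```python
import torch

def nonlinear_update(G, pulses, A_ltp, A_ltd, G_min, G_max, P_max,
                     sigma_d2d=0.0, sigma_c2c=0.0):
    """Apply `pulses` update pulses to a tensor of conductances G.

    Positive entries of `pulses` potentiate (LTP), negative entries
    depress (LTD), following the exponential conductance model
        LTP: G(P) = G_min + B_p * (1 - exp(-P / A_ltp))
        LTD: G(P) = G_max - B_d * (1 - exp(-(P_max - P) / A_ltd))
    where A sets the nonlinearity and B normalizes the curve to the
    conductance window [G_min, G_max]. All names here are illustrative.
    """
    # Device-to-device variation: perturb each device's nonlinearity
    # factor (clamped to stay positive for numerical safety).
    Ap = (A_ltp * (1 + sigma_d2d * torch.randn_like(G))).clamp(min=1e-3)
    Ad = (A_ltd * (1 + sigma_d2d * torch.randn_like(G))).clamp(min=1e-3)
    Bp = (G_max - G_min) / (1 - torch.exp(-P_max / Ap))
    Bd = (G_max - G_min) / (1 - torch.exp(-P_max / Ad))

    # Invert each curve to find the equivalent pulse position of G.
    Pp = -Ap * torch.log(1 - (G - G_min) / Bp)
    Pd = P_max + Ad * torch.log(1 - (G_max - G) / Bd)

    # Step along the LTP or LTD curve by the requested pulse count.
    G_ltp = G_min + Bp * (1 - torch.exp(-(Pp + pulses) / Ap))
    G_ltd = G_max - Bd * (1 - torch.exp(-(P_max - (Pd + pulses)) / Ad))
    G_new = torch.where(pulses > 0, G_ltp, G_ltd)

    # Cycle-to-cycle variation: Gaussian noise on every update event.
    G_new = G_new + sigma_c2c * (G_max - G_min) * torch.randn_like(G)
    return G_new.clamp(G_min, G_max)

# Example use: quantize a weight gradient into discrete pulse counts
# (the 0.01 step size and variation levels are assumptions).
G = torch.full((128, 128), 0.5)
grad = torch.randn(128, 128) * 0.01
pulses = torch.round(-grad / 0.01)
G = nonlinear_update(G, pulses, A_ltp=2.0, A_ltd=2.0,
                     G_min=0.0, G_max=1.0, P_max=64,
                     sigma_d2d=0.05, sigma_c2c=0.01)
```

Because the curves are steeper near one end of the conductance window, identical pulse counts produce unequal weight changes depending on the current weight value; this is precisely why nonlinearity and asymmetry degrade on-chip training accuracy and why the benchmarked device specifications matter.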

