Negative Feedback Training: A Novel Concept to Improve Robustness of NVCiM DNN Accelerators

05/23/2023
by Yifan Qin, et al.

Compute-in-Memory (CiM) utilizing non-volatile memory (NVM) devices presents a highly promising and efficient approach for accelerating deep neural networks (DNNs). By concurrently storing network weights and performing matrix operations within the same crossbar structure, CiM accelerators offer DNN inference acceleration with minimal area requirements and exceptional energy efficiency. However, the stochasticity and intrinsic variations of NVM devices often lead to performance degradation, such as reduced classification accuracy, compared to expected outcomes. Although several methods have been proposed to mitigate device variation and enhance robustness, most of them rely on overall modulation and lack constraints on the training process. Drawing inspiration from the negative feedback mechanism, we introduce a novel training approach that uses a multi-exit mechanism as negative feedback to enhance the performance of DNN models in the presence of device variation. Our negative feedback training method surpasses state-of-the-art techniques, achieving an improvement of up to 12.49% in DNN robustness against device variation.
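The abstract describes using intermediate (multi-exit) classifier losses as a negative-feedback signal during training. Since the paper's code is not reproduced here, the following is only a minimal PyTorch sketch of that idea under assumed choices: the toy architecture, the feedback_weight loss weighting, and the perturb_weights noise model (relative Gaussian noise emulating NVM device variation) are illustrative assumptions, not the authors' implementation.

# Minimal sketch (illustrative assumptions, not the authors' code): a small CNN
# with an early exit whose loss acts as a corrective "negative feedback" term,
# plus a helper that perturbs weights to emulate NVM device variation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Two conv blocks for 3x32x32 inputs (e.g., CIFAR-10-sized images).
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.exit1 = nn.Linear(32 * 16 * 16, num_classes)  # early exit after block1
        self.exit2 = nn.Linear(64 * 8 * 8, num_classes)    # final exit after block2

    def forward(self, x):
        h1 = self.block1(x)
        out1 = self.exit1(h1.flatten(1))
        h2 = self.block2(h1)
        out2 = self.exit2(h2.flatten(1))
        return out1, out2

def perturb_weights(model, sigma=0.05):
    # Add zero-mean Gaussian noise proportional to each weight's magnitude,
    # a simple stand-in for NVM device variation at deployment time.
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * sigma * p.abs())

def training_step(model, x, y, optimizer, feedback_weight=0.3):
    # The early-exit loss serves as the negative-feedback term constraining
    # intermediate representations; the final-exit loss drives accuracy.
    out1, out2 = model(x)
    loss = F.cross_entropy(out2, y) + feedback_weight * F.cross_entropy(out1, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, perturb_weights would be applied to a copy of the trained model at evaluation time to estimate accuracy under device variation, while the weighted early-exit loss is what plays the role of the feedback constraint during training.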
