PIM-QAT: Neural Network Quantization for Processing-In-Memory (PIM) Systems

09/18/2022
by Qing Jin, et al.

Processing-in-memory (PIM), an increasingly studied neuromorphic hardware paradigm, promises orders-of-magnitude improvements in energy efficiency and throughput for deep learning inference. By leveraging massively parallel and efficient analog computing inside memory arrays, PIM circumvents the data-movement bottlenecks of conventional digital hardware. However, an extra quantization step (i.e., PIM quantization), typically with limited resolution due to hardware constraints, is required to convert the analog computing results into the digital domain. Moreover, non-ideal effects are pervasive in PIM quantization because of the imperfect analog-to-digital interface, further compromising inference accuracy. In this paper, we propose a method for training quantized networks that incorporates PIM quantization, which is ubiquitous across PIM systems. Specifically, we propose a PIM quantization aware training (PIM-QAT) algorithm, and introduce rescaling techniques in both forward and backward propagation, derived from an analysis of the training dynamics, to facilitate training convergence. We further propose two techniques, namely batch normalization (BN) calibration and adjusted precision training, to suppress the adverse effects of the non-ideal linearity and stochastic thermal noise present in real PIM chips. Our method is validated on three mainstream PIM decomposition schemes, and physically on a prototype chip. Compared with directly deploying a conventionally trained quantized model on PIM systems, which ignores this extra quantization step and thus fails, our method yields significant improvements. It also achieves inference accuracy on PIM systems comparable to that of conventionally quantized models on digital hardware, on CIFAR10 and CIFAR100 across various network depths of the most popular network topology.
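The abstract describes the PIM quantization step only at a high level. As a concrete illustration, the following is a minimal sketch assuming a uniform n-bit ADC model and a straight-through estimator in the backward pass; the PIMQuant class, the 4-bit resolution, and the clipping range of 3.0 are hypothetical choices for illustration, not the paper's actual scheme.

import torch

class PIMQuant(torch.autograd.Function):
    """Emulate the limited-resolution ADC readout of analog partial sums.

    Forward: clip to the ADC dynamic range and round to uniform levels.
    Backward: straight-through estimator (gradients pass through unchanged).
    This is an illustrative assumption, not the paper's exact scheme.
    """

    @staticmethod
    def forward(ctx, x, bits, clip):
        levels = 2 ** bits - 1            # number of quantization steps
        step = 2.0 * clip / levels        # uniform step over [-clip, clip]
        xc = torch.clamp(x, -clip, clip)  # ADC saturates outside its range
        return torch.round(xc / step) * step

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: ignore the rounding when backpropagating.
        return grad_output, None, None


# Hypothetical usage: a matrix-vector product computed "in memory",
# with the analog result digitized by a 4-bit ADC.
x = torch.randn(8, 16, requires_grad=True)
w = torch.randn(16, 4, requires_grad=True)
analog_out = x @ w                                 # analog MAC inside the array
digital_out = PIMQuant.apply(analog_out, 4, 3.0)   # limited-resolution readout
digital_out.sum().backward()                       # gradients flow via the STE

In a full PIM-QAT setup, such a fake-quantization node would sit at the analog-to-digital interface of every PIM-mapped layer, with the paper's rescaling, BN calibration, and adjusted precision techniques applied on top; those details are not reproduced here.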

Related research

07/22/2018 · Hardware-Limited Task-Based Quantization
Quantization plays a critical role in digital signal processing systems....

09/05/2023 · OHQ: On-chip Hardware-aware Quantization
Quantization emerges as one of the most promising approaches for deployi...

06/05/2023 · Sensitivity-Aware Finetuning for Accuracy Recovery on Deep Learning Hardware
Existing methods to recover model accuracy on analog-digital hardware in...

07/03/2019 · Serial Quantization for Representing Sparse Signals
Sparse signals are encountered in a broad range of applications. In orde...

11/11/2021 · Variability-Aware Training and Self-Tuning of Highly Quantized DNNs for Analog PIM
DNNs deployed on analog processing in memory (PIM) architectures are sub...

01/07/2017 · Classification Accuracy Improvement for Neuromorphic Computing Systems with One-level Precision Synapses
Brain inspired neuromorphic computing has demonstrated remarkable advant...

04/03/2016 · A New Learning Method for Inference Accuracy, Core Occupation, and Performance Co-optimization on TrueNorth Chip
IBM TrueNorth chip uses digital spikes to perform neuromorphic computing...
