HADES: Hardware/Algorithm Co-design in DNN accelerators using Energy-efficient Approximate Alphabet Set Multipliers

02/03/2023
by   Arani Roy, et al.

Edge computing must be capable of executing computationally intensive algorithms, such as Deep Neural Networks (DNNs), while operating within a constrained computational resource budget. Such computations are dominated by Matrix-Vector Multiplications (MVMs), which are the main contributors to the memory and energy budget of DNNs. To alleviate the computational intensity and storage demand of MVMs, we propose circuit-algorithm co-design techniques with low-complexity approximate Multiply-Accumulate (MAC) units derived from the principles of Alphabet Set Multipliers (ASMs). Selecting a small and well-chosen set of alphabets leads to a multiplier-less DNN implementation and enables encoding of low-precision weights and input activations into fewer bits. To maintain accuracy under alphabet-set approximations, we developed a novel ASM-alphabet-aware training scheme. The proposed low-complexity multiplication-aware algorithm was implemented in-memory and near-memory with efficient shift operations to further reduce the data-movement cost between the memory and the processing unit. We benchmark our design on the CIFAR10 and ImageNet datasets for ResNet and MobileNet models and attain <1-2% accuracy degradation with respect to the full-precision baselines, with energy benefits of >50% over the conventional multiplier-based counterpart.
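To make the ASM principle concrete, here is a minimal Python sketch of a multiplier-less MAC in the spirit described above. The alphabet set {1, 3, 5, 7}, the 4-bit width, the restriction to non-negative integer weights, and all function names are illustrative assumptions, not the paper's actual design. The idea: every weight is snapped to the nearest value of the form alphabet << shift, so each "multiplication" reduces to one lookup into a small table of precomputed alphabet-activation products plus a shift.

# Illustrative sketch of an Alphabet Set Multiplier (ASM) style MAC.
# The alphabet set and bit width below are assumptions for demonstration.
ALPHABETS = (1, 3, 5, 7)   # small set of odd "alphabets"
NUM_BITS = 4

def representable_values(num_bits=NUM_BITS):
    # All values expressible as alphabet << shift within num_bits, plus zero.
    vals = {a << s for a in ALPHABETS for s in range(num_bits)
            if (a << s) < (1 << num_bits)}
    return sorted(vals | {0})

def quantize_weight(w, num_bits=NUM_BITS):
    # Snap a non-negative weight to the nearest ASM-representable value;
    # this is the approximation error that ASM-aware training must absorb.
    return min(representable_values(num_bits), key=lambda c: abs(c - w))

def asm_mac(activation, weights):
    # Precompute alphabet * activation once; every subsequent "multiply"
    # is a table lookup plus a shift, i.e., multiplier-less.
    table = {a: a * activation for a in ALPHABETS}
    acc = 0
    for w in weights:
        wq = quantize_weight(w)
        if wq == 0:
            continue
        shift = 0
        while wq % 2 == 0:   # peel powers of two: wq = alphabet * 2^shift
            wq //= 2
            shift += 1
        acc += table[wq] << shift
    return acc

# Example: weights 6 and 12 are exactly representable (3<<1, 3<<2),
# weight 9 is not and gets approximated to 8 (1<<3).
print(asm_mac(10, [6, 12, 9]))   # prints 260; exact would be 10*(6+12+9) = 270

In a hardware realization the precomputed table would be shared across an entire MVM, and the shifts map to cheap shifters near or inside the memory array, which is where the data-movement savings come from.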


Related research

06/11/2019 · ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining
The state-of-the-art approaches employ approximate computing to improve ...

09/15/2019 · TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks
The use of lower precision has emerged as a popular technique to optimiz...

11/25/2020 · Ax-BxP: Approximate Blocked Computation for Precision-Reconfigurable Deep Neural Network Acceleration
Precision scaling has emerged as a popular technique to optimize the com...

07/25/2022 · Energy-efficient DNN Inference on Approximate Accelerators Through Formal Property Exploration
Deep Neural Networks (DNNs) are being heavily utilized in modern applica...

09/09/2022 · ApproxTrain: Fast Simulation of Approximate Multipliers for DNN Training and Inference
Edge training of Deep Neural Networks (DNNs) is a desirable goal for con...

07/20/2021 · Positive/Negative Approximate Multipliers for DNN Accelerators
Recent Deep Neural Networks (DNNs) managed to deliver superhuman accurac...

03/08/2022 · AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch
Current state-of-the-art employs approximate multipliers to address the ...
