Positive/Negative Approximate Multipliers for DNN Accelerators

07/20/2021
by Ourania Spantidi, et al.

Recent Deep Neural Networks (DNNs) deliver superhuman accuracy on many AI tasks. Applications rely increasingly on DNNs for sophisticated services, and DNN accelerators are becoming integral components of modern systems-on-chips. DNNs perform millions of arithmetic operations per inference, and DNN accelerators integrate thousands of multiply-accumulate units, leading to high energy requirements. Approximate computing principles are employed to significantly lower the energy consumption of DNN accelerators at the cost of some accuracy loss. Nevertheless, recent research has demonstrated that complex DNNs are increasingly sensitive to approximation; hence, the obtained energy savings are often limited when targeting tight accuracy constraints. In this work, we present a dynamically configurable approximate multiplier that supports three operation modes: exact, positive error, and negative error. In addition, we propose a filter-oriented approximation method to map the weights to the appropriate modes of the approximate multiplier. Our mapping algorithm balances the positive and negative errors of the approximate multiplications, aiming to maximize the energy reduction while minimizing the overall convolution error. We evaluate our approach on multiple DNNs and datasets against state-of-the-art approaches: our method achieves 18.33% energy gains on average across 7 NNs on 4 different datasets for a maximum accuracy drop of only 1%.
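The core idea of the mapping algorithm can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual method: it assumes per-weight error estimates for the positive-error and negative-error multiplier modes (the functions `err_pos` and `err_neg` are placeholders) and greedily assigns each weight of a filter to the mode that keeps the accumulated signed error of the convolution closest to zero.

```python
# Illustrative sketch of filter-oriented mode mapping: assign each weight
# of a convolution filter to a positive-error or negative-error multiplier
# mode so the signed multiplication errors roughly cancel. The error model
# (err_pos / err_neg) is a placeholder assumption, not the paper's.

def map_filter_modes(weights, err_pos, err_neg):
    """Greedily pick, per weight, the mode that keeps the running
    accumulated error closest to zero. Returns the chosen modes and
    the final accumulated error."""
    modes = []
    total_err = 0.0
    for w in weights:
        e_pos = err_pos(w)  # estimated error in positive-error mode
        e_neg = err_neg(w)  # estimated error in negative-error mode
        if abs(total_err + e_pos) <= abs(total_err + e_neg):
            modes.append("pos")
            total_err += e_pos
        else:
            modes.append("neg")
            total_err += e_neg
    return modes, total_err
```

With a symmetric toy error model (e.g., a constant +0.5 in positive mode and -0.5 in negative mode), the greedy pass alternates modes and drives the accumulated filter error to zero, which mirrors the positive/negative balancing described in the abstract.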


