Energy-efficient DNN Inference on Approximate Accelerators Through Formal Property Exploration

07/25/2022
by Ourania Spantidi, et al.

Deep Neural Networks (DNNs) are heavily utilized in modern applications and are putting energy-constrained devices to the test. To mitigate high energy consumption, approximate computing has been employed in DNN accelerators to balance the accuracy-energy trade-off. However, the approximation-induced accuracy loss can be very high and drastically degrade the performance of the DNN. Therefore, there is a need for a fine-grained mechanism that assigns specific DNN operations to approximation in order to maintain acceptable DNN accuracy while also achieving low energy consumption. In this paper, we present an automated framework for weight-to-approximation mapping that enables formal property exploration for approximate DNN accelerators. At the MAC unit level, our experimental evaluation surpassed already energy-efficient mappings by more than 2x in terms of energy gains, while also supporting significantly finer-grained control over the introduced approximation.
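The paper targets MAC-unit-level mapping driven by formal property exploration; the snippet below is only a minimal sketch of the underlying idea of weight-to-approximation mapping, assuming a hypothetical two-mode multiplier (exact vs. approximate), a made-up per-layer error model, and an arbitrary error budget. None of these names or numbers come from the paper.

```python
# Illustrative sketch only, NOT the paper's framework: shows the general idea
# of mapping DNN layers to approximate vs. exact MAC modes under an error
# budget. All mode names, energy numbers, and the error model are assumptions.

import numpy as np

# Hypothetical MAC modes: relative energy cost and a crude relative-error
# coefficient for the approximate multiplier.
MODES = {
    "exact":  {"energy": 1.00, "rel_error": 0.00},
    "approx": {"energy": 0.55, "rel_error": 0.02},
}

ERROR_BUDGET = 0.002  # assumed tolerable error score per layer (made up)


def layer_error(weights: np.ndarray, mode: str) -> float:
    """Toy error estimate: approximation error grows with mean |weight|."""
    return MODES[mode]["rel_error"] * float(np.abs(weights).mean())


def map_layers(layer_weights: list[np.ndarray]) -> list[str]:
    """Greedy per-layer mapping: use the approximate multiplier whenever the
    estimated error stays under the budget, otherwise fall back to exact."""
    return [
        "approx" if layer_error(w, "approx") <= ERROR_BUDGET else "exact"
        for w in layer_weights
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three toy "layers" with different weight scales.
    layers = [rng.normal(0.0, s, size=(64, 64)) for s in (0.02, 0.2, 0.05)]
    mapping = map_layers(layers)
    energy = sum(MODES[m]["energy"] for m in mapping) / len(mapping)
    print("per-layer mapping:", mapping)
    print("normalized MAC energy:", round(energy, 2))
```

In the actual framework, mapping decisions are made at a finer granularity and are driven by formal property exploration rather than this toy greedy threshold.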


Related research

07/20/2021  Positive/Negative Approximate Multipliers for DNN Accelerators
Recent Deep Neural Networks (DNNs) managed to deliver superhuman accurac...

12/02/2017  LightNN: Filling the Gap between Conventional Deep Neural Networks and Binarized Networks
Application-specific integrated circuit (ASIC) implementations for Deep ...

11/07/2022  LOCAL: Low-Complex Mapping Algorithm for Spatial DNN Accelerators
Deep neural networks are a promising solution for applications that solv...

08/29/2022  AMR-MUL: An Approximate Maximally Redundant Signed Digit Multiplier
In this paper, we present an energy-efficient, yet high-speed approximat...

11/28/2021  Enabling Fast Deep Learning on Tiny Energy-Harvesting IoT Devices
Energy harvesting (EH) IoT devices that operate intermittently without b...

02/03/2023  HADES: Hardware/Algorithm Co-design in DNN accelerators using Energy-efficient Approximate Alphabet Set Multipliers
Edge computing must be capable of executing computationally intensive al...

06/14/2017  MATIC: Adaptation and In-situ Canaries for Energy-Efficient Neural Network Acceleration
The primary author has withdrawn this paper due to conflict of interes...
