ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining

06/11/2019
by Vojtech Mrazek, et al.

State-of-the-art approaches employ approximate computing to reduce the energy consumption of DNN hardware. Approximate DNNs then require extensive retraining to recover from the accuracy loss caused by the use of approximate operations. However, retraining of complex DNNs does not scale well. In this paper, we demonstrate that efficient approximations can be introduced into the computational path of DNN accelerators while retraining is completely avoided. ALWANN provides highly optimized implementations of DNNs for custom low-power accelerators in which the number of computing units is lower than the number of DNN layers. First, a fully trained DNN (e.g., in TensorFlow) is converted to operate with 8-bit weights and 8-bit multipliers in the convolutional layers. A suitable approximate multiplier is then selected for each computing element from a library of approximate multipliers such that (i) one approximate multiplier serves several layers, and (ii) the overall classification error and energy consumption are minimized. The optimizations, including the multiplier-selection problem, are solved by means of the multiobjective NSGA-II algorithm. To completely avoid the computationally expensive retraining of DNNs, which is usually employed to recover the classification accuracy, we propose a simple weight-updating scheme that compensates for the inaccuracy introduced by the approximate multipliers. The proposed approach is evaluated for two DNN accelerator architectures executing three versions of ResNet on CIFAR-10, with approximate multipliers taken from the open-source EvoApprox library. The proposed technique and approximate layers will be made available as an open-source extension of the TensorFlow framework.
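To make the weight-updating idea concrete, the sketch below illustrates one plausible reading of the scheme described above: for a convolutional layer that the multiobjective search has assigned a particular approximate 8-bit multiplier (given as a 256x256 lookup table), each original quantized weight is remapped to the replacement weight that minimizes the accumulated multiplication error over all possible 8-bit activations. This is a simplified illustration, not the authors' reference implementation; the function and variable names, the unsigned 8-bit operands, and the uniform-activation assumption are ours.

```python
import numpy as np

def build_weight_update_map(approx_mul_table):
    """Map each original 8-bit weight w to the replacement weight w' that
    minimizes sum_a |amul(a, w') - a * w| over all 8-bit activations a.

    approx_mul_table: (256, 256) integer array with amul(a, w) at index [a, w],
    e.g. a lookup table generated from one EvoApprox multiplier.
    Illustrative sketch only; assumes unsigned operands and uniformly
    distributed activations.
    """
    a = np.arange(256, dtype=np.int64)
    exact = np.outer(a, np.arange(256, dtype=np.int64))    # exact[a, w] = a * w
    approx = approx_mul_table.astype(np.int64)              # approx[a, w'] = amul(a, w')
    # err[w, w'] = total absolute error when weight w is stored as w'
    # (builds a 256^3 tensor, ~134 MB in int64 -- acceptable for a one-off table)
    err = np.abs(approx[:, None, :] - exact[:, :, None]).sum(axis=0)
    return err.argmin(axis=1)                                # best replacement w' for each w


# Usage sketch: remap the quantized weights of one layer that was assigned
# this approximate multiplier by the NSGA-II search.
# amul_lut = np.load("approx_mul_lut.npy")            # hypothetical file name
# update_map = build_weight_update_map(amul_lut)
# layer_weights_updated = update_map[layer_weights_q]  # layer_weights_q holds values 0..255
```

Because the map covers only the 256 possible weight values per multiplier, the remapping is a cheap table lookup applied once per layer, with no gradient computation involved, which is what allows retraining to be skipped.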

