DeepAxe: A Framework for Exploration of Approximation and Reliability Trade-offs in DNN Accelerators

03/14/2023
by Mahdi Taheri, et al.

While Deep Neural Networks (DNNs) play an expanding role in a wide range of safety-critical applications, emerging DNNs are also growing rapidly in computational demand. This raises the need to improve the reliability of DNN accelerators while reducing the computational burden on the hardware platform, i.e., lowering energy consumption and execution time and increasing accelerator efficiency. The trade-off between hardware performance (area, power, and delay) and the reliability of the DNN accelerator implementation therefore becomes critical and calls for analysis tools. In this paper, we propose DeepAxe, a framework for design space exploration of FPGA-based DNN implementations that considers the trilateral impact of functional approximation on accuracy, reliability, and hardware performance. The framework enables selective approximation of reliability-critical DNNs and provides a set of Pareto-optimal DNN implementation design space points for the target resource-utilization requirements. The design flow starts with a pre-trained network in Keras, uses the high-level synthesis environment DeepHLS, and yields a set of Pareto-optimal design space points as a guide for the designer. The framework is demonstrated in a case study on custom and state-of-the-art DNNs and datasets.
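To illustrate the kind of Pareto-optimal selection the abstract describes, the following is a minimal sketch, not DeepAxe's actual API: the class `DesignPoint`, the objective values, and all names here are hypothetical, standing in for design points produced by evaluating approximated DNN variants on accuracy, reliability, and a composite hardware cost.

```python
# Hypothetical sketch of Pareto-front selection over three objectives.
# None of these names or numbers come from the DeepAxe framework itself.
from dataclasses import dataclass

@dataclass
class DesignPoint:
    name: str
    accuracy: float      # higher is better
    reliability: float   # higher is better (e.g., a fault-resilience score)
    hw_cost: float       # lower is better (composite of area, power, delay)

def dominates(a: DesignPoint, b: DesignPoint) -> bool:
    """True if `a` is at least as good as `b` on every objective
    and strictly better on at least one."""
    at_least_as_good = (a.accuracy >= b.accuracy
                        and a.reliability >= b.reliability
                        and a.hw_cost <= b.hw_cost)
    strictly_better = (a.accuracy > b.accuracy
                       or a.reliability > b.reliability
                       or a.hw_cost < b.hw_cost)
    return at_least_as_good and strictly_better

def pareto_front(points):
    """Keep only the non-dominated design points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Illustrative candidates: an exact design and selectively approximated ones.
points = [
    DesignPoint("exact",        accuracy=0.92, reliability=0.80, hw_cost=1.00),
    DesignPoint("approx-mult8", accuracy=0.91, reliability=0.85, hw_cost=0.70),
    DesignPoint("approx-all",   accuracy=0.80, reliability=0.86, hw_cost=0.55),
    DesignPoint("dominated",    accuracy=0.79, reliability=0.70, hw_cost=0.90),
]
front = pareto_front(points)
print([p.name for p in front])  # → ['exact', 'approx-mult8', 'approx-all']
```

The dominated candidate is filtered out because another point beats it on all three objectives; the three surviving points form the trade-off frontier a designer would choose from.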


