A Construction Kit for Efficient Low Power Neural Network Accelerator Designs

06/24/2021
by Petar Jokic, et al.

Implementing embedded neural network processing at the edge requires efficient hardware acceleration that couples high computational performance with low power consumption. Driven by the rapid evolution of network architectures and their algorithmic features, accelerator designs are constantly updated and improved. To evaluate and compare hardware design choices, designers can refer to a myriad of accelerator implementations in the literature. Surveys provide an overview of these works but are often limited to system-level and benchmark-specific performance metrics, making it difficult to quantitatively compare the individual effect of each utilized optimization technique. This complicates the evaluation of optimizations for new accelerator designs, slowing down research progress. This work provides a survey of neural network accelerator optimization approaches that have been used in recent works and reports their individual effects on edge processing performance. It presents the list of optimizations and their quantitative effects as a construction kit, allowing designers to assess the design choices for each building block separately. Reported optimizations range from up to 10,000x memory savings to 33x energy reductions, providing chip designers with an overview of design choices for implementing efficient low power neural network accelerators.
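As a rough illustration of how one such building block can be assessed in isolation, the sketch below estimates the weight-memory footprint of a small convolutional network at different quantization bit widths, one of the optimization techniques commonly covered in such surveys. The layer shapes and bit widths are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: back-of-envelope weight-memory savings from quantization,
# one example of an accelerator optimization "building block".
# Layer shapes and bit widths are assumed for illustration only.

from dataclasses import dataclass


@dataclass
class ConvLayer:
    in_ch: int
    out_ch: int
    kernel: int  # square kernel: kernel x kernel

    def num_weights(self) -> int:
        return self.in_ch * self.out_ch * self.kernel * self.kernel


# A toy three-layer network (assumed shapes).
layers = [
    ConvLayer(3, 16, 3),
    ConvLayer(16, 32, 3),
    ConvLayer(32, 64, 3),
]

total_weights = sum(layer.num_weights() for layer in layers)

# Compare a float32 baseline against common quantized bit widths.
for bits in (32, 8, 4, 1):
    kib = total_weights * bits / 8 / 1024
    saving = 32 / bits
    print(f"{bits:>2}-bit weights: {kib:8.1f} KiB ({saving:.0f}x smaller than float32)")
```

Swapping the assumed layer list for a real network description would give a first-order estimate of how aggressive quantization trades off memory footprint against the accuracy and energy effects reported in the surveyed works.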

