Hardware Accelerator and Neural Network Co-Optimization for Ultra-Low-Power Audio Processing Devices

09/08/2022
by Christoph Gerum, et al.

The increasing spread of artificial neural networks does not stop at ultra-low-power edge devices. However, neural networks often have high computational demands and therefore require specialized hardware accelerators to meet power and performance constraints. Manually optimizing neural networks together with the corresponding hardware accelerators is very challenging. This paper presents HANNAH (Hardware Accelerator and Neural Network seArcH), a framework for the automated, combined hardware/software co-design of deep neural networks and hardware accelerators for resource- and power-constrained edge devices. The optimization approach uses an evolution-based search algorithm, a neural network template technique, and analytical KPI models for the configurable UltraTrail hardware accelerator template to find an optimized neural network and accelerator configuration. We demonstrate that HANNAH can find suitable neural networks with minimized power consumption and high accuracy for different audio classification tasks, such as single-class wake word detection, multi-class keyword detection, and voice activity detection, outperforming related work.
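To illustrate the idea of an evolution-based joint search over network and accelerator parameters, the following is a minimal sketch. All parameter names, value ranges, and the fitness function are hypothetical stand-ins: a real flow such as HANNAH would train and evaluate candidate networks and query analytical power/latency models of the accelerator instead of the toy formulas used here.

```python
import random

# Hypothetical joint search space over network and accelerator parameters.
# Names and value ranges are illustrative only, not taken from HANNAH.
SEARCH_SPACE = {
    "conv_channels": [8, 16, 32, 64],
    "num_blocks": [2, 3, 4, 5],
    "mac_units": [4, 8, 16],   # accelerator parallelism (assumed knob)
    "bit_width": [4, 6, 8],    # quantization bit width (assumed knob)
}

def random_config():
    """Sample one candidate network/accelerator configuration."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(cfg):
    """Re-sample a single randomly chosen parameter of a parent config."""
    child = dict(cfg)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(cfg):
    """Toy objective trading accuracy against power.

    Stands in for training accuracy and the analytical KPI (power)
    models of the configurable accelerator template.
    """
    accuracy = 0.7 + 0.004 * cfg["conv_channels"] ** 0.5 + 0.01 * cfg["num_blocks"]
    power = cfg["mac_units"] * cfg["bit_width"] * cfg["conv_channels"] * 1e-3
    return accuracy - 0.05 * power

def evolve(generations=20, pop_size=16, seed=0):
    """Simple (mu + lambda)-style evolutionary loop with truncation selection."""
    random.seed(seed)
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the better half
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(best)
```

The loop keeps the better half of each population and refills it with mutated parents; because network and accelerator parameters live in one configuration, the search co-optimizes both sides at once.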

Related research

Accelerator-aware Neural Network Design using AutoML (03/05/2020)
While neural network hardware accelerators provide a substantial amount ...

A Construction Kit for Efficient Low Power Neural Network Accelerator Designs (06/24/2021)
Implementing embedded neural network processing at the edge requires eff...

Ultra-Low Power Keyword Spotting at the Edge (11/09/2021)
Keyword spotting (KWS) has become an indispensable part of many intellig...

Sidebar: Scratchpad Based Communication Between CPUs and Accelerators (10/23/2019)
Hardware accelerators for neural networks have shown great promise for b...

Deep Learning on Edge TPUs (08/31/2021)
Computing at the edge is important in remote settings, however, conventi...

A Low-Power Accelerator for Deep Neural Networks with Enlarged Near-Zero Sparsity (05/22/2017)
It remains a challenge to run Deep Learning in devices with stringent po...

Ristretto: Hardware-Oriented Approximation of Convolutional Neural Networks (05/20/2016)
Convolutional neural networks (CNN) have achieved major breakthroughs in...
