HCM: Hardware-Aware Complexity Metric for Neural Network Architectures

04/19/2020
by Alex Karbachevsky, et al.

Convolutional Neural Networks (CNNs) have become common in many fields, including computer vision, speech recognition, and natural language processing. Although CNN hardware accelerators are already included in many SoC architectures, achieving high accuracy on resource-restricted devices remains challenging, mainly due to the vast number of design parameters that must be balanced to reach an efficient solution. Quantization techniques, when applied to the network parameters, reduce power and area and may also change the ratio between communication and computation. As a result, some algorithmic solutions may suffer from a lack of memory bandwidth or computational resources and fail to achieve the expected performance due to hardware constraints. Thus, the system designer and the micro-architect need to understand, at early development stages, the impact of their high-level decisions (e.g., the architecture of the CNN and the number of bits used to represent its parameters) on the final product (e.g., the expected power saving, area, and accuracy). Unfortunately, existing tools fall short of supporting such decisions. This paper introduces a hardware-aware complexity metric that aims to assist the system designer of neural network architectures throughout the entire project lifetime (especially at its early stages) by predicting the impact of architectural and micro-architectural decisions on the final product. We demonstrate how the proposed metric can help evaluate different design alternatives of neural network models on resource-restricted devices such as real-time embedded systems, and avoid design mistakes at early stages.
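To make the communication-versus-computation trade-off the abstract describes concrete, here is a toy sketch (not the paper's actual HCM formula, whose definition is in the full text) of how one might estimate per-layer MAC count, off-chip traffic, and arithmetic intensity for a convolutional layer under a chosen bit-width. All names and the cost model are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ConvLayer:
    in_ch: int    # input channels
    out_ch: int   # output channels
    kernel: int   # square kernel side
    out_hw: int   # square output feature-map side

def layer_cost(layer, weight_bits=8, act_bits=8):
    """Return (MAC count, off-chip traffic in bytes) for one conv layer.

    Toy model: every weight and every output activation is moved
    off-chip exactly once; real accelerators reuse data on-chip.
    """
    macs = layer.out_ch * layer.in_ch * layer.kernel ** 2 * layer.out_hw ** 2
    weight_bytes = layer.out_ch * layer.in_ch * layer.kernel ** 2 * weight_bits / 8
    act_bytes = layer.out_ch * layer.out_hw ** 2 * act_bits / 8
    return macs, weight_bytes + act_bytes

def arithmetic_intensity(layer, weight_bits=8, act_bits=8):
    """MACs per byte moved: low values suggest a bandwidth-bound layer."""
    macs, traffic = layer_cost(layer, weight_bits, act_bits)
    return macs / traffic

# Example: a 3x3 conv, 64 -> 128 channels, 56x56 output.
layer = ConvLayer(in_ch=64, out_ch=128, kernel=3, out_hw=56)
print(arithmetic_intensity(layer, weight_bits=8))  # baseline intensity
print(arithmetic_intensity(layer, weight_bits=2))  # quantizing weights raises it
```

Under this simplified model, shrinking the weight bit-width leaves the MAC count unchanged while cutting traffic, so arithmetic intensity rises: exactly the shift in the communication/computation ratio that the abstract warns can change which hardware resource becomes the bottleneck.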

Related research

- Hello Edge: Keyword Spotting on Microcontrollers (11/20/2017). Keyword spotting (KWS) is a critical component for enabling speech based…
- EfficientRep: An Efficient Repvgg-style ConvNets with Hardware-aware Neural Network Design (02/01/2023). We present a hardware-efficient architecture of convolutional neural net…
- HAQ: Hardware-Aware Automated Quantization (11/21/2018). Model quantization is a widely used technique to compress and accelerate…
- ATHEENA: A Toolflow for Hardware Early-Exit Network Automation (04/17/2023). The continued need for improvements in accuracy, throughput, and efficie…
- NTP: A Neural Network Topology Profiler (05/22/2019). Performance of end-to-end neural networks on a given hardware platform i…
- FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack (06/13/2021). Convolutional Neural Networks (CNN) have shown impressive performance in…
- HAPI: Hardware-Aware Progressive Inference (08/10/2020). Convolutional neural networks (CNNs) have recently become the state-of-t…
