Measuring what Really Matters: Optimizing Neural Networks for TinyML

04/21/2021
by Lennart Heim et al.

With the surge of inexpensive computational and memory resources, neural networks (NNs) have experienced unprecedented growth in architectural and computational complexity. Bringing NNs to resource-constrained devices enables cost-efficient deployments, widespread availability, and the preservation of sensitive data. This work addresses the challenges of bringing machine learning to microcontroller units (MCUs), focusing on the ubiquitous ARM Cortex-M architecture. The detailed effects and trade-offs that optimization methods, software frameworks, and MCU hardware architecture have on key performance metrics such as inference latency and energy consumption have not previously been studied in depth for state-of-the-art frameworks such as TensorFlow Lite Micro. We find that empirical investigations which measure the perceptible metrics, i.e., performance as experienced by the user, are indispensable, as the impact of specialized instructions and layer types can be subtle. To this end, we propose implementation-aware design as a cost-effective method for verification and benchmarking. Using our toolchain, we demonstrate how existing NN deployments on resource-constrained devices can be improved by systematically optimizing NNs for their target application scenario.
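To make the idea of measuring perceptible metrics on-device concrete, the sketch below times a single TensorFlow Lite Micro inference on a Cortex-M core using the DWT cycle counter. It is a minimal illustration, not the authors' toolchain: the model symbol g_model_data, the tensor-arena size, the operator set, and the CMSIS device header are placeholders for a concrete deployment, and the MicroInterpreter constructor signature varies somewhat across TFLM releases.

```cpp
// Sketch: on-device inference-latency measurement with TensorFlow Lite Micro
// on an ARM Cortex-M3/M4/M7 core (the DWT cycle counter is absent on M0).
// g_model_data, kArenaSize, and the op set are hypothetical placeholders.

#include <cstdint>

#include "stm32f4xx.h"  // assumed CMSIS device header; provides DWT/CoreDebug
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // flatbuffer from the TFLite converter

namespace {
constexpr int kArenaSize = 64 * 1024;       // tune to the model's peak memory use
alignas(16) uint8_t tensor_arena[kArenaSize];
}  // namespace

// Enable the Data Watchpoint and Trace (DWT) cycle counter via CMSIS registers.
static inline void dwt_enable() {
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  // enable trace subsystem
  DWT->CYCCNT = 0;                                 // reset the counter
  DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;             // start counting core cycles
}

uint32_t measure_inference_cycles() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses; this keeps the
  // binary small, which matters on flash-constrained MCUs.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return 0;  // arena too small or unsupported op
  }

  // ... fill interpreter.input(0)->data with a representative input sample ...

  dwt_enable();
  const uint32_t start = DWT->CYCCNT;
  interpreter.Invoke();
  return DWT->CYCCNT - start;  // latency in core clock cycles
}
```

Dividing the returned cycle count by the core clock frequency yields wall-clock latency; energy consumption, the paper's other key metric, additionally requires external instrumentation such as a shunt-based power monitor.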

