Quantization and Deployment of Deep Neural Networks on Microcontrollers

05/27/2021
by   Pierre-Emmanuel Novac, et al.

Embedding Artificial Intelligence onto low-power devices is a challenging task that has been partly overcome with recent advances in machine learning and hardware design. Presently, deep neural networks can be deployed on embedded targets to perform different tasks such as speech recognition, object detection or Human Activity Recognition. However, there is still room for optimization of deep neural networks on embedded devices. These optimizations mainly address power consumption, memory and real-time constraints, but also easier deployment at the edge. Moreover, there is still a need for a better understanding of what can be achieved for different use cases. This work focuses on quantization and deployment of deep neural networks onto low-power 32-bit microcontrollers. The quantization methods relevant in the context of embedded execution on a microcontroller are first outlined. Then, a new framework for end-to-end deep neural network training, quantization and deployment is presented. This framework, called MicroAI, is designed as an alternative to existing inference engines (TensorFlow Lite for Microcontrollers and STM32Cube.AI), and can be easily adjusted and/or extended for specific use cases. Execution using single-precision 32-bit floating-point as well as fixed-point on 8- and 16-bit integers is supported. The proposed quantization method is evaluated on three different datasets (UCI-HAR, Spoken MNIST and GTSRB). Finally, a comparison between MicroAI and both existing embedded inference engines is provided in terms of memory and power efficiency. On-device evaluation is done using ARM Cortex-M4F-based microcontrollers (Ambiq Apollo3 and STM32L452RE).
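The fixed-point execution mentioned in the abstract can be illustrated with a minimal sketch of symmetric fixed-point (Qm.n) quantization to 8-bit integers. This is not MicroAI's actual API; the function names and the choice of 5 fractional bits are illustrative assumptions.

```python
import numpy as np

def quantize_fixed_point(x, n_bits=8, frac_bits=5):
    """Quantize a float array to signed fixed-point (Qm.n) integers.

    Values are scaled by 2**frac_bits, rounded, and clipped to the
    signed range representable with n_bits. (Illustrative sketch, not
    the MicroAI implementation.)
    """
    scale = 2 ** frac_bits
    qmin = -(2 ** (n_bits - 1))          # e.g. -128 for 8 bits
    qmax = 2 ** (n_bits - 1) - 1         # e.g. +127 for 8 bits
    return np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)

def dequantize_fixed_point(q, frac_bits=5):
    """Recover the approximate floating-point value."""
    return q / (2 ** frac_bits)

# Example: quantizing a few weights to Q2.5 (8-bit, 5 fractional bits)
weights = np.array([0.75, -1.2, 0.031])
q = quantize_fixed_point(weights, n_bits=8, frac_bits=5)
w_hat = dequantize_fixed_point(q, frac_bits=5)
```

With 5 fractional bits the quantization step is 1/32, so -1.2 is stored as -38 and recovered as -1.1875; trading fractional bits for integer bits widens the representable range at the cost of precision.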


