Differentiable Network Pruning for Microcontrollers

10/15/2021
by Edgar Liberis et al.

Embedded and personal IoT devices are powered by microcontroller units (MCUs), whose extreme resource scarcity is a major obstacle for applications that rely on on-device deep learning inference. MCUs offer orders of magnitude less storage, memory and computational capacity than is typically required to execute neural networks, which imposes strict structural constraints on the network architecture and calls for specialist model compression methodology. In this work, we present a differentiable structured network pruning method for convolutional neural networks, which integrates a model's MCU-specific resource usage and parameter importance feedback to obtain highly compressed yet accurate classification models. Our methodology (a) improves key resource usage of models by up to 80x; (b) prunes iteratively while the model is trained, resulting in little to no overhead and sometimes even reduced training time; (c) produces compressed models whose resource usage matches or improves on that of prior MCU-specific methods by up to 1.7x, while requiring less time to obtain. Compressed models are available for download.
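To illustrate the general idea, the sketch below shows a minimal, generic form of differentiable structured pruning in PyTorch: each convolution gets a learnable gate per output channel, and the training loss is augmented with a differentiable resource penalty so that pruning happens jointly with training. This is an assumption-laden illustration, not the authors' implementation; the names GatedConv and resource_penalty are invented for this example, and the simple channel-count proxy stands in for an MCU-specific resource model (storage, peak memory, compute) such as the one the abstract describes.

    # Minimal sketch (not the authors' implementation) of differentiable
    # structured channel pruning with a resource-usage penalty.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedConv(nn.Module):
        """Convolution with a learnable gate per output channel.

        A sigmoid relaxation keeps the gates differentiable during training;
        channels whose gates end up near zero can be removed afterwards.
        """
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            self.gate_logits = nn.Parameter(torch.zeros(out_ch))

        def gates(self):
            return torch.sigmoid(self.gate_logits)

        def forward(self, x):
            # Scale each output channel by its (soft) gate value.
            return self.conv(x) * self.gates().view(1, -1, 1, 1)

    def resource_penalty(model, weight=1e-4):
        """Differentiable proxy for resource usage.

        Here we penalise the expected number of active channels; an
        MCU-targeted method would instead model storage, peak memory
        and compute of the deployed network.
        """
        cost = 0.0
        for m in model.modules():
            if isinstance(m, GatedConv):
                cost = cost + m.gates().sum()
        return weight * cost

    # Training step: task loss plus resource penalty, so pruning happens
    # jointly with training rather than as a separate post-processing pass.
    model = nn.Sequential(GatedConv(3, 16), nn.ReLU(),
                          GatedConv(16, 32), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(32, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(8, 3, 32, 32)            # dummy image batch
    y = torch.randint(0, 10, (8,))           # dummy labels
    loss = F.cross_entropy(model(x), y) + resource_penalty(model)
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, channels whose gates fall below a threshold would be dropped and the slimmed network fine-tuned; the abstract additionally describes weighting this trade-off by MCU-specific resource usage and parameter importance feedback, which the fixed channel-count penalty above does not capture.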


