Incremental Training and Group Convolution Pruning for Runtime DNN Performance Scaling on Heterogeneous Embedded Platforms

05/08/2021
by Lei Xun, et al.

Inference for Deep Neural Networks (DNNs) is increasingly being executed locally on mobile and embedded platforms due to its advantages in latency, privacy and connectivity. Since modern Systems on Chips typically execute a combination of different and dynamic workloads concurrently, it is challenging to consistently meet inference time/energy budgets at runtime because the local computing resources available to the DNNs vary considerably. To address this challenge, a variety of dynamic DNNs have been proposed. However, these works have significant memory overhead, limited runtime-recoverable compression rates and narrow dynamic ranges of performance scaling. In this paper, we present a dynamic DNN using incremental training and group convolution pruning. The channels of the DNN convolution layers are divided into groups, which are then trained incrementally. At runtime, groups can be pruned for inference time/energy reduction or added back for accuracy recovery without model retraining. In addition, we combine task mapping and Dynamic Voltage and Frequency Scaling (DVFS) with our dynamic DNN to deliver a finer trade-off between accuracy and time/power/energy over a wider dynamic range. We illustrate the approach by modifying AlexNet for the CIFAR10 image dataset and evaluate our work on two heterogeneous hardware platforms: Odroid XU3 (ARM big.LITTLE CPUs) and Nvidia Jetson Nano (CPU and GPU). Compared to the existing works, our approach can provide up to 2.36x (energy) and 2.73x (time) wider dynamic range with a 2.4x smaller memory footprint at the same compression rate. It achieved 10.6x (energy) and 41.6x (time) wider dynamic range by combining with task mapping and DVFS.
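The core runtime mechanism described above can be sketched as follows: a convolution layer's output channels are partitioned into groups, and a runtime knob selects how many groups participate in inference, so pruning or re-adding a group is just a weight slice with no retraining. This is a minimal, hypothetical illustration using a 1x1 convolution (a per-pixel matrix multiply) in NumPy; the class name, shapes, and training details are assumptions, not the paper's actual implementation.

```python
import numpy as np

class GroupPrunableConv1x1:
    """1x1 convolution whose output channels are split into groups.

    Hypothetical sketch of the paper's idea: earlier groups would be
    trained first and frozen while later groups are trained incrementally,
    so dropping trailing groups at runtime degrades gracefully.
    """

    def __init__(self, in_ch, out_ch, num_groups, seed=0):
        assert out_ch % num_groups == 0, "channels must split evenly into groups"
        self.group_size = out_ch // num_groups
        self.num_groups = num_groups
        # One (group_size x in_ch) weight block per group.
        rng = np.random.default_rng(seed)
        self.weight = rng.standard_normal((out_ch, in_ch))
        # Runtime knob: fewer active groups -> less compute, lower accuracy.
        self.active_groups = num_groups

    def forward(self, x):
        """x: (in_ch, H, W) feature map -> (active channels, H, W)."""
        k = self.active_groups * self.group_size
        w = self.weight[:k]                       # keep only active groups
        c, h, width = x.shape
        return (w @ x.reshape(c, -1)).reshape(k, h, width)
```

At runtime, a resource manager would lower `active_groups` when the time/energy budget tightens and raise it again when resources free up; because all groups' weights stay in memory, recovery needs no retraining, only the larger slice.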

