Dual Dynamic Inference: Enabling More Efficient, Adaptive and Controllable Deep Inference

07/10/2019
by Yue Wang, et al.

State-of-the-art convolutional neural networks (CNNs) yield record-breaking predictive performance, but at the cost of energy-hungry inference that prohibits their wide deployment in resource-constrained Internet of Things (IoT) applications. We propose a dual dynamic inference (DDI) framework with the following highlights: 1) we integrate input-dependent and resource-dependent dynamic inference mechanisms under a unified framework to fit the varying resource requirements of IoT applications in practice; DDI can both constantly suppress unnecessary costs for easy samples and halt inference for all samples to meet hard resource constraints; 2) we propose a flexible multi-grained learning-to-skip (MGL2S) approach for input-dependent inference that allows simultaneous layer-wise and channel-wise skipping; 3) we extend DDI to complex CNN backbones such as DenseNet and show that DDI can be applied to optimize specific resource goals, including inference latency and energy cost. Extensive experiments demonstrate the superior inference accuracy-resource trade-offs achieved by DDI, as well as its flexibility in controlling such trade-offs compared to existing peer methods. In particular, DDI achieves up to 4x computational savings at the same or higher accuracy than competitive baselines.
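The abstract's idea of input-dependent skipping at two granularities can be sketched in a few lines of NumPy. The block below is not the authors' MGL2S implementation; it is a simplified, hypothetical illustration in which a cheap linear gate looks at the input and emits one logit for the whole layer (layer-wise skip via the identity shortcut) plus one logit per output channel (channel-wise skip via a mask). All class and parameter names (`GatedBlock`, `threshold`, `gate_w`) are invented for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GatedBlock:
    """Toy residual block with layer-wise and channel-wise skipping.

    A cheap gate produces (channels + 1) probabilities from the input:
    index 0 gates the whole block, the rest gate individual output
    channels. At inference both are hard-thresholded, so easy inputs
    trigger less computation. Illustrative only, not the paper's MGL2S.
    """

    def __init__(self, channels, threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((channels, channels)) * 0.1
        # One logit for the whole layer plus one logit per channel.
        self.gate_w = rng.standard_normal((channels + 1, channels)) * 0.1
        self.threshold = threshold

    def __call__(self, x):
        gates = sigmoid(self.gate_w @ x)
        layer_gate, channel_gates = gates[0], gates[1:]
        if layer_gate < self.threshold:
            # Layer-wise skip: identity shortcut, no multiplies spent.
            return x, 0
        mask = (channel_gates >= self.threshold).astype(x.dtype)
        # Channel-wise skip: masked channels need not be computed at all.
        y = mask * np.maximum(self.w @ x, 0.0)
        multiplies = int(mask.sum()) * x.size  # rough cost actually paid
        return x + y, multiplies
```

A resource-dependent halt, in this toy setting, would simply stop iterating over a stack of such blocks once the accumulated `multiplies` count exceeds a hard budget, returning the current representation early.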


Related research:
- 01/03/2020: Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference
- 12/16/2018: Distill-Net: Application-Specific Distillation of Deep Convolutional Neural Networks for Resource-Constrained IoT Platforms
- 08/24/2023: IPA: Inference Pipeline Adaptation to Achieve High Accuracy and Cost-Efficiency
- 04/21/2021: Measuring what Really Matters: Optimizing Neural Networks for TinyML
- 07/21/2023: Adaptive ResNet Architecture for Distributed Inference in Resource-Constrained IoT Systems
- 01/29/2018: Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
- 01/14/2022: Layerwise Geo-Distributed Computing between Cloud and IoT
