MDInference: Balancing Inference Accuracy and Latency for Mobile Applications

02/16/2020
by   Samuel S. Ogden, et al.

Deep Neural Networks (DNNs) are allowing mobile devices to incorporate a wide range of features into user applications. However, the computational complexity of these models makes it difficult to run them efficiently on resource-constrained mobile devices. Prior work has begun to approach the problem of supporting deep learning in mobile applications by either decreasing execution latency or utilizing powerful cloud servers. These approaches each focus on a single aspect of mobile inference and thus often sacrifice others. In this work we introduce a holistic approach to designing mobile deep inference frameworks. We first identify the key goals of accuracy and latency for mobile deep inference, and the conditions that must be met to achieve them. We demonstrate our holistic approach through the design of a hypothetical framework called MDInference. This framework leverages two complementary techniques: a model selection algorithm that chooses from a set of cloud-based deep learning models to improve accuracy, and an on-device request duplication mechanism to bound latency. Through empirically-driven simulations we show that MDInference improves aggregate accuracy over static approaches by 40% without incurring SLA violations. Additionally, we show that with an SLA of 250ms, MDInference can increase the aggregate accuracy in 99.74% of cases on university networks and 96.84% of cases on residential networks.
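The two techniques named in the abstract can be illustrated with a minimal sketch. All model names, accuracy figures, and latency profiles below are hypothetical placeholders, and the logic is an assumption about the general shape of the approach, not the paper's actual algorithm: pick the most accurate cloud model whose expected end-to-end latency fits the SLA, while duplicating the request to an on-device model so an answer is always ready by the deadline.

```python
# Hedged sketch of SLA-aware model selection plus on-device duplication.
# All profiles below are invented for illustration.

SLA_MS = 250

# Assumed profiles: (model name, accuracy, expected cloud compute latency in ms)
CLOUD_MODELS = [
    ("small_model", 0.70, 40),
    ("medium_model", 0.78, 120),
    ("large_model", 0.82, 300),
]
# An on-device model always meets the SLA locally, at lower accuracy.
ON_DEVICE = ("on_device_model", 0.65, 60)

def select_cloud_model(network_latency_ms):
    """Return the most accurate cloud model whose total latency fits the SLA,
    or None if no cloud model is feasible under current network conditions."""
    feasible = [m for m in CLOUD_MODELS
                if m[2] + network_latency_ms <= SLA_MS]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m[1])  # highest accuracy wins

def infer(network_latency_ms):
    """Duplicate the request: the on-device result bounds latency, while the
    cloud result (when it can arrive within the SLA) improves accuracy."""
    cloud = select_cloud_model(network_latency_ms)
    if cloud is not None:
        return cloud      # cloud reply expected within the SLA
    return ON_DEVICE      # fall back to the locally computed answer

print(infer(50))   # good network: an accurate cloud model is chosen
print(infer(400))  # poor network: the on-device result bounds latency
```

The duplication mechanism is what bounds tail latency: even when the network estimate is wrong or no cloud model fits the SLA, the locally computed answer guarantees a response by the deadline.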

Related research:

- Characterizing the Deep Neural Networks Inference Performance of Mobile Applications (09/10/2019)
- ModiPick: SLA-aware Accuracy Optimization For Mobile Deep Inference (09/04/2019)
- Run-time Deep Model Multiplexing (01/14/2020)
- JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution (12/25/2018)
- A Note on Latency Variability of Deep Neural Networks for Mobile Inference (02/29/2020)
- Characterization and Identification of Cloudified Mobile Network Performance Bottlenecks (07/22/2020)
