JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution

12/25/2018
by   Hongshan Li, et al.

Recent years have witnessed rapid growth in deep-network-based services and applications. A practical and critical problem has thus emerged: how to deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data-center servers, incurring large latency because a significant amount of data has to be transferred from the edge of the network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that one part runs on edge devices and the other part in the conventional cloud, while only a minimal amount of data has to be transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the edge-side component on a device with only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD: 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy that minimizes the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling point under different network conditions. Experiments demonstrate that our solution significantly reduces execution latency: it speeds up overall inference while keeping the model accuracy loss within a guaranteed bound.
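The latency-aware decoupling idea in the abstract can be made concrete with a simple sketch: enumerate candidate split points, and for each one sum the edge-side compute time, the time to ship the (compressed) activations over the link, and the cloud-side compute time, then pick the minimum. This is an illustrative simplification, not the paper's exact formulation; the function name and all inputs (`edge_ms`, `cloud_ms`, `out_kb`, `input_kb`, `bw_kbps`) are assumed profiling measurements, not values from the paper.

```python
def best_split(edge_ms, cloud_ms, out_kb, input_kb, bw_kbps):
    """Pick the layer index after which activations are shipped to the cloud.

    edge_ms[i]  : measured latency (ms) of layer i on the edge device (assumed)
    cloud_ms[i] : measured latency (ms) of layer i on the cloud server (assumed)
    out_kb[i]   : compressed size (KB) of layer i's output activations (assumed)
    input_kb    : size (KB) of the raw model input
    bw_kbps     : current edge-cloud bandwidth in KB/s

    A split at k runs layers 0..k on the edge and layers k+1.. in the cloud;
    k = -1 means the whole model runs in the cloud (ship the raw input).
    Returns (best split index, estimated total latency in ms).
    """
    n = len(edge_ms)
    best_k, best_t = -1, float("inf")
    for k in range(-1, n):
        edge_t = sum(edge_ms[:k + 1])          # edge-side compute
        cloud_t = sum(cloud_ms[k + 1:])        # cloud-side compute
        # data crossing the edge-cloud link at this split point
        xfer_kb = input_kb if k < 0 else out_kb[k]
        total = edge_t + 1000.0 * xfer_kb / bw_kbps + cloud_t
        if total < best_t:
            best_k, best_t = k, total
    return best_k, best_t
```

Because the transfer term depends on the current bandwidth `bw_kbps`, re-running this search as conditions change gives a rough analogue of the paper's dynamic edge-cloud structure adaptation: a slow link pushes the split deeper into the network (smaller activations), while a fast link favors offloading earlier.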

Related research:

- 07/03/2020 · CacheNet: A Model Caching Framework for Deep Learning Inference on the Edge
  The success of deep neural networks (DNN) in machine perception applicat...

- 02/16/2020 · MDInference: Balancing Inference Accuracy and Latency for Mobile Applications
  Deep Neural Networks (DNNs) are allowing mobile devices to incorporate a...

- 08/22/2023 · Practical Insights on Incremental Learning of New Human Physical Activity on the Edge
  Edge Machine Learning (Edge ML), which shifts computational intelligence...

- 04/20/2021 · DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device
  Recently, there has been an explosive growth of mobile and embedded appl...

- 07/12/2023 · DeepMapping: The Case for Learned Data Mapping for Compression and Efficient Query Processing
  Storing tabular data in a way that balances storage and query efficienci...

- 05/06/2021 · Towards Inference Delivery Networks: Distributing Machine Learning with Optimality Guarantees
  We present the novel idea of inference delivery networks (IDN), networks...