NEUKONFIG: Reducing Edge Service Downtime When Repartitioning DNNs

06/29/2021
by Ayesha Abdul Majeed, et al.

Deep Neural Networks (DNNs) may be partitioned across the edge and the cloud to improve inference performance. DNN partitions are determined based on operational conditions such as network speed. When operational conditions change, DNNs need to be repartitioned to maintain overall performance. However, repartitioning using existing approaches, such as Pause and Resume, incurs a service downtime on the edge. This paper presents the NEUKONFIG framework, which identifies the service downtime incurred when repartitioning DNNs and proposes approaches for reducing edge service downtime. The proposed approaches are based on 'Dynamic Switching': given an existing edge-cloud pipeline, a new edge-cloud pipeline with new DNN partitions is initialised when the network speed changes, and incoming inference requests are switched to the new pipeline for processing. Two Dynamic Switching scenarios are considered: one in which a second edge-cloud pipeline is always running, and one in which a second pipeline is only initialised when the network speed changes. Experimental studies on a lab-based testbed demonstrate that Dynamic Switching reduces the downtime by at least an order of magnitude compared to a Pause and Resume baseline, which has a downtime of 6 seconds. A trade-off between edge service downtime and the memory required is noted. The Dynamic Switching approach that requires the same amount of memory as the baseline reduces the edge service downtime to 0.6 seconds, and to less than 1 millisecond in the best case when twice the memory of the baseline is available.
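The 'Dynamic Switching' idea can be sketched in a few lines of code. The Python sketch below is not the NEUKONFIG implementation; the Pipeline class, the partition_for policy and the DynamicSwitcher wrapper are hypothetical stand-ins, assumed here only to show how requests can keep being served by the current pipeline while a second pipeline with new DNN partitions is initialised in the background and then swapped in.

# Hypothetical sketch of Dynamic Switching; names are illustrative, not from the paper.
import threading
import time

class Pipeline:
    """Stand-in for an edge-cloud inference pipeline split at a given layer."""
    def __init__(self, partition_layer):
        self.partition_layer = partition_layer
        time.sleep(0.1)  # stand-in for loading the DNN partitions on edge and cloud

    def infer(self, request):
        # Stand-in for edge-side inference followed by cloud-side inference.
        return f"result({request}) via split@{self.partition_layer}"

def partition_for(network_speed_mbps):
    """Toy policy: a faster network allows an earlier split (more layers on the cloud)."""
    return 4 if network_speed_mbps > 50 else 12

class DynamicSwitcher:
    """Keeps one active pipeline and swaps in a new one when conditions change."""
    def __init__(self, network_speed_mbps):
        self._lock = threading.Lock()
        self._active = Pipeline(partition_for(network_speed_mbps))

    def infer(self, request):
        with self._lock:
            pipeline = self._active
        return pipeline.infer(request)  # requests keep flowing during re-initialisation

    def on_network_change(self, network_speed_mbps):
        new_pipeline = Pipeline(partition_for(network_speed_mbps))  # built off the critical path
        with self._lock:
            self._active = new_pipeline  # near-instant switch, no Pause and Resume

if __name__ == "__main__":
    switcher = DynamicSwitcher(network_speed_mbps=100)
    print(switcher.infer("img_001"))
    # Network degrades: initialise a second pipeline and switch requests to it.
    threading.Thread(target=switcher.on_network_change, args=(10,)).start()
    print(switcher.infer("img_002"))  # still served while the new pipeline is prepared

Keeping the second pipeline permanently running, as in the always-running scenario, avoids even the initialisation delay but roughly doubles the memory footprint, which is the downtime-memory trade-off reported above.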


