ScissionLite: Accelerating Distributed Deep Neural Networks Using Transfer Layer

05/05/2021
by Hyunho Ahn, et al.

Industrial Internet of Things (IIoT) applications can benefit from leveraging edge computing. For example, applications underpinned by deep neural network (DNN) models can be sliced and distributed across an IIoT device and the edge of the network to improve overall inference performance and to enhance the privacy of input data, such as industrial product images. However, low network performance between IIoT devices and the edge is often a bottleneck. In this study, we develop ScissionLite, a holistic framework for accelerating distributed DNN inference using a Transfer Layer (TL). The TL is a traffic-aware layer inserted at the optimal slicing point, between the two DNN model slices, in order to reduce outbound network traffic without a significant drop in accuracy. For the TL, we implement a new lightweight down/upsampling network suited to performance-limited IIoT devices. Within ScissionLite, we develop ScissionTL, the Preprocessor, and the Offloader, which together cover the end-to-end activities of deploying DNN slices with the TL: they determine the optimal slicing point of the DNN, prepare pre-trained DNN slices that include the TL, and execute the DNN slices on an IIoT device and the edge. Employing the TL for sliced DNN models incurs negligible overhead. ScissionLite improves inference latency by up to 16x and 2.8x compared with execution on the local device and with an existing state-of-the-art model slicing approach, respectively.
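The abstract only outlines the Transfer Layer, so the following is a minimal, hypothetical PyTorch sketch of the idea: a lightweight downsampling module that shrinks the intermediate activation on the IIoT device before transmission, paired with an upsampling module that restores the resolution on the edge. The layer choices (a strided depthwise convolution and a transposed convolution), channel count, and scale factor are illustrative assumptions, not the paper's actual TL architecture.

```python
# Minimal sketch (not the authors' implementation) of a down/upsampling
# Transfer Layer pair inserted at a DNN slicing point.
import torch
import torch.nn as nn


class TLDown(nn.Module):
    """Device-side half of the TL: shrink the activation before sending it."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        # A strided depthwise convolution keeps this cheap on the IIoT device.
        self.down = nn.Conv2d(channels, channels, kernel_size=3, stride=scale,
                              padding=1, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(x)


class TLUp(nn.Module):
    """Edge-side half of the TL: restore the original spatial resolution."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels, kernel_size=scale,
                                     stride=scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(x)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 56, 56)    # intermediate activation at the split point
    sent = TLDown(64)(feat)              # 1 x 64 x 28 x 28: roughly 4x less traffic
    restored = TLUp(64)(sent)            # back to 1 x 64 x 56 x 56 on the edge
    print(sent.shape, restored.shape)
```

In a real deployment the downsampled tensor would be serialized and sent over the network between the device-side and edge-side slices, and the slices would be fine-tuned with the TL in place so that accuracy is recovered despite the smaller payload, as the abstract describes.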


Related research

10/04/2019 - Edge AI: On-Demand Accelerating Deep Neural Network Inference via Edge Computing
As a key technology of enabling Artificial Intelligence (AI) application...

09/30/2022 - Designing and Training of Lightweight Neural Networks on Edge Devices using Early Halting in Knowledge Distillation
Automated feature extraction capability and significant performance of D...

02/22/2023 - DISCO: Distributed Inference with Sparse Communications
Deep neural networks (DNNs) have great potential to solve many real-worl...

06/15/2022 - Understanding and Optimizing Deep Learning Cold-Start Latency on Edge Devices
DNNs are ubiquitous on edge devices nowadays. With its increasing import...

01/09/2022 - An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic
Recently, the applications of deep neural network (DNN) have been very p...

08/27/2022 - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems
Although Deep Neural Networks (DNN) have become the backbone technology ...

05/24/2022 - Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Learning
Recently, deploying deep neural network (DNN) models via collaborative i...
