Split-Et-Impera: A Framework for the Design of Distributed Deep Learning Applications

03/22/2023
by Luigi Capogrosso, et al.

Many recent pattern recognition applications rely on complex distributed architectures in which sensing and computational nodes interact through a communication network. Deep neural networks (DNNs) play an important role in this scenario, providing powerful decision mechanisms at the price of high computational effort. Consequently, powerful state-of-the-art DNNs are frequently split across several computational nodes, e.g., the first layers run on an embedded device and the rest on a server. Deciding where to split a DNN is a challenge in itself, making the design of deep learning applications even more complicated. Therefore, we propose Split-Et-Impera, a novel and practical framework that i) determines the best split points of a neural network based on deep network interpretability principles, without resorting to a tedious try-and-test approach, ii) performs a communication-aware simulation for the rapid evaluation of different neural network rearrangements, and iii) suggests the best match between the quality of service requirements of the application and the performance in terms of accuracy and latency.
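To make the split-computing idea concrete, here is a minimal PyTorch sketch, not the Split-Et-Impera implementation: the toy model, the split index, and the link bandwidth are all illustrative assumptions. It partitions a sequential network at a candidate split point and gives a back-of-the-envelope estimate of the cost of shipping the intermediate activation to the server.

```python
import torch
import torch.nn as nn

# Toy CNN expressed as nn.Sequential so it can be partitioned by layer
# index. This is an illustrative stand-in, not a model from the paper.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)

def split_model(net: nn.Sequential, split_idx: int):
    """Partition a sequential network into a head (runs on the embedded
    device) and a tail (runs on the server) at the given layer index."""
    layers = list(net.children())
    return nn.Sequential(*layers[:split_idx]), nn.Sequential(*layers[split_idx:])

head, tail = split_model(model, split_idx=3)  # hypothetical split point

x = torch.randn(1, 3, 32, 32)
activation = head(x)       # computed on the edge device
logits = tail(activation)  # computed on the server after transfer

# Back-of-the-envelope transfer time for the intermediate activation,
# assuming 32-bit floats and a hypothetical 10 Mbps uplink.
bandwidth_mbps = 10.0
transfer_ms = activation.numel() * 32 / (bandwidth_mbps * 1e6) * 1e3
print(activation.shape, logits.shape, f"{transfer_ms:.2f} ms")
```

In the framework described above, the split point is instead selected via interpretability analysis, and candidate rearrangements are evaluated with a full communication-aware simulation rather than this constant-bandwidth approximation.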

Related research

11/16/2021
JMSNAS: Joint Model Split and Neural Architecture Search for Learning over Mobile Edge Networks
The main challenge to deploy a deep neural network (DNN) over a mobile edg...

09/23/2022
I-SPLIT: Deep Network Interpretability for Split Computing
This work makes a substantial step in the field of split computing, i.e....

05/23/2022
Dynamic Split Computing for Efficient Deep Edge Intelligence
Deploying deep neural networks (DNNs) on IoT and mobile devices is a cha...

01/16/2020
The gap between theory and practice in function approximation with deep neural networks
Deep learning (DL) is transforming whole industries as complicated decis...

04/10/2022
SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
We design deep neural networks (DNNs) and corresponding networks' splitt...

03/16/2019
swCaffe: a Parallel Framework for Accelerating Deep Learning Applications on Sunway TaihuLight
This paper reports our efforts on swCaffe, a highly efficient parallel f...
