I-SPLIT: Deep Network Interpretability for Split Computing

09/23/2022
by Federico Cunico, et al.

This work makes a substantial step in the field of split computing, i.e., how to split a deep neural network so as to host its early part on an embedded device and the rest on a server. So far, potential split locations have been identified by exploiting purely architectural aspects, i.e., the layer sizes. Under this paradigm, the efficacy of a split in terms of accuracy can be evaluated only after performing the split and retraining the entire pipeline, making an exhaustive evaluation of all plausible splitting points prohibitive in terms of time. Here we show that not only does the architecture of the layers matter, but so does the importance of the neurons contained therein. A neuron is important if its gradient with respect to the correct class decision is high. It follows that a split should be applied right after a layer with a high density of important neurons, in order to preserve the information flowing up to that point. Building on this idea, we propose Interpretable Split (I-SPLIT): a procedure that identifies the most suitable splitting points by providing a reliable prediction of how well each split will perform in terms of classification accuracy, before the split is actually implemented. As a further major contribution of I-SPLIT, we show that the best choice of splitting point for a multiclass categorization problem also depends on which specific classes the network has to deal with. Exhaustive experiments have been carried out on two networks, VGG16 and ResNet-50, and three datasets, Tiny-Imagenet-200, notMNIST, and Chest X-Ray Pneumonia. The source code is available at https://github.com/vips4/I-Split.
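The core criterion described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it uses a tiny hypothetical MLP as a stand-in for VGG16/ResNet-50, computes the gradient of the correct-class logit with respect to each layer's activations by hand, and ranks candidate split points by the mean absolute gradient (a simple "density of important neurons" proxy). All layer names and the density measure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer MLP (hypothetical stand-in for a deep classifier):
# x -> h1 -> h2 -> logits, with two candidate split points (after h1, after h2).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 4))

def relu(z):
    return np.maximum(z, 0.0)

x = rng.normal(size=(8,))
h1 = relu(x @ W1)
h2 = relu(h1 @ W2)
logits = h2 @ W3
c = int(np.argmax(logits))  # stand-in for the ground-truth ("correct") class

# Gradient of the correct-class logit w.r.t. each layer's activations,
# computed by hand via the chain rule:
g_h2 = W3[:, c]                    # d logit_c / d h2
g_h1 = W2 @ (g_h2 * (h2 > 0))      # chain through the ReLU at h2, then W2

# Importance density per candidate split point: mean |gradient| over neurons.
# I-SPLIT's idea is to split right after the layer with the highest density.
density = {
    "after_h1": float(np.abs(g_h1).mean()),
    "after_h2": float(np.abs(g_h2).mean()),
}
best_split = max(density, key=density.get)
print(density, "-> split", best_split)
```

In a real network the same quantity would be obtained with automatic differentiation (e.g., backpropagating the correct-class score to each layer's feature maps), and the density would be aggregated over a validation set rather than a single input.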

