Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks

07/31/2020 ∙ by Yoshitomo Matsubara, et al. ∙ University of California, Irvine

The edge computing paradigm places compute-capable devices - edge servers - at the network edge to assist mobile devices in executing data analysis tasks. Intuitively, offloading compute-intensive tasks to edge servers can reduce their execution time. However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading. Herein, we focus on edge computing supporting remote object detection by means of Deep Neural Networks (DNNs), and develop a framework to reduce the amount of data transmitted over the wireless link. The core idea we propose builds on recent approaches splitting DNNs into sections - namely head and tail models - executed by the mobile device and edge server, respectively. The wireless link, then, is used to transport the output of the last layer of the head model to the edge server, instead of the DNN input. Most prior work focuses on classification tasks and leaves the DNN structure unaltered. Herein, we focus on DNNs for three different object detection tasks, which present a much more convoluted structure, and we modify the architecture of the network to: (i) achieve in-network compression by introducing a bottleneck layer in the early layers of the head model, and (ii) prefilter pictures that do not contain objects of interest using a convolutional neural network. Results show that the proposed technique represents an effective intermediate option between local and edge computing in a parameter region where these extreme-point solutions fail to provide satisfactory performance. We release the code and trained models at https://github.com/yoshitomo-matsubara/hnd-ghnd-object-detectors

Code Repositories

hnd-ghnd-object-detectors

[ICPR 2020] "Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks" and [MobiCom EMDL 2020] "Split Computing for Complex Object Detectors: Challenges and Preliminary Results"

I Introduction

The real-time execution of modern data analysis algorithms requires powerful computing platforms. For instance, accurate Deep Neural Networks (DNNs) for vision tasks, e.g., image classification and object detection, have an extremely large number of layers and parameters. Due to weight, cost, or size constraints, mobile devices may have limited computing capacity and energy availability, which makes the execution of such algorithms challenging. The research community is exploring two main approaches to mitigate this problem: simplifying models and computation offloading. The former approach may degrade the performance of the resulting models compared to the full-sized ones. The latter approach is often referred to as mobile edge computing [2], which has been proposed primarily as a tool to support time-sensitive mission-critical applications such as autonomous vehicles and augmented reality [24].

However, poor channel conditions increase the time needed to deliver the input data to the edge server, making edge computing schemes ineffective. Compression, by reducing the channel capacity needed to sustain the data transfer, can mitigate this problem and reduce the total delay. Unfortunately, lossless compression is computationally expensive and/or achieves limited compression gains [26], while traditional lossy compression mechanisms, such as JPEG, are designed primarily for human perception. As a result, aggressive compression often degrades the final accuracy of the analysis [12, 37]. Modern compression strategies, such as autoencoders [35], are computationally expensive, especially when applied to information-rich signals such as images and videos.

Recent contributions [19, 21, 34] attempt to distribute the execution of DNN models across the mobile device-edge server system to find delay-optimal “splitting” points. More specifically, the DNN model is split into a head and a tail model, which are executed at the mobile device and edge server, respectively. Instead of the overall model input, the mobile device then transmits over the channel the output of the last head layer (a tensor). Unfortunately, most models, and especially those for vision tasks, tend to amplify the input in the first layers, meaning that positioning the splitting point in the first part of the DNN would result in a larger amount of data transmitted over the wireless link compared to “pure” offloading [26]. Using later, smaller layers as splitting points would reduce the amount of transferred data, but would also allocate a large portion of the overall computing load to the weaker device in the system: the mobile device. In fact, results in [19] show that in most of the analyzed models and channel/computing settings, the optimal splitting point is either at the very beginning or the very end of the model; that is, pure local computing or pure edge computing outperforms naive splitting.

In this paper, we explore strategies to modify the structure of Convolutional Neural Network (CNN) models for object detection to maximize the efficiency of model splitting. The core idea is to use network distillation to introduce a bottleneck layer, that is, a layer with a small number of nodes, in the early stages of the object detection model. The model is then split at the bottleneck, thus achieving in-network compression, as a smaller tensor needs to be transmitted to the edge server.

We show that the distillation technique we propose allows the injection of extremely effective bottlenecks while modifying only the parameters of the head portion of the model, which is trained to mimic the output of the original head. We build on the results presented in [26], where we applied a similar concept to image classification tasks. The more convoluted structure of object detectors, where the output of intermediate layers is propagated to a final detection module, introduces further challenges, discussed in [27], that we address by proposing a generalized loss function guiding the distillation process and by applying a bottleneck quantization technique.

Additionally, we observe that while in image classification tasks all images are classified, in object detection tasks only a fraction of the images may contain objects of interest. To take advantage of this feature of the problem, we embed in the head model a small CNN whose binary output indicates whether or not the image contains objects to be analyzed by the detector. This network acts as a filter, blocking empty images while promptly returning an empty detection output, thus also reducing channel usage and server load. We demonstrate that this classifier can be efficiently attached to the first layers of the head model, minimizing additional complexity by reusing a portion of the detector.

Results show that our distillation-based technique achieves detection performance, measured in terms of mean Average Precision (mAP), comparable to that of state-of-the-art models. In the evaluation, we use established datasets associated with complex detection tasks, e.g., person keypoint detection [3]. We demonstrate that we can significantly reduce the total time from image capture to the availability of the detector’s output compared with local and edge computing, in a region of parameters where these extreme-point options struggle to provide satisfactory performance. To ensure reproducibility of our experimental results, we publish all the code and model weights at https://github.com/yoshitomo-matsubara/hnd-ghnd-object-detectors. The framework will also help researchers quickly test their own approaches to this challenging task.

II Edge-Assisted Object Detection

We consider a scenario where a mobile device acquires images to be analyzed in real time to support delay-sensitive vision-based applications. An edge server, interconnected to the mobile device through a capacity-limited wireless channel (e.g., a data rate of 5 Mbps), assists the execution of the analysis algorithm. As stated earlier, in this paper we focus on object detection tasks. An example of a relevant application and framework is presented in [24], where the authors focus on an edge-assisted implementation of augmented reality.

Our main objective is to reduce the time from image capture to the availability of the object detection output (in this case, bounding boxes and associated labels) at the mobile device. We refer to this time as the capture-to-output delay and denote it with T. Intuitively, in settings such as that proposed in [24], a smaller capture-to-output delay would improve tracking performance. In the following, we define the composition of T in three settings: (i) local computing at the mobile device; (ii) offloading of the complete model execution to the edge server (referred to as pure offloading); and (iii) split computing, where the execution of the model is divided between the mobile device and the edge server.

(i) Local Computing: The total time is the time needed to execute the entire object detection model at the mobile device. Thus, T = τ_local, where τ_local is determined by the model complexity and the computing power of the mobile device.

(ii) Pure Offloading: The total time is the sum of two terms, T = τ_up + τ_edge, where τ_up is the time needed to transfer the image to the edge server and τ_edge is the execution time of the entire model at the edge server.

(iii) Split Computing: The total time is the sum of three terms, T = τ_head + τ_up + τ_tail, where τ_head is the time needed to execute the head model at the mobile device, τ_up is the time needed to transfer the output of the head model to the edge server, and τ_tail is the execution time of the tail model at the edge server.
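The three compositions above can be sketched as follows; all function names and numbers are illustrative placeholders, not measurements:

```python
# Capture-to-output delay T for the three options. Real values for the
# execution times and sizes would be profiled on the actual hardware/link.
def delay_local(t_full_mobile):
    return t_full_mobile                                  # (i)  T = t_local

def delay_offload(image_bits, rate_bps, t_full_edge):
    return image_bits / rate_bps + t_full_edge            # (ii) T = t_up + t_edge

def delay_split(t_head, tensor_bits, rate_bps, t_tail):
    return t_head + tensor_bits / rate_bps + t_tail       # (iii) T = t_head + t_up + t_tail

# Hypothetical example at 5 Mbps: a 1 Mbit JPEG vs. a 0.25 Mbit bottleneck tensor.
print(delay_local(t_full_mobile=0.96))                    # 0.96 s
print(delay_offload(1e6, 5e6, t_full_edge=0.07))          # 0.27 s
print(delay_split(0.05, 0.25e6, 5e6, t_tail=0.07))        # 0.17 s
```

The example illustrates the trade-off: split computing wins only when the transmitted tensor is small enough that the added head execution time is offset by the shorter transfer.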

Rather intuitively, the absolute and relative computing capacities of the mobile device and edge server, together with the channel capacity, determine the values of the delay components listed above, and thus which option is the most advantageous in terms of inference time. Clearly, a large channel capacity decreases the communication-related delay components, eventually making pure offloading the preferable option and increasingly reducing the need for compression. A small gap in computing capacity between the mobile device and the edge server makes offloading and network splitting effective only in scenarios with high channel capacity.

We emphasize that none of the above options can be declared best for all parameters. Instead, in different parameter regions, different options will lead to the smallest overall capture-to-output delay. The technique proposed here realizes an intermediate option between local and edge computing, where a small portion of the computing load is allocated to the local device to reduce the amount of transferred data. Clearly, this option is functional in the lower-intermediate range of channel capacity, and when the gap between the mobile device and the edge server does not make the allocation of even small computing loads to the mobile device undesirable.

We remark that the latter characteristic is compatible with the scenarios created by recent advances in miniaturization, which are bringing considerable computing power within the reach of mobile devices. Examples include the NVIDIA Jetson Nano and TX2 embedded computers. However, as illustrated in our results, even relatively low-end full-sized servers can significantly reduce execution time, thus making offloading meaningful.

III CNN-Based Detection Models

In order to better illustrate our proposed approach and the associated challenges, we first discuss the structure of state-of-the-art object detectors based on deep neural networks. CNN-based models have become the mainstream option for object detection tasks [10, 11, 30]. These complex object detection models fall into two main classes: single-stage and two-stage detectors. Single-stage models [30, 25] directly predict bounding boxes and the corresponding object classes given an image. Conversely, two-stage models [31, 13] generate region proposals as the output of the first stage, which are then classified in the second stage. In general, single-stage detection models have lower overall complexity, and thus execution time, than two-stage models. However, two-stage models outperform single-stage ones in terms of detection performance.

III-A Benchmark Models

In this study, we focus our attention on state-of-the-art two-stage models in resource-constrained edge computing systems. Specifically, we consider Faster R-CNN, Mask R-CNN, and Keypoint R-CNN (from torchvision) with a ResNet-50 FPN backbone [31, 13, 29], pretrained on the COCO 2017 datasets. Due to limited space, more details on the models and tasks are provided in the supplementary material.

III-B Motivations

Clearly, mobile devices are often unable to execute these strong, but rather complex and convoluted, detectors. In order to obtain models within the reach of weak computing platforms, the research community developed an approach known as knowledge distillation [15]. The core idea of knowledge distillation is to build a lower-complexity “student” model trained to mimic the output of a pretrained, higher-complexity “teacher” model. The key assumption is that large (teacher) models are often overparameterized and can be reduced without a significant performance loss. Interestingly, it is widely known that student models trained to mimic the behavior of their teacher models significantly outperform those trained directly on the original training dataset [1].

TABLE I: Local computing time [sec] of detection models with different ResNet backbones on NVIDIA Jetson TX2.

Model \ Backbone   RN-18   RN-34   RN-50   RN-101
Faster R-CNN       0.617   0.743   0.958   1.26
Mask R-CNN         0.645   0.784   0.956   1.27
Keypoint R-CNN     1.93    2.09    2.25    2.59

TABLE II: Pure offloading time [sec] (data rate: 5 Mbps) of detection models with different ResNet backbones on a high-end machine with an NVIDIA GeForce RTX 2080 Ti.

Model \ Backbone   RN-18   RN-34   RN-50   RN-101
Faster R-CNN       0.456   0.462   0.472   0.489
Mask R-CNN         0.458   0.4832  0.4904  0.4897
Keypoint R-CNN     0.469   0.473   0.481   0.498

Distillation has been applied to detection models [22, 4, 36]. For instance, Chen et al. [5] propose a hierarchical knowledge distillation technique to train a lightweight pedestrian detector, and distill a student ResNet-18 based R-CNN model using a ResNet-50 based R-CNN model as teacher. However, as shown in Table I, the reduction in inference time granted by smaller models is limited when using relatively capable platforms such as the NVIDIA Jetson TX2, which embeds a GPU. Importantly, we remark that smaller models achieve degraded detection performance compared to bigger models due to the complexity of the detection task.

Table II shows the total time achieved by pure offloading when using the same models. In these results, the execution time is computed using a high-end desktop computer with an Intel Core i7-6700K CPU (4.00 GHz), 32 GB of RAM, and an NVIDIA GeForce RTX 2080 Ti as the edge server, while the channel provides a relatively low data rate of 5 Mbps to the image stream. It can be seen that in this setting, reducing the model size by distilling the whole detector [22, 4, 36] does not lead to substantial total delay savings, while offloading is generally advantageous compared to local computing.

Splitting the model while leaving its structure unaltered, as proposed by Kang et al. [19], is technically challenging and intuitively not advantageous, due to the data amplification effect of early layers and the need to forward the output of intermediate backbone layers (see Fig. 1) when positioning the splitting point later in the model.

IV In-Network Neural Compression

IV-A Background

As discussed earlier, the weak point of pure edge computing is the communication delay: when the capacity of the channel interconnecting the mobile device and edge server is degraded by a poor propagation environment, mobility, interference, and/or traffic load, transferring the model input to the edge server may result in a large overall capture-to-output delay T. Thus, in challenged channel conditions, making edge computing effective necessitates strategies to reduce the amount of data transported over the channel.

The approach we take herein is to modify the structure of the model to obtain in-network compression and improve the efficiency of network splitting. Recall that in network splitting, the output of the last layer of the head model is transferred to the edge server instead of the model input. Compression, then, corresponds to splitting the model at layers with a small number of nodes, which generate small outputs to be transmitted over the channel. In our approach, the splitting point coincides with the layer where compression is achieved.

Unfortunately, layers with a small number of nodes appear only in advanced portions of object detectors, while early layers amplify the input to extract features. However, splitting at late layers would position most of the computational complexity at the weaker mobile device. This issue was recently discussed in [26, 27] for image classification and object detection models, reinforcing the results obtained in [19] on traditional network splitting.

In [26], we proposed to inject bottleneck layers, that is, layers with a small number of nodes, in the early stages of image classification models. To reduce accuracy loss as well as the computational load at the mobile device, the whole head section of the model is reduced using distillation. The resulting small model contains a bottleneck layer followed by a few layers that translate the bottleneck layer’s output into the output of the original head model. Note that the layers following the bottleneck layer are then attached to the original tail model and executed at the edge server.

The distillation process attempts to make the output of the new head model as close as possible to that of the original head. At an intuitive level, when introducing bottleneck layers, this approach is roughly equivalent to training a small asymmetric encoder-decoder pipeline whose boundary layer produces a compressed version of the input image, used by the decoder to reconstruct the output of the head section rather than the image itself. Interestingly, it was shown that the distillation approach can achieve high compression rates while preserving classification performance even in complex classification tasks.

This paper builds on this approach [26, 27] to obtain in-network compression with further improved detection performance in object detection tasks. Specifically, we generalize the head network distillation technique and apply it to the state-of-the-art detection models described in the previous section (Faster R-CNN, Mask R-CNN, and Keypoint R-CNN).

The key challenge originates from the structural differences between the models for these two classes of vision tasks. As discussed in the previous section, although image classification models are used as backbones of detection models, there is a trend of feeding the outputs of multiple intermediate layers as features to detection-specific modules such as the Feature Pyramid Network (FPN) [23]. This makes the distillation of head models difficult, as they would need to embed multiple bottleneck layers at the points in the head network whose outputs are forwarded to the detectors. Clearly, the amount of data transmitted over the network would inevitably be larger, as multiple outputs would need to be transferred [27].

To overcome this issue, we redefine the head distillation technique to (i) introduce the bottleneck at the very early layers of the network, and (ii) refine the loss function used to distill the mimicking head model to account for the loss induced on the forwarded intermediate layers’ outputs. We remark that in network distillation applied to head-tail split models (see Fig. 1), the head portion of the model (red) is distilled with an introduced bottleneck layer, while the teacher’s original architecture and parameters for the tail section (green) are reused without modification. We note that this allows fast training, as only a small portion of the whole model is retrained.

IV-B Bottleneck Positioning and Head Structure

Fig. 1: Generalized head network distillation for R-CNN object detectors. Green modules correspond to frozen blocks of layers from the teacher model, and red modules correspond to blocks we design and train for the student model. L0-L4 indicate high-level layers in the backbone. In this work, only backbone modules (orange) are used for training.

As shown in Fig. 1, the outputs of early blocks of the backbone are forwarded to the detection modules. In order to avoid introducing bottlenecks in multiple sections and transmitting their outputs, we introduce the bottleneck within the L1 module of the model, whose output is the first to be forwarded to the FPN. Compared to the framework developed in [26], this has two main implications. First, the aggregate complexity of the head model is fairly small, and we do not need to significantly reduce its size to minimize the computing load at the mobile device. Second, in these first layers the extracted features are not yet well defined, and devising an effective structure for the bottleneck is challenging.

Figure 1 summarizes the architecture. The difference between the overall teacher and student models lies in the high-level layers 0 and 1 (L0 and L1), while the rest of the architecture and its parameters are left unaltered. The architecture of L0 in the student model is identical to that in the teacher model, but its parameters are retrained during the distillation process. L1 in the student model is designed to have the same output shape as L1 in the teacher model, while we introduce a bottleneck layer within the module.

The architectures of layer 1 in the teacher and student models are summarized in our supplementary material. The same architecture is used in all the considered object detection models: Faster, Mask, and Keypoint R-CNN. The introduced bottleneck point is designed to output a tensor whose size is approximately 6.6% of the input one (see Table IV). Specifically, we introduce the bottleneck point by using an aggressively small number of output channels in a convolution layer and amplifying the output with the following layers. As we design the student’s layers, tuning the number of channels in the convolution layer is key to our bottleneck injection.

The main reason we treat the number of channels as the key hyperparameter is that, unlike CNNs for image classification, the input and output tensor shapes of the detection models, including their intermediate layers, are not fixed [31, 13, 29]. Thus, it would be difficult to make the output shapes of the student model match those of the teacher model, a requirement for computing loss values in the distillation process described later. For such models, other hyperparameters such as kernel size, padding, and stride cannot be changed aggressively while keeping comparable detection performance, since they change the output patch size in each channel, and some input elements may be ignored depending on the hyperparameter values.
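As an illustration of this design, the following sketch (with hypothetical channel counts, not the paper's actual architecture) squeezes the L1 input into a few channels and amplifies it back to the teacher's output shape:

```python
import torch
import torch.nn as nn

# Hypothetical student L1 with an injected bottleneck. The squeeze runs on the
# mobile device; the amplifying layers run on the edge server and must restore
# the teacher L1 output shape (256 channels at 1/4 resolution for ResNet-50
# FPN), since that tensor is forwarded to the FPN.
squeeze = nn.Sequential(
    nn.Conv2d(64, 6, kernel_size=2, stride=2),   # aggressively few channels
    nn.BatchNorm2d(6), nn.ReLU(),
)
amplify = nn.Sequential(
    nn.ConvTranspose2d(6, 64, kernel_size=2, stride=2),
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 256, kernel_size=1),           # match the teacher's shape
)

x = torch.rand(1, 64, 56, 56)    # L0 output for a 224x224 input
z = squeeze(x)                   # bottleneck tensor crossing the channel
y = amplify(z)
print(z.shape, y.shape, z.numel() / (3 * 224 * 224))
```

With these illustrative channel counts the transmitted tensor is about 3% of the input image tensor; the channel count of the squeeze convolution is the knob that trades compression against detection performance.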

TABLE III: Performance of pretrained and head-distilled models on COCO 2017 validation datasets* for different tasks.

Model (with FPN)                   | Faster R-CNN | Mask R-CNN          | Keypoint R-CNN
Approach \ Metrics                 | BBox mAP     | BBox mAP | Mask mAP | BBox mAP | Keypoints mAP
Pretrained (Teacher)               | 0.370        | 0.379    | 0.346    | 0.546    | 0.650
Head network distillation [26, 27] | 0.339        | 0.350    | 0.319    | 0.488    | 0.579
Ours                               | 0.358        | 0.370    | 0.337    | 0.532    | 0.634
Ours with BQ (16 bits)             | 0.358        | 0.370    | 0.336    | 0.532    | 0.634
Ours with BQ (8 bits)              | 0.355        | 0.369    | 0.336    | 0.530    | 0.628

* Test datasets for object and keypoint detection tasks are not publicly available. https://github.com/pytorch/vision/releases/tag/v0.3.0

IV-C Loss Function

In the head network distillation initially applied to image classification [26], the loss function used to train the student model is defined as

L(X) = Σ_{x∈X} ||t(x) − s(x)||²,   (1)

where t and s are the teacher and student head functions of input data x in a batch X. The loss function, thus, is simply the sum of squared errors (SSE) between the outputs of the last student and teacher layers, and the student model is trained to minimize this loss. This simple approach produced good results in image classification models.

Due to the convoluted structure of object detection models, the design of the loss function needs to be revisited in order to build effective head models. As described earlier, the outputs of multiple intermediate layers in the backbone are used as features to detect objects. As a consequence, the “mimicking loss” at the end of L1 in the student model will inevitably be propagated as the tensors are fed forward, and the accumulated loss may degrade the overall detection performance for a given compressed data size [27].

For this reason, we reformulate the loss function as follows:

L(X) = Σ_j λ_j · Σ_{x∈X} ||t_j(x) − s_j(x)||²,   (2)

where j is the loss index, λ_j is a scale factor (hyperparameter) associated with loss j, and t_j and s_j indicate the corresponding subsets of the teacher and student models (functions of input data x), respectively. The total loss, then, is the sum of the weighted losses. Following Eq. (2), the previously proposed head network distillation technique [26] can be seen as a special case of our proposed technique.
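A minimal sketch of Eq. (2); the tensor shapes and scale factors below are illustrative stand-ins for the forwarded intermediate outputs:

```python
import torch

# Generalized head network distillation loss (Eq. (2)): a weighted sum of SSE
# terms, one per forwarded intermediate output. With a single term and a unit
# scale factor, it reduces to the original Eq. (1).
def generalized_hnd_loss(teacher_outs, student_outs, scale_factors):
    loss = torch.zeros(())
    for t_j, s_j, lam_j in zip(teacher_outs, student_outs, scale_factors):
        loss = loss + lam_j * torch.sum((t_j - s_j) ** 2)
    return loss

# Toy stand-ins for two intermediate outputs of the teacher and student.
t_outs = [torch.rand(1, 256, 56, 56), torch.rand(1, 512, 28, 28)]
s_outs = [o + 0.1 for o in t_outs]
print(generalized_hnd_loss(t_outs, s_outs, [1.0, 1.0]))
```

In training, only the student head's parameters would receive gradients from this loss; the teacher outputs are computed with a frozen model.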

IV-D Detection Performance Evaluation

As we modify the structure and parameters of state-of-the-art models to achieve an effective splitting, we need to evaluate the resulting object detection performance. In the following experiments, we use the same distillation configurations for both the original and our generalized head network distillation techniques. Distillation is performed using the COCO 2017 training datasets with the following hyperparameters. Student models are trained for 20 epochs with a batch size of 4. The models’ parameters are optimized using Adam [20] with an initial learning rate that is decreased by a factor of 0.1 at the 5th and 15th epochs for Faster and Mask R-CNN. As the number of training samples in the person keypoint dataset is smaller than that in the object detection dataset, we train Keypoint R-CNN student models for 35 epochs and decrease the learning rate by a factor of 0.1 at the 9th and 27th epochs.

When using the original head network distillation proposed in [26], the sum of squared errors (Eq. (1)) between the outputs of the high-level layer 1 (L1) of the teacher and student models is minimized. In the head network distillation for object detection we propose, we minimize the sum of squared error losses in Eq. (2) using the outputs of the high-level layers 1-4 (L1-L4) with scale factors λ_j. Note that in both cases, we update only the parameters of layers 0 and 1, while those of layers 2, 3, and 4 are fixed. Quite interestingly, in our preliminary experiments the detection performance degraded when we attempted to update the parameters of layers 2 to 4 as well.

As a performance metric, we use mAP (mean Average Precision) averaged over IoU (Intersection-over-Union) thresholds, for object detection bounding boxes (BBox), instance segmentation (Mask), and keypoint detection tasks.

Table III reports the detection performance of the teacher models and of the models distilled with the original and our generalized head network distillation techniques. For all three considered object detectors, the use of our proposed loss function significantly improves mAP compared to models distilled using the original head network distillation. Clearly, the introduction of the bottleneck, and the corresponding compression of the output of that section of the network, induces some performance degradation with respect to the original teacher model.

IV-E Bottleneck Quantization (BQ)

Using our generalized head network distillation technique, we introduce a small bottleneck within the student detection model. Remarkably, the bottleneck output tensor is approximately 6.6% of the input tensor size. However, when compared to the JPEG-compressed input, rather than its tensor representation [27], the compression gain is still not satisfactory (see the Bottleneck column in Table IV). To achieve more aggressive compression gains, we quantize the bottleneck output. Quantization techniques for deep learning [21, 17] have recently been proposed to compress models, reducing the amount of memory used to store them. Here, we instead use quantization to compress the output of one layer only; specifically, we represent 32-bit floating-point tensors with 16 or 8 bits.

To use 16-bit representations, we can simply cast the bottleneck tensors (32-bit by default) to 16 bits, but the resulting data size ratio is still above 1 (Table IV), meaning there would be no inference-time gain: delivering the data to the edge server would take longer than pure offloading. Thus, we apply the quantization technique in [17] to represent tensors with 8-bit integers and one 32-bit floating-point value. Note that quantization is applied after distillation to simplify training.

TABLE IV: Ratios of bottleneck data size and tensor shape in the head portion to the input data.

              | Input  | Bottleneck | Quantized Bottleneck
              | (JPEG) | 32 bits    | 16 bits | 8 bits
Data size     | 1.00   | 2.56       | 1.28    | 0.643
Tensor shape  | 1.00   | 0.0657     | 0.0657  | 0.0657

Inevitably, quantization results in some information loss, which may affect the detection performance of the distilled models. Quite interestingly, our results in Table III indicate no significant detection performance loss for most configurations, while we achieve a considerable reduction in data size, as shown in Table IV. Therefore, in Section VI we report results using 8-bit quantization.
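A sketch of the 8-bit scheme, under the assumption that a single scale value suffices because the bottleneck output follows a ReLU and is therefore non-negative (the general affine scheme of [17] also carries a zero-point):

```python
import numpy as np

# 8-bit bottleneck quantization sketch: uint8 integers plus one float32 scale.
def quantize(x: np.ndarray):
    scale = np.float32(x.max() / 255.0)          # one 32-bit float to transmit
    return np.round(x / scale).astype(np.uint8), scale

def dequantize(q: np.ndarray, scale: np.float32) -> np.ndarray:
    return q.astype(np.float32) * scale          # reconstruction at the edge

x = np.random.rand(6, 28, 28).astype(np.float32)  # stand-in bottleneck tensor
q, scale = quantize(x)
err = np.abs(dequantize(q, scale) - x).max()
print(q.nbytes / x.nbytes)   # 0.25: a 4x reduction over float32
print(err <= scale / 2 + 1e-6)
```

The reconstruction error is bounded by half a quantization step, which is consistent with the small mAP differences between the 32-bit and 8-bit rows of Table III.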

V Neural Image Prefiltering

In this section, we exploit a semantic difference between image classification and object detection tasks to reduce resource usage. While in classification every image produces a label, in object detection only a subset of the images produced by the mobile device contains an object within the overall set of detected classes. Intuitively, the execution of the object detection module is useful only if an object of interest is present in the vision range. Figures 3(c) and 3(d) are examples of pictures without objects of interest for Keypoint R-CNN, as this model is trained to detect people and simultaneously locate their keypoints. We attempt, then, to filter out the empty images before they are transmitted over the channel and processed by the edge server.

To this aim, we embed in the early layers of the overall object detection model a classifier whose output indicates whether or not the picture is empty. We refer to this classifier as the neural filter. Importantly, this additional capability impacts several metrics: (i) reduced total inference time, as the early decision that an image is empty is equivalent to the detector’s output; (ii) reduced channel usage, as empty images are discarded by the head model; and (iii) reduced server load, as the tail model is not executed when the image is filtered out.

Clearly, the challenge is developing a low-complexity yet accurate classifier. In fact, a complex classifier would increase the execution time of the head portion at the mobile device, possibly offsetting the benefit of producing an early empty detection. On the other hand, an inaccurate classifier would either decrease the overall detection performance by filtering out non-empty pictures, or fail to provide its full potential benefit by propagating empty pictures.

Importantly, powerful CNN-based object detection models are much more complex than image classification models due to the inherent difficulty of the former task: they must predict not only object labels, but also coordinates (e.g., through bounding box regression). Therefore, it is possible to develop accurate, compact classification models with a limited impact on the overall complexity of a full-sized object detector.

Fig. 2: Neural filter (blue) to filter images with no object of interest. Only neural filter’s parameters can be updated.

In the structure we developed in the previous section, we face the additional challenge that the neural filter must be attached to the head model, which only contains the early layers of the overall detection model (see Fig. 2). Note that the parameters of the distilled model are fixed, and only the neural filter (blue module in Fig. 2) is trained. Specifically, as input to the neural filter we use the output of layer L0, the first section of the backbone network. This allows us to reuse layers that are executed to support detection whenever the picture does contain objects. Importantly, the L0 layer performs an amplification of the input data [14]. Therefore, using L0 for both the head model and the neural filter is efficient and effective.
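The mobile-side control flow described above can be sketched as follows. This is an illustrative outline only: the function names (`l0`, `bottleneck`, `neural_filter`, `transmit`) and the threshold value are placeholders, not the paper's implementation.

```python
def mobile_side(image, l0, bottleneck, neural_filter, transmit, threshold=0.1):
    """Sketch of the mobile-side pipeline (function names are hypothetical):
    L0 is computed once and its output feeds both the neural filter and
    the bottleneck of the head model."""
    z0 = l0(image)                       # first backbone section, reused
    if neural_filter(z0) < threshold:    # predicted empty: early exit
        return []                        # no transmission, no tail execution
    transmit(bottleneck(z0))             # compressed features go to the edge
    return None                          # detections will arrive from the server
```

The early exit realizes all three benefits at once: no channel usage, no tail-model execution, and a shorter capture-to-output delay for empty frames.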

(a) 2 persons
(b) 1 person
(c) No person
(d) No person
Fig. 3: Sample images in COCO 2017 training dataset.

In this study, we introduce a neural filter to a distilled Keypoint R-CNN model as illustrated in Fig. 2. Approximately 46% of the images in the COCO 2017 person keypoint dataset have no object of interest. Figures 2(c) and 2(d) are sample images we would like to filter out. The design of the neural filter is reported in our supplementary material, and we train the model for epochs. During training, each image is labeled as “positive” if it contains at least one valid object, and as “negative” otherwise. The batch size is set at , and we use cross entropy loss to optimize the model’s parameters by SGD with an initial learning rate , momentum , and weight decay . We decrease the learning rate by a factor of 0.1 at the 15th and 25th epochs. The resulting neural filter achieved ROC-AUC on the validation dataset.
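The step decay schedule above can be expressed compactly. The initial learning rate is a placeholder here, since the exact value stated in the paper did not survive extraction; the milestone epochs (15 and 25) are from the text.

```python
def step_lr(initial_lr, epoch, milestones=(15, 25), gamma=0.1):
    """Step decay matching the schedule described above: the learning rate
    is multiplied by gamma at each milestone epoch (initial_lr is a
    placeholder; the paper gives the exact value)."""
    return initial_lr * gamma ** sum(epoch >= m for m in milestones)
```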

The output values of the neural filter are normalized via softmax, i.e., they form a probability distribution over the two classes. In order to preserve the performance of the distilled Keypoint R-CNN model when using the neural filter, we set a small threshold for prefiltering to obtain a high recall: images without objects of interest are prefiltered only when the neural filter is confidently negative. Specifically, we set the classification threshold to , that is, we filter out images with prediction score smaller than . The BBox and Keypoint mAPs of the distilled Keypoint R-CNN with bottleneck (8-bit quantization) and neural filter are and , respectively. Thus, the neural filter only slightly degrades detection performance. However, as shown in the next section, accepting such a small degradation yields a perceivable reduction of the total inference time on the considered datasets.
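The high-recall thresholding rule can be sketched as below. The threshold value used here is illustrative (the paper's exact value was lost in extraction), and the class index convention is an assumption.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def keep_image(logits, threshold=0.1):
    """Keep (i.e., offload) an image unless the softmaxed 'object present'
    score falls below a small threshold chosen for high recall
    (threshold value and class index are illustrative assumptions)."""
    return softmax(logits)[1] >= threshold   # index 1 = positive class
```

With a small threshold, an image is dropped only when the filter is confidently negative, which preserves recall at the cost of occasionally forwarding empty frames.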

VI Latency Evaluation

In this section, we evaluate the total time of complete capture-to-output pipelines. Following Section III-B, we use the NVIDIA Jetson TX2 as the mobile device, and the same high-end desktop computer with an NVIDIA GeForce RTX 2080 Ti as the edge server. This configuration corresponds to a scenario with relatively capable devices. Clearly, scenarios with weaker mobile devices and edge servers will see a reduced relative weight of the communication component of the total delay, thus possibly favoring our technique compared to pure offloading. On the other hand, a strongly asymmetric system, where the mobile device has a considerably smaller computing capacity than the edge server, will penalize the execution of even small computing tasks at the mobile device, as prescribed by our approach.

We compare three different configurations: local computing, pure offloading, and split computing using network distillation. Here, we do not consider naive splitting approaches such as Neurosurgeon [19], as the original benchmark models in this study do not have any small bottleneck point in their early stages, and their best splitting point would therefore be either the input or output layer, i.e., pure offloading or local computing. We do, however, consider the data rates for vision tasks in [19], and set the rate below Mbps. Note that all the R-CNN models used in this study are designed for an input image whose shorter side has pixels. In pure offloading, we compute the file size in bytes of the resized JPEG images to be transmitted to the edge server. The average size of the resized images in the COCO 2017 validation dataset is . In the split configuration, we compute the data size to be transferred as the quantized output of the bottleneck layer. The communication delay is then computed by dividing the data size by the available data rate.

First, we show the gain of the proposed technique with respect to local computing and pure offloading as a function of the available data rate in Figures 3(a) and 3(b), respectively. The gain is defined as the ratio between the capture-to-output delay of local computing/pure offloading and that of the split computing configuration. As expected, local computing is the best option (gain smaller than one) when the available data rate is small. Depending on the specific model, the threshold is hit in the range Mbps. The gain then grows up to (Faster and Mask R-CNNs) and (Keypoint R-CNN) when the data rate is equal to Mbps.
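The delay model and the gain metric can be sketched as follows. All quantities here are illustrative placeholders, not the measured values from the paper.

```python
def split_delay_s(head_s, tail_s, payload_bytes, rate_mbps):
    """Capture-to-output delay of split computing: head execution on the
    mobile device, bottleneck transfer over the channel, tail execution
    on the edge server (all numbers illustrative)."""
    comm_s = payload_bytes * 8 / (rate_mbps * 1e6)
    return head_s + comm_s + tail_s

def gain(baseline_delay_s, split_delay):
    """Gain > 1 means split computing beats the baseline
    (local computing or pure offloading)."""
    return baseline_delay_s / split_delay
```

For pure offloading, the same delay formula applies with the JPEG image size as the payload and no head-execution term on the mobile device.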

(a) Gain w.r.t. local computing
(b) Gain w.r.t. pure offloading
(c) Gain with a neural filter w.r.t. local computing
(d) Gain with a neural filter w.r.t. pure offloading.
Fig. 4: Ratio of the total capture-to-output time of local computing and pure offloading to that of the proposed technique without (top)/with (bottom) a neural filter.

The gain with respect to pure offloading has the opposite trend. For extremely poor channels, the gain reaches , and decreases as the data rate increases until the threshold is hit at about Mbps. As we stated in Section II, the technique we developed provides an effective intermediate option between local computing and pure offloading, where our objective is to make the tradeoff between computation load at the mobile device and transmitted data as efficient as possible. Intuitively, in this context, naive splitting is suboptimal in any parameter configuration, as the larger time required to execute the (unaltered) head section at the mobile device, compared to its execution at the server, offsets the moderate compression rate obtained only at the last backbone layers. Our technique is a useful tool in challenged networks where many devices contend for the channel resource, or where the characteristics of the environment reduce the overall capacity, e.g., non-line-of-sight propagation, extreme mobility, long-range links, and low-power/low-complexity radio transceivers.

Figures 3(c) and 3(d) show the same metric when the neural filter is introduced. In this case, when the neural filter predicts that the input picture does not contain any object of interest, the tail model on the edge server is not executed, i.e., the system does not offload the rest of the computation for such inputs, thus experiencing a lower delay. The effect is an extension of the data rate ranges in which the proposed technique is the best option, as well as a larger gain for some models. We remark that the results are computed using a specific dataset. Clearly, in this case the inference time is influenced by the ratio of empty pictures. At the extreme point where all pictures contain objects, the gain collapses to a slightly degraded version - due to the larger computing load at the mobile device - of the configuration without the classifier. As the ratio of empty pictures increases, the classifier provides increasingly larger gains.
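The dependence on the empty-picture ratio can be made explicit with a simple expected-delay model. This sketch assumes an idealized, perfectly accurate filter; the delay values are placeholders.

```python
def expected_delay_s(p_empty, early_exit_s, full_split_s):
    """Average capture-to-output delay with an (assumed perfect) neural
    filter: empty frames stop at the mobile device after the head model
    plus filter; all other frames traverse the full split pipeline.
    Roughly 46% of COCO 2017 person keypoint images are empty."""
    return p_empty * early_exit_s + (1.0 - p_empty) * full_split_s
```

At p_empty = 0 the expected delay reduces to the full split pipeline (slightly degraded by the filter's execution time), and it shrinks linearly as the share of empty pictures grows.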

Fig. 5: Component-wise delays of Keypoint R-CNN for different data rates. LC: Local Comp., PO: Pure Offload., SC: Split Comp., SCNF: Split Comp. with Neural Filter

We now report and analyze the absolute value of the capture-to-output delay for the different configurations. Figure 5 shows the components of the delay as a function of the data rate when Keypoint R-CNN is the underlying detector. It can be seen how the communication delays (of the JPEG image in pure offloading, and of the bottleneck output in split computing) tend to dominate the computing components in the range where our technique is advantageous. The split approach introduces the local computing component associated with the execution of the head portion. Note that the difference between the execution time of the tail portion and that of the full model at the edge server is negligible, due to the small size of the head model and the large computing capacity of the edge server. It can also be observed that the execution times of the head model and of the head model plus neural filter are almost equal. This signifies that, while the reduction in the total capture-to-output delay is perceivable, the extra classifier imposes a small additional computing load compared to the head model with bottleneck.

Vii Conclusions

Building on recent contributions proposing to split DNNs into head and tail sections [27], executed at the mobile device and edge server respectively, this paper presents a technique to efficiently split deep neural networks for object detection. The core idea is to achieve in-network compression by introducing a bottleneck layer in the early stages of the backbone network. The output of the bottleneck is quantized and then sent to the edge server, which executes some layers to reconstruct the original output of the head model, followed by the tail portion. Additionally, we embed in the head model a low-complexity classifier which acts as a filter to eliminate pictures that do not contain objects of interest, further improving efficiency. We demonstrate that our generalized head network distillation can lead to models achieving state-of-the-art performance while reducing the total inference time in parameter regions where local and edge computing provide unsatisfactory performance.

References

  • [1] J. Ba and R. Caruana (2014) Do deep nets really need to be deep?. In NIPS 2014, pp. 2654–2662. Cited by: §III-B.
  • [2] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli (2012) Fog computing and its role in the internet of things. In Proceedings of the first edition of the MCC workshop on Mobile cloud computing, pp. 13–16. Cited by: §I.
  • [3] L. Bourdev and J. Malik (2009) Poselets: body part detectors trained using 3d human pose annotations. In

    2009 IEEE 12th International Conference on Computer Vision

    ,
    pp. 1365–1372. Cited by: Appendix A, §I.
  • [4] G. Chen, W. Choi, X. Yu, T. Han, and M. Chandraker (2017) Learning efficient object detection models with knowledge distillation. In Advances in Neural Information Processing Systems, pp. 742–751. Cited by: Appendix D, §III-B, §III-B.
  • [5] R. Chen, H. Ai, C. Shang, L. Chen, and Z. Zhuang (2019) Learning lightweight pedestrian detector with hierarchical knowledge distillation. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 1645–1649. Cited by: Appendix D, §III-B.
  • [6] J. Emmons, S. Fouladi, G. Ananthanarayanan, S. Venkataraman, S. Savarese, and K. Winstein (2019) Cracking open the dnn black-box: video analytics with dnns across the camera-cloud boundary. In Proceedings of the 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, pp. 27–32. Cited by: Appendix D.
  • [7] J. Emmons, S. Fouladi, G. Ananthanarayanan, S. Venkataraman, S. Savarese, and K. Winstein (2019) Cracking open the dnn black-box: video analytics with dnns across the camera-cloud boundary. In Proceedings of the 2019 Workshop on Hot Topics in Video Analytics and Intelligent Edges, pp. 27–32. Cited by: Appendix D.
  • [8] A. E. Eshratifar, A. Esmaili, and M. Pedram (2019) BottleNet: a deep learning architecture for intelligent mobile cloud computing services. In 2019 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), pp. 1–6. Cited by: Appendix D.
  • [9] L. Galteri, M. Bertini, L. Seidenari, and A. Del Bimbo (2018) Video compression for object detection algorithms. In 2018 24th International Conference on Pattern Recognition (ICPR), pp. 3007–3012. Cited by: Appendix D.
  • [10] R. Girshick, J. Donahue, T. Darrell, and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587. External Links: ISBN 978-1-4799-5118-5 Cited by: §III.
  • [11] R. Girshick (2015) Fast r-cnn. In Proceedings of the 2015 IEEE International Conference on Computer Vision, pp. 1440–1448. External Links: ISBN 978-1-4673-8391-2 Cited by: §III.
  • [12] P. M. Grulich and F. Nawab (2018) Collaborative edge and cloud neural networks for real-time video processing. In Proceedings of the VLDB Endowment, Vol. 11, pp. 2046–2049. Cited by: §I.
  • [13] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969. Cited by: Appendix A, §III-A, §III, §IV-B.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Appendix A, §V.
  • [15] G. Hinton, O. Vinyals, and J. Dean (2014) Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop: NIPS 2014, Cited by: §III-B.
  • [16] D. Hu and B. Krishnamachari (2020) Fast and accurate streaming cnn inference via communication compression on the edge. In 2020 IEEE/ACM Fifth International Conference on Internet-of-Things Design and Implementation (IoTDI), pp. 157–163. Cited by: Appendix D.
  • [17] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko (2018) Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704–2713. Cited by: §IV-E, §IV-E.
  • [18] H. Jeong, I. Jeong, H. Lee, and S. Moon (2018) Computation offloading for machine learning web apps in the edge server environment. In 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), pp. 1492–1499. Cited by: Appendix D.
  • [19] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang (2017) Neurosurgeon: collaborative intelligence between the cloud and mobile edge. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems, pp. 615–629. External Links: ISBN 978-1-4503-4465-4, Document Cited by: Appendix D, Appendix D, §I, §III-B, §IV-A, §VI.
  • [20] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In Third International Conference on Learning Representations, Cited by: §IV-D.
  • [21] G. Li, L. Liu, X. Wang, X. Dong, P. Zhao, and X. Feng (2018) Auto-tuning neural network quantization framework for collaborative inference between the cloud and edge. In International Conference on Artificial Neural Networks, pp. 402–411. Cited by: Appendix D, Appendix D, §I, §IV-E.
  • [22] Q. Li, S. Jin, and J. Yan (2017) Mimicking very efficient network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6356–6364. Cited by: Appendix D, §III-B, §III-B.
  • [23] T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125. Cited by: §IV-A.
  • [24] L. Liu, H. Li, and M. Gruteser (2019) Edge assisted real-time object detection for mobile augmented reality. In The 25th Annual International Conference on Mobile Computing and Networking, pp. 25:1–25:16. Cited by: Appendix D, §I, §II, §II.
  • [25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu, and A. C. Berg (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §III.
  • [26] Y. Matsubara, S. Baidya, D. Callegaro, M. Levorato, and S. Singh (2019) Distilled split deep neural networks for edge-assisted real-time systems. In Proceedings of the 2019 MobiCom Workshop on Hot Topics in Video Analytics and Intelligent Edges, pp. 21–26. Cited by: 5(j), 5(k), 5(l), 5(i), Appendix C, Appendix D, Appendix D, §I, §I, §I, §IV-A, §IV-A, §IV-A, §IV-B, §IV-C, §IV-C, §IV-D, TABLE III.
  • [27] Y. Matsubara and M. Levorato (2020) Split computing for complex object detectors: challenges and preliminary results. External Links: 2007.13312 Cited by: Appendix B, §I, §IV-A, §IV-A, §IV-A, §IV-C, §IV-E, TABLE III, §VII.
  • [28] L. O’Gorman and X. Wang (2018) Balancing video analytics processing and bandwidth for edge-cloud networks. In 24th International Conference on Pattern Recognition (ICPR 2018), pp. 2618–2623. Cited by: Appendix D.
  • [29] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: Appendix A, §III-A, §IV-B.
  • [30] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conf. on computer vision and pattern recognition, pp. 779–788. Cited by: §III.
  • [31] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §III-A, §III, §IV-B.
  • [32] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio (2015) FitNets: hints for thin deep nets. In Third International Conference on Learning Representations, Cited by: Appendix D.
  • [33] J. Shao and J. Zhang (2020) Bottlenet++: an end-to-end approach for feature compression in device-edge co-inference systems. In 2020 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1–6. Cited by: Appendix D.
  • [34] S. Teerapittayanon, B. McDanel, and H. T. Kung (2017) Distributed deep neural networks over the cloud, the edge and end devices. In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), pp. 328–339. Cited by: Appendix D, Appendix D, §I.
  • [35] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol (2008) Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. Cited by: §I.
  • [36] T. Wang, L. Yuan, X. Zhang, and J. Feng (2019) Distilling object detectors with fine-grained feature imitation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4933–4942. Cited by: Appendix D, §III-B, §III-B.
  • [37] X. Xie and K. Kim (2019) Source compression with bounded dnn perception loss for iot edge computer vision. In The 25th Annual International Conference on Mobile Computing and Networking, pp. 1–16. Cited by: Appendix D, §I.

Appendix A Benchmark R-CNNs

Faster R-CNN forms the strong basis of several 1st-place entries [14] in the ILSVRC and COCO 2015 competitions. The model was extended to Mask R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition [13]. Mask R-CNN is not only a strong benchmark, but also a well-designed framework, as it easily generalizes to other tasks such as instance segmentation and person keypoint detection.

In fact, Mask R-CNN was extended and trained for person keypoint detection (Keypoint R-CNN) in torchvision [29]. In the instance segmentation task, models need to precisely predict which pixels belong to which instance while detecting objects in an image. The keypoint detection task requires the estimation of human poses by simultaneously detecting persons and their keypoints (e.g., joints and eyes) [3] in the image.

Appendix B Network Architectures

Table A-V reports the network architectures of Layer 1 (L1) in the teacher and student models. Recall that all the teacher and student models in this study use the L1 architectures shown in the table. The rest of the models' architecture is not described since, other than L0 and L1, student models have exactly the same architectures as their teacher models. In the inference time evaluation, we split the student model at the bottleneck layer (bold in the table) to obtain head and tail models, which are executed on the mobile device and edge server, respectively. The head model consists of all the layers up to and including the bottleneck layer, and the remaining layers form the corresponding tail model. We note that, while the bottleneck used in this study has 3 output channels, bottlenecks with 6, 9, 12, and 15 output channels were introduced into exactly the same architectures in [27]. Such configurations, however, are not considered in this study, as the ratios of the corresponding data sizes would be above 1 even with bottleneck quantization, which would result in slower inference compared to pure offloading.

Similarly, Table A-VI summarizes the network architecture of the neural filter, where the output of L0 in the “frozen” student model is fed to the neural filter.

Teacher’s L1 Student’s L1
Conv2d(oc=64, k=1, s=1) Conv2d(oc=64, k=2, p=1)
BatchNorm2d BatchNorm2d
Conv2d(oc=64, k=3, s=1, p=1) Conv2d(oc=256, k=2, p=1)
BatchNorm2d BatchNorm2d
Conv2d(oc=256, k=1, s=1) ReLU
BatchNorm2d Conv2d(oc=64, k=2, p=1)
Conv2d(oc=256, k=1, s=1) BatchNorm2d
BatchNorm2d Conv2d(oc=3, k=2, p=1)
ReLU BatchNorm2d
Conv2d(oc=64, k=1, s=1) ReLU
BatchNorm2d Conv2d(oc=64, k=2)
Conv2d(oc=64, k=3, s=1, p=1) BatchNorm2d
BatchNorm2d Conv2d(oc=128, k=2)
Conv2d(oc=256, k=1, s=1) BatchNorm2d
BatchNorm2d ReLU
ReLU Conv2d(oc=256, k=2)
Conv2d(oc=64, k=1, s=1) BatchNorm2d
BatchNorm2d Conv2d(oc=256, k=2)
Conv2d(oc=64, k=3, s=1, p=1) BatchNorm2d
BatchNorm2d ReLU
Conv2d(oc=256, k=1, s=1)
BatchNorm2d
ReLU

oc: output channel, k: kernel size, s: stride, p: padding. The bold layer, Conv2d(oc=3, k=2, p=1) in the student's L1, is our introduced bottleneck.

TABLE A-VI: Architecture of neural filter
Neural filter
AdaptiveAvgPool2d(oh=64, ow=64),
Conv2d(oc=64, k=4, s=2), BatchNorm2d, ReLU,
Conv2d(oc=32, k=3, s=2), BatchNorm2d, ReLU,
Conv2d(oc=16, k=2, s=1), BatchNorm2d, ReLU,
AdaptiveAvgPool2d(oh=8, ow=8),
Linear(in=1024, on=2), Softmax

oh: output height, ow: output width, in: input feature, on: output feature
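As a quick consistency check of the neural filter architecture above: the last convolution produces 16 channels and the final AdaptiveAvgPool2d resizes the feature map to 8×8, so the linear layer receives 16 · 8 · 8 = 1024 flattened features, matching Linear(in=1024, on=2).

```python
# Flattened feature size entering the neural filter's linear classifier,
# derived from the table: 16 channels pooled to an 8x8 spatial map.
channels, pooled_h, pooled_w = 16, 8, 8
flattened_features = channels * pooled_h * pooled_w
```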

TABLE A-V: Architectures of Layer 1 (L1) in teacher and student models
(a) Sample input 1
(b) Sample input 2
(c) Sample input 3
(d) Sample input 4
(e) Mask R-CNN: 5 persons, 1 sports ball, and 1 cell phone
(f) Mask R-CNN: 1 person and 1 snowboard
(g) Keypoint R-CNN: No objects of interest
(h) Keypoint R-CNN: 3 persons 
(i) Our Mask R-CNN distilled by HND [26]: 5 persons and 1 backpack
(j) Our Mask R-CNN distilled by HND [26]: 2 persons and 1 bird  
(k) Our Keypoint R-CNN distilled by HND [26]: 1 person  
(l) Our Keypoint R-CNN distilled by HND [26]: 2 persons  
(m) Our Mask R-CNN in this work: 5 persons and 1 sports ball  
(n) Our Mask R-CNN in this work: 1 person and 1 snowboard  
(o) Our Keypoint R-CNN in this work: No objects of interest
(p) Our Keypoint R-CNN in this work: 2 persons  
Fig. A-6: Qualitative analysis. All figures are best viewed in pdf.

Appendix C Qualitative Analysis

Figure A-6 shows sampled input and output images from the Mask and Keypoint R-CNNs. Compared to the outputs of the original models (Figs. 5(e) - 5(h)), our Mask and Keypoint R-CNN detectors distilled by the original head network distillation (HND) [26] suffer from false positives and negatives, as shown in Figs. 5(i) - 5(k). As for those distilled by our generalized head network distillation, their detection performance looks qualitatively comparable to that of the original models. In our examples, the only significant difference between the outputs of the original models and ours is a small cell phone held by a man in a white shirt, which is not detected by our Mask R-CNN in Fig. 5(m).

Appendix D Related Work

Knowledge Distillation - There are several studies discussing knowledge distillation techniques for object detection tasks [22, 4, 36, 5]. However, their methods are designed to train smaller, lightweight object detectors which are not small enough to be deployed on mobile devices with limited computing capacity as shown in Tables I and II in the paper. Thus, such approaches would not be suitable for our problem setting.

In terms of the design of the training loss function, FitNets [32] is the most closely related approach. FitNets is a stage-wise distillation method with two training stages: “hint training” and knowledge distillation. Compared to FitNets, our generalized head network distillation has three advantages: (i) it does not require the regressor that FitNets needs during training to adjust the “hint” layer's output shape to match that of the “guided” layer; (ii) our loss function is more general and accounts for the outputs of multiple layers during training, which is shown to be critical for detection tasks in this study; and (iii) since the tail portion of the student model is identical to that of the pretrained teacher model, our approach is one-stage training and thus saves training time.
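A multi-layer distillation loss of the kind described in point (ii) can be sketched as a weighted sum of per-layer reconstruction errors. This is a simplified illustration; the exact loss terms and weights of the generalized head network distillation are defined in the main paper.

```python
import numpy as np

def ghnd_loss(student_outputs, teacher_outputs, weights=None):
    """Sketch of a generalized head network distillation loss: a weighted
    sum of squared errors between the student's and teacher's intermediate
    outputs at several layers (exact loss/weights per the main paper)."""
    weights = weights if weights is not None else [1.0] * len(student_outputs)
    return sum(w * float(np.sum((s - t) ** 2))
               for w, s, t in zip(weights, student_outputs, teacher_outputs))
```

Because the loss sums over several layers at once, no separate "hint training" stage or shape-matching regressor is needed.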

Edge-Assisted Video Analytics - As mentioned in the introduction, the framework proposed in [24] provides one of the most complete examples of edge-assisted vision applications. In that paper, an augmented reality application is considered, where the mobile device performs object tracking assisted by an edge server executing object detection. In order to reduce the amount of data transported to the edge server while preserving detection accuracy, the mobile device increases the encoding quality in regions where objects were detected in previous frames. The wireless link to the edge server is assumed to be stable and to have relatively high capacity. We observe that the data reduction approach taken in that paper may reduce the quality of detection when new objects frequently enter the vision range, as the picture quality in those regions may have been reduced. O’Gorman and Wang [28] discuss how to place processing methods between mobile and cloud computers to balance processing time and bandwidth for a video analytics application.

Compression for Inference - Xie and Kim [37] present an interesting modification of JPEG to match the needs of DNNs rather than human perception. Yet, the approach still relies on traditional wavelet compression, while our approach is, in essence, forcing the compressor to extract relevant features toward the final detection objective. Galteri et al. [9] propose an adaptive video coding method based on learned saliency, and show that it outperforms standard H.265 in terms of speed and coding efficiency through experiments on the YouTube Objects dataset.

DNN Splitting - Recent literature [19, 18, 21, 34] proposes to split DNN models and allocate the head and tail portions to the mobile device and edge server, respectively. Thus, instead of the original sensor data, the output of the last layer of the head model is transported over the wireless channel. The problem with this approach is that many DNN models, for instance those for image processing, concentrate most of the complexity in the early layers, while not providing any significant compression until the late layers. Splitting the model as is, then, allocates excessive complexity to the weakest computing platform, while not solving the capacity problem. In fact, [19] shows that the optimal splitting point in most DNN models is actually pure local or edge computing, depending on the channel capacity and the relative computing power of the devices.

Following the work of Kang et al. [19], some recent contributions propose DNN splitting methods that alter the network architectures [8, 26, 6, 16, 33]. These studies, however, have the following issues: (i) lack of motivation to split the models, as the size of the input data is exceedingly small, e.g., 32×32-pixel RGB images in [34, 16, 33]; (ii) models and network conditions specifically selected so that the proposed method is advantageous [21]; and/or (iii) proposed models assessed only on simple classification tasks such as the miniImageNet, Caltech 101, and CIFAR-10/-100 datasets [8, 26, 16, 33].

The interesting approach in [7] combines DNN splitting and compression in image classification applications. The core idea is to apply entropy coding to a transformed version of the head network's output. The workshop paper does not discuss the resulting accuracy, and does not provide sufficient details to reproduce the results in the context considered in this paper. However, from the results it appears that this approach to compression is mostly effective when the splitting point is positioned at intermediate layers rather than early layers. High-performance object detection models forward to the final detector the outputs of different sections of the backbone. As a consequence, compression would have to be applied to all these outputs up to the splitting point, thus significantly reducing the gain. The approach proposed herein achieves compression in the early layers of the backbone network to avoid this issue.

The head network distillation technique proposed in [26] uses distillation and bottleneck injection to boost the performance of splitting DNNs for image classification. As discussed throughout the paper, the different and more complex structure of object detection models leads to a significantly different approach to the application of distillation and bottleneck injection. Moreover, the different nature of the analysis allowed us to develop the prefiltering classifier that eliminates frames not containing objects of interest.