$$\min_{\theta_b,\,\theta_h}\ \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\mathcal{L}_{det}\big(h(f(x;\theta_b);\theta_h),\,y\big)\big] \qquad (1)$$

where $(x, y)$ stands for a pair of image and ground-truth label; $\mathcal{D}$ is the dataset distribution; and $\mathcal{L}_{det}$ represents the detection loss, which is usually composed of classification and bounding-box regression terms.
Typically, the backbone part contains a large number of parameters, which makes it nontrivial or very costly to optimize directly on the detection dataset [45, 34, 22, 13]. A common practice is to introduce pretraining, for instance, initializing the backbone weights $\theta_b$ in Eq. 1 with ImageNet pretrained [32, 30, 31, 23, 26, 24] or self-supervised [7, 12] weights. Although such a pretraining-then-finetuning paradigm has been demonstrated to achieve state-of-the-art performance [14, 4], we find that pretraining only the backbone weights may be suboptimal for the optimization. Since the weights in the detection head are still randomly initialized, the gradients passed from the detection head to the backbone can be very noisy during training, especially at the very beginning. The noisy gradients may significantly harm the pretrained weights, causing slower convergence or poorer performance. Such degradation has in fact been observed in many codebases, and a few workarounds have been proposed. For example, a well-known workaround is to freeze a few weight layers in the backbone during finetuning to avoid unstable optimization [32, 11]; however, this still seems insufficient to fully address the issue.
In this paper, we propose to deal with the problem from a new direction: introducing an auxiliary intermediate supervision directly on the backbone. The key motivation is that if we can provide feasible supervision during the training phase, the backbone can be effectively optimized even before the detection head converges. We formulate our method as follows:

$$\min_{\theta_b,\,\theta_h}\ \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\mathcal{L}_{det}\big(h(f(x;\theta_b);\theta_h),\,y\big) + \lambda\,\mathcal{L}_{aux}\big(f(x;\theta_b),\,y\big)\big] \qquad (2)$$
where $\mathcal{L}_{aux}$ denotes the auxiliary loss attached to the outputs of the backbone, which is independent of the detection head and thus not affected by the latter's convergence progress; $\lambda$ is a balancing coefficient.
The core of our approach thus lies in the design of $\mathcal{L}_{aux}$. Intuitively, the auxiliary supervision aims to minimize the distance between the latent feature representation and some “ideal” embedding of the corresponding training sample. But how should the desired representation be defined and calculated? Some previous works, especially Knowledge Distillation methods, suggest acquiring the intermediate supervision from more powerful teacher models; nevertheless, their representations are not guaranteed to be optimal. Instead, in this work we point out, for the first time, that the inverse of the underlying optimal detection head (i.e. $h^{-1}(\cdot;\theta_h^*)$) can serve as a feasible embedding, which traces the ground-truth label back to the corresponding latent feature. More discussion is given in Sec. 3.
Motivated by the analyses, in our proposed method LabelEnc, we introduce a novel label encoding function $g(\cdot;\theta_e)$ to realize $\mathcal{L}_{aux}$ in Eq. 2, which maps the ground-truth labels into the latent embedding space, thus providing auxiliary intermediate supervision for the detector's training. The label encoding function is designed to approximate $h^{-1}(\cdot;\theta_h^*)$, since the underlying optimal parameters $\theta_h^*$ and the “inverse form” in the latter's formulation are nontrivial to derive directly. Our method therefore involves a two-step training pipeline. First, to learn the label encoding function, we train an AutoEncoder architecture, embedding the ground-truth labels into the latent space according to the (approximated) optimal detection head. Second, with the help of the learned label encoding function, we optimize Eq. 2 under the auxiliary supervision $\mathcal{L}_{aux}$; in addition, the initial weights of the detection head can also be inherited from the AutoEncoder instead of random values, for more stable optimization.
We evaluate our method on various object detection systems. Under different backbones (e.g. ResNet-50, ResNet-101 and Deformable Convolutional Networks [9, 46]) and detection frameworks (e.g. RetinaNet, FCOS and FPN), our training pipeline consistently achieves significant performance gains, e.g. around 2% improvement on the COCO dataset. More importantly, our method is completely cost-free at inference time, as the auxiliary structures only exist during training. Please refer to Sec. 4 for detailed results.
In conclusion, the major contributions of our paper are as follows:
We propose a new auxiliary intermediate supervision method named LabelEnc, to boost the training of object detection systems. With the novel label encoding function, our method can effectively overcome the drawbacks of randomly initialized detection heads, leading to more stable optimization and better performance.
Our method is demonstrated to be generally applicable to most modern object detection systems. Previous methods like [21, 46, 34] also introduce various auxiliary losses, but they usually rely on specific backbone architectures or detection frameworks. Furthermore, although the underlying formulations appear somewhat complex, the implementation of our approach is relatively simple. Code will be released soon.
2 Related Work
2.0.1 Auxiliary supervision.
Auxiliary supervision is a common technique to improve the performance of a model in indirect ways, e.g., weight decay, Center Loss, etc. Among various auxiliary supervision methods, Multi-task Learning (MTL) is commonly used. MTL solves multiple tasks using a single model; by sharing parameters between the tasks, inductive bias is transferred and better generalization is obtained. In object detection, Mask R-CNN combines object detection with instance segmentation by adding a simple mask branch to the Faster R-CNN model. The MTL strategy can efficiently improve the performance of the detection branch, but it requires additional mask annotations. Another line of work, on the contrary, does not need additional annotations, but requires carefully-designed auxiliary tasks.
Deep supervision is another common form of auxiliary supervision. Instead of introducing additional tasks, it introduces supervision on additional layers. DSN first proposed the concept by adding extra supervision on the hidden layers, and Inception uses similar auxiliary classifiers on the lower stages of the network. In semantic segmentation, PSPNet and ExFuse adopt deep supervision to improve the low-level features. In object detection, DSOD utilizes deep supervision with dense connections to enable from-scratch training. In our method, we adopt the idea of deep supervision by proposing a label encoding function, with which we map the labels into latent embeddings for auxiliary intermediate supervision.
2.0.2 Knowledge distillation.
Our method shares some common inspiration with Knowledge Distillation [17, 33, 42, 40]. In Knowledge Distillation, training is a two-step process: a large teacher model is trained first, and then its predictions are used to supervise a smaller student model. Knowledge Distillation has been used in several fields, e.g., face recognition, speech recognition and re-identification. There are several works focusing on object detection as well: one uses a balanced loss on classification, a bounded loss on regression and a loss on intermediate features; others propose distillation methods based on RoIs; and fine-grained feature imitation restricts distillation to object-local areas.
From the distillation perspective, the label encoding function plays the role of the teacher model in our pipeline: it is trained in Step 1 and utilized for supervision in Step 2. But it has a relatively simple architecture and does not involve real-world image features. Traditional distillation, on the contrary, relies heavily on a big teacher model: usually, the stronger the teacher model is, the better the distillation performance. However, high-performance teacher models are not always available in practice; the state-of-the-art models are the best teachers we can find, which limits the performance of traditional distillation.
2.0.3 Label encoding.
There are several works that use label encoding to boost training [3, 1, 35]. However, few are evaluated on the supervised object detection task. Among them, our method is most similar to the work that uses an AutoEncoder to model the labels of semantic segmentation; the AutoEncoder is then used to perform auxiliary supervision. Compared with our method, there are two main differences. First, in object detection, label structures hardly exist. Segmentation labels carry rich structural information thanks to the outlines of regions in the annotation, e.g. a cat has a long tail, a thick body and a small head; whereas in object detection, such structures are very limited, since all objects are just boxes with different scales and aspect ratios. Second, we propose a joint optimization scheme that introduces auxiliary structures for training the AutoEncoder, which we empirically find vital to the performance; whereas in that work, the AutoEncoder is trained independently.
3.1 Intermediate Auxiliary Supervision
As mentioned in the introduction, the core of our method is to define the supervision term $\mathcal{L}_{aux}$ in Eq. 2, which is expected to provide feasible supervision for the backbone training. Intuitively, the auxiliary loss should encourage the latent feature $f(x;\theta_b)$ generated by the backbone network to be close to some “ideal” embedding $e(x,y)$ for each training sample:

$$\mathcal{L}_{aux}\big(f(x;\theta_b),\,y\big) = d\big(f(x;\theta_b),\,e(x,y)\big) \qquad (3)$$
where $d(\cdot,\cdot)$ represents the distance measurement. A problem therefore arises: how do we define the so-called “ideal” feature $e(x,y)$? Obviously, the calculation of $e(x,y)$ cannot directly rely on the training of the detection head $h(\cdot;\theta_h)$, otherwise it may be unstable and redundant with the existing detection loss $\mathcal{L}_{det}$.
Let us think one step further. If we had finished the optimization in Eq. 2 in some way, i.e. obtained the corresponding optimal weights $\theta_b^*$ and $\theta_h^*$, we could intuitively define the inverse of the detection head as the “optimal” intermediate embedding. So,

$$e(x,y) = h^{-1}\big(y;\theta_h^*\big) \qquad (4)$$
We argue that this definition of $e(x,y)$ is feasible because, if the auxiliary loss tends to zero, it is easy to verify that the detector predicts the ground truth exactly. Unfortunately, Eq. 4 cannot be directly used in the optimization. First, substituting Eq. 4 into Eq. 2, we find that $\theta_h^*$ appears on both sides of the equation, so we cannot determine its value in advance. Second, even if $\theta_h^*$ were given, the inverse form $h^{-1}(\cdot;\theta_h^*)$ is still difficult to calculate due to the high nonlinearity of neural networks (in fact, the inverse is generally not unique).
We deal with the second problem first. Notice that for any valid inverse, we have $h\big(h^{-1}(y;\theta_h^*);\theta_h^*\big)=y$. Motivated by this, to approximate $h^{-1}$ we introduce a new network $g(\cdot;\theta_e)$, whose parameters are learned by the optimization:

$$\min_{\theta_e}\ \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\mathcal{L}_{det}\big(h(g(y;\theta_e);\theta_h^*),\,y\big)\big] \qquad (5)$$
Here $\mathcal{L}_{det}$ is the detection loss, following the definition in Eq. 1. Intuitively, $g(\cdot;\theta_e)$ maps the ground-truth label into the latent feature space and $h(\cdot;\theta_h^*)$ recovers the label from the latent representation, so we say that $g$ approximates the “inverse” of $h$. It is worth noting that the composite function $h \circ g$ actually represents an AutoEncoder defined on the label space; we thus name $g(\cdot;\theta_e)$ the label encoding function. Thanks to the approximation, we rewrite Eq. 4 as follows:

$$e(x,y) \approx g\big(y;\theta_e^*\big) \qquad (6)$$

where $\theta_e^*$ denotes the solution of Eq. 5.
Then we come back to the first problem. In Eq. 6, note that the optimization of $\theta_e^*$ still implies the knowledge of $\theta_h^*$ (Eq. 5). So, in our formulations (Eq. 2, 6 and 5) there still exists a recursive dependence on $\theta_h^*$. To get out of the dilemma, we use an unrolling trick, i.e. recursively substituting Eq. 6 and Eq. 5 into Eq. 2. Thus we obtain the final formulations (please refer to the appendix for the detailed derivation):

$$\min_{\theta_b,\,\theta_h}\ \mathbb{E}\big[\mathcal{L}_{det}\big(h(f(x;\theta_b);\theta_h),\,y\big) + \lambda\, d\big(f(x;\theta_b),\,g(y;\theta_e^*)\big)\big] \qquad (7)$$

$$\theta_e^* = \arg\min_{\theta_e}\ \mathbb{E}\big[\mathcal{L}_{det}\big(h(g(y;\theta_e);\theta_h),\,y\big)\big],\quad \text{s.t.}\quad \theta_h = \arg\min_{\theta_h'}\ \min_{\theta_b}\ \mathbb{E}\big[\mathcal{L}_{det}\big(h(f(x;\theta_b);\theta_h'),\,y\big)\big] \qquad (8)$$
Here $\mathbb{E}[\cdot]$ is short for $\mathbb{E}_{(x,y)\sim\mathcal{D}}[\cdot]$.
Eq. 7 and Eq. 8 constitute the core of our method. The formulations imply a two-step training pipeline. In the first step, by optimizing the auxiliary AutoEncoder defined in Eq. 8, we obtain an encoding function $g(\cdot;\theta_e^*)$ mapping the ground-truth label map into the latent space. Then in the second step, we train the detection framework with the intermediate supervision of $g(y;\theta_e^*)$, as described in Eq. 7. The next subsections introduce the optimization details.
3.2 Step 1: AutoEncoder Training
In this subsection we aim to derive the label encoding function $g(\cdot;\theta_e)$ via Eq. 8. However, directly solving Eq. 8 is not easy: since $\theta_h$ exists in both the target and the constraint, it is actually a bilevel optimization problem, which is nontrivial to implement with current deep learning tools. Therefore, we propose to relax the formulation into a joint optimization scheme, as follows:

$$\min_{\theta_e,\,\theta_h,\,\theta_b}\ \mathbb{E}\big[\underbrace{\mathcal{L}_{det}\big(h(g(y;\theta_e);\theta_h),\,y\big)}_{L1} + \lambda_1 \underbrace{\mathcal{L}_{det}\big(h(f(x;\theta_b);\theta_h),\,y\big)}_{L2} + \lambda_2 \underbrace{d\big(f(x;\theta_b),\,g(y;\theta_e)\big)}_{L3}\big] \qquad (9)$$
where $\lambda_1$ and $\lambda_2$ are balancing coefficients, which we trivially set to 1 in our experiments. Eq. 9 simply corresponds to a multi-task training paradigm with three loss terms: the first is the reconstruction loss (L1) of the label AutoEncoder; the second is the common detection loss (L2), which enforces $h(\cdot;\theta_h)$ to be a valid detection head; and the third (L3) minimizes the gap between the two latent spaces (namely the outputs of the backbone and of the label encoding function, respectively).
Fig. 1 illustrates the implementation and optimization of Eq. 9. According to Eq. 9, the same detection head is applied in both the L1 and L2 terms, which is why we mark a “shared detection head” in Fig. 1. It is also worth noting that we forbid the gradient flow from L3 to the label encoding function $g$. The motivation is that in Eq. 8 (the original form of Eq. 9), the optimization of the backbone does not directly affect $\theta_e$; we follow this property in the implementation. We empirically find these details critical to the final performance.
Before optimization, we follow common initialization practice, i.e. using pretrained weights (e.g. pretrained on ImageNet) for the backbone parameters $\theta_b$ and Gaussian random weights for $\theta_h$ and $\theta_e$. One may argue that, according to the introduction, a randomly initialized detection head may cause unstable training. However, since this training step mainly aims to learn the label encoding function $g$, the detection backbone and the detection head are merely “auxiliary structures” in this step, whose performance is not that important. Furthermore, as we will introduce, the architecture of $g$ is relatively simple, so its optimization is not difficult.
3.3 Step 2: Detector Training with Intermediate Supervision
After the label encoding function has been learned, we use it as intermediate supervision to improve object detector training, according to Eq. 7. Fig. 2 illustrates the implementation. In addition to the common detection loss, we introduce an auxiliary loss to directly supervise the detection backbone; the coefficient $\lambda$ is again trivially set to 1. Besides, Eq. 7 suggests that $\theta_e^*$ is fixed rather than an optimization variable, so we block the gradient flow from the auxiliary loss to $g$, as shown in the figure. After training, the auxiliary structure, i.e. the label encoding function, is removed; the resulting network is the learned object detector we expect.
Another important implementation detail is initialization. From Eq. 7 and Eq. 8 we know that the detection backbones and the detection heads in the two training steps share the same network architectures, respectively; their parameters, however, are not necessarily the same. So, in Step 2, we reinitialize the backbone parameters $\theta_b$ (using ImageNet pretrained weights, for instance) before training. As for the detection head parameters $\theta_h$, we empirically find that initializing them with the corresponding parameters learned in Step 1 (see Eq. 9) results in better performance and more stable convergence. This may be because a pretrained detection head introduces less gradient noise into the backbone than a randomly initialized one.
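This head-inheritance step can be sketched framework-agnostically as a state-dict copy. The function and key names below are illustrative (not from the paper); in practice this would operate on e.g. PyTorch `state_dict()` mappings:

```python
def inherit_head_weights(detector_state, autoencoder_state, head_prefix="head."):
    """Copy detection-head parameters learned in Step 1 into the Step-2 detector.

    Both arguments are flat {parameter_name: tensor} mappings; only entries
    under `head_prefix` are transferred, while backbone entries keep their
    fresh (e.g. ImageNet-pretrained) initialization.
    """
    updated = dict(detector_state)
    for name, value in autoencoder_state.items():
        if name.startswith(head_prefix) and name in updated:
            updated[name] = value
    return updated
```

Only keys under the head prefix are transferred, so the backbone keeps the fresh pretrained initialization described above.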
3.4 Implementation Details and Remarks
3.4.1 Ground-truth label representation.
As mentioned above, in both training steps the label encoding function needs to take ground-truth labels as network inputs. This is nontrivial because in the detection task, each image contains a different number of instances, each of which may have various class labels and bounding boxes. We have to produce a fixed-size label map that contains all the ground-truth information for each image.
We propose to use an $H \times W \times C$ tensor to represent the ground-truth objects in one image, where $H \times W$ equals the image size and $C$ is the number of classes in the dataset (e.g. 80 for the COCO dataset). For an object of the $c$-th class, we fill the corresponding region (according to the bounding box) in the $c$-th channel with positive values: the value decays linearly from 1 at the object center to 0.5 at the box boundary. Fig. 1 and Fig. 2 visualize the encoding. Specially, if two bounding boxes of the same class overlap with each other, the joint region is filled with the larger of the values calculated separately. Additionally, in training, the boxes are augmented by multiplying a random number between 0 and 1 with a probability of 0.5. All other values in the tensor remain zero.
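To make the construction concrete, here is a minimal NumPy sketch of the label-map encoding described above. The tensor layout (classes first), the linear decay from 1 at the center to 0.5 at the boundary, and the element-wise max merging follow the text; the exact decay geometry (we use the Chebyshev distance to the center, giving square iso-value contours) and all names are our own assumptions:

```python
import numpy as np

def build_label_map(boxes, classes, image_size, num_classes=80):
    """Encode ground-truth boxes as a (num_classes, H, W) tensor.

    Each box fills its region in the channel of its class with values
    decaying linearly from 1.0 at the box center to 0.5 at the boundary;
    overlapping boxes of the same class are merged by element-wise max.
    """
    h, w = image_size
    label_map = np.zeros((num_classes, h, w), dtype=np.float32)
    for (x1, y1, x2, y2), c in zip(boxes, classes):
        xs = np.arange(x1, x2)
        ys = np.arange(y1, y2)
        cx, cy = (x1 + x2 - 1) / 2.0, (y1 + y2 - 1) / 2.0
        # normalized distance to the box center along each axis, in [0, 1]
        dx = np.abs(xs - cx) / max((x2 - x1 - 1) / 2.0, 1e-6)
        dy = np.abs(ys - cy) / max((y2 - y1 - 1) / 2.0, 1e-6)
        dist = np.maximum(dy[:, None], dx[None, :])  # Chebyshev distance
        patch = 1.0 - 0.5 * np.clip(dist, 0.0, 1.0)  # 1.0 center -> 0.5 edge
        region = label_map[c, y1:y2, x1:x2]
        label_map[c, y1:y2, x1:x2] = np.maximum(region, patch)
    return label_map
```

The element-wise max on the class channel implements the overlap rule stated above; all untouched entries stay zero.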
3.4.2 Architecture of label encoding function.
For ease of optimization, we use a relatively simple architecture to implement $g$. The design of the structure is inspired by ResNet, but with a reduced number of residual blocks in each stage. In addition, the Max Pooling layer is replaced by a strided convolution. The number of input channels is set to 80, matching the number of classes in the COCO dataset. Batch Normalization is not used here. We use the same architecture for all experiments in the paper; please refer to the appendix for details.
3.4.3 Multi-scale intermediate supervision.
Recently, state-of-the-art detection frameworks like [23, 24, 38] usually introduce Feature Pyramid Networks (FPNs) to generate multi-scale feature maps, which greatly improves the capacity to detect objects of various sizes. Our approach can easily be generalized to the multi-scale case. First, we attach an FPN structure to the label encoding function so that it produces multi-resolution representations. Then, in both Step 1 and Step 2, we apply the intermediate supervision terms (see Eq. 9 and Eq. 7) on all scale levels. As the following experiments show, our method effectively boosts detection frameworks with FPNs.
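The multi-scale extension simply sums the same intermediate supervision term over every pyramid level. A minimal sketch (plain NumPy, with a simple mean-squared distance standing in for the full measurement of Sec. 3.4.4; names are our own):

```python
import numpy as np

def multi_scale_aux_loss(backbone_feats, label_feats):
    """Sum the intermediate supervision over all FPN levels.

    backbone_feats / label_feats: lists of feature maps, one per scale,
    produced by the detector's FPN and the label encoding function's FPN
    respectively (matching shapes level by level).
    """
    assert len(backbone_feats) == len(label_feats)
    total = 0.0
    for f_b, f_e in zip(backbone_feats, label_feats):
        assert f_b.shape == f_e.shape
        total += np.mean((f_b - f_e) ** 2)  # per-level distance term
    return total
```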
3.4.4 Distance measurement.
In Eq. 7 and Eq. 9, the distance measurement term $d(\cdot,\cdot)$ is used to minimize the difference between two feature maps. One simple alternative is to use the L2-distance directly. However, there are several issues: 1) the sizes of the two feature maps may differ; 2) since the feature maps are generated from different domains, directly minimizing their difference may suffer from very large gradients. So, we introduce a feature adaption block into the distance measurement, which is defined as follows:

$$d\big(F_b,\,F_e\big) = \big\lVert \mathrm{LN}\big(a(F_b)\big) - \mathrm{LN}\big(F_e\big) \big\rVert_2 \qquad (10)$$
where $\mathrm{LN}$ means Layer Normalization; $\lVert\cdot\rVert_2$ is the L2-distance; $F_b$ and $F_e$ are feature maps derived from the backbone and the label encoding function, respectively; and $a(\cdot)$ represents the feature adaption network, which acts as a transformer between the two domains. We implement $a(\cdot)$ with three convolution layers of 256 channels each, whose parameters are learned jointly in the outer optimization. Similar to $g$, $a(\cdot)$ is an auxiliary structure and is discarded after training.
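A minimal NumPy sketch of this distance, assuming the formulation above; `adapt` stands in for the learned feature adaption network $a(\cdot)$, which we leave as a pluggable callable since its convolution layers are framework code:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each feature vector over the channel axis (last axis here)
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def feature_distance(f_b, f_e, adapt):
    """d(F_b, F_e) = || LN(adapt(F_b)) - LN(F_e) ||_2.

    f_b: backbone feature map; f_e: label-encoding feature map;
    adapt: the feature adaption network a(.), here any callable
    mapping f_b to the shape of f_e.
    """
    diff = layer_norm(adapt(f_b)) - layer_norm(f_e)
    return np.sqrt(np.sum(diff ** 2))
```

Normalizing both maps before taking the distance reflects the stated motivation: it keeps the gradient magnitude bounded even though the two feature maps come from different domains.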
4.1 Experiment Settings

All our experiments are implemented with PyTorch. We use the COCO dataset to evaluate our method. Following common practice [23, 24], we train our models on the union of the 80k train images and a 35k subset of the validation images (trainval35k), and test on the remaining 5k validation images (minival). All results are evaluated with mmAP, i.e. mAP@[0.5, 0.95], using the common single-scale test protocol. For both training and inference, we resize each image to 800 pixels on the shorter edge. The training batch size is 16 in total over 8 GPUs. We mainly use the so-called 1x schedule for training, which refers to 90k iterations with two learning rate decays, at the 60k-th and 80k-th iterations. We use almost the same training protocol for our Step 1 and Step 2 training, as well as for all the counterpart baseline models, with two exceptions. For Step 1, we find that adding L3 from the beginning causes L3 to drop to nearly zero (the network somehow finds a way to cheat, leading to terrible results), so we add 30k warmup iterations without L3, which we find sufficient to solve the problem. For Step 2, we remove the auxiliary loss in the last 10k iterations, which yields minor improvements. Since our training pipeline involves two steps, the total number of iterations doubles. For fair comparison, we also provide a 2x schedule for the baseline models, which refers to 180k iterations with two learning rate decays, at the 120k-th and 160k-th iterations.
4.2 Main Results
In order to show the effectiveness of our method on different detection frameworks, we evaluate it on RetinaNet, FCOS and FPN, which are representative baselines of one-stage detectors, anchor-free methods and two-stage frameworks, respectively. We also evaluate our method on various commonly-used backbones, including ResNet-50, ResNet-101 and Deformable Convolutional Networks (DCNs).
Results are presented in Table 1. Compared with the counterparts under the 1x schedule, our method achieves performance gains of over 2% on both ResNet-50 and ResNet-101 backbones. On ResNet-101-DCN, there is still an improvement of 1.4% on average. Compared with the baselines under the 2x schedule, the gap narrows but remains considerable, which suggests that our improvements are not mainly brought by more training iterations. It is worth noting that although our training pipeline doubles the total number of iterations, the label encoding function can usually be reused among different backbones (see the next subsection); therefore, in practice we only need to run Step 1 once for different models.
4.3 Ablation Study
4.3.1 Step 1: is joint optimization required?
In Sec. 3.2, to optimize Eq. 9 we proposed a joint optimization scheme that takes all three loss terms into account. Recall that in Step 1, only the learned label encoding function is carried over to the next stage. As a result, one may ask whether the auxiliary structure, i.e. the detection backbone, is really necessary during training. In other words, can we use only L1 (the AutoEncoder reconstruction loss) in Eq. 9 for this step? If so, the training step could be further simplified. Unfortunately, we find this is not the case.
To validate the argument, we conduct a comparison in which L2 and L3 are removed from Eq. 9 when deriving the label encoding function; other settings, such as Step 2, remain unchanged. The results are listed in Table 2, where the modified counterparts are marked as “reconstruction loss only”. We compare them on RetinaNet with ResNet-50 and ResNet-101 backbones. Clearly, without the auxiliary backbone, our method (although it still outperforms the baselines) shows a significant degradation in precision.
Table 2 (excerpt): with the reconstruction loss only, our method reaches 36.9 mmAP on ResNet-50 and 39.0 mmAP on ResNet-101.
Discussion and remarks.
In Step 1, although the existence of the auxiliary structures is vital, we find the exact weights in the backbone are actually less important. From Eq. 7 and Eq. 8, we know that the Step-1 backbone does not affect the optimization of $\theta_e$ directly; it only contributes to the optimization of $\theta_h$. Also, unlike $\theta_h$, which is inherited for initialization in Step 2, the backbone parameters are exactly reinitialized in Step 2. Therefore, the trained auxiliary detection backbone of Step 1 is completely discarded.
This observation inspires an interesting assumption: is the final performance actually insensitive to the detailed backbone architecture in Step 1? We verify this conjecture by using different backbones in Step 1 and Step 2. As reported in Table 3, we use ResNet-50 as the auxiliary backbone in the first step, whereas in Step 2 the final detection backbone is ResNet-101. Compared with the model whose backbones in both stages are ResNet-101, the performance remains almost the same. This finding suggests another practical advantage of our method: the label encoding function can be pretrained once but reused for multiple detectors with different backbones, as long as they share the same detection head. This property greatly reduces the cost of practical applications.
Table 3 columns: Step 1 Backbone, Step 2 Backbone, mmAP (%).
4.3.2 Is Step 1 alone sufficient?
In Step 1, we only aim to derive the label encoding function for later intermediate supervision. However, the training framework in Step 1 is quite similar to that in Step 2, and it contains a detection model (the auxiliary structure) that can directly be used for testing. Intuitively, the detection model in Step 1 should improve as well; one may even guess that Step 1 alone is sufficient. We show the ablation in Table 4: we run Step 1 only and test the performance of its detection model (the auxiliary structure), comparing multiple models with the ResNet-50 backbone. Step-1-only training can indeed improve the detection model over the baseline, but it alone is clearly not sufficient.
4.3.3 Step 2: do intermediate supervision and initialization matter?
In Step 2, we use two techniques to facilitate the optimization: intermediate supervision on the backbone and initialization of the detection head. Table 5 shows the ablation studies on them. The baseline framework is RetinaNet with a ResNet-50 backbone. We also combine these studies with the case of using the reconstruction loss only in Step 1 (see Table 2). The results suggest that both techniques contribute to the final performance.
Table 5 columns: Step 1, Supervision, Initialization, mmAP (%).
4.4 Comparison with Knowledge Distillation
Our two-step pipeline resembles Knowledge Distillation (KD). In fact, if we trained an object detector alone in Step 1 instead of our label encoding function with the joint framework, and used it for supervision in Step 2, the method would become KD. In Table 6 we compare our method with this alternative, denoted as “Vanilla KD”. On a lightweight backbone, i.e. MobileNet, our method reaches performance similar to Knowledge Distillation, although we only use a label encoding function instead of a heavy ResNet-50 that extracts “real” features. On a heavier backbone, i.e. ResNet-50, our method outperforms KD with ResNet-50 and ResNet-101 teachers, whose improvements are limited by the small performance gap between teacher and student. Knowledge Distillation requires a sufficiently strong teacher network, which is usually hard to find when the student network is already strong; our method, on the other hand, is not subject to this limitation.
Table 6 (excerpt): with the label encoding function as the teacher, our method reaches 29.8 mmAP on MobileNet and 38.4 mmAP on ResNet-50.
4.5 Performance on Mask Prediction
So far we have focused on object detection. However, the discussion in Sec. 1 and Sec. 3 is based on the structure and optimization of detection networks rather than the object detection task itself, so our method is likely to extend to other tasks with similar frameworks. We test our method on Mask R-CNN, which produces mask predictions for instance segmentation but has a framework similar to FPN. It is worth noting that for Mask R-CNN, we use masks instead of boxes as the input to the label encoding function. Results are presented in Table 7; they indicate that our method improves mask prediction as well.
In this paper, we propose a new training pipeline for object detection systems. We design a label encoding function and utilize it to introduce intermediate supervision on the detection backbone. Our method is generally applicable and efficient, adding no extra cost at inference time. To demonstrate its effectiveness, we evaluate it on a variety of detection models and obtain consistent improvements.
-  (2013) Label-embedding for attribute-based classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 819–826. Cited by: §2.0.3.
-  (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §3.4.4.
-  (2010) Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems, pp. 163–171. Cited by: §2.0.3.
-  (2018) Cascade r-cnn: delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6154–6162. Cited by: §1.
-  (1997) Multitask learning. Machine learning 28 (1), pp. 41–75. Cited by: §2.0.1.
-  (2017) Learning efficient object detection models with knowledge distillation. In Advances in Neural Information Processing Systems, pp. 742–751. Cited by: §2.0.2.
-  (2020) A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709. Cited by: §1.
-  (2018) Darkrank: accelerating deep metric learning via cross sample similarities transfer. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §2.0.2.
-  (2017) Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pp. 764–773. Cited by: §1, §4.2.
-  (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §3.2.
-  (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §1, §1.
-  (2019) Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722. Cited by: §1.
-  (2019) Rethinking imagenet pre-training. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4918–4927. Cited by: §1.
-  (2017) Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969. Cited by: §1, §1, §2.0.1, §4.5.
-  (2015) Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence 37 (9), pp. 1904–1916. Cited by: §1.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §3.4.2, §4.2.
-  (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: §1, §2.0.2.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §3.4.2.
-  (2015) Deeply-supervised nets. In Artificial intelligence and statistics, pp. 562–570. Cited by: §2.0.1.
-  (2019) Multi-task self-supervised object detection via recycling of bounding box annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4984–4993. Cited by: §2.0.1.
-  (2017) Mimicking very efficient network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6356–6364. Cited by: 2nd item, §2.0.2.
-  (2018) Detnet: a backbone network for object detection. arXiv preprint arXiv:1804.06215. Cited by: §1.
-  (2017) Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §1, §1, §1, §3.4.3, §4.1, §4.2, Table 1.
-  (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §1, §1, §1, §3.4.3, §4.1, §4.2, §4.3.3, Table 1.
-  (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1, §3.4.1, §3.4.2, §4.1.
-  (2016) Ssd: single shot multibox detector. In European conference on computer vision, pp. 21–37. Cited by: §1, §1.
-  (2016) Face model compression by distilling knowledge from neurons. In Thirtieth AAAI Conference on Artificial Intelligence, Cited by: §2.0.2.
-  (2018) Regularizing deep networks by modeling and predicting label structure. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5629–5638. Cited by: §2.0.3.
-  (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: §4.1.
-  (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §1, §1.
-  (2017) YOLO9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271. Cited by: §1, §1.
-  (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §1, §1, §2.0.1.
-  (2014) FitNets: hints for thin deep nets. arXiv preprint arXiv:1412.6550. Cited by: §2.0.2.
-  (2017) DSOD: learning deeply supervised object detectors from scratch. In Proceedings of the IEEE international conference on computer vision, pp. 1919–1927. Cited by: 2nd item, §1, §2.0.1.
-  (2017) Label embedding network: learning label representation for soft training of deep networks. arXiv preprint arXiv:1710.10393. Cited by: §2.0.3.
-  (2015) Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9. Cited by: §2.0.1.
-  (2016) Recurrent neural network training with dark knowledge transfer. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5900–5904. Cited by: §2.0.2.
-  (2019) FCOS: fully convolutional one-stage object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9627–9636. Cited by: §1, §1, §3.4.3, §4.2, Table 1.
-  (2019) Distilling object detectors with fine-grained feature imitation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4933–4942. Cited by: §2.0.2.
-  (2018) KDGAN: knowledge distillation with generative adversarial networks. In Advances in Neural Information Processing Systems, pp. 775–786. Cited by: §2.0.2.
-  (2016) A discriminative feature learning approach for deep face recognition. In European conference on computer vision, pp. 499–515. Cited by: §2.0.1.
-  (2016) Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer. arXiv preprint arXiv:1612.03928. Cited by: §2.0.2.
-  (2018) ExFuse: enhancing feature fusion for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 269–284. Cited by: §2.0.1.
-  (2017) Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2881–2890. Cited by: §2.0.1.
-  (2019) ScratchDet: training single-shot object detectors from scratch. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2268–2277. Cited by: §1.
-  (2019) Deformable ConvNets v2: more deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9308–9316. Cited by: 2nd item, §1, §2.0.2.
Appendix 0.A Architecture of Label Encoding Function
[Table: architecture of the label encoding function, with columns Stage, Block, Kernel Size, Stride and Output Channels.]
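Before the convolutional stages above can consume the ground-truth annotations, the boxes and class labels need to be rasterized into an image-like tensor. Below is a minimal sketch of such a rasterization step, assuming a one-hot per-class layout; the function and parameter names are illustrative, not taken from our codebase:

```python
import numpy as np

def render_label_map(boxes, classes, num_classes, height, width):
    """Rasterize ground-truth boxes into a (num_classes, H, W) one-hot map.

    boxes:   list of (x1, y1, x2, y2) in pixel coordinates
    classes: list of integer class ids, one per box
    """
    label_map = np.zeros((num_classes, height, width), dtype=np.float32)
    for (x1, y1, x2, y2), c in zip(boxes, classes):
        # Clip the box to the image bounds, then fill its class channel.
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), width), min(int(y2), height)
        label_map[c, y1:y2, x1:x2] = 1.0
    return label_map
```

The resulting map is spatially aligned with the input image, so a small convolutional encoder can map it to the same resolution as the backbone's output features.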
Appendix 0.B Derivation of Eqs. (7) and (8)
In Sec. 1 and Sec. 3 we introduce our model as follows:

\[
(w_b^*, w_h^*) = \mathop{\arg\min}_{w_b,\, w_h} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \mathcal{L}_{det}\big(h(f(x; w_b); w_h),\, y\big) + \lambda \,\big\| f(x; w_b) - g(y; w_e^*) \big\|_2^2 \Big], \tag{10}
\]

where $f(\cdot\,; w_b)$, $h(\cdot\,; w_h)$ and $g(\cdot\,; w_e)$ denote the backbone, the detection head and the label encoding function respectively, and the optimal weights of the label encoding function ($w_e^*$) are derived from:

\[
w_e^* = \mathop{\arg\min}_{w_e} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \mathcal{L}_{det}\big(h(g(y; w_e); w_h^*),\, y\big) \Big]. \tag{11}
\]

Clearly, there exist nested dependencies on the two variables $w_h^*$ and $w_e^*$: Eq. 10 requires $w_e^*$, while Eq. 11 in turn requires $w_h^*$. Thus the above equations are infeasible to compute directly.

Notice that in Eq. 11, $w_h^*$ actually acts as a constant in the optimization. We define a function $G$ as follows:

\[
G(w_e; w_h) = \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \mathcal{L}_{det}\big(h(g(y; w_e); w_h),\, y\big) \Big], \tag{12}
\]

with which Eq. 11 can be rewritten as:

\[
w_e^* = \mathop{\arg\min}_{w_e} \; G(w_e; w_h^*). \tag{13}
\]

Since $w_h^*$ is in turn determined by $w_e^*$ through Eq. 10, Eq. 13 is actually a fixed-point condition:

\[
w_e^* = \mathop{\arg\min}_{w_e} \; G\big(w_e;\, w_h^*(w_e^*)\big), \tag{14}
\]

where $w_h^*(\cdot)$ makes the dependency of the solution of Eq. 10 on the label encoding weights explicit. Eq. 14 suggests that we need to find a certain $\hat{w}_h$ satisfying that the optimal point of the partial function $G(\,\cdot\,; \hat{w}_h)$ is also $w_e^*$, i.e. $\mathop{\arg\min}_{w_e} G(w_e; \hat{w}_h) = w_e^*$. Since Eq. 14 is nontrivial to compute directly, it motivates us to approximate the pair $(w_e^*, \hat{w}_h)$ with the following joint optimization:

\[
(w_e^*, \hat{w}_h) \approx \mathop{\arg\min}_{w_e,\, w_h} \; G(w_e; w_h), \tag{15}
\]

since a joint minimizer of $G$ is a pair in which each variable is optimal given the other, which approximately satisfies the fixed-point condition. This derives our formulations in the text.
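As a sanity check on the derivation above, the joint optimization of the label encoding function and an auxiliary head can be illustrated on a toy scalar problem, where each module is a single multiplicative weight and the detection loss is a squared error. This is purely an illustrative sketch we construct here, not our actual training pipeline:

```python
import numpy as np

# Toy stand-ins: g(y; we) = we * y (label encoder), h(z; wh) = wh * z (head).
# The joint objective is G(we, wh) = E[(h(g(y)) - y)^2]; its minima satisfy
# we * wh = 1, i.e. the head inverts the encoder -- a scalar analogue of
# jointly optimizing the encoder/head pair.
ys = np.array([1.0, -1.0])

we, wh = 0.5, 0.5
lr = 0.05
for _ in range(1000):
    residual = wh * we * ys - ys            # h(g(y)) - y for each sample
    grad_we = np.mean(2.0 * residual * wh * ys)
    grad_wh = np.mean(2.0 * residual * we * ys)
    we -= lr * grad_we
    wh -= lr * grad_wh

# After joint gradient descent, each variable is (approximately) optimal
# given the other: the product we * wh is close to 1.
```

The same structure carries over to the real setting: jointly training the label encoding function with a detection head on labels alone yields an encoder whose output the head can decode back to the ground truth.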
Appendix 0.C Feature Visualization
In this section we analyze our method by visualizing feature maps. We pick the second level of the multi-scale feature maps from RetinaNet, computed on images from the validation set. We visualize each feature map by its intensity, i.e., the norm of the feature vector at each spatial position: the larger the norm (the stronger the intensity), the brighter the corresponding pixel appears in the figure. Visualization results are shown in Fig. 3. The four columns are: (a) the original images; (b) features from the baseline model; (c) features from our model; (d) features from our encoding function, which serve as the optimization target for (c). Compared with the baseline features, which show a clear boundary at the outline of each object, our features are closer to the ground-truth boxes: under the supervision from (d), they extend beyond the object outline and “try to reach the box edges”. We believe this is beneficial for the subsequent instance extraction performed by the detection head. Note that Fig. 3 is only a spatial projection of the features; information across channels is not visible here.
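Such an intensity map takes only a few lines to compute: collapse the channel dimension with a per-position norm, then rescale to [0, 1] for display. A minimal sketch assuming a (C, H, W) feature array (the function name is illustrative):

```python
import numpy as np

def intensity_map(feature):
    """Collapse a (C, H, W) feature map to an (H, W) brightness image.

    Each spatial position's intensity is the norm of its C-dimensional
    feature vector; the map is then min-max normalized to [0, 1].
    """
    norms = np.linalg.norm(feature, axis=0)   # (H, W) per-position norm
    lo, hi = norms.min(), norms.max()
    if hi - lo < 1e-12:                       # constant map: render as black
        return np.zeros_like(norms)
    return (norms - lo) / (hi - lo)
```

Stronger activations show up brighter; note that collapsing the channel dimension discards per-channel structure, consistent with the caveat above.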