LGD: Label-guided Self-distillation for Object Detection

09/23/2021
by Peizhen Zhang, et al.

In this paper, we propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation). Previous studies rely on a strong pretrained teacher to provide instructive knowledge for distillation, which can be unavailable in real-world scenarios. Instead, we generate instructive knowledge by inter- and intra-object relation modeling, requiring only student representations and regular labels. Specifically, our framework involves sparse label-appearance encoding, inter-object relation adaptation, and intra-object knowledge mapping to obtain the instructive knowledge. The modules in LGD are trained end-to-end with the student detector and are discarded at inference. Empirically, LGD obtains decent results on various detectors, datasets, and extended tasks like instance segmentation. For example, on the MS-COCO dataset, LGD improves RetinaNet with ResNet-50 under 2x single-scale training from 36.2 mAP, and also benefits much stronger detectors like FCOS with ResNeXt-101 DCN v2 under 2x multi-scale training (46.1 mAP). On the CrowdHuman dataset, LGD boosts mMR by 2.3. Compared with the classical teacher-based method FGFI, LGD not only performs better without requiring a pretrained teacher, but also with 51% lower training cost beyond the inherent student learning.
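The pipeline sketched in the abstract (label-appearance encoding, inter-object relation adaptation, and a distillation loss against student features) can be illustrated with a minimal NumPy sketch. All function names, shapes, and the use of single-head dot-product attention here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_labels(boxes, classes, num_classes, dim):
    """Sparse label-appearance encoding: one embedding per annotated object,
    combining class identity and box geometry (illustrative)."""
    cls_onehot = np.eye(num_classes)[classes]              # (N, C)
    geo = np.asarray(boxes, dtype=float)                   # (N, 4)
    feats = np.concatenate([cls_onehot, geo], axis=1)      # (N, C+4)
    W = rng.standard_normal((feats.shape[1], dim)) * 0.1   # stand-in projection
    return feats @ W                                       # (N, dim)

def relation_adapt(label_emb, student_feats):
    """Inter-object relation adaptation reduced to label embeddings
    attending over flattened student pixel features."""
    scale = np.sqrt(label_emb.shape[1])
    attn = softmax(label_emb @ student_feats.T / scale)    # (N, HW)
    return attn @ student_feats                            # (N, dim)

def distill_loss(instructive, student_obj_feats):
    """Intra-object knowledge mapping reduced to an MSE between the
    instructive representation and pooled student object features."""
    return float(np.mean((instructive - student_obj_feats) ** 2))

# Toy inputs: 3 annotated objects, 16-dim features, 10 classes,
# an 8x8 feature map flattened to 64 pixel vectors.
student_map = rng.standard_normal((64, 16))
boxes = [[0.1, 0.1, 0.4, 0.4], [0.5, 0.2, 0.9, 0.6], [0.2, 0.6, 0.7, 0.9]]
classes = [1, 3, 7]

labels = encode_labels(boxes, classes, num_classes=10, dim=16)
instructive = relation_adapt(labels, student_map)   # teacher-free "knowledge"
pooled = rng.standard_normal((3, 16))               # stand-in pooled student feats
loss = distill_loss(instructive, pooled)
print(loss)
```

In the actual framework the distillation modules would be trained jointly with the student and dropped at inference; here the random projections simply stand in for learned parameters to show the data flow.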



Related research

- 06/09/2021, Distilling Image Classifiers in Object Detectors: Knowledge distillation constitutes a simple yet effective way to improve...
- 01/26/2022, Adaptive Instance Distillation for Object Detection in Autonomous Driving: In recent years, knowledge distillation (KD) has been widely used as an ...
- 05/30/2022, Towards Efficient 3D Object Detection with Knowledge Distillation: Despite substantial progress in 3D object detection, advanced 3D detecto...
- 12/31/2022, Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching: Considering the computation complexity, we propose a Guided Hybrid Quant...
- 07/12/2022, HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors: Conventional knowledge distillation (KD) methods for object detection ma...
- 04/04/2023, Label-guided Attention Distillation for Lane Segmentation: Contemporary segmentation methods are usually based on deep fully convol...
- 12/09/2022, Co-training 2^L Submodels for Visual Recognition: We introduce submodel co-training, a regularization method related to co...
