Distilling Object Detectors via Decoupled Features

03/26/2021
by   Jianyuan Guo, et al.

Knowledge distillation is a widely used paradigm for transferring information from a complicated teacher network to a compact student network while maintaining strong performance. Different from image classification, object detectors are much more sophisticated, with multiple loss functions in which the features carrying semantic information are entangled. In this paper, we point out that the information of features derived from regions excluding objects is also essential for distilling the student detector, which is usually ignored in existing approaches. In addition, we elucidate that features from different regions should be assigned different importance during distillation. To this end, we present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector. Specifically, two levels of decoupled features are processed for embedding useful information into the student: decoupled features from the neck and decoupled proposals from the classification head. Extensive experiments on various detectors with different backbones show that the proposed DeFeat is able to surpass state-of-the-art distillation methods for object detection. For example, DeFeat improves ResNet50 based Faster R-CNN from 37.4% to 40.9% mAP, and improves ResNet50 based RetinaNet from 36.5% to 39.7% mAP on the COCO benchmark. Code is available at https://github.com/ggjy/DeFeat.pytorch.
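The core mechanism the abstract describes, treating object and non-object regions separately when imitating the teacher's neck features, can be illustrated with a short sketch. The snippet below is a minimal PyTorch rendition of this decoupled feature-imitation loss, not the authors' implementation (see the linked repository for that): the function name `defeat_feature_loss`, the box-to-feature-map projection, and the `fg_weight`/`bg_weight` values are all illustrative assumptions. Ground-truth boxes are rasterized into a binary foreground mask on one FPN level, and the squared student-teacher feature difference is averaged separately over object and background regions, each with its own weight.

```python
import torch

def defeat_feature_loss(feat_s, feat_t, gt_boxes, stride,
                        fg_weight=2.0, bg_weight=4.0):
    """Decoupled feature imitation loss (illustrative sketch).

    feat_s, feat_t: student / teacher neck features, shape (N, C, H, W).
    gt_boxes: list of (num_boxes, 4) tensors per image, (x1, y1, x2, y2)
              in input-image coordinates.
    stride:   downsampling factor of this FPN level.
    fg_weight, bg_weight: assumed loss weights; DeFeat's point is that the
              background term gets its own (non-zero) weight rather than
              being discarded.
    """
    n, c, h, w = feat_s.shape
    mask = torch.zeros(n, 1, h, w, device=feat_s.device)
    for i, boxes in enumerate(gt_boxes):
        for x1, y1, x2, y2 in boxes:
            # Project each ground-truth box onto the feature map and mark
            # the covered cells as foreground (object region).
            mask[i, 0,
                 int(y1 / stride):int(y2 / stride) + 1,
                 int(x1 / stride):int(x2 / stride) + 1] = 1.0

    diff = (feat_s - feat_t) ** 2          # element-wise imitation error
    fg_area = mask.sum().clamp(min=1.0) * c
    bg_area = (1 - mask).sum().clamp(min=1.0) * c
    fg_loss = (diff * mask).sum() / fg_area        # object regions
    bg_loss = (diff * (1 - mask)).sum() / bg_area  # non-object regions
    return fg_weight * fg_loss + bg_weight * bg_loss
```

Normalizing each term by its own region area keeps the foreground and background losses comparable regardless of how much of the image the objects cover; the relative weighting between the two terms is exactly the knob that the decoupling exposes.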

Related research

06/09/2021
Distilling Image Classifiers in Object Detectors
Knowledge distillation constitutes a simple yet effective way to improve...

08/28/2023
Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection
Knowledge distillation (KD) has shown potential for learning compact mod...

04/07/2019
Learning Metrics from Teachers: Compact Networks for Image Embedding
Metric learning networks are used to compute image embeddings, which are...

07/05/2022
PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient
Knowledge distillation (KD) is a widely-used technique to train compact m...

10/07/2022
IDa-Det: An Information Discrepancy-aware Distillation for 1-bit Detectors
Knowledge distillation (KD) has been proven to be useful for training co...

06/09/2019
Distilling Object Detectors with Fine-grained Feature Imitation
State-of-the-art CNN based recognition models are often computationally ...

04/24/2023
Function-Consistent Feature Distillation
Feature distillation makes the student mimic the intermediate features o...
