Distilling Object Detectors With Global Knowledge

10/17/2022
by   Sanli Tang, et al.

Knowledge distillation learns a lightweight student model that mimics a cumbersome teacher model. Existing methods regard the knowledge as the features of individual instances or their relations, i.e., instance-level (local) knowledge from the teacher model. However, empirical studies show that local knowledge is quite noisy in object detection tasks, especially for blurred, occluded, or small instances. A more intrinsic approach is therefore to measure the representations of instances with respect to a group of common basis vectors in the feature spaces of both the teacher and the student detectors, i.e., global knowledge, so that distillation can be performed as a feature-space alignment. To this end, a novel prototype generation module (PGM) is proposed to find the common basis vectors, dubbed prototypes, in the two feature spaces. A robust distilling module (RDM) is then applied to construct the global knowledge based on the prototypes and to filter out noisy global and local knowledge by measuring the discrepancy between the representations in the two feature spaces. Experiments with Faster R-CNN and RetinaNet on the PASCAL VOC and COCO datasets show that our method achieves the best performance for distilling object detectors with various backbones, even surpassing the teacher model. We also show that existing methods can be easily combined with global knowledge to obtain further improvement. Code is available at https://github.com/hikvision-research/DAVAR-Lab-ML.
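The core idea — representing each instance as coefficients over shared prototypes and down-weighting instances whose representations disagree across the two feature spaces — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine-softmax coefficients, the `np.exp(-disc)` weighting, and all function names here are simplifying assumptions standing in for the actual PGM and RDM.

```python
import numpy as np

def prototype_coefficients(features, prototypes):
    """Represent each instance feature as soft coefficients over a set of
    prototypes via a cosine-similarity softmax (a simplified stand-in for
    the representation produced with the PGM's basis vectors)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ p.T                                   # (n_instances, n_prototypes)
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def robust_distill_loss(student_feats, teacher_feats, s_protos, t_protos):
    """Global-knowledge distillation sketch: align the student's prototype
    coefficients with the teacher's, down-weighting instances whose global
    representations disagree across the two spaces (a hypothetical stand-in
    for the RDM's noise filtering)."""
    cs = prototype_coefficients(student_feats, s_protos)
    ct = prototype_coefficients(teacher_feats, t_protos)
    disc = np.abs(cs - ct).sum(axis=1)    # per-instance discrepancy
    weights = np.exp(-disc)               # noisy instances get low weight
    per_inst = ((cs - ct) ** 2).sum(axis=1)
    return float((weights * per_inst).mean())
```

In a real detector, `student_feats` and `teacher_feats` would be RoI or FPN features of the same instances in the two models, and the prototypes would be learned jointly rather than fixed.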

