D^3ETR: Decoder Distillation for Detection Transformer

11/17/2022
by Xiaokang Chen, et al.

While various knowledge distillation (KD) methods for CNN-based detectors have proven effective at improving small student models, baselines and recipes for DETR-based detectors have yet to be established. In this paper, we focus on the transformer decoder of DETR-based detectors and explore KD methods for it. The outputs of the transformer decoder are unordered, so there is no direct correspondence between the predictions of the teacher and the student, which poses a challenge for knowledge distillation. To this end, we propose MixMatcher, which aligns the decoder outputs of DETR-based teachers and students by mixing two teacher-student matching strategies: Adaptive Matching and Fixed Matching. Adaptive Matching applies bipartite matching to adaptively pair the outputs of the teacher and the student in each decoder layer, while Fixed Matching fixes the correspondence between teacher and student outputs by sharing object queries: the teacher's fixed object queries are fed to the student's decoder as an auxiliary group. Based on MixMatcher, we build Decoder Distillation for DEtection TRansformer (D^3ETR), which distills knowledge in decoder predictions and attention maps from the teacher to the student. D^3ETR shows superior performance on various DETR-based detectors with different backbones. For example, with Conditional DETR-R101-C5 as the teacher, D^3ETR improves Conditional DETR-R50-C5 by 7.8/2.4 mAP under 12/50-epoch training settings.
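At its core, MixMatcher must decide which teacher prediction supervises which student prediction. The snippet below is a minimal sketch of the Adaptive Matching step, assuming a matching cost built from an L1 distance between predicted boxes and a cross-entropy term between class distributions, solved with the Hungarian algorithm (scipy's linear_sum_assignment); the function name, cost weights, and exact cost formulation are illustrative assumptions rather than the paper's definition.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


@torch.no_grad()
def adaptive_match(student_boxes, teacher_boxes, student_logits, teacher_logits,
                   box_weight=5.0, cls_weight=1.0):
    """Pair student and teacher decoder outputs via bipartite matching.

    Hypothetical sketch of Adaptive Matching (cost terms and weights are assumptions).
    student_boxes, teacher_boxes:   [N_s, 4], [N_t, 4] normalized (cx, cy, w, h)
    student_logits, teacher_logits: [N_s, C], [N_t, C] class logits
    Returns (student_idx, teacher_idx) index tensors of matched pairs.
    """
    # Pairwise L1 distance between predicted boxes, shape [N_s, N_t]
    box_cost = torch.cdist(student_boxes, teacher_boxes, p=1)

    # Classification disagreement: cross-entropy between teacher probabilities
    # and student log-probabilities, shape [N_s, N_t]
    s_logp = F.log_softmax(student_logits, dim=-1)            # [N_s, C]
    t_prob = F.softmax(teacher_logits, dim=-1)                # [N_t, C]
    cls_cost = -(t_prob.unsqueeze(0) * s_logp.unsqueeze(1)).sum(-1)

    # Combined cost and optimal one-to-one assignment (Hungarian algorithm)
    cost = box_weight * box_cost + cls_weight * cls_cost
    s_idx, t_idx = linear_sum_assignment(cost.cpu().numpy())
    return torch.as_tensor(s_idx), torch.as_tensor(t_idx)
```

Fixed Matching sidesteps this assignment step: because the teacher's fixed object queries are also fed to the student's decoder as an auxiliary group, the i-th auxiliary student output is distilled directly against the i-th teacher output.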

