Beyond Classification: Knowledge Distillation using Multi-Object Impressions

10/27/2021
by   Gaurav Kumar Nayak, et al.

Knowledge Distillation (KD) uses the training data as a transfer set to move knowledge from a complex network (Teacher) to a smaller network (Student). Several recent works have identified scenarios where the training data is unavailable due to privacy or sensitivity concerns, and have proposed solutions under this restrictive constraint for the classification task. Unlike existing works, we, for the first time, solve a much more challenging problem: KD for object detection with zero knowledge about the training data and its statistics. Our proposed approach prepares pseudo-targets and synthesizes corresponding samples (termed "Multi-Object Impressions") using only the pretrained Faster R-CNN Teacher network. We use this pseudo-dataset as a transfer set to perform zero-shot KD for object detection. We demonstrate the efficacy of the proposed method through several ablations and extensive experiments on benchmark datasets such as KITTI, Pascal VOC, and COCO. With no training samples, our approach achieves a respectable mAP of 64.2 while distilling from a ResNet-18 Teacher of 73.3 mAP.
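The synthesis step can be pictured with a short, hedged sketch: starting from noise, an image is optimized so that the pretrained teacher "detects" a sampled set of pseudo-boxes and labels in it. The sketch below is illustrative only; it uses torchvision's off-the-shelf fasterrcnn_resnet50_fpn rather than the paper's ResNet-18 teacher, and a uniform random box/label sampler that stands in for the paper's pseudo-target preparation.

```python
import torch
import torchvision

# Hedged sketch of one "Multi-Object Impression": optimize a noise image so a
# pretrained Faster R-CNN teacher detects a sampled pseudo-target in it.
# The teacher backbone and the pseudo-target sampler are illustrative
# placeholders, not the paper's exact choices.

teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
teacher.train()                      # train mode: forward returns a loss dict
for p in teacher.parameters():
    p.requires_grad_(False)          # only the image is optimized

# Hypothetical pseudo-target: a few randomly placed boxes with random labels.
num_objects = 3
xy = torch.rand(num_objects, 2) * 250.0           # top-left corners
wh = 30.0 + torch.rand(num_objects, 2) * 100.0    # widths and heights
targets = [{
    "boxes": torch.cat([xy, xy + wh], dim=1),        # [x1, y1, x2, y2]
    "labels": torch.randint(1, 91, (num_objects,)),  # COCO-style class ids
}]

impression = torch.rand(3, 400, 400, requires_grad=True)
optimizer = torch.optim.Adam([impression], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    # The detector's own training losses measure how well the current image
    # "contains" the pseudo-targets; minimizing them shapes the image.
    loss_dict = teacher([impression.clamp(0.0, 1.0)], targets)
    loss = sum(loss_dict.values())   # RPN + classification + box-regression
    loss.backward()
    optimizer.step()

synthesized = impression.detach().clamp(0.0, 1.0)  # one transfer-set sample
```

In the full pipeline, many such impressions would form the transfer set, and the Student detector would then be trained on them with the Teacher's outputs as supervision.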


