Robust and Accurate Object Detection via Self-Knowledge Distillation

11/14/2021
by Weipeng Xu, et al.

Object detection has achieved promising performance on clean datasets, but how to achieve a better trade-off between adversarial robustness and clean precision is still under-explored. Adversarial training is the mainstream method for improving robustness, but most such works sacrifice clean precision, relative to standard training, in order to gain robustness. In this paper, we propose Unified Decoupled Feature Alignment (UDFA), a novel fine-tuning paradigm that achieves better performance than existing methods by fully exploring the combination of self-knowledge distillation and adversarial training for object detection. We first use decoupled foreground/background features to construct a self-knowledge distillation branch between the clean feature representation from the pretrained detector (serving as the teacher) and the adversarial feature representation from the student detector. We then explore self-knowledge distillation from a new angle by decoupling the original branch into a self-supervised learning branch and a new self-knowledge distillation branch. Extensive experiments on the PASCAL-VOC and MS-COCO benchmarks show that UDFA surpasses both standard training and state-of-the-art adversarial training methods for object detection. For example, compared with the teacher detector, our approach on GFLV2 with a ResNet-50 backbone improves clean precision by 2.2 AP on PASCAL-VOC; compared with SOTA adversarial training methods, it improves clean precision by 1.6 AP while improving adversarial robustness by 0.5 AP. Our code will be available at https://github.com/grispeut/udfa.
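For intuition, here is a minimal, self-contained sketch in PyTorch of how the branches described in the abstract could be wired together. Everything here is a hypothetical stand-in rather than the authors' implementation (see the linked repository for that): `TinyDetector`, the one-step FGSM attack, the MSE-based foreground/background alignment, and all loss weights are assumptions made for illustration.

```python
# Hypothetical sketch of UDFA-style fine-tuning losses; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDetector(nn.Module):
    """Toy stand-in for a detector: a conv backbone plus a dense prediction head."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 4, 1)

    def forward(self, x):
        return self.head(self.backbone(x))


def fgsm_perturb(student, images, labels, task_loss_fn, eps=2.0 / 255):
    """One-step FGSM attack on the student's task loss (a simple stand-in for
    the adversarial-example generation used in adversarial training)."""
    images = images.clone().detach().requires_grad_(True)
    loss = task_loss_fn(student(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + eps * grad.sign()).clamp(0.0, 1.0).detach()


def decoupled_align_loss(feat, target, fg_mask):
    """Align foreground and background features separately, echoing the
    decoupled fore/back-ground alignment; fg_mask is 1 on foreground pixels
    and 0 elsewhere, at the feature map's spatial size."""
    err = F.mse_loss(feat, target, reduction="none")
    bg_mask = 1.0 - fg_mask
    fg = (err * fg_mask).sum() / fg_mask.sum().clamp(min=1.0)
    bg = (err * bg_mask).sum() / bg_mask.sum().clamp(min=1.0)
    return fg + bg


def udfa_step(teacher, student, images, labels, fg_mask, task_loss_fn,
              w_kd=1.0, w_ssl=1.0):
    """One fine-tuning step combining the three ingredients from the abstract:
    the task loss on adversarial inputs, a self-knowledge distillation branch
    (teacher clean features vs. student adversarial features), and a
    self-supervised branch (student clean vs. student adversarial features)."""
    adv_images = fgsm_perturb(student, images, labels, task_loss_fn)

    with torch.no_grad():                      # frozen pretrained teacher
        t_clean = teacher.backbone(images)
    s_adv = student.backbone(adv_images)
    s_clean = student.backbone(images)

    task_loss = task_loss_fn(student.head(s_adv), labels)
    kd_loss = decoupled_align_loss(s_adv, t_clean, fg_mask)            # distillation branch
    ssl_loss = decoupled_align_loss(s_adv, s_clean.detach(), fg_mask)  # self-supervised branch
    return task_loss + w_kd * kd_loss + w_ssl * ssl_loss


# Usage with random tensors; in practice labels and fg_mask come from annotations.
teacher, student = TinyDetector().eval(), TinyDetector()
images = torch.rand(2, 3, 32, 32)
labels = torch.rand(2, 4, 32, 32)                  # placeholder dense targets
fg_mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
loss = udfa_step(teacher, student, images, labels, fg_mask, F.mse_loss)
loss.backward()
```

In this sketch the clean student features are detached in the self-supervised branch so they act as a fixed target; whether gradients should flow through both views is a design choice the sketch does not take from the paper.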


Related research

12/08/2020 · Using Feature Alignment can Improve Clean Average Precision and Adversarial Robustness in Object Detection
The 2D object detection in clean images has been a well studied topic, b...

06/20/2019 · GAN-Knowledge Distillation for one-stage Object Detection
Convolutional neural networks have a significant improvement in the accu...

03/10/2022 · Prediction-Guided Distillation for Dense Object Detection
Real-world object detection models should be cheap and accurate. Knowled...

03/23/2022 · Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection
Recent studies in deepfake detection have yielded promising results when...

09/20/2022 · Rethinking Data Augmentation in Knowledge Distillation for Object Detection
Knowledge distillation (KD) has shown its effectiveness for object detec...

03/09/2023 · Smooth and Stepwise Self-Distillation for Object Detection
Distilling the structured information captured in feature maps has contr...

03/21/2023 · Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data
Large-scale deep learning models have achieved great performance based o...
