Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data

by Yuzheng Wang, et al.

Large-scale deep learning models achieve strong performance when trained on large-scale datasets, and existing Adversarial Training (AT) can further improve the robustness of these large models. However, such large models are difficult to deploy on mobile devices, and the effect of AT on small models is very limited. In addition, data privacy concerns (e.g., face data and diagnosis reports) may make the original data unavailable, so training must rely on data-free knowledge distillation. To tackle these issues, we propose a challenging novel task called Data-Free Adversarial Robustness Distillation (DFARD), which aims to train small, easily deployable, robust models without access to the original data. We find that naively combining existing techniques degrades model performance due to fixed training objectives and scarce information content. First, we design an interactive strategy for more efficient knowledge transfer, finding more suitable training objectives at each epoch. Then, we explore an adaptive balance method to suppress information loss and extract more data information than previous methods. Experiments show that our method improves baseline performance on the novel task.
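To make the DFARD setting concrete, the sketch below shows the general shape of data-free adversarial robustness distillation: a generator synthesizes pseudo-inputs in place of the unavailable original data, adversarial examples are crafted on those inputs, and a small student is trained to match a frozen robust teacher's soft labels. Everything here is a toy illustration with linear models and hand-derived gradients, not the paper's actual architecture, losses, or interactive/adaptive strategies.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Frozen "robust teacher": a fixed random linear classifier, a stand-in for a
# large adversarially trained network (illustrative assumption).
W_t = rng.normal(size=(10, 32))
# Small student to be distilled without any original data.
W_s = np.zeros((10, 32))
# Generator maps noise to pseudo-inputs, substituting for the missing dataset.
W_g = rng.normal(scale=0.1, size=(32, 8))

lr, eps = 0.1, 0.05
for step in range(200):
    z = rng.normal(size=(16, 8))
    x = np.tanh(z @ W_g.T)            # synthesized batch, shape (16, 32)

    p = softmax(x @ W_t.T)            # teacher soft labels (treated as constant)
    q = softmax(x @ W_s.T)            # student predictions
    # dKL/d(student logits) = q - p; chain to the input for an FGSM-style step.
    g_x = (q - p) @ W_s               # gradient of the KL loss w.r.t. x
    x_adv = x + eps * np.sign(g_x)    # adversarial pseudo-example

    # Student step: match the teacher's soft labels on adversarial inputs.
    p_adv = softmax(x_adv @ W_t.T)
    q_adv = softmax(x_adv @ W_s.T)
    W_s -= lr * (q_adv - p_adv).T @ x_adv / len(x_adv)

    # Generator step: ascend the same KL so synthesized samples stay
    # informative, i.e. samples on which teacher and student still disagree.
    grad_Wg = ((g_x * (1 - x**2)).T @ z) / len(z)
    W_g += lr * grad_Wg

# Final teacher-student KL on adversarial pseudo-data.
kl = np.sum(p_adv * (np.log(p_adv + 1e-12) - np.log(q_adv + 1e-12)), axis=1).mean()
print(round(float(kl), 4))
```

The min-max structure (student minimizes the distillation loss, generator maximizes it) is the common skeleton of data-free distillation; the paper's contributions replace the fixed objectives here with an interactive, adaptively balanced scheme.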

