Model Robustness Meets Data Privacy: Adversarial Robustness Distillation without Original Data

03/21/2023
by Yuzheng Wang, et al.

Large-scale deep learning models achieve strong performance when trained on large-scale datasets, and existing Adversarial Training (AT) can further improve their robustness. However, such large models are difficult to deploy on mobile devices, and the effect of AT on small models is limited. In addition, data privacy concerns (e.g., face data and diagnosis reports) may make the original training data unavailable, so training must rely on data-free knowledge distillation. To tackle these issues, we propose a challenging new task called Data-Free Adversarial Robustness Distillation (DFARD), which aims to train small, easily deployable, robust models without access to the original data. We find that simply combining existing techniques degrades model performance because of fixed training objectives and scarce information content. We therefore design an interactive strategy that selects more suitable training objectives at each epoch for more efficient knowledge transfer, and explore an adaptive balance method that suppresses information loss and retains more data information than previous methods. Experiments show that our method improves over the baselines on this novel task.
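For orientation only, the PyTorch sketch below shows what a generic data-free adversarial robustness distillation loop can look like: a generator synthesizes inputs on which teacher and student disagree, and the student is trained to match a robust teacher's soft labels on adversarial versions of those synthetic inputs. This is not the paper's DFARD method (the interactive knowledge-transfer strategy and the adaptive balance are not reproduced); the Generator architecture, the disagreement-based generator loss, the PGD hyperparameters, and the temperature T are all illustrative assumptions.

```python
# Generic data-free adversarial robustness distillation sketch (NOT the DFARD
# algorithm from the paper). Assumes a pretrained robust teacher is available.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps random noise to synthetic 32x32 RGB images (hypothetical design)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

def pgd_attack(model, x, y_soft, eps=8/255, alpha=2/255, steps=5):
    """Craft adversarial examples that push the student away from the teacher's
    soft labels (a common choice; eps/alpha/steps are assumed values)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), y_soft,
                        reduction="batchmean")
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = torch.min(torch.max(x_adv + alpha * grad.sign(), x - eps), x + eps)
    return x_adv.detach()

def distill_step(teacher, student, generator, opt_s, opt_g,
                 batch=64, z_dim=100, T=4.0):
    z = torch.randn(batch, z_dim)

    # 1) Generator step: synthesize inputs on which teacher and student disagree,
    #    so the student keeps being trained where it is still weak.
    x = generator(z)
    with torch.no_grad():
        t_logits = teacher(x)
    g_loss = -F.l1_loss(student(x), t_logits)  # maximize disagreement
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # 2) Student step: match the robust teacher's soft labels on adversarial
    #    versions of the synthetic inputs (temperature T is an assumption).
    x = generator(z).detach()
    with torch.no_grad():
        soft = F.softmax(teacher(x) / T, dim=1)
    x_adv = pgd_attack(student, x, soft)
    s_loss = F.kl_div(F.log_softmax(student(x_adv) / T, dim=1), soft,
                      reduction="batchmean") * (T * T)
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
    return g_loss.item(), s_loss.item()

# Hypothetical usage (teacher/student constructors are placeholders):
#   teacher = load_pretrained_robust_teacher()   # e.g. an adversarially trained net
#   student, generator = SmallCNN(), Generator()
#   opt_s = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)
#   opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
#   for step in range(num_steps):
#       distill_step(teacher, student, generator, opt_s, opt_g)
```

In this generic setup the generator and student are trained in alternation, which is one common way to replace the missing original data; the paper's contribution lies in how the training objectives and the balance between losses are adapted during training, which the sketch does not attempt to capture.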


Related research:
- 08/18/2021: Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better
- 03/14/2022: On the benefits of knowledge distillation for adversarial robustness
- 12/10/2020: Large-Scale Generative Data-Free Distillation
- 02/23/2021: Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation
- 11/14/2021: Robust and Accurate Object Detection via Self-Knowledge Distillation
- 09/21/2019: Positive-Unlabeled Compression on the Cloud
- 10/04/2022: A Study on the Efficiency and Generalization of Light Hybrid Retrievers
