Delving into Data: Effectively Substitute Training for Black-box Attack

04/26/2021
by   Wenxuan Wang, et al.

Deep models are vulnerable to adversarial samples. In the black-box setting, where the architecture and weights of the attacked model are inaccessible, training a substitute model to craft adversarial attacks has attracted wide attention. Previous substitute training approaches steal the knowledge of the target model from real or synthetic training data, without exploring what kind of data can further improve the transferability between the substitute and target models. In this paper, we propose a novel perspective on substitute training that focuses on designing the distribution of the data used in the knowledge-stealing process. More specifically, a diverse data generation module is proposed to synthesize large-scale data with a wide distribution, and an adversarial substitute training strategy is introduced to focus on data distributed near the decision boundary. Combining these two modules further boosts the consistency between the substitute and target models, which greatly improves the effectiveness of the adversarial attack. Extensive experiments demonstrate the efficacy of our method against state-of-the-art competitors under both non-targeted and targeted attack settings. Detailed visualizations and analysis are also provided to help understand the advantage of our method.
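The two-stage idea in the abstract (first query the black-box target on broadly distributed synthetic data, then concentrate further queries on points near the substitute's current decision boundary) can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's actual method: the target is a hidden linear classifier returning only hard labels, the substitute is a small logistic regression, and the "boundary-focused" step simply jitters the lowest-margin samples and re-queries the target.

```python
import math
import random

random.seed(0)

# Hypothetical black-box target: only hard labels are observable.
def target_query(x):
    return 1.0 if 1.5 * x[0] - 2.0 * x[1] + 0.3 > 0 else 0.0

def train_substitute(data, labels, steps=300, lr=0.5):
    """Fit a logistic-regression substitute to the queried labels
    with plain batch gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    n = len(data)
    for _ in range(steps):
        gw = [0.0, 0.0]
        gb = 0.0
        for x, y in zip(data, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            e = p - y
            gw[0] += e * x[0]
            gw[1] += e * x[1]
            gb += e
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b

# Round 1: broad synthetic queries (stand-in for the diverse
# data generation module).
X = [(random.gauss(0, 2), random.gauss(0, 2)) for _ in range(300)]
y = [target_query(x) for x in X]
w, b = train_substitute(X, y)

# Round 2: resample near the substitute's current decision boundary
# (stand-in for the boundary-focused adversarial training strategy):
# take the lowest-margin points, jitter them, and query the target again.
by_margin = sorted(X, key=lambda x: abs(w[0] * x[0] + w[1] * x[1] + b))
near = [(x[0] + random.gauss(0, 0.2), x[1] + random.gauss(0, 0.2))
        for x in by_margin[:80]]
X2 = X + near
y2 = [target_query(x) for x in X2]
w2, b2 = train_substitute(X2, y2)

# Agreement between substitute and target on held-out points is the
# "consistency" that transfer-based attacks rely on.
test = [(random.gauss(0, 2), random.gauss(0, 2)) for _ in range(500)]
agree = sum((w2[0] * t[0] + w2[1] * t[1] + b2 > 0) == (target_query(t) > 0.5)
            for t in test) / len(test)
print(f"substitute/target agreement: {agree:.3f}")
```

The second round illustrates why boundary-focused data matters: samples far from the boundary are labeled correctly by almost any substitute, so the informative queries are the low-margin ones where the two models can still disagree.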

Related research

- DST: Dynamic Substitute Training for Data-free Black-box Attack (04/03/2022)
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability (11/21/2021)
- Black-box Adversarial Example Generation with Normalizing Flows (07/06/2020)
- Data-free Black-box Attack based on Diffusion Model (07/24/2023)
- Transferable Physical Attack against Object Detection with Separable Attention (05/19/2022)
- PETGEN: Personalized Text Generation Attack on Deep Sequence Embedding-based Classification Models (09/14/2021)
- Transferable Unlearnable Examples (10/18/2022)
