Data-Free Adversarial Perturbations for Practical Black-Box Attack

03/03/2020
by ZhaoXin Huan, et al.

Neural networks are vulnerable to adversarial examples: malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box transferability, meaning that adversarial examples crafted for one model can fool another model. However, existing black-box attack methods require samples from the training data distribution to improve the transferability of adversarial examples across different models. Because of this data dependence, the fooling ability of adversarial perturbations applies only when training data are accessible. In this paper, we present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge of the training data distribution. In the practical black-box setting, where attackers have access to neither the target model nor its training data, our method achieves high fooling rates on target models and outperforms other universal adversarial perturbation methods. Our results show empirically that current deep learning models remain at risk even when attackers have no access to training data.
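The abstract does not spell out the optimization, so the following is a minimal PyTorch sketch of the general idea only: optimizing a single universal perturbation against a white-box surrogate model so that it disrupts intermediate feature activations, using random proxy inputs in place of training data. The surrogate choice (VGG-16), the hooked layer index, the random-uniform input prior, the feature-divergence loss, and all hyperparameters (eps, learning rate, step count) are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch of a data-free universal perturbation, NOT the paper's
# exact algorithm: optimize one perturbation so it maximally disturbs a
# surrogate model's intermediate features on random proxy inputs.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained surrogate standing in for the unknown black-box target
# (an assumption: the attacker only needs *some* white-box substitute).
surrogate = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
surrogate = surrogate.eval().to(device)

eps = 10.0 / 255.0                                    # L_inf budget (illustrative)
delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([delta], lr=5e-3)

# Capture activations of a mid-network layer via a forward hook; disturbing
# intermediate features is one way such perturbations transfer across models.
feats = {}
surrogate.features[16].register_forward_hook(
    lambda module, inputs, output: feats.update(out=output)
)

for step in range(200):
    # "Data-free": proxy inputs drawn from a simple random prior instead of
    # samples from the target's training distribution.
    x = torch.rand(8, 3, 224, 224, device=device)

    surrogate(x)
    clean = feats["out"].detach()
    surrogate((x + delta).clamp(0.0, 1.0))
    perturbed = feats["out"]

    # Maximize the divergence between clean and perturbed feature maps.
    loss = -F.mse_loss(perturbed, clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)                       # keep delta imperceptible

# delta can then be added to arbitrary inputs and evaluated against a
# black-box target model.
```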

Related research

- Intermediate Level Adversarial Attack for Enhanced Transferability (11/20/2018)
- Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations (01/24/2018)
- Adversarial Threats to DeepFake Detection: A Practical Perspective (11/19/2020)
- Cross-Domain Transferability of Adversarial Perturbations (05/28/2019)
- Simple Black-Box Adversarial Perturbations for Deep Networks (12/19/2016)
- On the Matrix-Free Generation of Adversarial Perturbations for Black-Box Attacks (02/18/2020)
- Who's Afraid of Adversarial Transferability? (05/02/2021)
