Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation

03/09/2022
by Qilong Zhang, et al.

In recent years, the adversarial vulnerability of deep neural networks (DNNs) has attracted increasing attention. Among all threat models, no-box attacks are the most practical yet extremely challenging: they rely neither on any knowledge of the target model or a similar substitute model, nor on access to a dataset for training a new substitute model. Although a recent method has attempted such an attack in a loose sense, its performance is unsatisfactory and the computational overhead of training is expensive. In this paper, we move a step forward and show the existence of a training-free adversarial perturbation under the no-box threat model, which can successfully attack different DNNs in real time. Motivated by our observation that the high-frequency component (HFC) resides in low-level features and plays a crucial role in classification, we attack an image mainly by manipulating its frequency components. Specifically, the perturbation is crafted by suppressing the original HFC and adding noisy HFC. We empirically analyze the requirements of effective noisy HFC and show that it should be regionally homogeneous, repeating, and dense. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our proposed no-box method: it attacks ten well-known models with an average success rate of 98.13%, outperforming state-of-the-art no-box attacks by 29.39%. Furthermore, our method is even competitive with mainstream transfer-based black-box attacks.
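To make the frequency manipulation concrete, the following is a minimal sketch, assuming the attack reduces to (1) suppressing the image's original HFC with a Gaussian low-pass filter and (2) adding a regionally homogeneous, repeating, dense noisy HFC (illustrated here by a blocky checkerboard pattern). The function names, filter sigma, block size, and perturbation budget are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of a training-free, frequency-based no-box perturbation.
# Assumptions: Gaussian low-pass filtering stands in for HFC suppression,
# and a tiled constant-block checkerboard stands in for the noisy HFC.
import numpy as np
from scipy.ndimage import gaussian_filter


def suppress_hfc(image: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Suppress the high-frequency component with a per-channel Gaussian low-pass filter."""
    return np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])], axis=-1
    )


def noisy_hfc(shape, block: int = 4, amplitude: float = 16.0) -> np.ndarray:
    """Build a regionally homogeneous, repeating, dense high-frequency pattern:
    a checkerboard of constant-valued blocks tiled over the whole image."""
    h, w, c = shape
    ys, xs = np.meshgrid(np.arange(h) // block, np.arange(w) // block, indexing="ij")
    checker = np.where((ys + xs) % 2 == 0, amplitude, -amplitude)  # constant within each block
    return np.repeat(checker[..., None], c, axis=-1)


def no_box_perturb(image: np.ndarray, epsilon: float = 16.0) -> np.ndarray:
    """Craft the adversarial example: low-pass the image, add noisy HFC,
    then project into the L_inf ball around the original and the valid pixel range."""
    x = image.astype(np.float32)
    adv = suppress_hfc(x) + noisy_hfc(x.shape)
    adv = np.clip(adv, x - epsilon, x + epsilon)  # hypothetical L_inf budget
    return np.clip(adv, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    img = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # stand-in for a real image
    adv = no_box_perturb(img)
    print(adv.shape, np.abs(adv.astype(int) - img.astype(int)).max())
```

The checkerboard is only one pattern that satisfies the stated requirements (regionally homogeneous, repeating, dense); the paper's actual noisy-HFC design may differ.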


