ABBA: Saliency-Regularized Motion-Based Adversarial Blur Attack

02/10/2020
by   Qing Guo, et al.

Deep neural networks are vulnerable to noise-based adversarial examples, which mislead the networks by adding random-like noise. However, such examples are rarely encountered in the real world and are easily perceived when large perturbations are needed to preserve their transferability across different models. In this paper, we identify a new attack, the motion-based adversarial blur attack (ABBA), that generates visually natural motion-blurred adversarial examples even under relatively large perturbations, achieving much better transferability than noise-based methods. To this end, we first formulate the kernel-prediction-based attack, in which the input image is convolved with per-pixel kernels and misclassification is achieved by tuning the kernel weights. To generate visually more natural and plausible examples, we further propose saliency-regularized adversarial kernel prediction, where the salient region serves as a moving object and the predicted kernels are regularized to produce natural visual effects. Moreover, the attack can be further enhanced by adaptively tuning the translations of the object and background. Extensive experiments on the NeurIPS'17 adversarial competition dataset validate the effectiveness of ABBA across various kernel sizes, translations, and regions. Furthermore, we study the effect of state-of-the-art GAN-based deblurring mechanisms on our method.
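To make the kernel-prediction-based attack concrete, the sketch below shows one possible PyTorch formulation: each pixel gets its own softmax-normalized k x k kernel, and the kernel logits are optimized by gradient ascent on the classification loss so the blurred image is misclassified. This is only an illustrative sketch, not the authors' released implementation; the kernel size, step count, Adam optimizer, and initialization are assumptions, and the saliency regularization and object/background translations described in the abstract are omitted.

```python
# Illustrative sketch of a kernel-prediction-based blur attack (assumed
# hyperparameters; saliency regularization and translations omitted).
import torch
import torch.nn.functional as F

def pixelwise_blur(image, kernel_logits, k=15):
    """Convolve each pixel with its own k x k kernel (kernel prediction)."""
    b, c, h, w = image.shape
    # Softmax over the k*k taps keeps every per-pixel kernel non-negative
    # and summing to one, so the output remains a plausible blurred image.
    kernels = F.softmax(kernel_logits, dim=1)              # (b, k*k, h, w)
    patches = F.unfold(image, k, padding=k // 2)           # (b, c*k*k, h*w)
    patches = patches.view(b, c, k * k, h, w)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)     # (b, c, h, w)

def blur_attack(model, image, label, k=15, steps=50, lr=0.1):
    """Tune per-pixel kernel weights so the blurred image is misclassified."""
    b, _, h, w = image.shape
    # Start near an identity kernel: a large logit on the center tap.
    logits = torch.zeros(b, k * k, h, w, device=image.device)
    logits[:, (k * k) // 2] = 5.0
    logits.requires_grad_(True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        adv = pixelwise_blur(image, logits, k)
        loss = -F.cross_entropy(model(adv), label)  # maximize CE => misclassify
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pixelwise_blur(image, logits, k).detach()
```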


