Blurring Fools the Network – Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring

12/21/2020
by   Chenchen Zhao, et al.

Existing pixel-level adversarial attacks on neural networks may be ineffective in real scenarios, since pixel-level changes to the data cannot be fully delivered to the network after camera capture and multiple image-preprocessing steps. In contrast, in this paper we argue from another perspective that Gaussian blurring, a common image-preprocessing technique, can itself be adversarial in certain situations, exposing the network to real-world attacks. We first propose an adversarial attack demo named peak suppression (PS), which suppresses the values of peak elements in the features of the data. Building on the blurring spirit of PS, we then apply Gaussian blurring to the data to investigate its potential influence on, and threats to, the performance of the network. Experimental results show that PS and well-designed Gaussian blurring can form adversarial attacks that completely change the classification results of a well-trained target network. Given the strong physical significance and wide applications of Gaussian blurring, the proposed approach is also capable of conducting real-world attacks.
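To make the two operations concrete, the following NumPy sketch illustrates the general ideas behind the abstract: a normalized Gaussian kernel convolved with an image (the blurring step), and a toy "peak suppression" that zeroes the largest fraction of values in a feature array. This is a minimal illustration, not the paper's actual attack; the function names, kernel size, and suppression fraction are assumptions for demonstration only.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # weights sum to 1, so mean intensity is preserved

def gaussian_blur(img, size=5, sigma=1.0):
    """Blur a 2-D grayscale image by direct convolution with edge padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out

def peak_suppress(features, frac=0.1):
    """Zero out the largest `frac` of values in a feature array
    (a toy stand-in for suppressing peak feature elements)."""
    flat = features.ravel().copy()
    k = max(1, int(frac * flat.size))
    peak_idx = np.argsort(flat)[-k:]  # indices of the k largest values
    flat[peak_idx] = 0.0
    return flat.reshape(features.shape)
```

In an actual attack pipeline, one would feed the blurred (or peak-suppressed) input back into the target classifier and tune the kernel parameters until the predicted label flips; that optimization loop is specific to the paper and is not reproduced here.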


Related research

06/21/2021 · Delving into the pixels of adversarial samples
Despite extensive research into adversarial attacks, we do not know how ...

05/31/2023 · Graph-based methods coupled with specific distributional distances for adversarial attack detection
Artificial neural networks are prone to being fooled by carefully pertur...

12/21/2020 · Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
Convolutional neural networks (CNN) have been more and more applied in m...

04/30/2022 · Optimizing One-pixel Black-box Adversarial Attacks
The output of Deep Neural Networks (DNN) can be altered by a small pertu...

08/21/2023 · Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
Causal Neural Network models have shown high levels of robustness to adv...

06/02/2022 · Adversarial RAW: Image-Scaling Attack Against Imaging Pipeline
Deep learning technologies have become the backbone for the development ...

06/06/2019 · Should Adversarial Attacks Use Pixel p-Norm?
Adversarial attacks aim to confound machine learning systems, while rema...
