D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack

06/12/2020
by Qiuling Xu, et al.

We propose a novel technique that generates natural-looking adversarial examples by bounding the variations induced in internal activation values at some deep layer(s), using a distribution quantile bound and a polynomial barrier loss function. By bounding model internals instead of individual pixels, our attack admits perturbations that are closely coupled with the existing features of the original input, so the generated examples look natural while having diverse and often substantial pixel distances from the original input. Enforcing per-neuron distribution quantile bounds accounts for the non-uniformity of internal activation values. Our evaluation on ImageNet and five different model architectures demonstrates that the attack is highly effective. Compared to state-of-the-art pixel-space, semantic, and feature-space attacks, ours achieves the same attack success/confidence level while producing much more natural-looking adversarial perturbations. These perturbations piggyback on existing local features and are not subject to any fixed pixel bound.
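To make the mechanism concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: estimate per-neuron quantile bounds from a chosen layer's activations on benign inputs, then optimize the input so the model is fooled while a polynomial penalty keeps that layer's activations inside the bounds. All names, the quantile level q, the barrier degree, and the optimizer choice are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def per_neuron_quantile_bounds(clean_acts, q=0.05):
    """Per-neuron lower/upper bounds: the q and 1-q empirical
    quantiles of a batch of clean activations (shape [N, ...])."""
    flat = clean_acts.flatten(1)            # [N, num_neurons]
    lo = torch.quantile(flat, q, dim=0)     # [num_neurons]
    hi = torch.quantile(flat, 1.0 - q, dim=0)
    return lo, hi

def polynomial_barrier(acts, lo, hi, degree=4):
    """Polynomial penalty standing in for the paper's barrier loss:
    zero inside [lo, hi], grows as violation**degree outside, so
    gradients push activations back within the per-neuron bounds."""
    flat = acts.flatten(1)
    over = F.relu(flat - hi)                # amount above upper bound
    under = F.relu(lo - flat)               # amount below lower bound
    return ((over + under) ** degree).sum(dim=1).mean()

def dsb_attack(model, feature_layer, x, y_target, lo, hi,
               steps=200, lr=0.01, barrier_weight=1.0):
    """Targeted attack sketch: drive the prediction toward y_target
    while the chosen layer's activations stay inside the quantile
    bounds. Pixels are only clipped to the valid image range [0, 1];
    there is no fixed per-pixel L_p bound."""
    acts = {}
    handle = feature_layer.register_forward_hook(
        lambda mod, inp, out: acts.update(a=out))
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        logits = model(x_adv)               # forward hook fills acts["a"]
        loss = (F.cross_entropy(logits, y_target)
                + barrier_weight * polynomial_barrier(acts["a"], lo, hi))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0.0, 1.0)          # stay a valid image
    handle.remove()
    return x_adv.detach()

In this reading, lo and hi would be estimated once per layer from benign images (e.g. lo, hi = per_neuron_quantile_bounds(clean_acts)); a smaller q gives looser bounds and hence a larger admissible feature-space variation.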


