Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity

07/03/2021
by   Yajie Wang, et al.

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: malicious images whose perturbations are intended to be visually imperceptible. In practice, however, the perturbations crafted by most existing attacks under tight norm bounds, although small in magnitude, are still perceptible to humans. Such perturbations also achieve limited success rates against black-box models or models protected by defenses such as noise-reduction filters. To address these problems, we propose the Demiguise Attack, which crafts “unrestricted” perturbations guided by Perceptual Similarity. Specifically, we generate powerful and photorealistic adversarial examples by manipulating semantic information under a Perceptual Similarity constraint. Although the resulting perturbations are of large magnitude, the adversarial examples remain friendly to the human visual system (HVS). We further extend widely used attacks with our approach, markedly enhancing adversarial effectiveness while improving imperceptibility. Extensive experiments show that the proposed method not only outperforms various state-of-the-art attacks in fooling rate, transferability, and robustness against defenses, but also effectively strengthens existing attacks. In addition, we observe that our implementation can simulate illumination and contrast changes that occur in real-world scenarios, which helps expose the blind spots of DNNs.
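The abstract describes replacing tight norm bounds with a perceptual-similarity constraint. The sketch below is a minimal illustration of that general idea, not the paper's exact algorithm: it optimizes an unrestricted perturbation to maximize the classifier's loss while penalizing the LPIPS perceptual distance to the clean image. The `lpips` package, the `perceptual_attack` function, and all hyperparameters here are assumptions chosen for illustration.

```python
# Sketch only: perceptual-similarity-constrained adversarial perturbation.
# Not the Demiguise Attack as published; hyperparameters are illustrative.
import torch
import torch.nn.functional as F
import lpips                                   # third-party LPIPS metric (assumed installed)
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet50(weights=ResNet50_Weights.DEFAULT).eval().to(device)
percep = lpips.LPIPS(net="alex").to(device)    # perceptual similarity metric

# ImageNet normalization expected by the torchvision model
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def perceptual_attack(x, label, steps=200, lr=0.01, lam=5.0):
    """x: clean image batch in [0, 1]; label: true class indices."""
    x, label = x.to(device), label.to(device)
    delta = torch.zeros_like(x, requires_grad=True)    # unrestricted perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        logits = model((x_adv - mean) / std)
        cls_loss = F.cross_entropy(logits, label)       # to be maximized
        perc_loss = percep(x_adv * 2 - 1, x * 2 - 1).mean()  # LPIPS expects [-1, 1]
        loss = -cls_loss + lam * perc_loss              # trade off attack strength vs. perceptibility
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0.0, 1.0)
```

The key design choice mirrored here is that the perturbation budget is governed by a perceptual distance term rather than an explicit norm bound, so the perturbation may be large in pixel space as long as it stays perceptually close to the original image.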


