AdvDrop: Adversarial Attack to DNNs by Dropping Information

08/20/2021
by Ranjie Duan, et al.

Humans can easily recognize visual objects even when most information is lost: a cartoon, for example, preserves only contours yet remains recognizable. For Deep Neural Networks (DNNs), however, recognizing such abstract objects (visual objects with lost information) remains a challenge. In this work, we investigate this issue from an adversarial viewpoint: does the performance of DNNs degrade even when images lose only a little information? To this end, we propose a novel adversarial attack, named AdvDrop, which crafts adversarial examples by dropping existing information from images. Whereas most previous adversarial attacks explicitly add disturbing information to clean images, our work explores the adversarial robustness of DNN models from the opposite perspective: dropping imperceptible details to craft adversarial examples. We demonstrate the effectiveness of AdvDrop through extensive experiments and show that this new type of adversarial example is more difficult for current defense systems to defend against.
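To make the "dropping information" idea concrete, the sketch below quantizes an image's block-wise DCT coefficients, JPEG-style, which discards fine high-frequency detail while keeping the image visually close to the original. This illustrates only the detail-dropping operation; the actual AdvDrop attack additionally optimizes the quantization table so that the dropped information causes misclassification. The function and parameter names here are hypothetical, not from the paper's code.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)  # scale DC row for orthonormality
    return c

def drop_information(image, q_step=20.0, block=8):
    """Drop detail by rounding each block's DCT coefficients to
    multiples of q_step (assumes image dims divisible by block)."""
    c = dct_matrix(block)
    h, w = image.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block].astype(np.float64)
            coeffs = c @ patch @ c.T                      # forward 2D DCT
            coeffs = np.round(coeffs / q_step) * q_step   # drop information
            out[i:i + block, j:j + block] = c.T @ coeffs @ c  # inverse DCT
    return out

# A synthetic 16x16 gradient: the processed version loses fine detail
# but stays numerically close to the original.
img = np.arange(256, dtype=np.float64).reshape(16, 16)
dropped = drop_information(img)
```

Coarser `q_step` values discard more information; AdvDrop's contribution is searching this quantization space adversarially rather than using a fixed perceptual table.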


Related research

- Learning Defense Transformers for Counterattacking Adversarial Examples (03/13/2021)
- Internal Wasserstein Distance for Adversarial Attack and Defense (03/13/2021)
- Scalable Attribution of Adversarial Attacks via Multi-Task Learning (02/25/2023)
- Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples (03/03/2018)
- Adversarial Robustness through the Lens of Causality (06/11/2021)
- Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff (08/31/2023)
- Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters (06/06/2023)
