Attention, Please! Adversarial Defense via Attention Rectification and Preservation

11/24/2018
by   Shangxi Wu, et al.

This study provides a new understanding of the adversarial attack problem by examining the correlation between adversarial attacks and changes in visual attention. In particular, we observe that: (1) images with incomplete attention regions are more vulnerable to adversarial attacks; and (2) successful adversarial attacks produce deviated and scattered attention maps. Accordingly, we design an attention-based adversarial defense framework that simultaneously rectifies the attention map used for prediction and preserves the attention area between adversarial and original images. The problem of adding iteratively attacked samples is also discussed in the context of visual attention change. We hope the attention-related data analysis and defense solution in this study shed some light on the mechanism behind adversarial attacks and facilitate future adversarial defense/attack model design.
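The defense described above combines a standard classification objective with two attention terms: one that rectifies the attention map (keeping attention mass on the relevant region) and one that preserves attention between the original image and its adversarial counterpart. A minimal sketch of such a combined loss, assuming attention maps have already been extracted (e.g. via Grad-CAM); the function names, weighting scheme, and `foreground_mask` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def attention_preservation_loss(attn_orig, attn_adv):
    """Penalize deviation between original and adversarial attention maps
    (mean squared error over the map)."""
    return float(np.mean((attn_orig - attn_adv) ** 2))

def attention_rectification_loss(attn, foreground_mask):
    """Encourage attention mass to concentrate inside a (hypothetical)
    foreground region, penalizing scattered attention."""
    attn = attn / (attn.sum() + 1e-8)                 # normalize to a distribution
    return float(1.0 - (attn * foreground_mask).sum())  # mass leaking outside foreground

def defense_loss(cls_loss, attn_orig, attn_adv, foreground_mask,
                 alpha=1.0, beta=1.0):
    """Combined objective: classification + rectification + preservation.
    alpha/beta are illustrative trade-off weights."""
    return (cls_loss
            + alpha * attention_rectification_loss(attn_adv, foreground_mask)
            + beta * attention_preservation_loss(attn_orig, attn_adv))
```

Both attention terms vanish when the adversarial attention map matches the original and stays inside the foreground region, so an unperturbed, well-attended input incurs only the classification loss.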


