Spatially Correlated Patterns in Adversarial Images

11/21/2020
by Nandish Chattopadhyay, et al.

Adversarial attacks have proved to be a major impediment to progress towards reliable machine learning solutions. Carefully crafted perturbations, imperceptible to human vision, can be added to images to force misclassification by an otherwise high-performing neural network. To better understand the key contributors to such structured attacks, we searched for and studied spatially co-located patterns in the distribution of pixels in the input space. In this paper, we propose a framework for segregating and isolating regions within an input image that are particularly critical to classification (during inference), to adversarial vulnerability, or to both. We assert that during inference the trained model looks at a specific region of the image, which we call the Region of Importance (RoI), while the attacker looks at a region to alter or modify, which we call the Region of Attack (RoA). As our observations illustrate, this segregation can also be used to design a post-hoc adversarial defence: the region of the image that is highly vulnerable to adversarial attacks but unimportant for classification is blocked out, a step we call neutralization. We establish the theoretical setup for formalising the processes of segregation, isolation and neutralization, and substantiate it through empirical analysis on standard benchmarking datasets. The findings strongly indicate that mapping features into the input space preserves the significant patterns typically observed in the feature space while adding considerable interpretability, and therefore simplifies potential defensive mechanisms.
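The neutralization step described above lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes input-gradient saliency as a stand-in for RoI estimation, the per-pixel magnitude of a one-step L2 gradient attack as a stand-in for RoA estimation, and a toy PyTorch model; keep_frac, eps, and the zero fill value are illustrative placeholders.

```python
# A minimal sketch of the RoI / RoA / neutralization idea described above.
# Assumptions (not from the paper): gradient saliency stands in for the RoI,
# the per-pixel magnitude of an L2 gradient attack stands in for the RoA, and
# the toy model, keep_frac, eps, and zero-fill are illustrative placeholders.
import torch
import torch.nn.functional as F

def _top_fraction_mask(scores, keep_frac):
    # Boolean mask selecting the keep_frac largest-scoring pixels per image.
    k = max(1, int(keep_frac * scores[0].numel()))
    thresh = scores.flatten(1).topk(k, dim=1).values[:, -1]
    return scores >= thresh.view(-1, 1, 1, 1)

def region_of_importance(model, x, y, keep_frac=0.2):
    # Proxy RoI: pixels whose input gradient most affects the true-class logit.
    x = x.clone().requires_grad_(True)
    true_logit = model(x).gather(1, y.view(-1, 1)).sum()
    (grad,) = torch.autograd.grad(true_logit, x)
    return _top_fraction_mask(grad.abs().sum(dim=1, keepdim=True), keep_frac)

def region_of_attack(model, x, y, eps=1.0, keep_frac=0.2):
    # Proxy RoA: pixels most perturbed by a one-step L2 gradient attack.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    delta = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
    return _top_fraction_mask(delta.abs().sum(dim=1, keepdim=True), keep_frac)

def neutralize(x, roi, roa, fill=0.0):
    # Block out pixels that are attack-critical (RoA) but not
    # classification-critical (RoI); keep everything else untouched.
    mask = roa & ~roi
    return torch.where(mask.expand_as(x), torch.full_like(x, fill), x)

if __name__ == "__main__":
    # Toy model and data, just to exercise the pipeline end to end.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(8, 10))
    x, y = torch.rand(2, 3, 32, 32), torch.tensor([3, 7])
    roi = region_of_importance(model, x, y)
    roa = region_of_attack(model, x, y)
    x_clean = neutralize(x, roi, roa)
    print("neutralized pixels per image:", (roa & ~roi).flatten(1).sum(dim=1))
```

In this sketch, neutralize zeroes exactly the pixels that fall in the RoA but outside the RoI, matching the intuition in the abstract: a pixel the attacker exploits but the classifier does not rely on can be blocked out at little cost to clean accuracy.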


Related research:

10/13/2020 · Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images
Deep neural network image classifiers are reported to be susceptible to ...

05/22/2023 · Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation
State-of-the-art deep neural networks have proven to be highly powerful ...

03/31/2022 · Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond
Rain removal aims to remove rain streaks from images/videos and reduce t...

05/31/2019 · Real-Time Adversarial Attacks
In recent years, many efforts have demonstrated that modern machine lear...

12/17/2020 · A Hierarchical Feature Constraint to Camouflage Medical Adversarial Attacks
Deep neural networks (DNNs) for medical images are extremely vulnerable ...

06/06/2019 · Should Adversarial Attacks Use Pixel p-Norm?
Adversarial attacks aim to confound machine learning systems, while rema...

12/30/2020 · Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers
In recent years, the trackers based on Siamese networks have emerged as ...
