Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images

10/13/2020
by He Zhao, et al.

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images to mislead a classifier. Recently, various adversarial attack methods have been proposed, most of which add small perturbations to all of the pixels of a real image. We find that a considerable amount of the perturbation produced by some widely-used attacks may contribute little to attacking a classifier, yet it makes the adversarial image more easily detectable by both humans and adversarial-attack detection algorithms. It is therefore important to concentrate the perturbations on the most vulnerable pixels of an image, i.e., those that change a classifier's prediction most readily. Knowing the pixel vulnerability, we can make an existing attack's adversarial images more realistic and less detectable with smaller perturbations while keeping its attack performance unchanged. Moreover, the discovered vulnerability helps us better understand the weaknesses of deep classifiers. Taking an information-theoretic perspective, we propose a probabilistic approach for automatically finding the pixel vulnerability of an image, which is compatible with and improves many existing adversarial attacks.
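To make the core idea concrete, here is a minimal sketch of how perturbations can be restricted to the most vulnerable pixels of an image. It is not the paper's probabilistic, information-theoretic method: the function name masked_fgsm, the use of input-gradient magnitude as a vulnerability proxy, and all parameter values are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, y, eps=8 / 255, top_frac=0.1):
    """FGSM-style attack restricted to the most 'vulnerable' pixels.

    Vulnerability is approximated here by input-gradient magnitude,
    a simple stand-in for the paper's probabilistic vulnerability
    estimate. Only the top `top_frac` fraction of pixels is perturbed.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]

    # Per-pixel vulnerability proxy: gradient magnitude summed over channels.
    score = grad.abs().sum(dim=1, keepdim=True)              # (B, 1, H, W)
    k = max(1, int(top_frac * score[0].numel()))
    thresh = score.flatten(1).topk(k, dim=1).values[:, -1]   # k-th largest per image
    mask = (score >= thresh.view(-1, 1, 1, 1)).float()       # 1 on vulnerable pixels

    # Perturb only the selected pixels; leave the rest untouched.
    x_adv = x + eps * grad.sign() * mask
    return x_adv.clamp(0, 1).detach()
```

The same masking idea can be wrapped around other attacks (e.g., iterative gradient methods) by applying the mask at each step, which is how a vulnerability map can be made "compatible with" existing attacks.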


Related research

- 10/03/2019: Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
- 06/18/2021: Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective
- 07/28/2020: Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning
- 08/12/2021: Deep adversarial attack on target detection systems
- 11/21/2020: Spatially Correlated Patterns in Adversarial Images
- 06/05/2023: Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning
- 10/09/2018: The Adversarial Attack and Detection under the Fisher Information Metric
