Amicable Aid: Turning Adversarial Attack to Benefit Classification

12/09/2021
by Juyeop Kim, et al.

While adversarial attacks on deep image classification models pose serious security concerns in practice, this paper proposes a novel paradigm in which the concept of adversarial attacks can instead benefit classification performance, which we call amicable aid. We show that by taking the opposite search direction of perturbation, an image can be converted into another that yields higher confidence from the classification model, and even a wrongly classified image can be made correctly classified. Furthermore, with a large amount of perturbation, an image can be made unrecognizable to human eyes while still being correctly recognized by the model. The mechanism of the amicable aid is explained from the viewpoint of the underlying natural image manifold. We also consider universal amicable perturbations, i.e., fixed perturbations that can be applied to multiple images to improve their classification results. While finding such perturbations is challenging, we show that training with modified data to make the decision boundary as perpendicular to the image manifold as possible is effective for obtaining models for which universal amicable perturbations are more easily found. Finally, we discuss several application scenarios where the amicable aid can be useful, including secure image communication, privacy-preserving image communication, and protection against adversarial attacks.
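
The core idea of reversing the attack's search direction can be illustrated with a short sketch. Below is a minimal FGSM-style example in PyTorch; the single-step formulation, the function name, and the hyperparameters are illustrative assumptions, not the paper's exact procedure, and pixel values are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def amicable_aid_step(model, x, y, epsilon=0.03):
    """Single-step amicable perturbation (hypothetical FGSM-style sketch).

    FGSM *increases* the loss via x + epsilon * sign(grad); the amicable
    aid takes the opposite direction, *decreasing* the loss and thereby
    raising the model's confidence in the label y.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step against the gradient sign and keep pixels in a valid range.
    x_aided = (x - epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_aided.detach()
```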
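
For the universal case, one plausible formulation is to optimize a single perturbation jointly over many images under a norm budget. The following sketch (reusing the imports above) is an assumption about how such a search could look; the paper's actual approach additionally relies on training the model with modified data so that universal amicable perturbations become easier to find.

```python
def find_universal_amicable(model, loader, epsilon=0.1, lr=0.01, epochs=5,
                            shape=(1, 3, 224, 224)):
    """Search for one fixed perturbation that lowers the classification
    loss across all images (hypothetical gradient-based sketch).

    The model's parameters are assumed frozen, e.g., via model.eval()
    and model.requires_grad_(False).
    """
    delta = torch.zeros(shape, requires_grad=True)
    optimizer = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
            loss.backward()            # descend, as in the per-image case
            optimizer.step()
            with torch.no_grad():      # enforce the perturbation budget
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```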
