ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction

02/15/2018
by Fuxun Yu, et al.

Thanks to their excellent accuracy and feasibility, neural networks have been widely applied in novel intelligent applications and systems. However, with the emergence of the adversarial attack, NN-based systems become extremely vulnerable: image classification results can be arbitrarily misled by adversarial examples, which are crafted images carrying pixel-level perturbations imperceptible to humans. As this raises a significant system security issue, we conduct a series of investigations into the adversarial attack in this work. We first identify an image's pixel-level vulnerability to the adversarial attack based on adversarial saliency analysis. By comparing the analyzed saliency map with the adversarial perturbation distribution, we propose a new evaluation scheme to comprehensively assess adversarial attack precision and efficiency. Then, with a novel adversarial saliency prediction method, we propose a fast adversarial example generation framework, namely "ASP", which significantly improves attack efficiency and dramatically reduces computation cost. Compared to previous methods, experiments show that ASP achieves up to a 12x speed-up in adversarial example generation, a 2x lower perturbation rate, and a high attack success rate of 87%. ASP can also be utilized to support the data-hungry NN adversarial training: by reducing the attack success rate by as much as 90%, such training significantly enhances the defense capability of NN-based systems against adversarial attacks.
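Since the abstract reasons about pixel-level vulnerability through an adversarial saliency map, a minimal sketch of what such an analysis can look like is given below. This is not the authors' ASP implementation; it follows the Jacobian-based saliency criterion (JSMA-style, as in the related "Maximal Jacobian-based Saliency Map Attack" entry further down), and the function name adversarial_saliency and its interface are illustrative assumptions.

```python
# Hypothetical sketch of Jacobian-based adversarial saliency analysis,
# in the spirit of the abstract above (not the authors' ASP code).
import torch

def adversarial_saliency(model, image, target_class):
    """Rank each pixel's vulnerability to a targeted adversarial attack.

    Returns a saliency map with the same shape as `image`: large values
    mark pixels whose increase pushes the model toward `target_class`
    while suppressing all other classes (the JSMA-style criterion).
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                      # shape: (1, num_classes)

    # dZ_t / dx: gradient of the target-class logit w.r.t. every pixel.
    grad_target = torch.autograd.grad(
        logits[0, target_class], image, retain_graph=True)[0]

    # d(sum of other logits) / dx: gradient of all non-target logits.
    other = logits[0].sum() - logits[0, target_class]
    grad_other = torch.autograd.grad(other, image)[0]

    # A pixel counts only if raising it helps the target class
    # (positive gradient) and hurts the other classes (negative gradient).
    saliency = torch.where(
        (grad_target > 0) & (grad_other < 0),
        grad_target * grad_other.abs(),
        torch.zeros_like(grad_target))
    return saliency.detach()

# Example: rank pixels of one image (batch of 1) for a targeted attack on class 3.
# s = adversarial_saliency(net, x.unsqueeze(0), target_class=3)
# top_pixels = s.flatten().topk(10).indices   # the 10 most vulnerable pixels
```

Perturbing only the top-ranked pixels is what lets a saliency-guided attack reach the low perturbation rates reported above; per the abstract, ASP's contribution is predicting such saliency quickly rather than recomputing gradients from scratch for every example.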


Related research

09/13/2021 · TREATED: Towards Universal Defense against Textual Adversarial Attacks
Recent work shows that deep neural networks are vulnerable to adversaria...

12/15/2020 · FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems
Deep neural networks (DNNs) significantly improved the accuracy of optic...

03/02/2019 · PuVAE: A Variational Autoencoder to Purify Adversarial Examples
Deep neural networks are widely used and exhibit excellent performance i...

08/23/2018 · Maximal Jacobian-based Saliency Map Attack
The Jacobian-based Saliency Map Attack is a family of adversarial attack...

01/05/2023 · Silent Killer: Optimizing Backdoor Trigger Yields a Stealthy and Powerful Data Poisoning Attack
We propose a stealthy and powerful backdoor attack on neural networks ba...

06/14/2021 · Now You See It, Now You Don't: Adversarial Vulnerabilities in Computational Pathology
Deep learning models are routinely employed in computational pathology (...

12/19/2019 · Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples)
We propose the scheme that mitigates an adversarial perturbation ϵ on th...
