Semantic Preserving Adversarial Attack Generation with Autoencoder and Genetic Algorithm

08/25/2022
by Xinyi Wang, et al.

Widely used deep learning models are found to have poor robustness: small noises can fool state-of-the-art models into making incorrect predictions. While there are many high-performance attack generation methods, most of them directly add perturbations to the original data and measure them using L_p norms; this can break the major structure of the data, thus creating invalid attacks. In this paper, we propose a black-box attack which, instead of modifying the original data, modifies latent features of the data extracted by an autoencoder; we then measure noises in semantic space to protect the semantics of the data. We trained autoencoders on the MNIST and CIFAR-10 datasets and found optimal adversarial perturbations using a genetic algorithm. Our approach achieved a 100% attack success rate on both datasets with less perturbation than FGSM.
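To make the pipeline concrete, below is a minimal PyTorch sketch of the idea: candidate perturbations are searched in the autoencoder's latent space with a simple genetic algorithm, and their size is penalized in that same semantic space rather than with a pixel-level L_p norm. The encoder/decoder/classifier stubs, the fitness function, and all hyperparameters (pop_size, sigma, lam, etc.) are illustrative assumptions, not the authors' exact design; in the paper the autoencoder is trained and the classifier is queried as a black box.

```python
import torch
import torch.nn as nn

LATENT_DIM = 32
IMG_DIM = 28 * 28  # MNIST-sized images, flattened

# Untrained stand-ins for the paper's trained autoencoder and the
# black-box classifier; only their input/output shapes matter here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(IMG_DIM, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, IMG_DIM), nn.Sigmoid())
target_model = nn.Sequential(nn.Flatten(), nn.Linear(IMG_DIM, 10))

@torch.no_grad()
def latent_ga_attack(x, y_true, pop_size=50, n_gen=100,
                     sigma=0.1, mut_rate=0.3, lam=0.05):
    """Search for a latent-space perturbation that flips the black-box
    model's prediction while staying small in semantic (latent) space."""
    z = encoder(x)                                   # (1, LATENT_DIM)
    pop = torch.randn(pop_size, LATENT_DIM) * sigma  # candidate perturbations

    for _ in range(n_gen):
        # Decode perturbed latents into candidate adversarial images.
        x_adv = decoder(z + pop).view(pop_size, 1, 28, 28)
        probs = target_model(x_adv).softmax(dim=1)   # black-box queries only
        # Fitness (illustrative): low confidence on the true class,
        # small perturbation norm measured in latent space.
        fitness = -probs[:, y_true] - lam * pop.norm(dim=1)

        # Selection: keep the fitter half as parents.
        idx = fitness.argsort(descending=True)[: pop_size // 2]
        parents = pop[idx]

        # Uniform crossover between random parent pairs.
        pa = parents[torch.randint(len(parents), (pop_size,))]
        pb = parents[torch.randint(len(parents), (pop_size,))]
        mask = torch.rand(pop_size, LATENT_DIM) < 0.5
        children = torch.where(mask, pa, pb)

        # Gaussian mutation on a random subset of genes.
        mut = torch.rand(pop_size, LATENT_DIM) < mut_rate
        children = children + mut * torch.randn(pop_size, LATENT_DIM) * sigma

        pop = children
        pop[0] = parents[0]  # elitism: always carry over the current best

    return decoder(z + pop[0]).view(1, 1, 28, 28)

# Example query on a random "image":
x = torch.rand(1, 1, 28, 28)
x_adv = latent_ga_attack(x, y_true=3)
```

Because only the classifier's output probabilities are used (no gradients), the search is fully black-box; perturbing and measuring in latent space is what keeps the decoded example on the data manifold, which is the paper's semantic-preservation argument.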
