Double Backpropagation for Training Autoencoders against Adversarial Attack

03/04/2020
by   Chengjin Sun, et al.

Deep learning, as is widely known, is vulnerable to adversarial samples. This paper focuses on adversarial attacks on autoencoders. The safety of autoencoders (AEs) matters because they are widely used as a compression scheme for data storage and transmission; however, current autoencoders are easily attacked, i.e., one can slightly modify an input yet obtain a totally different code. This vulnerability is rooted in the sensitivity of the autoencoders, and to enhance robustness we propose to adopt double backpropagation (DBP) to secure autoencoders such as VAE and DRAW. We restrict the gradient from the reconstructed image to the original one so that the autoencoder is not sensitive to the trivial perturbations produced by an adversarial attack. After smoothing the gradient by DBP, we further smooth the labels with a Gaussian Mixture Model (GMM), aiming for accurate and robust classification. We demonstrate on MNIST, CelebA, and SVHN that our method yields a robust autoencoder resistant to attack and, when combined with GMM, a robust classifier capable of image transition and immune to adversarial attack.
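The core of DBP, as described above, is penalizing the gradient of the reconstruction with respect to the input so that small input perturbations cannot move the output much. As a minimal sketch (not the authors' implementation), consider a *linear* autoencoder: there the Jacobian of the reconstruction with respect to the input is simply `W2 @ W1`, so the gradient penalty has a closed form and the whole idea fits in a few lines of NumPy. All dimensions, hyperparameters, and variable names below are illustrative assumptions.

```python
# Toy double-backpropagation sketch for a linear autoencoder (assumption:
# the authors use deep nonlinear models; this only illustrates the loss shape).
# Loss = mean squared reconstruction error + lam * ||d(recon)/d(input)||_F^2,
# and for a linear AE the Jacobian d(recon)/d(input) is exactly W2 @ W1.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 3, 64                      # input dim, code dim, sample count (toy values)
X = rng.normal(size=(d, n))             # toy data, one sample per column

W1 = rng.normal(scale=0.1, size=(k, d)) # encoder weights
W2 = rng.normal(scale=0.1, size=(d, k)) # decoder weights
lam, lr = 0.1, 0.01                     # penalty weight and step size (illustrative)

def losses(W1, W2):
    R = W2 @ W1 @ X - X                 # reconstruction residuals
    J = W2 @ W1                         # Jacobian of reconstruction w.r.t. input
    mse = np.mean(np.sum(R**2, axis=0)) # mean squared error per sample
    pen = np.sum(J**2)                  # squared Frobenius norm of the Jacobian
    return mse, pen

mse0, pen0 = losses(W1, W2)             # losses before training

for _ in range(200):
    R = W2 @ W1 @ X - X
    H = W1 @ X                          # codes
    J = W2 @ W1
    # gradients of the mean squared reconstruction error
    gW2 = 2.0 * R @ H.T / n
    gW1 = 2.0 * W2.T @ R @ X.T / n
    # gradients of the DBP penalty ||W2 @ W1||_F^2 (closed form in the linear case)
    gW2 += lam * 2.0 * J @ W1.T
    gW1 += lam * 2.0 * W2.T @ J
    W2 -= lr * gW2
    W1 -= lr * gW1

mse, pen = losses(W1, W2)
```

For nonlinear autoencoders the Jacobian has no such closed form, which is why DBP needs a second backward pass through the gradient itself (e.g. differentiating through a first `backward` call in an autodiff framework); the linear toy only shows what quantity is being penalized.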


