Generating Out of Distribution Adversarial Attack using Latent Space Poisoning

12/09/2020
by Ujjwal Upadhyay, et al.

Traditional adversarial attacks rely on perturbations generated from the network's gradients, and networks are generally safeguarded against such gradient-guided searches for an adversarial counterpart. In this paper, we propose a novel mechanism for generating adversarial examples in which the actual image is not corrupted; instead, its latent space representation is manipulated to tamper with the inherent structure of the image while keeping its perceptual quality intact, so that the result passes as a legitimate data sample. As opposed to gradient-based attacks, latent space poisoning exploits the tendency of classifiers to model the training data as independent and identically distributed, and tricks them by producing out-of-distribution samples. We train a disentangled variational autoencoder (beta-VAE) to model the data in latent space and then add noise perturbations to the latent code using a class-conditioned distribution function, under the constraint that the decoded sample is misclassified as the target label. Our empirical results on the MNIST, SVHN, and CelebA datasets validate that the generated adversarial examples easily fool robust l_0, l_2, and l_inf norm classifiers designed with provably robust defense mechanisms.
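To make the idea concrete, below is a minimal, illustrative sketch of latent space poisoning, assuming a pretrained beta-VAE (`encoder`, `decoder`) and a frozen target `classifier` in PyTorch. It is not the authors' exact class-conditioned sampling procedure: the class-conditioned distribution over latent noise is approximated here by directly optimizing a latent perturbation toward the target label, and all names and hyperparameters are placeholders.

```python
# Hypothetical sketch: poison the latent code of an image so that the
# decoded sample is misclassified as `target_label`, while the image
# itself is never perturbed in pixel space.
import torch
import torch.nn.functional as F

def latent_space_attack(x, target_label, encoder, decoder, classifier,
                        steps=200, lr=0.05, reg=0.1):
    with torch.no_grad():
        mu, _ = encoder(x)                  # posterior mean as the latent code (assumed API)
    delta = torch.zeros_like(mu, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.full((x.size(0),), target_label,
                        dtype=torch.long, device=x.device)

    for _ in range(steps):
        x_adv = decoder(mu + delta)         # decode the poisoned latent code
        logits = classifier(x_adv)
        # Push the decoded sample toward the target class while keeping the
        # latent perturbation small to preserve perceptual quality.
        loss = F.cross_entropy(logits, target) + reg * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return decoder(mu + delta)          # adversarial sample generated from latent space
```

Because the perturbation lives in the beta-VAE's latent space rather than in pixel space, the decoded output stays on the generative model's manifold, which is what lets it evade norm-bounded defenses tuned to pixel-space perturbations.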

Related research

04/10/2023  Generating Adversarial Attacks in the Latent Space
    Adversarial attacks in the input (pixel) space typically incorporate noi...

11/08/2017  LatentPoison - Adversarial Attacks On The Latent Space
    Robustness and security of machine learning (ML) systems are intertwined...

05/22/2023  Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space
    Adversarial attacks against Deep Neural Networks (DNN) have been a crutia...

10/31/2020  MAD-VAE: Manifold Awareness Defense Variational Autoencoder
    Although deep generative models such as Defense-GAN and Defense-VAE have...

08/25/2021  Adversarially Robust One-class Novelty Detection
    One-class novelty detectors are trained with examples of a particular cl...

03/24/2019  Variational Inference with Latent Space Quantization for Adversarial Resilience
    Despite their tremendous success in modelling high-dimensional data mani...

05/25/2023  Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability
    Neural networks are known to be susceptible to adversarial samples: smal...
