Variational Inference with Latent Space Quantization for Adversarial Resilience

03/24/2019
by   Vinay Kyatham, et al.

Despite their tremendous success in modelling high-dimensional data manifolds, deep neural networks are threatened by adversarial attacks: perceptually valid, input-like samples, obtained through careful perturbations, that degrade the performance of the underlying model. Major concerns with existing defense mechanisms include their lack of generalizability across attacks and models, and their large inference time. In this paper, we propose a generalized defense mechanism that capitalizes on the expressive power of regularized latent-space generative models. We design an adversarial filter that requires access to neither the classifier nor the adversary, making it usable in tandem with any classifier. The basic idea is to learn a Lipschitz-constrained mapping from the data manifold, incorporating adversarial perturbations, to a quantized latent space, and to re-map it to the true data manifold. Specifically, we simultaneously auto-encode the data manifold and its perturbations, implicitly, through the perturbations of the regularized and quantized generative latent space, realized using variational inference. We demonstrate the efficacy of the proposed formulation in providing resilience against multiple attack types (black-box and white-box) and methods while running in near real-time. Our experiments show that the proposed method surpasses state-of-the-art techniques in several cases.
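To make the pipeline concrete, the following is a minimal PyTorch sketch of the filtering idea described in the abstract, not the authors' implementation: the architecture, the uniform latent quantizer, and all hyperparameters (latent_dim, levels, 3x32x32 inputs) are illustrative assumptions, and spectral normalization stands in for the paper's Lipschitz constraint. The filter auto-encodes an input through a quantized variational latent space and hands the reconstruction to an unmodified downstream classifier.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class QuantizedVAEFilter(nn.Module):
    """Auto-encodes an input through a regularized, quantized latent space.

    Hypothetical sketch: assumes 3x32x32 inputs; spectral normalization
    approximates the Lipschitz constraint on the learned mapping.
    """

    def __init__(self, latent_dim=64, levels=8):
        super().__init__()
        self.levels = levels  # quantization levels per latent dimension
        self.encoder = nn.Sequential(
            spectral_norm(nn.Conv2d(3, 32, 4, stride=2, padding=1)), nn.ReLU(),
            spectral_norm(nn.Conv2d(32, 64, 4, stride=2, padding=1)), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)
        self.decoder = nn.Sequential(
            spectral_norm(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)),
            nn.ReLU(),
            spectral_norm(nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1)),
            nn.Sigmoid(),
        )

    def quantize(self, z):
        # Snap each latent coordinate to a uniform grid over [-1, 1];
        # the straight-through estimator keeps gradients flowing.
        zq = torch.round(torch.tanh(z) * self.levels) / self.levels
        return z + (zq - z).detach()

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        zq = self.quantize(z)
        x_hat = self.decoder(self.fc_dec(zq).view(-1, 64, 8, 8))
        return x_hat, mu, logvar
```

At test time the filter simply precedes any frozen classifier, e.g. `logits = classifier(filter_net(x_adv)[0])`; training would pair a reconstruction loss on clean and perturbed inputs with the standard KL regularizer from variational inference.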

Related research:

09/05/2020 · Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
Adversarial training is a popular defense strategy against attack threat...

08/18/2021 · Semantic Perturbations with Normalizing Flows for Improved Generalization
Data augmentation is a widely adopted technique for avoiding overfitting...

02/23/2018 · Adversarial vulnerability for any classifier
Despite achieving impressive and often superhuman performance on multipl...

06/26/2020 · Learning Diverse Latent Representations for Improving the Resilience to Adversarial Attacks
This paper proposes an ensemble learning model that is resistant to adve...

12/09/2020 · Generating Out of Distribution Adversarial Attack using Latent Space Poisoning
Traditional adversarial attacks rely upon the perturbations generated by...

07/16/2021 · ScRAE: Deterministic Regularized Autoencoders with Flexible Priors for Clustering Single-cell Gene Expression Data
Clustering single-cell RNA sequence (scRNA-seq) data poses statistical a...

05/21/2018 · Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference
Deep neural networks have been demonstrated to be vulnerable to adversar...
