
MAD-VAE: Manifold Awareness Defense Variational Autoencoder

by Frederick Morlock, et al.

Although deep generative models such as Defense-GAN and Defense-VAE have made significant progress in defending image-classification neural networks against adversarial attacks, several methods have been found to circumvent these defenses. Building on Defense-VAE, we introduce several methods to improve the robustness of defense models. The methods introduced in this paper are straightforward yet show promise over the vanilla Defense-VAE. Through extensive experiments on the MNIST dataset, we demonstrate the effectiveness of our algorithms against different attacks, including attacks on the latent space of the defensive model. We also discuss the applicability of existing adversarial latent-space attacks, which may have a significant flaw.
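The core idea behind VAE-based purification defenses such as Defense-VAE is to pass a (possibly adversarial) input through the encoder/decoder bottleneck before classification, so that perturbation components lying off the learned data manifold are discarded. The following is a minimal, purely illustrative sketch of that pipeline, not the authors' implementation: the deterministic linear encoder/decoder, the hand-picked "manifold" subspace, and the nearest-centroid classifier are all toy stand-ins for a trained VAE and CNN.

```python
import numpy as np

# Toy stand-in for a trained defense VAE: a deterministic linear
# encoder/decoder whose latent bottleneck spans only the first two
# input coordinates (the "data manifold" in this toy setup).
W_enc = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])   # encoder: R^4 -> R^2
W_dec = W_enc.T                            # decoder: R^2 -> R^4

def purify(x):
    """Reconstruct the input through the bottleneck; components
    orthogonal to the latent subspace are discarded."""
    return W_dec @ (W_enc @ x)

# Toy nearest-centroid classifier standing in for the protected network.
centroids = np.array([[ 1.0,  1.0,  1.0,  1.0],    # class 0
                      [-1.0, -1.0, -1.0, -1.0]])   # class 1

def classify(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

x_clean = np.array([1.0, 1.0, 1.0, 1.0])
x_adv = x_clean + np.array([0.0, 0.0, -4.0, -4.0])  # off-manifold perturbation

print(classify(x_clean))          # 0: correct
print(classify(x_adv))            # 1: the attack fools the raw classifier
print(classify(purify(x_adv)))    # 0: purification restores the prediction
```

This also illustrates why attacks on the defense model's latent space matter: a perturbation constructed *inside* the bottleneck subspace would survive purification, which is exactly the threat model the abstract's latent-space experiments probe.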

