Manifold Preserving Adversarial Learning

03/10/2019
by Ousmane Amadou Dia et al.

How can we generate semantically meaningful and structurally sound adversarial examples? We propose to answer this question by restricting the search for adversaries to the true data manifold. To this end, we introduce a stochastic variational inference method that learns the data manifold in the presence of continuous latent variables with intractable posterior distributions, without requiring an a priori form for the data's underlying distribution. We then propose a manifold perturbation strategy that ensures the examples we perturb remain on the manifold of the original examples, and thereby generates the adversaries. We evaluate our approach on a number of image and text datasets. Our results show the effectiveness of our approach in producing coherent, realistic-looking adversaries that can evade strong defenses known to be resilient to traditional adversarial attacks.
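The core idea of the perturbation strategy, searching for adversaries inside a learned latent space so the decoded result stays on the data manifold, can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the linear encoder/decoder stands in for the paper's variational model, the logistic classifier is the attacked model, and the names (`encode`, `decode`, `latent_attack`) and the step size `eta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned manifold: a linear decoder from a
# 2-d latent space to a 5-d data space, with a pseudo-inverse encoder.
W = rng.normal(size=(2, 5))

def decode(z):
    return W.T @ z

def encode(x):
    return np.linalg.pinv(W.T) @ x

# Toy logistic classifier on the data space (the model under attack).
w = rng.normal(size=5)

def prob(x):
    """P(y = 1 | x) under the toy classifier."""
    return 1.0 / (1.0 + np.exp(-w @ x))

def latent_attack(x, steps=50, eta=0.5):
    """Perturb in latent space, so every decoded iterate stays on the
    (toy) manifold, while pushing the classifier's confidence down."""
    z = encode(x)
    for _ in range(steps):
        p = prob(decode(z))
        # Chain rule: d/dz log P(y=1|decode(z)) = (1 - p) * (W @ w)
        grad_z = (1.0 - p) * (W @ w)
        z = z - eta * grad_z  # descend: reduce the true-class score
    return decode(z)

x = decode(rng.normal(size=2))   # start from an on-manifold example
x_adv = latent_attack(x)
print(prob(x), prob(x_adv))      # confidence drops after the attack
```

Because the update runs entirely in latent coordinates and the adversary is produced by decoding, the result lies on the decoder's image by construction; the paper's contribution is doing this with a manifold learned by stochastic variational inference rather than a fixed linear map.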


Related research

- Approximate Manifold Defense Against Multiple Adversarial Perturbations (04/05/2020): Existing defenses against adversarial attacks are typically tailored to ...
- Reversible adversarial examples against local visual perturbation (10/06/2021): Recently, studies have indicated that adversarial attacks pose a threat ...
- Adversarial Gain (11/04/2018): Adversarial examples can be defined as inputs to a model which induce a ...
- Masked Language Model Based Textual Adversarial Example Detection (04/18/2023): Adversarial attacks are a serious threat to the reliable deployment of m...
- On Need for Topology-Aware Generative Models for Manifold-Based Defenses (09/07/2019): ML algorithms or models, especially deep neural networks (DNNs), have sh...
- Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks (04/18/2019): Deep neural networks are vulnerable to adversarial attacks. Numerous eff...
- Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples (02/26/2018): We propose a retrieval-augmented convolutional network and propose to tr...
