Approximate Manifold Defense Against Multiple Adversarial Perturbations

04/05/2020
by Jay Nandy, et al.

Existing defenses against adversarial attacks are typically tailored to a specific perturbation type. Using adversarial training to defend against multiple perturbation types requires generating expensive adversarial examples of each type at every training step. In contrast, a manifold-based defense incorporates a generative network to project an input sample onto the clean data manifold. This approach eliminates the need to generate expensive adversarial examples while achieving robustness against multiple perturbation types. However, its success hinges on whether the generative network can capture the complete clean data manifold, which remains an open problem for complex input domains. In this work, we devise an approximate manifold defense mechanism, called RBF-CNN, for image classification. Instead of capturing the complete data manifold, we use an RBF layer to learn the density of small image patches. RBF-CNN also utilizes a reconstruction layer that mitigates minor adversarial perturbations. Further, incorporating the proposed reconstruction process into training improves the adversarial robustness of RBF-CNN models. Experimental results on the MNIST and CIFAR-10 datasets indicate that RBF-CNN offers robustness against multiple perturbation types without the need for expensive adversarial training.
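The abstract describes the architecture only at a high level, so the PyTorch sketch below illustrates the general idea rather than the paper's actual method: an RBF layer scores small image patches against learned centers (a patch-level density model), and a reconstruction step rebuilds each patch as a responsibility-weighted mixture of those centers, pulling mildly perturbed patches back toward high-density patches. All names here (RBFPatchLayer, reconstruct), the layer sizes, kernel widths, and the averaging-based fold are assumptions made for this sketch.

    # Illustrative sketch only -- hyperparameters, class names, and the
    # exact reconstruction rule are assumptions, not taken from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RBFPatchLayer(nn.Module):
        """Scores k x k image patches against learned RBF centers,
        i.e., a patch-level density model of the clean data."""
        def __init__(self, num_centers=64, patch_size=3, in_channels=1):
            super().__init__()
            d = in_channels * patch_size * patch_size
            self.centers = nn.Parameter(torch.randn(num_centers, d) * 0.1)
            self.log_gamma = nn.Parameter(torch.zeros(num_centers))
            self.patch_size = patch_size

        def forward(self, x):
            # Unfold into overlapping patches: (B, d, L), L = patch locations.
            p = F.unfold(x, self.patch_size, padding=self.patch_size // 2)
            # Squared distance of every patch to every center: (B, K, L).
            dist2 = (p.unsqueeze(1)
                     - self.centers.view(1, -1, p.size(1), 1)).pow(2).sum(2)
            # RBF response; large when a patch lies near a learned density mode.
            return torch.exp(-torch.exp(self.log_gamma).view(1, -1, 1) * dist2)

    def reconstruct(resp, centers, out_size, patch_size):
        """Rebuild each patch as a responsibility-weighted mix of centers,
        pulling slightly perturbed patches back toward high-density patches
        (the denoising effect the abstract attributes to the reconstruction
        layer)."""
        w = resp / resp.sum(dim=1, keepdim=True).clamp_min(1e-8)   # (B, K, L)
        patches = torch.einsum('bkl,kd->bdl', w, centers)          # (B, d, L)
        # Fold overlapping patches back to an image, averaging overlaps.
        num = F.fold(patches, out_size, patch_size, padding=patch_size // 2)
        den = F.fold(torch.ones_like(patches), out_size, patch_size,
                     padding=patch_size // 2)
        return num / den.clamp_min(1e-8)

    # Usage: denoise an input before feeding it to an ordinary CNN classifier.
    rbf = RBFPatchLayer(num_centers=64, patch_size=3, in_channels=1)
    x = torch.rand(8, 1, 28, 28)                  # e.g., an MNIST-sized batch
    resp = rbf(x)                                 # (8, 64, 28*28)
    x_hat = reconstruct(resp, rbf.centers, (28, 28), rbf.patch_size)
    assert x_hat.shape == x.shape                 # classifier consumes x_hat

Under this reading, the downstream CNN classifies the reconstructed image x_hat rather than the raw input, and the centers and per-center bandwidths (log_gamma) would be trained so that clean patches receive high responses; the paper's actual training objective is not specified in the abstract.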


