A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations

03/03/2019
by Saeid Asgari Taghanaki, et al.

The linear and inflexible nature of deep convolutional models makes them vulnerable to carefully crafted adversarial perturbations. To tackle this problem, we propose a non-linear radial basis convolutional feature mapping that learns a Mahalanobis-like distance function. Our method maps the convolutional features onto a linearly well-separated manifold, which prevents small adversarial perturbations from pushing a sample across the decision boundary. We test the proposed method on three publicly available image classification and segmentation datasets, namely MNIST, ISBI ISIC 2017 skin lesion segmentation, and NIH Chest X-Ray-14. We evaluate the robustness of our method to different gradient-based (targeted and untargeted) and non-gradient-based attacks and compare it to several non-gradient-masking defense strategies. Our results demonstrate that the proposed method can increase the resilience of deep convolutional neural networks to adversarial perturbations without an accuracy drop on clean data.
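The core idea, a radial basis feature mapping under a learned Mahalanobis-like distance, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the centers `centers`, the metric factor `L`, and the dimensions are hypothetical, and the metric is parameterized as M = Lᵀ L so it stays positive semi-definite.

```python
import numpy as np

def rbf_mahalanobis_map(X, centers, L):
    """Map feature vectors X (n, d) onto K radial basis responses
    phi_k(x) = exp(-(x - c_k)^T M (x - c_k)), with M = L^T L.

    Writing M = L^T L lets us compute the Mahalanobis-like distance
    as a plain squared Euclidean distance in the projected space."""
    Xp = X @ L.T        # project features: (n, d')
    Cp = centers @ L.T  # project centers:  (K, d')
    # Pairwise squared distances between projected features and centers.
    d2 = ((Xp[:, None, :] - Cp[None, :, :]) ** 2).sum(axis=-1)  # (n, K)
    return np.exp(-d2)  # responses lie in (0, 1]

# Hypothetical shapes for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 convolutional feature vectors, dim 8
centers = rng.normal(size=(3, 8))  # 3 learned RBF centers
L = 0.1 * rng.normal(size=(8, 8))  # learnable metric factor (M = L^T L)
phi = rbf_mahalanobis_map(X, centers, L)  # (5, 3) mapped features
```

In a trained model, `centers` and `L` would be learned jointly with the network, and the classifier would operate on `phi` rather than the raw features; the bounded, localized responses are what keep small input perturbations from moving a sample across the decision boundary.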

Related research:

- Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients (04/12/2019): We focus our attention on the problem of generating adversarial perturba...
- Defending against Adversarial Attacks through Resilient Feature Regeneration (06/08/2019): Deep neural network (DNN) predictions have been shown to be vulnerable t...
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations (09/21/2020): We study the effect of adversarial perturbations of images on the estima...
- Vulnerability Analysis of Chest X-Ray Image Classification Against Adversarial Attacks (07/09/2018): Recently, there have been several successful deep learning approaches fo...
- Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions (06/23/2018): There is a growing body of literature showing that deep neural networks ...
- On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions (09/11/2018): Data-agnostic quasi-imperceptible perturbations on inputs can severely d...
- Manifold Assumption and Defenses Against Adversarial Perturbations (11/21/2017): In the adversarial perturbation problem of neural networks, an adversary...
