A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
The linear and inflexible nature of deep convolutional models makes them vulnerable to carefully crafted adversarial perturbations. To tackle this problem, we propose a non-linear radial basis convolutional feature mapping that learns a Mahalanobis-like distance function. Our method maps the convolutional features onto a linearly well-separated manifold, which prevents small adversarial perturbations from pushing a sample across the decision boundary. We test the proposed method on three publicly available image classification and segmentation datasets, namely MNIST, ISBI ISIC 2017 skin lesion segmentation, and NIH Chest X-Ray-14. We evaluate the robustness of our method against both gradient-based (targeted and untargeted) and non-gradient-based attacks, and compare it to several defense strategies that do not rely on gradient masking. Our results demonstrate that the proposed method can increase the resilience of deep convolutional neural networks to adversarial perturbations without a drop in accuracy on clean data.
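As a rough illustration of the idea, the sketch below implements a radial-basis feature mapping with a learned, diagonal Mahalanobis-like distance in PyTorch. The class name RBFMapping, the number of centers, and the diagonal parameterisation of the distance are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a radial-basis feature mapping with a
# learned, diagonal Mahalanobis-like distance, applied to flattened conv features.
# RBFMapping, num_centers, and the diagonal scale parameterisation are assumptions.
import torch
import torch.nn as nn


class RBFMapping(nn.Module):
    def __init__(self, in_features: int, num_centers: int):
        super().__init__()
        # Learnable RBF centers, one per output unit.
        self.centers = nn.Parameter(torch.randn(num_centers, in_features))
        # Log of per-center, per-dimension scales; exp() keeps the implied
        # diagonal Mahalanobis matrix positive definite.
        self.log_scales = nn.Parameter(torch.zeros(num_centers, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) flattened convolutional features.
        diff = x.unsqueeze(1) - self.centers.unsqueeze(0)          # (batch, K, D)
        # Diagonal Mahalanobis-like squared distance to each center.
        dist = (diff.pow(2) * self.log_scales.exp()).sum(dim=-1)   # (batch, K)
        # Gaussian RBF activation over the learned distance.
        return torch.exp(-dist)


# Usage: replace the penultimate dense layer of a CNN with the RBF mapping.
features = torch.randn(8, 512)                      # e.g. pooled conv features
mapping = RBFMapping(in_features=512, num_centers=64)
out = mapping(features)                             # (8, 64), fed to a linear classifier
```

Because each output unit responds only to inputs near its center, a small perturbation of the input changes the activation gradually rather than flipping it, which matches the intuition of mapping features onto a better-separated manifold before the final decision layer.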