- On Detecting Adversarial Perturbations
Machine learning, and deep learning in particular, has advanced tremendous...
- Early Methods for Detecting Adversarial Images
Many machine learning classifiers are vulnerable to adversarial perturba...
- ReabsNet: Detecting and Revising Adversarial Examples
Though deep neural networks have achieved huge success in recent studies and ...
- With Friends Like These, Who Needs Adversaries?
The vulnerability of deep image classification networks to adversarial a...
- Adversarial Images through Stega Glasses
This paper explores the connection between steganography and adversarial...
- Using Learning Dynamics to Explore the Role of Implicit Regularization in Adversarial Examples
Recent work (Ilyas et al., 2019) suggests that adversarial examples are f...
- The Manifold Assumption and Defenses Against Adversarial Perturbations
In the adversarial-perturbation problem of neural networks, an adversary...
Detecting Adversarial Perturbations with Saliency
In this paper we propose a novel method for detecting adversarial examples by training a binary classifier on both original data and saliency data. For an image classification model, a saliency map explains how the model makes a decision by identifying the pixels that were significant for the prediction. A model that produces a wrong classification has typically learned wrong features and therefore exhibits a wrong saliency map as well. Our approach performs well at detecting adversarial perturbations. We quantitatively evaluate the generalization ability of the detector, showing that detectors trained against strong adversaries also perform well against weak adversaries.
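The abstract describes the pipeline only at a high level; the sketch below shows one way it could look in PyTorch. It assumes a simple gradient-based saliency map (the gradient of the top predicted logit with respect to the input) and a small convolutional detector that takes the image concatenated with its saliency map. Both the saliency method and the detector architecture are illustrative assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

def saliency_map(model, x):
    """Gradient of the top predicted logit w.r.t. the input pixels.

    A basic gradient-based saliency; the paper's exact saliency
    method is not given in the abstract, so this is an assumption.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                       # (N, num_classes)
    top = logits.max(dim=1).values.sum()    # top-class score, summed over batch
    top.backward()
    return x.grad.detach().abs()            # (N, C, H, W) saliency

class SaliencyDetector(nn.Module):
    """Binary classifier over an image concatenated with its saliency map."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),   # single logit: adversarial vs. clean
        )

    def forward(self, x, sal):
        # Stack image and saliency along the channel dimension.
        return self.net(torch.cat([x, sal], dim=1))
```

Training would pair each image with its saliency map and optimize nn.BCEWithLogitsLoss, labeling clean inputs 0 and adversarial inputs 1; the generalization experiment in the abstract then corresponds to training the detector on a strong attack and evaluating it on a weaker one.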