Detecting Adversarial Perturbations with Saliency

03/23/2018
by Chiliang Zhang, et al.

In this paper we propose a novel method for detecting adversarial examples by training a binary classifier on both the original data and the corresponding saliency data. For an image classification model, a saliency map explains how the model makes its decision by identifying the pixels that are most significant for the prediction. A model that produces a wrong classification has typically relied on wrong features, and therefore produces a wrong saliency map as well. Our approach performs well at detecting adversarial perturbations. We also quantitatively evaluate the generalization ability of the detector, showing that detectors trained against strong adversaries still perform well against weak adversaries.
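The abstract only states the idea (train a binary detector on the original input together with its saliency map); it does not specify the saliency method or the detector architecture. The sketch below is a minimal illustration of that idea, assuming PyTorch, a simple gradient-based saliency map, and a small hypothetical convolutional detector; `model`, `Detector`, and the channel counts are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch: detect adversarial inputs from (image, saliency) pairs.
# Assumptions: `model` is a trained image classifier; the detector is trained
# on clean images labelled 0 and adversarial images (e.g. FGSM/PGD versions
# of them) labelled 1, each paired with its saliency map.
import torch
import torch.nn as nn


def saliency_map(model, x, y):
    """Gradient-based saliency: |d score_y / d x|, reduced over color channels."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.view(-1, 1)).sum()  # class scores for labels y
    score.backward()
    return x.grad.abs().max(dim=1, keepdim=True)[0]  # shape (N, 1, H, W)


class Detector(nn.Module):
    """Binary classifier over the image concatenated with its saliency map."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # 3 RGB + 1 saliency channel
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),                           # clean vs. adversarial
        )

    def forward(self, x, sal):
        return self.net(torch.cat([x, sal], dim=1))
```

A detector of this form sees both the input and the model's explanation of its own prediction, so adversarial inputs whose saliency no longer highlights object-relevant pixels become easier to separate from clean ones.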

Related research:

02/14/2017 · On Detecting Adversarial Perturbations
Machine learning and deep learning in particular has advanced tremendous...

08/01/2016 · Early Methods for Detecting Adversarial Images
Many machine learning classifiers are vulnerable to adversarial perturba...

12/21/2017 · ReabsNet: Detecting and Revising Adversarial Examples
Though deep neural network has hit a huge success in recent studies and ...

07/11/2018 · With Friends Like These, Who Needs Adversaries?
The vulnerability of deep image classification networks to adversarial a...

10/15/2020 · Adversarial Images through Stega Glasses
This paper explores the connection between steganography and adversarial...

06/28/2023 · Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification?
Deep Neural Networks are powerful tools to understand complex patterns a...

05/05/2020 · Reproduction of Lateral Inhibition-Inspired Convolutional Neural Network for Visual Attention and Saliency Detection
In recent years, neural networks have continued to flourish, achieving h...
