Maximal Jacobian-based Saliency Map Attack

08/23/2018
by Rey Wiyatno, et al.

The Jacobian-based Saliency Map Attack (JSMA) is a family of adversarial attack methods for fooling classification models, such as deep neural networks for image classification tasks. By saturating a few pixels in a given image to their maximum or minimum values, JSMA can cause the model to misclassify the resulting adversarial image as a specified erroneous target class. We propose two variants of JSMA: one that removes the requirement to specify a target class, and another that additionally does not need to specify whether to only increase or decrease pixel intensities. Our experiments highlight the competitive speed and quality of these variants when applied to datasets of hand-written digits and natural scenes.
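For concreteness, the core of the original targeted JSMA can be sketched as follows. This is a minimal PyTorch illustration, not the paper's reference implementation; the names model, x, target, and jsma_saliency are assumptions introduced here for the example. It computes the Jacobian of the class scores with respect to the input and zeroes out pixels that would not simultaneously raise the target score and lower the competing scores:

import torch

def jsma_saliency(model, x, target):
    # `model` is assumed to map a batch of flattened inputs to class
    # logits; `x` is a single input of shape (d,), `target` the desired
    # erroneous class index. Illustrative sketch only.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)
    # Jacobian of class scores w.r.t. input features: one backward pass
    # per class, stacked into shape (num_classes, d).
    jac = torch.stack([
        torch.autograd.grad(logits[c], x, retain_graph=True)[0]
        for c in range(logits.numel())
    ])
    dt = jac[target]               # gradient of the target class score
    do = jac.sum(dim=0) - dt       # summed gradients of all other classes
    # A pixel is salient for *increasing* only where raising it raises
    # the target score (dt > 0) while lowering the competitors (do < 0).
    return torch.where((dt > 0) & (do < 0), dt * do.abs(),
                       torch.zeros_like(dt))

An attack loop built on this would repeatedly pick the most salient pixel (the original JSMA uses pixel pairs), saturate it to its maximum value, and re-query the model until the target class is predicted or a distortion budget is exhausted. The two variants proposed in the paper remove, respectively, the need to fix target in advance and the need to commit up front to only increasing or only decreasing pixel intensities.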


