Adversarial Attacks against Deep Saliency Models

04/02/2019
by Zhaohui Che, et al.

Saliency models based on deep neural networks have recently driven breakthroughs in many complex high-level vision tasks (e.g., scene description, object detection). The robustness of these models, however, has not yet been studied. In this paper, we propose the first sparse feature-space adversarial attack against deep saliency models. The proposed attack requires only partial model information, and generates sparser and more insidious adversarial perturbations than traditional image-space attacks. These perturbations are so subtle that a human observer cannot notice their presence, yet they drastically change the model outputs, posing security threats to deep saliency models in practical applications. We also explore some intriguing properties of the feature-space attack: 1) hidden layers with larger receptive fields generate sparser perturbations, 2) deeper hidden layers achieve higher attack success rates, and 3) different loss functions and different attacked layers yield diverse perturbations. Experiments show that the proposed method successfully attacks different model architectures across various image scenes.
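To make the idea of a feature-space attack concrete, here is a minimal sketch, not the authors' algorithm: it optimizes an image-space perturbation so that a chosen hidden layer's activations drift away from their clean values, with an L1 penalty encouraging a sparse perturbation. The VGG-16 backbone, the choice of attacked layer, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of a feature-space adversarial attack (assumptions:
# VGG-16 backbone, attacked layer features[16], inputs scaled to [0, 1]).
# A real saliency model with trained weights would be used in practice.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.vgg16(weights=None).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

features = {}
def hook(module, inp, out):
    features["z"] = out

# capture activations of the (assumed) attacked hidden layer
model.features[16].register_forward_hook(hook)

def feature_space_attack(x, steps=100, lr=1e-2, lam=1e-3):
    """Push the hidden-layer features away from their clean values
    while an L1 penalty keeps the image-space perturbation sparse."""
    with torch.no_grad():
        model(x)
        z_clean = features["z"].detach()

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        model((x + delta).clamp(0, 1))
        z_adv = features["z"]
        # minimize: negative feature distance (i.e., maximize drift)
        # plus a sparsity penalty on the perturbation
        loss = -F.mse_loss(z_adv, z_clean) + lam * delta.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta.detach()).clamp(0, 1)

x = torch.rand(1, 3, 224, 224)  # stand-in input image in [0, 1]
x_adv = feature_space_attack(x)
```

Swapping the attacked layer or the feature-space loss (e.g., a cosine distance instead of MSE) mirrors the paper's observation that different layers and loss functions produce qualitatively different perturbations.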

