Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World

03/01/2021
by Jiakai Wang, et al.

Deep learning models are vulnerable to adversarial examples. As a more threatening type of attack against practical deep learning systems, physical adversarial examples have received extensive research attention in recent years. However, without exploiting intrinsic characteristics such as model-agnostic and human-specific attention patterns, existing works generate weak adversarial perturbations in the physical world, which fail to transfer across different models and exhibit visually suspicious appearances. Motivated by the viewpoint that attention reflects the intrinsic characteristics of the recognition process, this paper proposes the Dual Attention Suppression (DAS) attack, which generates visually natural physical adversarial camouflages with strong transferability by suppressing both model and human attention. For the attack, we generate transferable adversarial camouflages by distracting the model-shared similar attention patterns from the target region to non-target regions. Meanwhile, based on the fact that human visual attention always focuses on salient items (e.g., suspicious distortions), we evade human-specific bottom-up attention to generate visually natural camouflages that are correlated with the scenario context. We conduct extensive experiments in both the digital and physical worlds on classification and detection tasks with up-to-date models (e.g., Yolo-V5) and demonstrate that our method significantly outperforms state-of-the-art methods.
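To make the two attention terms concrete, here is a minimal PyTorch sketch of the dual-attention idea, not the authors' implementation: it assumes a Grad-CAM-style attention map on a placeholder ResNet-50 surrogate, a hypothetical `das_style_loss` whose "distraction" term pushes model attention off the object mask, and an L1 closeness to a natural seed texture as a crude stand-in for the human-attention (naturalness) constraint; `obj_mask`, `seed_texture`, and the single optimisation step are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Placeholder surrogate classifier; any differentiable conv backbone would do.
model = models.resnet50(weights=None).eval()
feat = {}
# Cache the last conv-block activations to build a Grad-CAM style attention map.
model.layer4.register_forward_hook(lambda m, i, o: feat.update(a=o))


def gradcam_attention(image, target_class):
    """Grad-CAM style class attention map, kept differentiable w.r.t. the input."""
    logits = model(image)
    score = logits[0, target_class]
    grads = torch.autograd.grad(score, feat["a"], create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)        # per-channel importance
    cam = F.relu((weights * feat["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)


def das_style_loss(adv_image, seed_texture, obj_mask, target_class, lam=0.1):
    """Distract model attention away from the object region (transferability term)
    and keep the camouflage close to a seed texture (crude human-saliency proxy)."""
    cam = gradcam_attention(adv_image, target_class)
    distract = (cam * obj_mask).mean()                     # attention left on the object
    natural = F.l1_loss(adv_image * obj_mask, seed_texture * obj_mask)
    return distract + lam * natural


# Toy data: random seed texture and a square object mask (purely illustrative).
seed = torch.rand(1, 3, 224, 224)
mask = torch.zeros(1, 1, 224, 224)
mask[..., 80:160, 80:160] = 1.0
adv = seed.clone().requires_grad_(True)

# One signed-gradient step applied only inside the camouflage region.
loss = das_style_loss(adv, seed, mask, target_class=0)
loss.backward()
with torch.no_grad():
    adv -= 0.01 * adv.grad.sign() * mask
    adv.clamp_(0, 1)
```

Restricting the update to the mask keeps the perturbation confined to the camouflage region, while the `lam` weight trades off attention distraction against visual naturalness; the real method would additionally render the texture onto the target object and aggregate the loss over views to obtain physical-world robustness.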

Related research

10/18/2021  Boosting the Transferability of Video Adversarial Examples via Temporal Translation
11/27/2020  Robust Attacks on Deep Learning Face Recognition in the Physical World
04/12/2019  Big but Imperceptible Adversarial Perturbations via Semantic Manipulation
09/16/2021  Harnessing Perceptual Adversarial Patches for Crowd Counting
10/28/2018  Robust Audio Adversarial Example for a Physical Attack
02/10/2020  ABBA: Saliency-Regularized Motion-Based Adversarial Blur Attack
09/29/2021  On Brightness Agnostic Adversarial Examples Against Face Recognition Systems