Adversarial Semantic Contour for Object Detection

09/30/2021
by Yichi Zhang, et al.

Modern object detectors are vulnerable to adversarial examples, which poses potential risks to numerous applications such as self-driving cars. Among attacks regularized by the ℓ_p norm, the ℓ_0-attack aims to modify as few pixels as possible. Nevertheless, the problem is nontrivial, since it generally requires optimizing the shape of the perturbation along with its texture simultaneously, which is an NP-hard problem. To address this issue, we propose a novel method, Adversarial Semantic Contour (ASC), which uses the object contour as a prior. This prior reduces the search space, accelerating the ℓ_0 optimization, and introduces more semantic information, which is more likely to affect the detectors. Based on the contour, we alternately optimize the selection of modified pixels via sampling and their colors via gradient descent. Extensive experiments demonstrate that our proposed ASC outperforms commonly used manually designed patterns (e.g., square patches and grids) on the disappearing task. By modifying no more than 5% and 3.5% of the object area, respectively, ASC successfully misleads mainstream object detectors, including SSD512, YOLOv4, Mask R-CNN, and Faster R-CNN.
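The alternating scheme described above can be sketched in a toy form. This is not the authors' implementation: the linear `detector_score`, the fixed 8×8 border "contour", and all parameter names are hypothetical stand-ins chosen only to make the sample-then-descend loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a detector: a fixed linear score over the
# perturbation image. A real attack would query a detector like SSD512.
W = rng.normal(size=(8, 8))

def detector_score(img):
    return float((W * img).sum())

def contour_pixels():
    # Hypothetical "semantic contour" prior: the border of an 8x8 object
    # mask, restricting which pixels may be modified.
    return [(i, j) for i in range(8) for j in range(8)
            if i in (0, 7) or j in (0, 7)]

def asc_attack(n_pixels=3, n_outer=5, n_inner=50, lr=0.1):
    contour = contour_pixels()
    best_sel, best_score = None, float("inf")
    for _ in range(n_outer):
        # Outer step: sample a candidate selection of pixels on the contour.
        idx = rng.choice(len(contour), size=n_pixels, replace=False)
        sel = [contour[k] for k in idx]
        colors = np.zeros(n_pixels)
        for _ in range(n_inner):
            # Inner step: gradient descent on the colors of the selected
            # pixels to minimize the detector's score (here the gradient
            # of the linear score is simply W at those pixels).
            grad = np.array([W[i, j] for (i, j) in sel])
            colors = np.clip(colors - lr * grad, -1.0, 1.0)
        img = np.zeros((8, 8))
        for (i, j), c in zip(sel, colors):
            img[i, j] = c
        s = detector_score(img)
        if s < best_score:
            best_sel, best_score = sel, s
    return best_sel, best_score
```

In the paper's setting the inner loop would backpropagate through a real detector and the outer loop would resample pixel subsets along the predicted object contour; the toy version only illustrates the two-level structure.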


