Object Hider: Adversarial Patch Attack Against Object Detectors

10/28/2020
by   Yusheng Zhao, et al.

Deep neural networks are widely used in computer vision, yet they have been shown to be susceptible to small, imperceptible perturbations added to their inputs. Inputs with elaborately designed perturbations that fool deep learning models are called adversarial examples, and they have raised serious concerns about the safety of deep neural networks. Object detection algorithms, which locate and classify objects in images or videos, are at the core of many computer vision tasks and have great research value and wide application. In this paper, we focus on adversarial attacks against several state-of-the-art object detection models. As a practical alternative to full-image perturbations, we use adversarial patches for the attack. We propose two adversarial patch generation algorithms: a heatmap-based algorithm and a consensus-based algorithm. Experimental results show that the proposed methods are highly effective, transferable, and generic. Additionally, we applied the proposed methods in the "Adversarial Challenge on Object Detection" competition organized by Alibaba on the Tianchi platform and placed in the top 7 out of 1,701 teams. Code is available at: https://github.com/FenHua/DetDak
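To make the general idea concrete: an adversarial patch attack pastes a small patch onto the image and optimizes its pixels to suppress the detector's confidence. The sketch below is a minimal, hedged illustration of that loop, not the paper's heatmap- or consensus-based algorithms. `detection_score` is a hypothetical stand-in (mean intensity) for a real detector's objectness output, and the gradient is estimated by finite differences so the example runs without any model weights.

```python
import numpy as np

def apply_patch(image, patch, x, y):
    """Paste the patch onto a copy of the image at top-left corner (x, y)."""
    out = image.copy()
    h, w = patch.shape
    out[y:y + h, x:x + w] = patch
    return out

def detection_score(image):
    """Toy stand-in for a detector's objectness score.

    A real attack would query an object detector here; mean pixel
    intensity is used only so the example is self-contained.
    """
    return image.mean()

def optimize_patch(image, patch, x, y, steps=20, eps=0.05, delta=1e-3):
    """Iteratively update the patch to lower the detection score.

    Uses a finite-difference gradient estimate and an FGSM-style
    signed step, keeping pixel values clipped to [0, 1].
    """
    patch = patch.copy()
    for _ in range(steps):
        base = detection_score(apply_patch(image, patch, x, y))
        grad = np.zeros_like(patch)
        for i in range(patch.shape[0]):
            for j in range(patch.shape[1]):
                bumped = patch.copy()
                bumped[i, j] += delta
                score = detection_score(apply_patch(image, bumped, x, y))
                grad[i, j] = (score - base) / delta
        patch = np.clip(patch - eps * np.sign(grad), 0.0, 1.0)
    return patch

image = np.full((16, 16), 0.5)          # flat gray "scene"
patch = np.full((4, 4), 0.5)            # initial patch
before = detection_score(apply_patch(image, patch, 2, 2))
adv = optimize_patch(image, patch, 2, 2)
after = detection_score(apply_patch(image, adv, 2, 2))
print(before > after)  # the optimized patch lowers the toy score
```

In the actual attack the score would come from a differentiable detector, the gradient from backpropagation, and the patch location and size would be chosen (e.g., via a heatmap of influential regions) rather than fixed.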

