You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors

09/30/2021
by Zijian Zhu, et al.

Machine learning models can be misled through their blind spots or deceived outright. Small digital "stickers," known as adversarial patches, can fool facial recognition systems, surveillance systems and self-driving cars. Fortunately, most existing adversarial patches can be detected and rejected by a simple classification network called an adversarial patch detector, which distinguishes adversarial patches from original images.

An object detector classifies the objects within an image, such as distinguishing a motorcyclist from the motorcycle, while also localizing each object by "drawing" a so-called bounding box around it, once again separating the motorcyclist from the motorcycle. To train detectors to be more robust, however, we need to keep subjecting them to confusing or deceitful adversarial patches as we probe for the models' blind spots.

For such probes, we came up with a novel approach, the Low-Detectable Adversarial Patch (LDAP), which attacks an object detector with small, texture-consistent adversarial patches, making these adversaries less likely to be recognized. Concretely, we use several geometric primitives to model the shapes and positions of the patches. To further enhance attack performance, we also assign different weights to the bounding boxes in the loss function. Our experiments on the common detection dataset COCO as well as the driving-video dataset D2-City show that LDAP is an effective attack method and can evade the adversarial patch detector.
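To make the two mechanisms concrete, here is a minimal PyTorch sketch of the idea, not the authors' implementation: each patch is parameterized as a differentiable rectangle (one simple geometric primitive), and the attack loss weights the per-box confidence terms. The `ToyDetector` stand-in, the rectangle parameterization, and the confidence-based weighting are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def soft_rect_mask(cx, cy, w, h, H, W, sharpness=50.0):
    """Differentiable soft mask of one axis-aligned rectangle, the simplest
    geometric primitive; (cx, cy, w, h) are in [0, 1] image coordinates."""
    ys = torch.linspace(0.0, 1.0, H).view(H, 1)
    xs = torch.linspace(0.0, 1.0, W).view(1, W)
    inside_x = torch.sigmoid(sharpness * (w / 2 - (xs - cx).abs()))
    inside_y = torch.sigmoid(sharpness * (h / 2 - (ys - cy).abs()))
    return inside_x * inside_y  # (H, W): ~1 inside the rectangle, ~0 outside

def apply_patches(image, params, textures):
    """Blend small solid-color patches into the image at the primitive
    locations. A texture-consistency term (matching patch appearance to the
    surrounding image, as the paper describes) is omitted for brevity.
    image: (3, H, W); params: (N, 4) rows of (cx, cy, w, h); textures: (N, 3)."""
    _, H, W = image.shape
    out = image
    for (cx, cy, w, h), color in zip(params, textures):
        m = soft_rect_mask(cx, cy, w, h, H, W)
        out = out * (1 - m) + color.view(3, 1, 1) * m
    return out

class ToyDetector(nn.Module):
    """Stand-in for a real object detector; it just maps an image to per-box
    confidence scores so the optimization loop below runs end to end."""
    def __init__(self, num_boxes=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.head = nn.Linear(3 * 4 * 4, num_boxes)

    def forward(self, x):                       # x: (1, 3, H, W)
        return torch.sigmoid(self.head(self.pool(x).flatten(1))).squeeze(0)

# Optimize patch positions, sizes, and colors to suppress detections.
detector = ToyDetector()
image = torch.rand(3, 128, 128)                 # placeholder input image
params = torch.tensor([[0.3, 0.5, 0.08, 0.08],  # two small patches
                       [0.7, 0.4, 0.08, 0.08]], requires_grad=True)
textures = torch.rand(2, 3, requires_grad=True)
opt = torch.optim.Adam([params, textures], lr=1e-2)

for _ in range(200):
    patched = apply_patches(image, params, textures)
    scores = detector(patched.unsqueeze(0))     # per-box confidences
    # Weight each box by its (detached) confidence so high-confidence
    # detections dominate the attack loss -- one plausible weighting scheme.
    weights = scores.detach() / scores.detach().sum().clamp_min(1e-8)
    loss = (weights * scores).sum()             # drive confidences down
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The soft sigmoid masks keep the patch geometry differentiable, so the positions and sizes of the primitives can be optimized by gradient descent alongside the patch texture, rather than being fixed in advance.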
