Attacking Object Detectors via Imperceptible Patches on Background

09/16/2018
by Yuezun Li, et al.

Deep neural networks have been proven vulnerable to adversarial perturbations. Recent works have succeeded in generating adversarial perturbations either on the entire image or on the targets of interest to corrupt object detectors. In this paper, we investigate the vulnerability of object detectors from a new perspective: adding minimal perturbations on small background patches outside the targets to break the detection results. Our work focuses on attacking a component shared by state-of-the-art detectors (e.g., Faster R-CNN), the Region Proposal Network (RPN). Since the receptive fields behind RPN proposals are often larger than the proposals themselves, we propose a novel method to generate background perturbation patches and show that perturbations placed solely outside the targets can severely damage the performance of multiple types of detectors by simultaneously decreasing true positives and increasing false positives. We demonstrate the efficacy of our method on 5 different state-of-the-art object detectors on the MS COCO 2014 dataset.
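
The attack described above can be pictured as constrained gradient optimization: a small L_inf-bounded perturbation is optimized on a patch that lies entirely outside every target box, with gradients taken against the detector's RPN objectness scores. The sketch below illustrates this idea with torchvision's Faster R-CNN; the objective (uniformly suppressing objectness logits), the single fixed patch location, and the hyper-parameters are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: masked PGD against the RPN objectness of a torchvision
# Faster R-CNN. Simplified illustration of a background-patch attack,
# not the authors' exact loss or patch-placement strategy.
import torch
import torchvision

# weights="DEFAULT" requires torchvision >= 0.13; older versions use pretrained=True.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

# Capture the RPN head's objectness logits (one tensor per feature level)
# with a forward hook; at this point they are still attached to the graph.
captured = {}
def rpn_hook(module, inputs, outputs):
    objectness, _bbox_deltas = outputs
    captured["objectness"] = objectness
model.rpn.head.register_forward_hook(rpn_hook)

def attack_background_patch(image, patch_box, eps=8 / 255, steps=40, step_size=1 / 255):
    """PGD restricted to a background patch.

    image:     float tensor (3, H, W) in [0, 1]
    patch_box: (x1, y1, x2, y2) ints, a region assumed to lie outside all targets
    """
    x1, y1, x2, y2 = patch_box
    mask = torch.zeros_like(image)
    mask[:, y1:y2, x1:x2] = 1.0            # perturbation lives only here

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)
        model([adv])                        # hook fills captured["objectness"]
        # Simplified objective: push every objectness logit towards
        # "background" so that RPN proposals covering real objects vanish.
        loss = sum(o.sigmoid().sum() for o in captured["objectness"])
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)         # keep the patch imperceptible
        delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)
```

For example, attack_background_patch(img, (0, 0, 80, 80)) would optimize an 80x80 patch in the top-left corner, assuming that corner contains no target. The sketch only shows the masked-PGD mechanics of suppressing true positives; the paper's attack additionally raises objectness on background regions to create false positives.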

Related research

09/16/2018 · Robust Adversarial Perturbation on Deep Proposal-based Models
Adversarial noises are useful tools to probe the weakness of deep learni...

03/24/2020 · Adversarial Perturbations Fool Deepfake Detectors
This work uses adversarial perturbations to enhance deepfake images and ...

02/27/2023 · CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World
Patch-based physical attacks have increasingly aroused concerns. Howev...

11/30/2019 · Design and Interpretation of Universal Adversarial Patches in Face Detection
We consider universal adversarial patches for faces - small visual eleme...

06/05/2018 · AdvDetPatch: Attacking Object Detectors with Adversarial Patches
Object detectors have witnessed great progress in recent years and have ...

06/05/2018 · DPatch: Attacking Object Detectors with Adversarial Patches
Object detectors have witnessed great progress in recent years and have ...

08/01/2016 · Early Methods for Detecting Adversarial Images
Many machine learning classifiers are vulnerable to adversarial perturba...
