Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection

06/05/2020
by   Omid Mohamad Nezami, et al.

Many recent studies have shown that deep neural models are vulnerable to adversarial samples: for example, images with imperceptible perturbations can fool image classifiers. In this paper, we generate adversarial examples for object detection, which entails detecting bounding boxes around multiple objects in an image and classifying them at the same time, making it a harder task than attacking image classification. We specifically aim to attack the widely used Faster R-CNN by changing the predicted label for a particular object in an image: where prior work has targeted one specific object (a stop sign), we generalise to arbitrary objects, the key challenge being the need to change the labels of all bounding boxes for all instances of that object type. To do so, we propose a novel method, named Pick-Object-Attack, which adds perturbations only within the bounding boxes of the targeted object, preserving the labels of the other objects detected in the image. In terms of perceptibility, the perturbations induced by the method are very small. Furthermore, for the first time, we examine the effect of adversarial attacks on object detection in terms of a downstream task, image captioning; we show that where a method that can modify all object types leads to very obvious changes in captions, the changes from our constrained attack are much less apparent.
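The core constraint described above, adding perturbations only inside the bounding boxes of the targeted object type, can be sketched as a masked signed-gradient update. The snippet below is an illustrative reconstruction, not the paper's implementation: the function name `masked_perturbation` and its parameters are hypothetical, and in the real attack `grad` would be obtained by backpropagating a targeted attack loss through Faster R-CNN.

```python
import numpy as np

def masked_perturbation(image, grad, boxes, eps=2.0, step=0.5):
    """One signed-gradient attack step restricted to the target boxes.

    Hypothetical sketch: `grad` stands in for the gradient of an attack
    loss w.r.t. the image; `boxes` are (x1, y1, x2, y2) boxes for all
    instances of the targeted object type.
    """
    # Build a binary mask covering the union of the target boxes.
    mask = np.zeros(image.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True

    # Signed-gradient step, zeroed outside the target boxes so that
    # other detected objects are left untouched.
    delta = step * np.sign(grad)
    delta[~mask] = 0.0

    # Keep the perturbation small (within eps of the original image)
    # and the result a valid image.
    perturbed = np.clip(image + delta, image - eps, image + eps)
    return np.clip(perturbed, 0, 255)
```

Restricting the update to the box mask is what keeps the attack type-specific: pixels outside the targeted object's boxes are never modified, so the detector's predictions for other objects are far less likely to change.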


Related research:

- Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection (01/22/2022)
- Membership Inference Attacks Against Object Detection Models (01/12/2020)
- Robust Physical Adversarial Attack on Faster R-CNN Object Detector (04/16/2018)
- Object-fabrication Targeted Attack for Object Detection (12/13/2022)
- Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples (02/06/2019)
- Deep Interactive Region Segmentation and Captioning (07/26/2017)
- Pix2seq: A Language Modeling Framework for Object Detection (09/22/2021)
