ADC: Adversarial attacks against object Detection that evade Context consistency checks

10/24/2021
by Mingjun Yin et al.

Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples: slightly perturbed input images that cause DNNs to make wrong predictions. Various defense strategies have been proposed to protect against such examples. A very recent defense strategy for detecting adversarial examples, which has been shown to be robust to current attacks, is to check for intrinsic context consistencies in the input data, where context refers to various relationships (e.g., object-to-object co-occurrence relationships) in images. In this paper, we show that even context consistency checks can be brittle to properly crafted adversarial examples, and to the best of our knowledge, we are the first to do so. Specifically, we propose an adaptive framework to generate examples that subvert such defenses, namely, Adversarial attacks against object Detection that evade Context consistency checks (ADC). In ADC, we formulate a joint optimization problem with two simultaneous attack goals: (i) fooling the object detector and (ii) evading the context consistency check system. Experiments on both the PASCAL VOC and MS COCO datasets show that examples generated with ADC fool the object detector with a success rate of over 85% in most cases, and at the same time evade the recently proposed context consistency checks with a bypassing rate of over 80% in most cases. Our results suggest that how to robustly model context and check its consistency is still an open problem.
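To make the joint optimization concrete, the minimal PGD-style sketch below shows one way two such attack goals can be folded into a single objective. This is an illustration only, not the paper's implementation: adc_attack, detection_loss, context_loss, and the weight lam are hypothetical names, and both losses are assumed to be defined so that lower values mean the corresponding attack goal is being met.

import torch

def adc_attack(x, detection_loss, context_loss,
               eps=8 / 255, alpha=2 / 255, steps=40, lam=1.0):
    # Start from the clean image batch; the perturbation is kept within
    # an L-infinity ball of radius eps around it.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Joint objective from the abstract's high-level description:
        # (i) fool the detector and (ii) evade the consistency check.
        loss = detection_loss(x_adv) + lam * context_loss(x_adv)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend on the joint loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep pixel values valid
    return x_adv.detach()

In such a sketch, context_loss would presumably be derived from the defense's own consistency-checking model, so that gradients push the perturbed scene toward a context the checker deems plausible while the detector is still fooled.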

Related research

08/19/2021
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes
Vision systems that deploy Deep Neural Networks (DNNs) are known to be v...

10/11/2018
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Deep Neural Networks (DNNs) have been widely applied in various recognit...

09/13/2018
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
Deep neural networks (DNNs) are known vulnerable to adversarial attacks....

02/27/2023
GLOW: Global Layout Aware Attacks for Object Detection
Adversarial attacks aim to perturb images such that a predictor outputs...

11/13/2020
Transformer-Encoder Detector Module: Using Context to Improve Robustness to Adversarial Attacks on Object Detection
Deep neural network approaches have demonstrated high performance in obj...

07/24/2021
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
Making classifiers robust to adversarial examples is hard. Thus, many de...

08/23/2021
Multi-Expert Adversarial Attack Detection in Person Re-identification Using Context Inconsistency
The success of deep neural networks (DNNs) has promoted the widespread ap...
