Zero-Query Transfer Attacks on Context-Aware Object Detectors

03/29/2022
by Zikui Cai, et al.

Adversarial attacks perturb images such that a deep neural network produces incorrect classification results. A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check, wherein, if the detected objects are not consistent with an appropriately defined context, then an attack is suspected. Stronger attacks are needed to fool such context-aware detectors. We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check of black-box object detectors operating on complex, natural scenes. Unlike many black-box attacks that perform repeated attempts and open themselves to detection, we assume a "zero-query" setting, where the attacker has no knowledge of the classification decisions of the victim system. First, we derive multiple attack plans that assign incorrect labels to victim objects in a context-consistent manner. Then we design and use a novel data structure that we call the perturbation success probability matrix, which enables us to filter the attack plans and choose the one most likely to succeed. This final attack plan is implemented using a perturbation-bounded adversarial attack algorithm. We compare our zero-query attack against a few-query scheme that repeatedly checks if the victim system is fooled. We also compare against state-of-the-art context-agnostic attacks. Against a context-aware defense, the fooling rate of our zero-query approach is significantly higher than that of context-agnostic approaches and higher than that achievable with up to three rounds of the few-query scheme.
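To make the plan-selection step concrete, the following is a minimal sketch of how a perturbation success probability matrix (PSPM) could be used to score and pick among context-consistent attack plans. It assumes the matrix has already been estimated offline on surrogate detectors; the function names, the product scoring rule, and the toy numbers are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical PSPM: entry pspm[i, j] is an offline estimate of the probability
# that an object of true class i can be perturbed into being detected as class j.

def score_attack_plan(pspm: np.ndarray, plan: list[tuple[int, int]]) -> float:
    """Score a context-consistent attack plan.

    plan: list of (true_class, target_class) pairs, one per victim object.
    Returns the product of per-object success probabilities, i.e. the
    estimated probability that every label flip in the plan succeeds.
    """
    prob = 1.0
    for true_cls, target_cls in plan:
        prob *= pspm[true_cls, target_cls]
    return prob

def select_best_plan(pspm: np.ndarray,
                     plans: list[list[tuple[int, int]]]) -> list[tuple[int, int]]:
    """Pick the context-consistent plan most likely to succeed."""
    return max(plans, key=lambda p: score_attack_plan(pspm, p))

# Toy usage: 3 classes, two candidate plans for a scene with two victim objects.
pspm = np.array([[0.0, 0.6, 0.2],
                 [0.5, 0.0, 0.7],
                 [0.1, 0.4, 0.0]])
plans = [[(0, 1), (1, 2)],   # relabel object A from 0 to 1 and object B from 1 to 2
         [(0, 2), (1, 0)]]
best = select_best_plan(pspm, plans)
print(best)  # -> [(0, 1), (1, 2)], score 0.6 * 0.7 = 0.42
```

The product rule treats per-object perturbation successes as independent, which keeps plan filtering cheap and requires no queries to the victim.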
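The selected plan is then realized with a perturbation-bounded attack crafted on a white-box surrogate and transferred to the black-box victim without any queries. Below is a generic PGD-style sketch under an L-infinity budget; `surrogate_loss`, the budget, step size, and iteration count are assumptions for illustration and stand in for whatever detector loss encodes the plan's target labels.

```python
import torch

def pgd_transfer_attack(image: torch.Tensor,
                        surrogate_loss,          # callable: image -> scalar loss on the surrogate
                        eps: float = 8 / 255,    # L_inf perturbation budget (assumed)
                        alpha: float = 2 / 255,  # step size (assumed)
                        steps: int = 10) -> torch.Tensor:
    """Craft a bounded perturbation on the surrogate; the result is fed to the victim unchanged."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = surrogate_loss(x_adv)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Descend the surrogate loss so the surrogate assigns the plan's target labels.
            x_adv = x_adv - alpha * grad.sign()
            # Project back into the L_inf ball around the clean image and the valid pixel range.
            x_adv = image + (x_adv - image).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```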


Related research

GLOW: Global Layout Aware Attacks for Object Detection (02/27/2023)
Context-Aware Transfer Attacks for Object Detection (12/06/2021)
A Strong Baseline for Query Efficient Attacks in a Black Box Setting (09/10/2021)
Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian Optimization (06/17/2022)
GAMA: Generative Adversarial Multi-Object Scene Attacks (09/20/2022)
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack (10/01/2019)
