RayS: A Ray Searching Method for Hard-label Adversarial Attack

06/23/2020
by Jinghui Chen, et al.

Deep neural networks are vulnerable to adversarial attacks. Among the various attack settings, the most challenging yet most practical one is the hard-label setting, where the attacker only has access to the hard-label output (the predicted label) of the target model. Under the widely used L_∞ norm threat model, previous attempts are neither effective enough in terms of attack success rate nor efficient enough in terms of query complexity. In this paper, we present the Ray Searching attack (RayS), which greatly improves both the effectiveness and the efficiency of hard-label attacks. Unlike previous works, we reformulate the continuous problem of finding the closest decision boundary as a discrete problem that does not require any zeroth-order gradient estimation. In addition, all unnecessary searches are eliminated via a fast check step, which significantly reduces the number of queries our hard-label attack needs. Interestingly, we also found that the proposed RayS attack can serve as a sanity check for possibly "falsely robust" models. On several recently proposed defenses that claim state-of-the-art robust accuracy, our attack method demonstrates that current white-box/black-box attacks can still give a false sense of security: the robust-accuracy gap between the popular PGD attack and the RayS attack can be as large as 28%. We believe the proposed RayS attack can help identify falsely robust models that withstand most white-box/black-box attacks.
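To make the abstract's description concrete, here is a minimal sketch of the two ideas it highlights: a discrete search over sign directions (no gradient estimation) and a fast check that skips a full binary search whenever a candidate direction cannot improve on the current best L_∞ radius. This is a simplified illustration written for this summary, not the authors' reference implementation; `predict` is a hypothetical hard-label oracle, and the hierarchical block-flipping schedule is a plain stand-in for the paper's search strategy.

```python
import numpy as np

def rays_attack(predict, x, y, max_queries=500, tol=1e-3):
    """Simplified RayS-style sketch (illustrative, not the official code).

    predict: hard-label oracle, returns a class label for an input array.
    x, y:    clean input and its true label.
    Returns  (best L_inf radius, adversarial example, queries used).
    """
    dim = x.size
    sgn = np.ones(dim)        # current ray direction: a sign vector, so ||sgn||_inf = 1
    r_best = np.inf           # best L_inf radius found so far
    queries = 0

    def decision(r, v):
        # One hard-label query: is x + r*v misclassified?
        nonlocal queries
        queries += 1
        return predict(x + r * v) != y

    def fast_check(v):
        # Fast check: spend one query at the current best radius; only if
        # the label already flips there can v possibly improve r_best.
        return decision(r_best, v)

    def binary_search(v):
        # Tighten r_best along direction v; hi is always adversarial.
        nonlocal r_best
        lo, hi = 0.0, r_best
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if decision(mid, v):
                hi = mid
            else:
                lo = mid
        r_best = hi

    # Initialization: grow the radius along the all-ones direction until
    # the label flips, then tighten it with a binary search.
    r = 1.0
    while not decision(r, sgn) and queries < max_queries:
        r *= 2.0
    r_best = r
    binary_search(sgn)

    # Discrete search: flip contiguous sign blocks of decreasing size.
    block = dim
    while block >= 1 and queries < max_queries:
        for start in range(0, dim, block):
            if queries >= max_queries:
                break
            cand = sgn.copy()
            cand[start:start + block] *= -1
            if fast_check(cand):       # cheap rejection of useless directions
                sgn = cand
                binary_search(sgn)     # full search only when it can help
        block //= 2

    return r_best, x + r_best * sgn, queries
```

On a toy linear hard-label model such as `predict = lambda z: int(z.sum() > 0)`, the sketch recovers the closest L_∞ decision boundary while most candidate directions are rejected after a single fast-check query, which is the source of the query savings the abstract describes.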


