Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

07/12/2018
by Minhao Cheng, et al.

We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can query the model and observe its hard-label decisions. This is a very challenging problem, since a direct extension of state-of-the-art white-box attacks (e.g., C&W or PGD) to the hard-label black-box setting would require minimizing a non-continuous step function, which is combinatorial and cannot be solved by a gradient-based optimizer. The only prior approach is based on a random walk on the decision boundary, which requires a large number of queries and lacks convergence guarantees. We propose a novel way to formulate the hard-label black-box attack as a real-valued optimization problem that is usually continuous and can be solved by any zeroth-order optimization algorithm. For example, using the Randomized Gradient-Free method, we can bound the number of iterations needed for our algorithm to reach a stationary point. We demonstrate that the proposed method outperforms the previous random-walk approach in attacking convolutional neural networks on the MNIST, CIFAR, and ImageNet datasets. More interestingly, we show that the proposed algorithm can also be used to attack other discrete and non-continuous machine learning models, such as Gradient Boosting Decision Trees (GBDT).
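The core idea above is to replace the non-continuous hard-label objective with a real-valued one (e.g., the distance to the decision boundary along a search direction) and minimize it with a zeroth-order method such as Randomized Gradient-Free (RGF) optimization. The sketch below illustrates only the RGF estimator itself on a smooth toy objective standing in for that boundary-distance function; the function name `rgf_step`, the toy objective, and the step sizes are illustrative choices, not the paper's exact attack implementation.

```python
import numpy as np

def rgf_step(f, theta, beta=0.01, eta=0.1, rng=None):
    """One Randomized Gradient-Free (RGF) update: estimate the gradient of a
    black-box function f at theta using two queries along a random direction,
    then take a descent step. No analytic gradient of f is ever used."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(theta.shape)
    u /= np.linalg.norm(u)  # random unit direction
    # Finite-difference directional derivative times the direction vector.
    g_hat = (f(theta + beta * u) - f(theta)) / beta * u
    return theta - eta * g_hat

# Toy smooth objective standing in for the boundary-distance function g(theta).
f = lambda x: np.sum(x ** 2)

theta = np.array([3.0, -2.0])
rng = np.random.default_rng(0)
for _ in range(500):
    theta = rgf_step(f, theta, rng=rng)
```

Each step costs two black-box queries; in the attack setting each query of the real objective would itself require a short binary search over model queries to measure the distance to the decision boundary, which is why query efficiency of the outer optimizer matters.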

Related research:

06/23/2020 · RayS: A Ray Searching Method for Hard-label Adversarial Attack
Deep neural networks are vulnerable to adversarial attacks. Among differ...

09/24/2019 · Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
We study the most practical problem setup for evaluating adversarial rob...

10/01/2022 · DeltaBound Attack: Efficient decision-based attack in low queries regime
Deep neural networks and other machine learning systems, despite being e...

04/11/2019 · Black-Box Decision based Adversarial Attack with Symmetric α-stable Distribution
Developing techniques for adversarial attack and defense is an important...

11/15/2021 · Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks
One major problem in black-box adversarial attacks is the high query com...

02/20/2018 · Using Automatic Generation of Relaxation Constraints to Improve the Preimage Attack on 39-step MD4
In this paper we construct preimage attack on the truncated variant of t...

02/17/2020 · Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification
We present Survival-OPT, a physical adversarial example algorithm in the...
