Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification

02/17/2020
by Ryan Feng, et al.

We present Survival-OPT, a physical adversarial example algorithm for the black-box hard-label setting, in which the attacker only has access to the model's predicted class label. This limited-access assumption is more relevant to settings such as proprietary cyber-physical and cloud systems than the white-box setting assumed by prior work. By leveraging the properties of physical attacks, we create a novel approach based on the survivability of perturbations under physical transformations. By simply querying the model for hard-label predictions, we optimize perturbations to survive across many different physical conditions and show that adversarial examples remain a security risk to cyber-physical systems (CPSs) even in the hard-label threat model. We show that Survival-OPT is query-efficient and robust: using fewer than 200K queries, we successfully attack a stop sign so that it is misclassified as a speed limit 30 km/hr sign in 98.5% of video frames in a drive-by setting. Survival-OPT also outperforms our baseline combination of existing hard-label and physical approaches, which required over 10x more queries for less robust results.
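The abstract conveys the core idea: score a candidate perturbation by how often it "survives" (keeps the target label) across sampled physical transformations, using only hard-label queries, then optimize that score. The sketch below illustrates this idea under stated assumptions; the helper names (query_label, sample_transform), the random-search optimizer, and the toy model are all hypothetical placeholders for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch of the survivability idea from the abstract.
# NOT the authors' implementation: query_label, sample_transform, the
# random-search optimizer, and the toy demo below are all assumptions.
import numpy as np


def survivability(query_label, image, delta, target, sample_transform,
                  n_samples=50, rng=None):
    """Estimate the fraction of sampled physical transformations under
    which the perturbed image is classified as the target label, using
    only hard-label queries to the model."""
    rng = rng or np.random.default_rng()
    hits = 0
    for _ in range(n_samples):
        t = sample_transform(rng)              # e.g. viewpoint/lighting change
        x = np.clip(t(image + delta), 0.0, 1.0)
        if query_label(x) == target:           # one hard-label query
            hits += 1
    return hits / n_samples


def random_search_attack(query_label, image, target, sample_transform,
                         steps=200, eps=16 / 255, rng=None):
    """Zeroth-order random search that keeps a candidate perturbation
    whenever it raises the estimated survivability; a simple stand-in
    for the paper's actual optimization procedure."""
    rng = rng or np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=image.shape)
    best = survivability(query_label, image, delta, target,
                         sample_transform, rng=rng)
    for _ in range(steps):
        cand = np.clip(delta + rng.normal(0.0, eps / 8, size=image.shape),
                       -eps, eps)
        score = survivability(query_label, image, cand, target,
                              sample_transform, rng=rng)
        if score > best:
            delta, best = cand, score
    return delta, best


if __name__ == "__main__":
    # Toy stand-in for a deployed classifier: label 1 if the image is bright.
    query_label = lambda x: int(x.mean() > 0.5)
    # Toy "physical" transformation: random brightness jitter per sample.
    sample_transform = lambda rng: (lambda x, b=rng.uniform(-0.1, 0.1): x + b)
    image = np.full((8, 8), 0.45)              # initially classified as 0
    delta, score = random_search_attack(query_label, image, target=1,
                                        sample_transform=sample_transform)
    print(f"estimated survivability of found perturbation: {score:.2f}")
```

In a sketch like this, the query budget is dominated by the repeated survivability estimates (n_samples queries per candidate), which is exactly why query efficiency matters in the hard-label setting the paper targets.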


Related research

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack (09/24/2019)
We study the most practical problem setup for evaluating adversarial rob...

Physical Adversarial Examples for Object Detectors (07/20/2018)
Deep neural networks (DNNs) are vulnerable to adversarial examples-malic...

RayS: A Ray Searching Method for Hard-label Adversarial Attack (06/23/2020)
Deep neural networks are vulnerable to adversarial attacks. Among differ...

DeltaBound Attack: Efficient decision-based attack in low queries regime (10/01/2022)
Deep neural networks and other machine learning systems, despite being e...

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach (07/12/2018)
We study the problem of attacking a machine learning model in the hard-l...

Learning-based Hybrid Local Search for the Hard-label Textual Attack (01/20/2022)
Deep neural networks are vulnerable to adversarial examples in Natural L...

Towards Data-Free Model Stealing in a Hard Label Setting (04/23/2022)
Machine learning models deployed as a service (MLaaS) are susceptible to...
