Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks

06/24/2020
by Huiying Li, et al.

The vulnerability of deep neural networks (DNNs) to adversarial examples is well documented. Under the strong white-box threat model, where attackers have full access to DNN internals, recent work has produced continual advancements in defenses, often followed by more powerful attacks that break them. Meanwhile, research on the more realistic black-box threat model has focused almost entirely on reducing the query-cost of attacks, making them increasingly practical for ML models already deployed today. This paper proposes and evaluates Blacklight, a new defense against black-box adversarial attacks. Blacklight targets a key property of black-box attacks: to compute adversarial examples, they produce sequences of highly similar images while trying to minimize the distance from some initial benign input. To detect an attack, Blacklight computes for each query image a compact set of one-way hash values that form a probabilistic fingerprint. Variants of an image produce nearly identical fingerprints, and fingerprint generation is robust against manipulation. We evaluate Blacklight on 5 state-of-the-art black-box attacks, across a variety of models and classification tasks. While the most efficient attacks take thousands or tens of thousands of queries to complete, Blacklight identifies them all, often after only a handful of queries. Blacklight is also robust against several powerful countermeasures, including an optimal black-box attack that approximates white-box attacks in efficiency. Finally, Blacklight significantly outperforms the only known alternative in both detection coverage of attack queries and resistance against persistent attackers.
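The fingerprinting idea in the abstract can be sketched in a few lines. The sketch below is an illustrative approximation, not the paper's exact implementation: the function names and the parameters `q` (quantization step), `window` (sliding-window length), `top_k` (fingerprint size), and `threshold` (overlap cutoff) are all hypothetical placeholders. The key point it demonstrates is that quantizing pixels before applying a one-way hash makes small perturbations of the same image map to largely identical hash sets, so the near-duplicate queries of a black-box attack share most of their fingerprint while unrelated benign images share almost none.

```python
import hashlib

import numpy as np


def fingerprint(image, q=50, window=20, top_k=50):
    """Compact probabilistic fingerprint of an image (illustrative sketch)."""
    # Quantize pixel values so small adversarial perturbations fall into
    # the same bucket and produce the same byte sequence.
    flat = (np.asarray(image, dtype=np.int64) // q).astype(np.uint8).tobytes()
    # Apply a one-way hash to every sliding window of the quantized bytes.
    hashes = {hashlib.sha256(flat[i:i + window]).hexdigest()
              for i in range(len(flat) - window + 1)}
    # Keep only the lexicographically smallest hashes: a compact summary
    # that near-identical images mostly share.
    return frozenset(sorted(hashes)[:top_k])


def is_attack_query(fp, seen_fingerprints, threshold=25):
    """Flag a query whose fingerprint overlaps heavily with any prior query."""
    return any(len(fp & prior) >= threshold for prior in seen_fingerprints)
```

In this sketch a defender would compute a fingerprint per incoming query and compare it against fingerprints of recent queries; a large overlap flags the sequence of highly similar probes that query-based black-box attacks must generate.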


Related research

- Improving Query Efficiency of Black-box Adversarial Attack (09/24/2020)
- Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models (05/21/2022)
- PredCoin: Defense against Query-based Hard-label Attack (02/04/2021)
- FBI: Fingerprinting models with Benign Inputs (08/05/2022)
- Scratch that! An Evolution-based Adversarial Attack against Neural Networks (12/05/2019)
- How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective (03/27/2022)
- SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks (08/23/2023)
