Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers

by Xinwei Zhao, et al.

Recently, physical-domain adversarial attacks have drawn significant attention from the machine learning community. One important attack, proposed by Eykholt et al., can fool a classifier by placing black and white stickers on an object such as a road sign. While this attack may pose a significant threat to visual classifiers, there are currently no defenses designed to protect against it. In this paper, we propose new defenses that can protect against multi-sticker attacks. We present defensive strategies capable of operating when the defender has full, partial, or no prior information about the attack. Through extensive experiments, we show that our proposed defenses outperform existing defenses against physical attacks when presented with a multi-sticker attack.
