Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers

01/26/2021
by Xinwei Zhao, et al.

Recently, physical domain adversarial attacks have drawn significant attention from the machine learning community. One important attack proposed by Eykholt et al. can fool a classifier by placing black and white stickers on an object such as a road sign. While this attack may pose a significant threat to visual classifiers, there are currently no defenses designed to protect against this attack. In this paper, we propose new defenses that can protect against multi-sticker attacks. We present defensive strategies capable of operating when the defender has full, partial, and no prior information about the attack. By conducting extensive experiments, we show that our proposed defenses can outperform existing defenses against physical attacks when presented with a multi-sticker attack.
