Minority Reports Defense: Defending Against Adversarial Patches

04/28/2020
by Michael McCoyd, et al.

Deep learning image classifiers are vulnerable to adversarial attacks, even when the attacker changes only a small patch of the image. We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense provides certified security against patch attacks up to a certain size.
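The core idea of occluding each candidate patch location and comparing the resulting predictions can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `classify` callable, the gray occlusion value, and the window/stride sizes are all assumptions chosen for clarity.

```python
import numpy as np


def occlusion_predictions(image, classify, occ=7, stride=2):
    """Classify partially occluded copies of `image`.

    A gray occlusion window, chosen larger than the attacker's patch,
    is slid over the image so that at least one occlusion fully covers
    any patch wherever it was placed. `classify` is a hypothetical
    function mapping an HxWxC array to a class label.
    """
    h, w = image.shape[:2]
    votes = {}
    for y in range(0, h - occ + 1, stride):
        for x in range(0, w - occ + 1, stride):
            occluded = image.copy()
            occluded[y:y + occ, x:x + occ] = 0.5  # gray out this window
            label = classify(occluded)
            votes[label] = votes.get(label, 0) + 1
    return votes


def vote(votes):
    # Accept only when every occluded copy agrees; otherwise abstain,
    # since a dissenting "minority report" may signal a hidden patch.
    return next(iter(votes)) if len(votes) == 1 else None
```

In this sketch, unanimity across all occluded copies yields a prediction, while any disagreement triggers an abstention; the paper's certified guarantee comes from analyzing which occlusions can be influenced by a patch of bounded size.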

