Minority Reports Defense: Defending Against Adversarial Patches

04/28/2020
by Michael McCoyd, et al.

Deep learning image classification is vulnerable to adversarial attack, even if the attacker changes just a small patch of the image. We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense provides certified security against patch attacks of a certain size.
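The occlusion idea in the abstract can be sketched in a few lines: slide an occlusion window over the image so that any patch of the stated size is fully covered by at least one occlusion, classify every occluded copy, and only accept the prediction when the views (nearly) agree. The code below is a minimal illustrative sketch of that voting scheme, not the authors' implementation; the function names, the zero-fill occlusion, the stride, and the `max_dissent` abstention threshold are all assumptions for illustration.

```python
from collections import Counter

def occluded_copies(image, occ_size, stride):
    """Yield copies of `image` (a square list of rows) with one
    occ_size x occ_size region zeroed out, sliding by `stride`.
    With a small enough stride, some occlusion fully hides any
    patch smaller than the occlusion window."""
    n = len(image)
    for y in range(0, n - occ_size + 1, stride):
        for x in range(0, n - occ_size + 1, stride):
            out = [row[:] for row in image]
            for i in range(y, y + occ_size):
                for j in range(x, x + occ_size):
                    out[i][j] = 0  # zero-fill occlusion (one possible choice)
            yield out

def vote_predict(image, classify, occ_size, stride, max_dissent):
    """Classify every occluded view; return the majority label if at
    most `max_dissent` views disagree with it, else abstain (None)."""
    votes = Counter(classify(v) for v in occluded_copies(image, occ_size, stride))
    label, count = votes.most_common(1)[0]
    return label if sum(votes.values()) - count <= max_dissent else None
```

With `max_dissent=0` the prediction is only accepted when every occluded view agrees, which is the kind of unanimity that underlies a certified guarantee: an adversarial patch hidden by one occlusion cannot flip that view's vote, so it can at most force an abstention rather than a misclassification.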


