ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers

10/27/2021
by Husheng Han, et al.

Adversarial patch attacks, which craft the pixels in a confined region of the input image, remain highly effective in physical environments even under noise or deformation. Existing certified defenses against adversarial patch attacks work well on small images such as MNIST and CIFAR-10, but achieve very poor certified accuracy on higher-resolution images such as ImageNet. It is therefore urgent to design defenses that are both robust and effective against this practical and harmful attack on industry-scale, larger images. In this work, we propose a certified defense methodology that achieves high provable robustness on high-resolution images and greatly improves the practicality of certified defenses for real-world adoption. The basic insight of our work is that an adversarial patch intends to leverage localized superficial important neurons (SIN) to manipulate the prediction results. Hence, we leverage SIN-based DNN compression techniques to significantly improve the certified accuracy, by reducing the adversarial-region searching overhead and filtering out prediction noise. Our experimental results show that the certified accuracy is increased from 36.3% (the state-of-the-art certified detection) to 60.4%, largely pushing certified defenses toward practical use.
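To illustrate the general idea of sparsifying a shallow ("superficial") layer so that only a small set of high-magnitude activations can influence the prediction, here is a minimal sketch. It is not the paper's algorithm: the layer placement, the activation-magnitude scoring rule, the top-k ratio, and the `TopKSparsify` module name are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact method): keep only the
# k largest-magnitude channels at each spatial position of a shallow
# feature map, zeroing the rest, so downstream layers see a sparse
# "superficial" representation.
import torch
import torch.nn as nn


class TopKSparsify(nn.Module):
    """Keep the k largest-magnitude channels at each spatial position."""

    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map from a shallow layer
        scores = x.abs()
        # k-th largest magnitude per (batch, h, w) position, kept as a threshold
        kth = scores.topk(self.k, dim=1).values[:, -1:, :, :]
        mask = (scores >= kth).to(x.dtype)
        return x * mask


# Usage: insert the sparsifier after an early convolution block; only the
# few surviving high-magnitude activations propagate further.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    TopKSparsify(k=8),  # keep 8 of 64 channels per position (assumed ratio)
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
    nn.ReLU(),
)

features = backbone(torch.randn(1, 3, 224, 224))
print((features != 0).float().mean())  # fraction of surviving activations
```

Restricting the prediction to such a sparse set of superficial activations is what, per the abstract, shrinks the space of adversarial regions that must be searched during certification and suppresses prediction noise.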


