Defensive Patches for Robust Recognition in the Physical World

04/13/2022
by Jiakai Wang et al.

To operate in real-world high-stakes environments, deep learning systems must withstand noise that continually threatens their robustness. Data-end defense, which improves robustness by operating on input data rather than modifying models, has attracted intensive attention due to its practical feasibility. However, previous data-end defenses show low generalization against diverse noises and weak transferability across multiple models. Motivated by the fact that robust recognition depends on both local and global features, we propose a defensive patch generation framework that addresses these problems by helping models better exploit both kinds of features. For generalization against diverse noises, we inject class-specific identifiable patterns into a confined local patch prior, so that defensive patches preserve more recognizable features of specific classes and guide models to better recognition under noise. For transferability across multiple models, we guide the defensive patches to capture more global feature correlations within a class, so that they activate model-shared global perceptions and transfer better among models. Our defensive patches show great potential to improve application robustness in practice: they can simply be stuck around target objects. Extensive experiments show that we outperform others by large margins (improving accuracy by 20+% on average for both adversarial and corruption robustness in the digital and physical world). Our code is available at https://github.com/nlsde-safety-team/DefensivePatch
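The abstract does not spell out the optimization, but the core idea of injecting class-specific identifiable patterns into a confined patch can be illustrated as gradient ascent on the true-class probability over noise-augmented inputs, updating only the patch region. The following is a minimal sketch on a toy linear-softmax classifier; the model, the `train_patch` helper, and all hyperparameters here are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a recognition model: linear logits + softmax, 3 classes.
D, C = 64, 3
W = rng.normal(scale=0.1, size=(C, D))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def class_prob(x, c):
    return softmax(W @ x)[c]

def train_patch(x, c, patch_idx, steps=200, lr=0.5, noise=0.1):
    """Hypothetical defensive-patch loop: ascend log p(class c) on
    noise-augmented inputs, modifying only the confined patch slice."""
    x = x.copy()
    for _ in range(steps):
        x_noisy = x + rng.normal(scale=noise, size=D)  # simulate corruptions
        p = softmax(W @ x_noisy)
        grad = W.T @ (np.eye(C)[c] - p)                # d log p_c / d x
        x[patch_idx] = np.clip(x[patch_idx] + lr * grad[patch_idx], 0.0, 1.0)
    return x

x0 = rng.uniform(size=D)       # a clean input of the target class
target = 1
patch = slice(0, 16)           # confined local region, as in the paper's setup
x_def = train_patch(x0, target, patch)
```

Because the patch is trained under random noise rather than for a single clean input, the resulting pattern should keep the target class recognizable across many corruptions, which is the generalization property the abstract emphasizes.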


