Gradient Concealment: Free Lunch for Defending Adversarial Attacks

05/21/2022
by Sen Pei, et al.

Deep neural networks (DNNs) have achieved great success in various tasks. However, even state-of-the-art deep-learning-based classifiers are extremely vulnerable to adversarial examples, suffering sharp drops in classification accuracy when faced with a wide range of unknown attacks. Since neural networks are widely deployed in open-world, often safety-critical scenarios, mitigating the adversarial vulnerability of deep learning methods has become an urgent need. Conventional DNNs can be attacked with a dramatically high success rate because their gradients are fully exposed in the white-box setting, making it effortless to ruin a well-trained classifier with only imperceptible perturbations in the raw input space. To tackle this problem, we propose a training-free, plug-and-play layer, termed the Gradient Concealment Module (GCM), which conceals the vulnerable gradient directions while guaranteeing classification accuracy at inference time. GCM reports superior defense results on the ImageNet classification benchmark, improving top-1 attack robustness (AR) by up to 63.41% over vanilla DNNs when faced with adversarial inputs. Moreover, we used GCM in the CVPR 2022 Robust Classification Challenge, currently ranking 2nd in Phase II with only a tiny version of ConvNext. The code will be made available.
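The abstract does not detail GCM's formulation, but the behavior it describes (unchanged predictions with concealed gradients) can be sketched as a custom autograd layer. The PyTorch code below is a minimal illustrative sketch, not the authors' implementation: the names GradientConcealment and _ConcealGrad, the hyperparameter alpha, and the random-decoy-gradient scheme are all assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn


class _ConcealGrad(torch.autograd.Function):
    """Identity in the forward pass; returns a misleading gradient in the
    backward pass so a white-box attacker differentiating through the
    module cannot recover the true vulnerable direction.
    (Illustrative sketch, not the paper's exact formulation.)"""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        # Predictions pass through untouched, so clean accuracy is preserved.
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Replace the true input gradient with a large random decoy direction.
        concealed = ctx.alpha * torch.randn_like(grad_output)
        return concealed, None  # no gradient for the alpha argument


class GradientConcealment(nn.Module):
    """Plug-and-play, training-free layer: insert in front of a trained
    classifier at inference time, no parameters to learn."""

    def __init__(self, alpha: float = 10.0):
        super().__init__()
        self.alpha = alpha  # scale of the decoy gradient (assumed hyperparameter)

    def forward(self, x):
        return _ConcealGrad.apply(x, self.alpha)


# Hypothetical usage: wrap any pretrained classifier without retraining.
# defended = nn.Sequential(GradientConcealment(alpha=10.0), pretrained_model)
```

In this sketch the forward pass is exactly the identity, so inference is unaffected, while gradient-based white-box attacks receive a random direction instead of the true loss gradient; the actual GCM may conceal gradients by a different mechanism.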

