Improving Adversarial Robustness via Channel-wise Activation Suppressing

03/11/2021
by Yang Bai, et al.

The study of adversarial examples and their activations has attracted significant attention for secure and robust learning with deep neural networks (DNNs). In contrast to existing works, in this paper we highlight two new characteristics of adversarial examples from the channel-wise activation perspective: 1) the activation magnitudes of adversarial examples are higher than those of natural examples; and 2) the channels are activated more uniformly by adversarial examples than by natural examples. We find that adversarial training, the state-of-the-art defense, addresses the first issue of high activation magnitudes by training on adversarial examples, while the second issue of uniform activation remains. This motivates us to suppress redundant channels from being activated by adversarial perturbations via a Channel-wise Activation Suppressing (CAS) strategy. We show that CAS can train a model that inherently suppresses adversarial activation, and that it can be easily applied to existing defense methods to further improve their robustness. Our work provides a simple but generic training strategy for robustifying the intermediate-layer activations of DNNs.

Related research

02/24/2022
Improving Robustness of Convolutional Neural Networks Using Element-Wise Activation Scaling
Recent works reveal that re-calibrating the intermediate activation of a...

03/05/2018
Stochastic Activation Pruning for Robust Adversarial Defense
Neural networks are known to be vulnerable to adversarial examples. Care...

01/28/2019
Adversarial Examples Target Topological Holes in Deep Networks
It is currently unclear why adversarial examples are easy to construct f...

01/22/2021
Adaptive Neighbourhoods for the Discovery of Adversarial Examples
Deep Neural Networks (DNNs) have often supplied state-of-the-art results...

06/20/2022
Understanding Robust Learning through the Lens of Representation Similarities
Representation learning, i.e. the generation of representations useful f...

11/25/2021
Clustering Effect of (Linearized) Adversarial Robust Models
Adversarial robustness has received increasing attention along with the ...

02/09/2019
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
Discovering and exploiting the causality in deep neural networks (DNNs) ...
