Feature Separation and Recalibration for Adversarial Robustness

03/24/2023
by Woo Jae Kim, et al.

Deep neural networks are susceptible to adversarial attacks due to the accumulation of perturbations at the feature level, and numerous works have boosted model robustness by deactivating the non-robust feature activations that cause model mispredictions. However, we claim that these malicious activations still contain discriminative cues and that, with recalibration, they can capture additional useful information for correct model predictions. To this end, we propose a novel, easy-to-plug-in approach named Feature Separation and Recalibration (FSR) that recalibrates the malicious, non-robust activations to produce more robust feature maps. The Separation part disentangles the input feature map into the robust feature, with activations that help the model make correct predictions, and the non-robust feature, with activations that are responsible for model mispredictions under adversarial attack. The Recalibration part then adjusts the non-robust activations to restore potentially useful cues for model predictions. Extensive experiments verify the superiority of FSR over traditional deactivation techniques and demonstrate that it improves the robustness of existing adversarial training methods by up to 8.57% with small computational overhead. Codes are available at https://github.com/wkim97/FSR.
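To make the separate-then-recalibrate idea concrete, here is a minimal toy sketch in NumPy on a flat feature vector. The mask and recalibration parameterizations (`w_mask`, `w_recal`, the sigmoid gate, the tanh recalibration) are illustrative assumptions, not the paper's actual learned modules; the point is only the structure: split activations into robust and non-robust parts by a soft mask, then transform the non-robust part instead of zeroing it out.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_separation_recalibration(f, w_mask, w_recal):
    """Toy FSR-style pass over a feature vector f.

    w_mask and w_recal stand in for learned parameters
    (hypothetical; the paper learns these modules end-to-end).
    """
    m = sigmoid(w_mask * f)                   # soft robustness mask in (0, 1)
    f_robust = m * f                          # activations kept as robust
    f_nonrobust = (1.0 - m) * f               # activations flagged as non-robust
    f_recal = np.tanh(w_recal * f_nonrobust)  # recalibrate rather than deactivate
    return f_robust + f_recal                 # recombined, more robust feature

# A deactivation-style baseline would simply return f_robust,
# discarding whatever cues remain in f_nonrobust.
```

Compared with a pure deactivation baseline (returning only `f_robust`), the recalibrated output retains a transformed version of the non-robust activations, which is the information FSR argues should not be thrown away.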

research
02/10/2021

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection

We investigate the adversarial robustness of CNNs from the perspective o...
research
12/01/2021

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness

In response to the threat of adversarial examples, adversarial training ...
research
08/23/2021

SegMix: Co-occurrence Driven Mixup for Semantic Segmentation and Adversarial Robustness

In this paper, we present a strategy for training convolutional neural n...
research
08/13/2020

Feature Binding with Category-Dependant MixUp for Semantic Segmentation and Adversarial Robustness

In this paper, we present a strategy for training convolutional neural n...
research
10/23/2019

A Useful Taxonomy for Adversarial Robustness of Neural Networks

Adversarial attacks and defenses are currently active areas of research ...
research
03/10/2022

Exploiting the Potential of Datasets: A Data-Centric Approach for Model Robustness

Robustness of deep neural networks (DNNs) to malicious perturbations is ...
research
07/28/2020

Reachable Sets of Classifiers and Regression Models: (Non-)Robustness Analysis and Robust Training

Neural networks achieve outstanding accuracy in classification and regre...
