Revisiting the Trade-off between Accuracy and Robustness via Weight Distribution of Filters

06/06/2023
by Xingxing Wei, et al.

Adversarial attacks pose a serious threat to Deep Neural Networks (DNNs), and many methods have been proposed to defend against them. However, enhancing robustness usually causes clean accuracy to decline to some extent, implying a trade-off between accuracy and robustness. In this paper, we first empirically identify a clear distinction between standard and robust models in the weight distributions of filters within the same architecture, and then theoretically explain this phenomenon in terms of gradient regularization. This shows that the difference is an intrinsic property of DNNs, so a static network architecture can hardly improve accuracy and robustness at the same time. Second, based on this observation, we propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net), which handles clean and adversarial examples with a "divide and rule" weight strategy. AW-Net dynamically adjusts the network's weights according to regulation signals generated by an adversarial detector, which is driven directly by the input sample. Benefiting from this dynamic architecture, clean and adversarial examples are processed with different network weights, which provides the potential to enhance accuracy and robustness simultaneously. A series of experiments demonstrate that AW-Net is architecture-friendly for handling both clean and adversarial examples and achieves a better trade-off than state-of-the-art robust models.
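To make the "divide and rule" idea concrete, here is a minimal PyTorch-style sketch of a sample-wise weight-varied network. The names (AdversarialDetector, WeightVariedConv, AWNet), the sigmoid per-filter signal, and the single dynamic block are illustrative assumptions for exposition, not the paper's exact design; the point is only that a detector-driven regulation signal rescales each filter per input, so clean and adversarial samples flow through different effective weights.

```python
import torch
import torch.nn as nn


class AdversarialDetector(nn.Module):
    """Hypothetical detector: maps each input to a per-filter regulation
    signal in (0, 1). The paper's actual detector design may differ."""

    def __init__(self, num_filters: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, num_filters),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # shape: (batch, num_filters)


class WeightVariedConv(nn.Module):
    """Convolution whose effective filter weights vary per sample:
    scaling the k-th output channel by signal[:, k] is equivalent to
    scaling the k-th filter (and its bias) by the same factor."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, signal: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        # Broadcast the sample-wise signal over the spatial dimensions.
        return out * signal.view(-1, out.size(1), 1, 1)


class AWNet(nn.Module):
    """Toy weight-varied network: the detector's signal modulates the conv
    filters, so clean and adversarial inputs see different effective weights."""

    def __init__(self, num_classes: int = 10, width: int = 32):
        super().__init__()
        self.detector = AdversarialDetector(width)
        self.block = WeightVariedConv(3, width)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(width, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        signal = self.detector(x)              # sample-wise regulation signal
        feats = torch.relu(self.block(x, signal))
        return self.head(feats)


if __name__ == "__main__":
    model = AWNet()
    logits = model(torch.randn(4, 3, 32, 32))  # e.g. CIFAR-10-sized inputs
    print(logits.shape)                        # torch.Size([4, 10])
```

One appeal of this formulation is that the per-sample weight variation is realized as a cheap channel-wise rescaling of a shared kernel, rather than materializing a separate set of filters for every input.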


Related research

- Dual Head Adversarial Training (04/21/2021): Deep neural networks (DNNs) are known to be vulnerable to adversarial ex...
- Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? (02/24/2023): Given a robust model trained to be resilient to one or multiple types of...
- Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization (07/09/2022): Deep neural networks (DNNs) are known to be vulnerable to adversarial ex...
- AdvDrop: Adversarial Attack to DNNs by Dropping Information (08/20/2021): Humans can easily recognize visual objects with lost information: even lo...
- Understanding Misclassifications by Attributes (10/15/2019): In this paper, we aim to understand and explain the decisions of deep ne...
- Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff (08/31/2023): This paper addresses the tradeoff between standard accuracy on clean exa...
- Rethinking the Number of Shots in Robust Model-Agnostic Meta-Learning (11/28/2022): Robust Model-Agnostic Meta-Learning (MAML) is usually adopted to train a...
