Clustering Effect of (Linearized) Adversarial Robust Models

11/25/2021
by Yang Bai, et al.

Adversarial robustness has received increasing attention along with the study of adversarial examples. Existing work shows that robust models not only withstand a variety of adversarial attacks but also boost performance in some downstream tasks. However, the underlying mechanism of adversarial robustness remains unclear. In this paper, we interpret adversarial robustness from the perspective of linear components and find that robust models exhibit distinctive statistical properties. Specifically, robust models show an obvious hierarchical clustering effect on their linearized sub-networks, obtained by removing or replacing all non-linear components (e.g., batch normalization, max pooling, or activation layers). Based on these observations, we propose a novel understanding of adversarial robustness and apply it to further tasks, including domain adaptation and robustness boosting. Experimental evaluations demonstrate the rationality and superiority of our proposed clustering strategy.
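As a rough illustration of the linearize-then-cluster idea described in the abstract, the sketch below (assuming PyTorch and SciPy; the toy model, the choice of replacement modules, and the per-layer weight statistics are placeholder assumptions, not the paper's actual procedure) replaces non-linear components such as ReLU, batch normalization, and max pooling with identity or linear stand-ins, then runs hierarchical clustering over simple features of the remaining linear layers.

```python
# Minimal sketch, not the paper's exact method: linearize a network by swapping
# its non-linear modules for identity/linear stand-ins, then hierarchically
# cluster the resulting linear layers. Model and features are illustrative only.
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import linkage, dendrogram

def linearize(model: nn.Module) -> nn.Module:
    """Replace common non-linear components with linear (or identity) ones."""
    for name, child in model.named_children():
        if isinstance(child, (nn.ReLU, nn.BatchNorm1d, nn.BatchNorm2d)):
            setattr(model, name, nn.Identity())
        elif isinstance(child, nn.MaxPool2d):
            # Average pooling is a linear operation, so the sub-network stays linear.
            setattr(model, name, nn.AvgPool2d(child.kernel_size, child.stride,
                                              child.padding))
        else:
            linearize(child)  # recurse into nested modules
    return model

def layer_features(model: nn.Module) -> torch.Tensor:
    """One summary vector per conv/linear layer (here: crude weight statistics)."""
    feats = []
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            w = m.weight.detach().flatten()
            feats.append(torch.stack([w.mean(), w.std(), w.abs().max()]))
    return torch.stack(feats)

if __name__ == "__main__":
    # Toy CNN standing in for a (robust) classifier on 3x32x32 inputs.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(32 * 16 * 16, 10),
    )
    lin = linearize(model)
    Z = linkage(layer_features(lin).numpy(), method="ward")
    print(dendrogram(Z, no_plot=True)["ivl"])  # cluster ordering of the layers
```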


