What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study

03/16/2022
by   Binxiao Huang, et al.

Although many fields have witnessed the superior performance brought about by deep learning, the robustness of neural networks remains an open issue: a small adversarial perturbation of the input can cause a model to produce a completely different output. Such fragility poses many potential hazards, especially in security-critical applications such as autonomous driving and mobile robotics. This work studies what information adversarially trained models focus on. Empirically, we observe that the differences between clean and adversarial data are concentrated in the low-frequency region. We then find that an adversarially trained model is more robust than its naturally trained counterpart because it pays more attention to the dominant information in the low-frequency components. In addition, we examine two common ways to improve model robustness, namely data augmentation and stronger network architectures, and interpret both techniques from a frequency-domain perspective. We hope this work sheds light on the design of more robust neural networks.
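The central empirical claim, that the gap between clean and adversarial inputs is concentrated in low frequencies, can be checked with a simple spectral measurement: take the 2D Fourier transform of the perturbation and measure how much of its energy falls inside a centered low-frequency band. The following is a minimal sketch of such a check, assuming grayscale images stored as NumPy arrays; the function name, the 0.25 band-radius cutoff, and the random stand-in data are illustrative assumptions rather than details from the paper.

import numpy as np

def low_frequency_energy_ratio(clean, adversarial, radius_frac=0.25):
    """Fraction of the perturbation's spectral energy inside a centered
    low-frequency disk whose radius is radius_frac * min(H, W).

    clean, adversarial: H x W grayscale images (assumed input format).
    The 0.25 radius is an illustrative choice, not taken from the paper.
    """
    delta = adversarial - clean                      # the perturbation itself
    spectrum = np.fft.fftshift(np.fft.fft2(delta))   # shift DC to the center
    energy = np.abs(spectrum) ** 2                   # per-frequency energy

    h, w = delta.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_band = dist <= radius_frac * min(h, w)       # centered low-freq disk

    return energy[low_band].sum() / energy.sum()

# Random data standing in for a real clean/adversarial pair:
rng = np.random.default_rng(0)
clean = rng.random((32, 32))
adv = clean + 0.03 * rng.standard_normal((32, 32))  # placeholder perturbation
print(f"low-frequency energy fraction: {low_frequency_energy_ratio(clean, adv):.3f}")

On real data one would substitute actual clean/adversarial pairs (e.g., produced by a PGD attack) and average the ratio over a test set; a ratio well above the disk's area fraction (about 0.2 for the settings above) would support the low-frequency claim, since a spectrally flat perturbation, like the random placeholder here, lands near that baseline.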

