Towards Building More Robust Models with Frequency Bias

07/19/2023
by Qingwen Bu et al.

The vulnerability of deep neural networks to adversarial examples has been a major impediment to their broad application, despite their success across many fields. Recently, several works have suggested that adversarially trained models emphasize low-frequency information to achieve higher robustness. While attempts have been made to leverage this frequency characteristic, applying low-pass filters directly to input images causes an irreversible loss of discriminative information and generalizes poorly to datasets with different frequency features. This paper presents a plug-and-play module, the Frequency Preference Control Module, that adaptively reconfigures the low- and high-frequency components of intermediate feature representations, making better use of frequency information in robust learning. Empirical studies show that the proposed module can be easily incorporated into any adversarial training framework, further improving model robustness across different architectures and datasets. Additional experiments examine how the frequency bias of robust models affects the adversarial training process and its final robustness, revealing interesting insights.
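The core idea of re-weighting the low- and high-frequency components of a feature map can be illustrated with a short sketch. This is not the paper's actual module: the function name, the radial cutoff, and the fixed scalar weights `w_low`/`w_high` are hypothetical simplifications (the paper's module learns this reconfiguration adaptively); the sketch only shows the FFT-domain split-and-reweight mechanism.

```python
import numpy as np

def frequency_preference_control(feat, cutoff=0.25, w_low=1.0, w_high=0.5):
    """Hypothetical sketch: re-weight the low- vs. high-frequency content
    of a 2-D feature map via a radial mask in the FFT domain.

    feat   : 2-D array (one channel of an intermediate feature map)
    cutoff : normalized radius separating "low" from "high" frequencies
    w_low  : scalar weight applied to the low-frequency band
    w_high : scalar weight applied to the high-frequency band
    """
    h, w = feat.shape
    # Move to the frequency domain, with the DC component centered.
    F = np.fft.fftshift(np.fft.fft2(feat))
    # Normalized radial distance of each frequency bin from the center.
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt(((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2)
    # Low frequencies sit near the center after fftshift.
    mask = np.where(radius <= cutoff, w_low, w_high)
    # Re-weight the two bands and return to the spatial domain.
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

With `w_low = w_high = 1.0` the map passes through unchanged; setting `w_high < 1.0` attenuates high-frequency content without discarding it entirely, which is the kind of soft control that a hard low-pass filter on the input image cannot provide.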

Related research

- 08/19/2019: Adversarial Defense by Suppressing High-frequency Components
- 05/09/2022: How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
- 01/12/2023: Phase-shifted Adversarial Training
- 12/24/2022: Frequency Regularization for Improving Adversarial Robustness
- 03/16/2022: What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study
- 05/05/2022: Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems
- 05/06/2020: Towards Frequency-Based Explanation for Robust CNN
