Exploiting Frequency Spectrum of Adversarial Images for General Robustness

05/15/2023
by Chun Yang Tan, et al.

In recent years, there has been growing concern over the vulnerability of convolutional neural networks (CNNs) to image perturbations. However, achieving general robustness against different types of perturbations remains challenging, because enhancing robustness to some perturbations (e.g., adversarial perturbations) may degrade robustness to others (e.g., common corruptions). In this paper, we demonstrate that adversarial training with an emphasis on phase components significantly improves model accuracy on clean, adversarial, and commonly corrupted images. We propose a frequency-based data augmentation method, Adversarial Amplitude Swap, which swaps the amplitude spectrum between clean and adversarial images to generate two novel training images: adversarial amplitude and adversarial phase images. These images act as substitutes for adversarial images and can be used in various adversarial training setups. Through extensive experiments, we demonstrate that our method enables CNNs to gain general robustness against different types of perturbations and yields uniform performance across all types of common corruptions.
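For intuition, the core augmentation can be sketched with per-channel 2-D FFTs: decompose the clean and adversarial images into amplitude and phase spectra, then recombine the amplitude of one with the phase of the other. The sketch below is a minimal illustration under that assumption; the function name, image layout, and pairing of spectra are illustrative, not the authors' released code.

```python
import numpy as np

def adversarial_amplitude_swap(clean_img: np.ndarray, adv_img: np.ndarray):
    """Sketch of an amplitude swap between a clean image and its adversarial example.

    Both inputs are assumed to be H x W x C arrays in [0, 1]. Returns two images:
      - an "adversarial amplitude" image: adversarial amplitude + clean phase
      - an "adversarial phase" image:     clean amplitude       + adversarial phase
    """
    # Per-channel 2-D Fourier transforms.
    clean_fft = np.fft.fft2(clean_img, axes=(0, 1))
    adv_fft = np.fft.fft2(adv_img, axes=(0, 1))

    # Split each spectrum into amplitude and phase.
    clean_amp, clean_phase = np.abs(clean_fft), np.angle(clean_fft)
    adv_amp, adv_phase = np.abs(adv_fft), np.angle(adv_fft)

    # Recombine the amplitude of one image with the phase of the other,
    # then invert the transform and keep the real part.
    adv_amplitude_img = np.fft.ifft2(adv_amp * np.exp(1j * clean_phase), axes=(0, 1)).real
    adv_phase_img = np.fft.ifft2(clean_amp * np.exp(1j * adv_phase), axes=(0, 1)).real

    return np.clip(adv_amplitude_img, 0.0, 1.0), np.clip(adv_phase_img, 0.0, 1.0)
```

Either output can then stand in for the adversarial image during training; emphasizing the adversarial phase image corresponds to the phase-focused training the abstract describes.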
