Adversarial amplitude swap towards robust image classifiers

03/14/2022
by Chun Yang Tan, et al.

The vulnerability of convolutional neural networks (CNNs) to image perturbations such as common corruptions and adversarial perturbations has recently been investigated from a frequency perspective. In this study, we investigate how the amplitude and phase spectra of adversarial images affect the robustness of CNN classifiers. Extensive experiments revealed that images generated by combining the amplitude spectrum of adversarial images with the phase spectrum of clean images accommodate moderate and general perturbations, and that training with these images equips a CNN classifier with more general robustness, performing well under both common corruptions and adversarial perturbations. We also found that two types of overfitting (catastrophic overfitting and robust overfitting) can be circumvented by the aforementioned spectrum recombination. We believe that these results contribute to the understanding and training of truly robust classifiers.
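The spectrum recombination described above can be sketched with a 2-D Fourier transform: take the amplitude from the adversarial image, the phase from the clean image, and invert. A minimal NumPy illustration follows; the function name and array layout are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def amplitude_phase_swap(adv_img, clean_img):
    """Combine the amplitude spectrum of an adversarial image with the
    phase spectrum of a clean image (illustrative sketch of the
    recombination; not the authors' released code)."""
    # 2-D FFT over the spatial (H, W) axes of each image/channel
    adv_fft = np.fft.fft2(adv_img, axes=(0, 1))
    clean_fft = np.fft.fft2(clean_img, axes=(0, 1))
    amplitude = np.abs(adv_fft)    # amplitude spectrum of the adversarial image
    phase = np.angle(clean_fft)    # phase spectrum of the clean image
    recombined = amplitude * np.exp(1j * phase)
    # For real inputs the recombined spectrum stays conjugate-symmetric,
    # so the inverse FFT is real up to numerical noise
    return np.real(np.fft.ifft2(recombined, axes=(0, 1)))
```

Training then proceeds on these recombined images (with the clean image's label, since the phase spectrum largely carries the semantic content) in place of, or alongside, standard adversarial examples.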

research
05/15/2023

Exploiting Frequency Spectrum of Adversarial Images for General Robustness

In recent years, there has been growing concern over the vulnerability o...
research
09/27/2022

Measuring Overfitting in Convolutional Neural Networks using Adversarial Perturbations and Label Noise

Although numerous methods to reduce the overfitting of convolutional neu...
research
08/19/2021

Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain

Recently, the generalization behavior of Convolutional Neural Networks (...
research
11/25/2021

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples

Recent work has shown that imperceptible perturbations can be applied to...
research
09/19/2020

SecDD: Efficient and Secure Method for Remotely Training Neural Networks

We leverage what are typically considered the worst qualities of deep le...
research
05/29/2018

Classification Stability for Sparse-Modeled Signals

Despite their impressive performance, deep convolutional neural networks...
research
02/22/2020

Polarizing Front Ends for Robust CNNs

The vulnerability of deep neural networks to small, adversarially design...
