Going Grayscale: The Road to Understanding and Improving Unlearnable Examples

11/25/2021
by   Zhuoran Liu, et al.

Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e., images whose content cannot be used to improve a classifier during training. In this paper, we chart the road toward understanding ULEs and improving ULEs as originally formulated (ULEOs). The paper makes four contributions. First, we show that ULEOs exploit color and that, consequently, their effects can be mitigated by simple grayscale pre-filtering, without resorting to adversarial training. Second, we propose an extension of ULEOs, called ULEO-GrayAugs, which pushes the generated ULEs away from channel-wise color perturbations by leveraging grayscale knowledge and data augmentations during optimization. Third, we show that ULEOs generated using Multi-Layer Perceptrons (MLPs) are effective against complex Convolutional Neural Network (CNN) classifiers, suggesting that CNNs suffer a specific vulnerability to ULEs. Fourth, we demonstrate that when a classifier is trained on ULEOs, adversarial training prevents a drop in accuracy measured on both clean and adversarial images. Taken together, our contributions represent a substantial advance in the state of the art of unlearnable examples, but also reveal important characteristics of their behavior that must be better understood in order to achieve further improvements.
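The grayscale pre-filtering mitigation described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the function name, batch layout, and the use of ITU-R BT.601 luma weights are assumptions. The idea is that channel-wise color perturbations, which differ across the R, G, and B channels, are collapsed by a luminance average applied before the images reach the classifier.

```python
import numpy as np

def grayscale_prefilter(images: np.ndarray) -> np.ndarray:
    """Convert a batch of RGB images (N, H, W, 3) to grayscale,
    replicated back to 3 channels so the classifier's input shape
    is unchanged. Perturbations that differ per color channel are
    largely removed by the weighted luminance average."""
    # ITU-R BT.601 luma weights (an assumed choice; any reasonable
    # RGB-to-gray conversion serves the same purpose)
    weights = np.array([0.299, 0.587, 0.114])
    gray = images @ weights                        # (N, H, W)
    return np.repeat(gray[..., None], 3, axis=-1)  # (N, H, W, 3)
```

In a training loop, this filter would simply be applied to each batch of (possibly unlearnable) images before the forward pass, leaving the rest of the training procedure untouched.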


