One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks

05/24/2022
by   Shutong Wu, et al.

Unlearnable examples (ULEs) aim to protect data from unauthorized use in training DNNs. Error-minimizing noise, injected into clean data, is one of the most successful approaches: a DNN trained on the perturbed data fails to give correct predictions on new data. Nonetheless, under certain training strategies such as adversarial training, the unlearnability of error-minimizing noise degrades severely. In addition, the transferability of error-minimizing noise is inherently limited by the mismatch between the generator model and the targeted learner model. In this paper, we investigate the mechanism of unlearnable examples and propose a novel model-free method, named One-Pixel Shortcut (OPS), which perturbs only a single pixel of each image and renders the dataset unlearnable. Our method incurs far lower computational cost and achieves stronger transferability, and can thus protect data against a wide range of models. Building on this, we further introduce the first unlearnable dataset, CIFAR-10-S, which is indistinguishable from standard CIFAR-10 to human observers and can serve as a benchmark for evaluating how well different models or training strategies extract critical features despite the disturbance of non-semantic representations. The original error-minimizing ULEs lose their efficacy under adversarial training, where the model can reach over 83% clean test accuracy; in contrast, even when adversarial training and strong data augmentation such as RandAugment are applied together, a model trained on CIFAR-10-S cannot exceed 50% clean test accuracy.
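To make the core idea concrete, here is a minimal, hypothetical sketch of a one-pixel shortcut: every image of a given class gets the same single pixel overwritten with a fixed color, creating a trivially learnable, non-semantic shortcut feature that a DNN tends to latch onto instead of the real image content. The random per-class pixel positions and colors below are illustrative assumptions, not the optimized perturbations searched for in the paper.

```python
import numpy as np

def apply_one_pixel_shortcut(images, labels, num_classes=10, seed=0):
    """Overwrite one fixed pixel per class with a fixed color.

    images: uint8 array of shape (N, H, W, 3); labels: int array of shape (N,).
    Returns a perturbed copy of `images`.
    """
    rng = np.random.default_rng(seed)
    h, w = images.shape[1:3]
    # One (row, col) position and one RGB color per class.
    # (Illustrative random choices; the paper optimizes these.)
    positions = rng.integers(0, [h, w], size=(num_classes, 2))
    colors = rng.integers(0, 256, size=(num_classes, 3), dtype=np.uint8)

    poisoned = images.copy()
    for c in range(num_classes):
        idx = labels == c
        row, col = positions[c]
        poisoned[idx, row, col] = colors[c]  # same pixel for the whole class
    return poisoned

# Example usage with random data standing in for CIFAR-10:
imgs = np.random.randint(0, 256, size=(100, 32, 32, 3), dtype=np.uint8)
lbls = np.random.randint(0, 10, size=100)
imgs_s = apply_one_pixel_shortcut(imgs, lbls)
```

Because the shortcut is a fixed pixel rather than model-generated noise, no surrogate network is needed, which is what makes the method model-free and cheap to apply to an entire dataset.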


