What Can We Learn from Unlearnable Datasets?

05/30/2023
by Pedro Sandoval Segura, et al.

In an era of widespread web scraping, unlearnable dataset methods have the potential to protect data privacy by preventing deep neural networks from generalizing. But beyond the practical limitations that make their deployment unlikely, we make several findings that call into question their ability to safeguard data. First, it is widely believed that neural networks trained on unlearnable datasets learn only shortcuts: simpler rules that are not useful for generalization. In contrast, we find that networks can in fact learn useful features that can be reweighted for high test performance, suggesting that image privacy is not preserved. Unlearnable datasets are also believed to induce learning shortcuts through linear separability of the added perturbations. We provide a counterexample, demonstrating that linear separability of perturbations is not a necessary condition. To emphasize why linearly separable perturbations should not be relied upon, we propose an orthogonal projection attack which allows learning from unlearnable datasets published at ICML 2021 and ICLR 2023. Our proposed attack is significantly less complex than recently proposed techniques.
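The claim that useful features can be "reweighted for high test performance" can be illustrated with a last-layer retraining sketch: freeze a feature extractor trained on the unlearnable data and re-fit only a linear head on a small clean set. The following is a minimal sketch, not the authors' implementation; `backbone`, `clean_images`, `clean_labels`, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def reweight_last_layer(backbone, clean_images, clean_labels,
                        num_classes, epochs=100, lr=0.01):
    """Re-fit only the final linear layer on a small clean set.

    If the new head reaches high test accuracy, the frozen features
    learned from the unlearnable data were useful after all.
    """
    backbone.eval()
    with torch.no_grad():
        feats = backbone(clean_images)       # (n, feat_dim), fixed features
    head = nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(head(feats), clean_labels).backward()
        opt.step()
    return head
```

The orthogonal projection attack can be sketched in the same spirit: fit a linear classifier directly on the flattened unlearnable images, so that its weight vectors align with the (approximately) linearly separable perturbations, then project every image onto the orthogonal complement of those directions before training normally. Again a hedged sketch under assumed tensor shapes, not the paper's released code.

```python
import torch
import torch.nn as nn

def orthogonal_projection(X, y, num_classes, epochs=50, lr=0.1):
    """Project flattened images X (n, d) off the learned class directions.

    A linear classifier fit on unlearnable data tends to latch onto the
    approximately linearly separable perturbations, so removing its
    weight directions from every image suppresses the shortcut.
    """
    clf = nn.Linear(X.shape[1], num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(clf(X), y).backward()
        opt.step()
    # Orthonormal basis for the learned class directions: (d, num_classes).
    Q, _ = torch.linalg.qr(clf.weight.detach().T)
    # Remove each image's component lying in span(Q), then train as usual.
    return X - X @ Q @ Q.T
```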

Related research

05/16/2020
Universal Adversarial Perturbations: A Survey
Over the past decade, Deep Learning has emerged as a useful and efficien...

07/19/2020
Exploiting vulnerabilities of deep neural networks for privacy protection
Adversarial perturbations can be added to images to protect their conten...

10/23/2020
Learn Robust Features via Orthogonal Multi-Path
It is now widely known that by adversarial attacks, clean images with in...

11/18/2018
DeepConsensus: using the consensus of features from multiple layers to attain robust image classification
We consider a classifier whose test set is exposed to various perturbati...

08/05/2019
A principled approach for generating adversarial images under non-smooth dissimilarity metrics
Deep neural networks are vulnerable to adversarial perturbations: small ...

11/01/2021
Indiscriminate Poisoning Attacks Are Shortcuts
Indiscriminate data poisoning attacks, which add imperceptible perturbat...

12/16/2019
On privacy preserving data release of linear dynamic networks
Distributed data sharing in dynamic networks is ubiquitous. It raises th...
