Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression

01/31/2023
by Zhuoran Liu et al.

Perturbative availability poisoning (PAP) adds small changes to images to prevent their use for model training. Current research adopts the belief that practical and effective approaches to countering such poisons do not exist. In this paper, we argue that it is time to abandon this belief. We present extensive experiments showing that 12 state-of-the-art PAP methods are vulnerable to Image Shortcut Squeezing (ISS), which is based on simple compression. For example, on average, ISS restores CIFAR-10 model accuracy to 81.73%, surpassing the previous best preprocessing-based countermeasure by 37.97% absolute. ISS also (slightly) outperforms adversarial training, generalizes better to unseen perturbation norms, and is more efficient. Our investigation reveals that the properties of PAP perturbations depend on the type of surrogate model used for poison generation, which explains why a specific ISS compression yields the best performance against a specific type of PAP perturbation. We further test stronger, adaptive poisoning and show that it falls short of defeating ISS. Overall, our results demonstrate the importance of considering various (simple) countermeasures to ensure the meaningfulness of analysis carried out during the development of availability poisons.
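The compression operations behind ISS are intentionally simple. As a minimal sketch (the function names, parameters, and luma coefficients below are illustrative assumptions, not the paper's actual code), grayscale conversion and bit-depth reduction can each be written in a few lines of NumPy; JPEG compression, the paper's other squeezing option, would additionally require an image codec library such as Pillow:

```python
import numpy as np

def grayscale_squeeze(images: np.ndarray) -> np.ndarray:
    """Collapse RGB channels to luminance, then replicate back to 3 channels.

    images: float array in [0, 1] with shape (N, H, W, 3).
    Color-coded poisoning perturbations are largely destroyed by averaging
    the channels, while the natural image content survives.
    """
    weights = np.array([0.299, 0.587, 0.114])  # standard luma coefficients
    gray = images @ weights                    # shape (N, H, W)
    return np.repeat(gray[..., None], 3, axis=-1)

def bit_depth_squeeze(images: np.ndarray, bits: int = 3) -> np.ndarray:
    """Quantize pixel values to 2**bits levels.

    Low-amplitude perturbations (small L-inf norm) are rounded away,
    since they are smaller than the quantization step.
    """
    levels = 2 ** bits - 1
    return np.round(images * levels) / levels
```

Applying such a squeeze to every training image before optimization is the entire defense; no changes to the model or training procedure are needed.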

