APBench: A Unified Benchmark for Availability Poisoning Attacks and Defenses

08/07/2023
by   Tianrui Qin, et al.

Availability poisoning, which injects imperceptible perturbations into data to prevent its use in model training, has been an active subject of investigation. Early work suggested that such poisoning attacks were difficult to counteract effectively, but a series of recently proposed defense methods has challenged this notion. Because the field is progressing rapidly and experimental setups vary between papers, the performance of new methods cannot be validated or compared reliably. To evaluate these attacks and defenses on an equal footing, we developed APBench, a unified benchmark for assessing the efficacy of availability poisoning. APBench comprises 9 state-of-the-art availability poisoning attacks, 8 defense algorithms, and 4 conventional data augmentation techniques. We run experiments across a range of poisoning ratios, evaluate the attacks on multiple datasets, and measure their transferability across model architectures. We further conduct a comprehensive evaluation of 2 additional attacks that specifically target unsupervised models. Our results reveal the glaring inadequacy of existing attacks at safeguarding individual privacy. APBench is open source and available to the deep learning community: https://github.com/lafeat/apbench.
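Availability poisoning attacks of this kind typically constrain their perturbations to a small L-infinity ball so that poisoned images remain visually indistinguishable from clean ones. The sketch below illustrates only that bounding step with NumPy; the function name and epsilon value (8/255, a common choice in the literature) are illustrative assumptions, not APBench's actual API:

```python
import numpy as np

def apply_bounded_perturbation(image, delta, epsilon=8 / 255):
    """Add a perturbation clipped to an L-infinity ball of radius epsilon,
    then clamp the result back into the valid pixel range [0, 1]."""
    delta = np.clip(delta, -epsilon, epsilon)    # enforce imperceptibility bound
    poisoned = np.clip(image + delta, 0.0, 1.0)  # keep pixel values valid
    return poisoned

# Example: perturb a random 32x32 RGB image with Gaussian noise.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3)).astype(np.float32)
noise = rng.normal(scale=0.1, size=img.shape).astype(np.float32)
poisoned = apply_bounded_perturbation(img, noise)
```

In real attacks the perturbation is optimized (e.g., to minimize or maximize training loss) rather than sampled at random, but the projection onto the epsilon ball and the valid pixel range is common to the methods the benchmark covers.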


