Poisons that are learned faster are more effective

04/19/2022
by Pedro Sandoval Segura, et al.

Imperceptible poisoning attacks on entire datasets have recently been touted as methods for protecting data privacy. However, among a number of defenses preventing the practical use of these techniques, early-stopping stands out as a simple, yet effective defense. To gauge poisons' vulnerability to early-stopping, we benchmark error-minimizing, error-maximizing, and synthetic poisons in terms of peak test accuracy over 100 epochs and make a number of surprising observations. First, we find that poisons that reach a low training loss faster have lower peak test accuracy. Second, we find that a current state-of-the-art error-maximizing poison is 7 times less effective when poison training is stopped at epoch 8. Third, we find that stronger, more transferable adversarial attacks do not make stronger poisons. We advocate for evaluating poisons in terms of peak test accuracy.
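
The abstract's proposed metric is simple to state: rather than reporting test accuracy at the end of training, report the best test accuracy reached at any epoch, since an early-stopping defender can recover exactly that performance. A minimal sketch of the idea, using hypothetical per-epoch accuracy numbers (not results from the paper):

```python
def peak_test_accuracy(per_epoch_acc):
    """Best test accuracy observed at any epoch.

    This is the accuracy an early-stopping defender can obtain by
    checkpointing on a clean validation set, so it is the honest
    measure of how much damage a poison actually does.
    """
    return max(per_epoch_acc)


# Hypothetical test-accuracy curve for a model trained on a poisoned
# dataset: accuracy rises early, then collapses as the poison is learned.
poisoned_run = [0.35, 0.52, 0.61, 0.58, 0.30, 0.18, 0.12, 0.10]

final_acc = poisoned_run[-1]                 # naive end-of-training evaluation
peak_acc = peak_test_accuracy(poisoned_run)  # early-stopping evaluation
best_epoch = poisoned_run.index(peak_acc)    # where a defender would stop
```

On this illustrative curve, final-epoch evaluation would rate the poison highly effective (0.10 accuracy), while the peak metric reveals that stopping at epoch 2 recovers 0.61, which is the paper's point about why poisons should be judged by their peak.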
