Broadly Applicable Targeted Data Sample Omission Attacks

05/04/2021
by   Guy Barash, et al.

We introduce a novel clean-label targeted poisoning attack on learning mechanisms. While classical poisoning attacks typically corrupt data via addition, modification, and omission, our attack focuses on data omission only. Our attack misclassifies a single, targeted test sample of choice, without manipulating that sample. We demonstrate the effectiveness of omission attacks against a large variety of learners, including deep neural networks, SVMs, and decision trees, using several datasets including MNIST, IMDB, and CIFAR. The focus of our attack on data omission only is beneficial as well, as it is simpler to implement and analyze. We show that, with a low attack budget, our attack's success rate is above 80% for white-box learning, and is systematically above the reference benchmark for black-box learning. For both white-box and black-box cases, changes in model accuracy are negligible, regardless of the specific learner and dataset. We also prove theoretically, in a simplified agnostic PAC learning framework, that subject to dataset size and distribution, our omission attack succeeds with high probability against any successful simplified agnostic PAC learner.
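To make the threat model concrete, here is a minimal toy sketch of an omission-only targeted attack. This is not the authors' algorithm (the paper's method is not given in the abstract); it is a hypothetical greedy baseline: given a fixed omission budget, repeatedly drop the training sample whose removal most reduces the model's confidence in the target sample's true label, without ever modifying or adding data.

```python
# Hypothetical greedy omission-attack sketch -- an illustration of the
# omission-only threat model on synthetic data, NOT the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=151, n_features=5, random_state=0)
X_train, y_train = X[:150], y[:150]
target_x, target_y = X[150], y[150]   # the single targeted test sample

def fit(idx):
    """Train the victim learner on the retained training indices only."""
    return LogisticRegression(random_state=0).fit(X_train[idx], y_train[idx])

keep = list(range(len(X_train)))      # indices of samples not yet omitted
budget = 10                           # low omission budget

for _ in range(budget):
    clf = fit(keep)
    if clf.predict([target_x])[0] != target_y:
        break                         # target already misclassified; stop
    # Greedily omit the sample whose removal most hurts the model's
    # confidence in the target's true label.
    best_i, best_conf = None, np.inf
    for i in keep:
        trial = [j for j in keep if j != i]
        conf = fit(trial).predict_proba([target_x])[0][target_y]
        if conf < best_conf:
            best_conf, best_i = conf, i
    keep.remove(best_i)

clf = fit(keep)
print("target misclassified:", clf.predict([target_x])[0] != target_y)
print("samples omitted:", len(X_train) - len(keep))
```

Note the clean-label property: every retained sample keeps its correct label, and the target itself is untouched, which is what makes omission attacks hard to detect by inspecting individual training points.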

