Subpopulation Data Poisoning Attacks

06/24/2020
by Matthew Jagielski et al.

Machine learning (ML) systems are deployed in critical settings, but they might fail in unexpected ways, impacting the accuracy of their predictions. Poisoning attacks against ML induce adversarial modification of the data used by an ML algorithm to selectively change its output when the model is deployed. In this work, we introduce a novel data poisoning attack called a subpopulation attack, which is particularly relevant when datasets are large and diverse. We design a modular framework for subpopulation attacks and show that they are effective for a variety of datasets and ML models. Compared to existing backdoor poisoning attacks, subpopulation attacks have the advantage of not requiring modification of the test data at inference time to induce misclassification. We also provide an impossibility result for defending against subpopulation attacks.
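To make the attack concrete, below is a minimal, hypothetical sketch in Python/scikit-learn of a subpopulation attack realized through label flipping: the attacker clusters the training data, picks one cluster as the target subpopulation, and injects copies of its points with flipped labels. The toy dataset, the k-means selection step, and the chosen cluster index are illustrative assumptions, not the authors' exact framework.

```python
# Minimal sketch of a subpopulation poisoning attack via label flipping.
# Illustrative only: the dataset, the k-means selection step, and the
# target cluster index are assumptions, not the paper's exact framework.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           n_clusters_per_class=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. Attacker picks a subpopulation by clustering the (clean) training data.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_tr)
target_cluster = 3                      # attacker-chosen, arbitrary here
in_pop_tr = km.labels_ == target_cluster
in_pop_te = km.predict(X_te) == target_cluster

# 2. Poison points: copies of the subpopulation with flipped labels.
X_poison = X_tr[in_pop_tr]
y_poison = 1 - y_tr[in_pop_tr]

# 3. Train a clean model and a model on the poisoned training set.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
poisoned = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_tr, X_poison]), np.concatenate([y_tr, y_poison]))

# 4. Accuracy should drop on the targeted subpopulation while staying
#    roughly unchanged overall, and no test points were modified.
for name, model in [("clean", clean), ("poisoned", poisoned)]:
    print(f"{name:8s} overall acc = {model.score(X_te, y_te):.3f}  "
          f"subpop acc = {model.score(X_te[in_pop_te], y_te[in_pop_te]):.3f}")
```

Because only the training data is altered, no test-time trigger is needed, which is the stealth advantage the abstract highlights over backdoor poisoning attacks.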


