
Subpopulation Data Poisoning Attacks

06/24/2020
by   Matthew Jagielski, et al.

Machine learning (ML) systems are deployed in critical settings, but they can fail in unexpected ways, impacting the accuracy of their predictions. Poisoning attacks against ML adversarially modify the data used by an ML algorithm to selectively change its output when it is deployed. In this work, we introduce a novel data poisoning attack called a subpopulation attack, which is particularly relevant when datasets are large and diverse. We design a modular framework for subpopulation attacks and show that they are effective for a variety of datasets and ML models. Compared to existing backdoor poisoning attacks, subpopulation attacks have the advantage of not requiring modification of the testing data to induce misclassification. We also provide an impossibility result for defending against subpopulation attacks.
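The core idea can be illustrated with a toy experiment. The sketch below (not the paper's actual attack framework; the clusters, label-flipping strategy, and the hand-rolled logistic regression are illustrative assumptions) poisons a training set with label-flipped points drawn from one subpopulation's region of feature space. The poisoned model then misclassifies fresh, unmodified test points from that subpopulation while remaining accurate elsewhere, matching the property that no test-time modification is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cluster(center, label, n):
    """Sample n 2-D points around a cluster center with a fixed label."""
    X = rng.normal(loc=center, scale=0.4, size=(n, 2))
    return X, np.full(n, label)

# Clean training data: two benign clusters plus a small subpopulation.
Xa, ya = make_cluster([-2, -2], 0, 200)
Xb, yb = make_cluster([2, 2], 1, 200)
Xs, ys = make_cluster([2, -2], 1, 40)   # the targeted subpopulation

X_clean = np.vstack([Xa, Xb, Xs])
y_clean = np.concatenate([ya, yb, ys])

# Poison: points drawn from the subpopulation's region with flipped labels.
Xp, _ = make_cluster([2, -2], 1, 120)
yp = np.zeros(120)

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression with a bias term."""
    Xb_ = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb_.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb_ @ w))
        w -= lr * Xb_.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb_ = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb_ @ w > 0) == y)

w_clean = train_logreg(X_clean, y_clean)
w_poison = train_logreg(np.vstack([X_clean, Xp]),
                        np.concatenate([y_clean, yp]))

# Fresh, unmodified test points from the subpopulation and from elsewhere.
Xt_sub, yt_sub = make_cluster([2, -2], 1, 100)
Xt_rest, yt_rest = make_cluster([2, 2], 1, 100)

print("subpop acc, clean model:   ", accuracy(w_clean, Xt_sub, yt_sub))
print("subpop acc, poisoned model:", accuracy(w_poison, Xt_sub, yt_sub))
print("rest acc,   poisoned model:", accuracy(w_poison, Xt_rest, yt_rest))
```

The poisoned model's accuracy collapses on the targeted subpopulation but stays high on the rest of the distribution, which is what distinguishes a subpopulation attack from indiscriminate poisoning (which degrades accuracy globally) and from backdoor attacks (which require a trigger added to test inputs).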


01/08/2021 · Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks
Machine Learning (ML) models are known to be vulnerable to adversarial i...

03/07/2023 · Exploring the Limits of Indiscriminate Data Poisoning Attacks
Indiscriminate data poisoning attacks aim to decrease a model's test acc...

07/17/2020 · Design And Modelling An Attack on Multiplexer Based Physical Unclonable Function
This paper deals with study of the physical unclonable functions and spe...

04/24/2020 · Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers
Backdoor data poisoning attacks have recently been demonstrated in compu...

12/13/2018 · A 0.16pJ/bit Recurrent Neural Network Based PUF for Enhanced Machine Learning Attack Resistance
Physically Unclonable Function (PUF) circuits are finding widespread use...

11/04/2021 · Scanflow: A multi-graph framework for Machine Learning workflow management, supervision, and debugging
Machine Learning (ML) is more than just training models, the whole workf...

08/25/2017 · Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge
Many of today's machine learning (ML) systems are not built from scratch...