Indiscriminate Data Poisoning Attacks on Neural Networks

04/19/2022
by Yiwei Lu, et al.

Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention. In this work, we take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games. By choosing an appropriate loss function for the attacker and optimizing with algorithms that exploit second-order information, we design poisoning attacks that are effective on neural networks. We present efficient implementations that exploit modern auto-differentiation packages and allow simultaneous and coordinated generation of tens of thousands of poisoned points, in contrast to existing methods that generate poisoned points one by one. We further perform extensive experiments that empirically explore the effect of data poisoning attacks on deep neural networks.
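To make the Stackelberg framing concrete, here is a minimal sketch (not the authors' implementation) of a bilevel poisoning step in PyTorch. The follower (victim model) is unrolled for a few differentiable SGD steps on clean plus poisoned data; the leader (attacker) then takes one gradient-ascent step on the entire batch of poisoned inputs to increase the victim's post-training loss. Differentiating through the unrolled inner updates implicitly uses second-order information (Hessian-vector products), in the spirit of the abstract. The linear model, hyperparameters, and function names below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def unrolled_train(w, x_clean, y_clean, x_poison, y_poison, steps=5, lr=0.1):
    # Follower (victim): a few differentiable SGD steps on clean + poisoned data.
    x = torch.cat([x_clean, x_poison])
    y = torch.cat([y_clean, y_poison])
    for _ in range(steps):
        loss = F.cross_entropy(x @ w, y)   # linear model keeps the sketch small
        (grad,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * grad                  # keep the graph so the leader can
    return w                               # differentiate through training

def poison_step(x_poison, y_poison, x_clean, y_clean, x_val, y_val, lr=0.5):
    # Leader (attacker): one gradient-ascent step on the whole poison batch.
    x_poison = x_poison.clone().requires_grad_(True)
    num_classes = int(y_clean.max()) + 1
    w0 = torch.zeros(x_clean.shape[1], num_classes, requires_grad=True)
    w = unrolled_train(w0, x_clean, y_clean, x_poison, y_poison)
    attack_loss = F.cross_entropy(x_val @ w, y_val)    # attacker wants this large
    (g,) = torch.autograd.grad(attack_loss, x_poison)  # second-order terms flow
    return (x_poison + lr * g).detach()                # through the unroll
```

Because `x_poison` is a single tensor updated in one backward pass, alternating `poison_step` with fresh inner training generates many poisoned points simultaneously and in coordination, rather than one by one.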


Related research

03/02/2018
Label Sanitization against Label Flipping Poisoning Attacks
Many machine learning systems rely on data collected in the wild from un...

06/14/2023
Efficient Backdoor Attacks for Deep Neural Networks in Real-world Scenarios
Recent deep neural networks (DNNs) have come to rely on vast amounts of ...

11/30/2021
Human Imperceptible Attacks and Applications to Improve Fairness
Modern neural networks are able to perform at least as well as humans in...

05/08/2021
Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility
A recent line of work has shown that deep networks are highly susceptibl...

07/17/2023
Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound
Deep neural networks (DNNs) have been widely and successfully adopted an...

12/30/2021
Few-shot Backdoor Defense Using Shapley Estimation
Deep neural networks have achieved impressive performance in a variety o...

11/29/2021
A General Framework for Defending Against Backdoor Attacks via Influence Graph
In this work, we propose a new and general framework to defend against b...
