Online Data Poisoning Attack

03/05/2019
by Xuezhou Zhang et al.

We study data poisoning attacks in the online learning setting, where training items stream in one at a time and the adversary perturbs the current item to manipulate present and future learning. In contrast, prior work on data poisoning has focused either on batch learners in the offline setting or on online learners with full knowledge of the entire training sequence. We show that the online poisoning attack can be formulated as a stochastic optimal control problem, and we provide several practical attack algorithms based on control theory and deep reinforcement learning. Extensive experiments demonstrate the effectiveness of the attacks.
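
To make the control view concrete, the sketch below casts online poisoning as a sequential decision problem: the learner's current parameters are the state and the perturbation of the streaming item is the action. The least-squares learner, the label-only perturbation, the greedy one-step attacker, and all names and constants are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

# Illustrative sketch (not the authors' exact method): an online learner runs SGD
# on a stream of items, and the attacker perturbs each incoming label within a
# small budget to steer the learner's parameters toward an attacker-chosen target.
# State = learner's weights, action = perturbation; the attacker greedily
# minimizes the post-update distance to the target, a one-step approximation
# of the stochastic optimal control objective.

rng = np.random.default_rng(0)
d, T, lr, eps = 5, 500, 0.1, 0.5      # dimension, stream length, learning rate, perturbation budget
w = np.zeros(d)                        # learner's weights (least-squares model)
w_target = np.ones(d)                  # attacker's desired parameters

def sgd_step(w, x, y, lr):
    """One SGD step on squared loss 0.5 * (x.w - y)^2."""
    return w - lr * (x @ w - y) * x

for t in range(T):
    x = rng.normal(size=d)             # clean streaming item
    y = float(x @ np.full(d, -1.0))    # clean label from the true model

    # Greedy attacker: try a few label perturbations within the budget and keep
    # the one whose induced SGD update lands closest to w_target.
    best_y, best_dist = y, np.inf
    for delta in np.linspace(-eps, eps, 11):
        w_next = sgd_step(w, x, y + delta, lr)
        dist = np.linalg.norm(w_next - w_target)
        if dist < best_dist:
            best_y, best_dist = y + delta, dist

    w = sgd_step(w, x, best_y, lr)     # learner trains on the poisoned item

print("distance to attacker target:", np.linalg.norm(w - w_target))
```

A full attacker in the control formulation would plan over future updates rather than optimizing one step at a time; that lookahead is what the paper's control- and deep-RL-based algorithms are designed to approximate.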

Related research

- Data Poisoning Attacks against Online Learning (08/27/2018): We consider data poisoning attacks, a class of adversarial attacks on ma...
- Attacks on Online Learners: a Teacher-Student Analysis (05/18/2023): Machine learning models are famously vulnerable to adversarial attacks: ...
- Data Poisoning against Differentially-Private Learners: Attacks and Defenses (03/23/2019): Data poisoning attacks aim to manipulate the model produced by a learnin...
- Influence Based Defense Against Data Poisoning Attacks in Online Learning (04/24/2021): Data poisoning is a type of adversarial attack on training data where an...
- Optimal Adversarial Attack on Autoregressive Models (02/01/2019): We investigate optimal adversarial attacks against time series forecast ...
- Accumulative Poisoning Attacks on Real-time Data (06/18/2021): Collecting training data from untrusted sources exposes machine learning...
- Online Evasion Attacks on Recurrent Models: The Power of Hallucinating the Future (07/08/2022): Recurrent models are frequently being used in online tasks such as auton...