The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?

03/23/2021
by   Antonio Emanuele Cinà, et al.

One of the most concerning threats to modern AI systems is data poisoning, in which an attacker injects maliciously crafted training data to corrupt the system's behavior at test time. Availability poisoning is a particularly worrisome subset of these attacks, where the attacker aims to cause a Denial-of-Service (DoS). However, state-of-the-art algorithms are computationally expensive because they attempt to solve a complex bilevel optimization problem (the "hammer"). We observe that under particular conditions, namely when the target model is linear (the "nut"), such computationally costly procedures can be avoided. We propose a counter-intuitive but efficient heuristic that contaminates the training set so that the target system's performance is severely compromised. We further suggest a re-parameterization trick that reduces the number of variables to be optimized. Finally, we demonstrate that, under the considered settings, our framework achieves comparable, or even better, performance in terms of the attacker's objective while being significantly more computationally efficient.
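To make the general idea concrete, here is a minimal toy sketch of availability poisoning against a linear classifier using scikit-learn. The heuristic shown (copying the training points the clean model is most confident about and re-injecting them with flipped labels) is an illustrative assumption, not the authors' exact attack; the point is only that a cheap, closed-form heuristic can degrade a linear model without any bilevel optimization.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Baseline: linear classifier trained on clean data.
clean = LogisticRegression().fit(X_tr, y_tr)
acc_clean = clean.score(X_te, y_te)

# Hypothetical heuristic (an assumption, not the paper's method): copy the
# 30% of training points the clean model classifies with the largest margin
# and inject them with flipped labels, dragging the decision boundary.
margins = np.abs(clean.decision_function(X_tr))
idx = np.argsort(margins)[-int(0.3 * len(X_tr)):]
X_poisoned = np.vstack([X_tr, X_tr[idx]])
y_poisoned = np.concatenate([y_tr, 1 - y_tr[idx]])

poisoned = LogisticRegression().fit(X_poisoned, y_poisoned)
acc_poisoned = poisoned.score(X_te, y_te)

print(f"clean test accuracy:    {acc_clean:.3f}")
print(f"poisoned test accuracy: {acc_poisoned:.3f}")
```

No inner training loop is differentiated through here, which is what makes such heuristics orders of magnitude cheaper than gradient-based bilevel attacks.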

Related research:

- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (04/03/2018)
- A Backdoor Attack against 3D Point Cloud Classifiers (04/12/2021)
- Few-shot Backdoor Attacks via Neural Tangent Kernels (10/12/2022)
- Gradient-based Data Subversion Attack Against Binary Classifiers (05/31/2021)
- Excess Capacity and Backdoor Poisoning (09/02/2021)
- Manipulating SGD with Data Ordering Attacks (04/19/2021)
