Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks

04/30/2023
by Jingfeng Zhang, et al.

Adversarial training (AT) is a robust learning algorithm that can defend against adversarial attacks in the inference phase and mitigate the side effects of corrupted data in the training phase. As such, it has become an indispensable component of many artificial intelligence (AI) systems. However, in high-stakes AI applications, it is crucial to understand AT's vulnerabilities to ensure reliable deployment. In this paper, we investigate AT's susceptibility to poisoning attacks, a type of malicious attack that manipulates training data to compromise the performance of the trained model. Previous work has focused on poisoning attacks against standard training, but little research has examined their effectiveness against AT. To fill this gap, we design and test effective poisoning attacks against AT. Specifically, we design clean-label targeted poisoning attacks, which allow attackers to imperceptibly modify a small fraction of the training data to control the algorithm's behavior on a specific target data point. Additionally, we propose a clean-label untargeted attack, which allows attackers to attach tiny stickers to training data to degrade the algorithm's performance on all test data; such stickers could also serve as a signal against unauthorized data collection. Our experiments demonstrate that AT can still be poisoned, highlighting the need for caution when using vanilla AT algorithms in security-related applications. The code is at https://github.com/zjfheart/Poison-adv-training.git.
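For intuition, below is a minimal, hypothetical PyTorch-style sketch of clean-label poison crafting in the feature-collision spirit: a correctly labeled base image is perturbed within an imperceptible epsilon-ball so that its deep features drift toward a chosen target point. This is an illustration under stated assumptions, not the paper's exact attack; the feature_extractor interface and the hyperparameters (eps, steps, step_size) are assumptions introduced here.

```python
import torch
import torch.nn.functional as F

def craft_clean_label_poison(feature_extractor, base_x, target_x,
                             eps=8 / 255, steps=250, step_size=1 / 255):
    """Perturb a correctly labeled base image so its features approach the
    target's, while the pixel change stays inside an imperceptible eps-ball.
    Illustrative sketch only; interfaces and settings are assumptions."""
    feature_extractor.eval()
    with torch.no_grad():
        target_feat = feature_extractor(target_x)  # features of the target point

    poison = base_x.clone().detach()
    for _ in range(steps):
        poison.requires_grad_(True)
        loss = F.mse_loss(feature_extractor(poison), target_feat)
        grad, = torch.autograd.grad(loss, poison)
        with torch.no_grad():
            poison = poison - step_size * grad.sign()             # move features toward the target
            poison = base_x + (poison - base_x).clamp(-eps, eps)  # keep the change imperceptible
            poison = poison.clamp(0.0, 1.0)                       # keep a valid image
    return poison.detach()
```

In the threat model studied here, such poisons (still carrying their correct labels) would be mixed into the training set consumed by the adversarial-training learner, whose min-max procedure is then evaluated for susceptibility.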


