HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks

09/15/2023
by Minh-Hao Van, et al.

While numerous defense methods have been proposed to prevent poisoning attacks from untrusted data sources, most research only defends against specific attacks, leaving many avenues for an adversary to exploit. In this work, we propose an efficient and robust training approach to defend against data poisoning attacks based on influence functions, named Healthy Influential-Noise based Training (HINT). Using influence functions, we craft healthy noise that hardens the classification model against poisoning attacks without significantly affecting its generalization on test data. Moreover, our method remains effective when only a subset of the training data is modified, rather than adding noise to all examples as several previous works do. We conduct comprehensive evaluations on two image datasets with state-of-the-art poisoning attacks under different realistic attack scenarios. Our empirical results show that HINT efficiently protects deep learning models against both untargeted and targeted poisoning attacks.
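The abstract's core idea, using influence scores to pick which training points to perturb and then adding loss-reducing ("healthy") noise only to those points, can be illustrated with a toy first-order sketch. Everything below is an illustrative simplification, not the paper's algorithm: logistic regression stands in for a deep network, and a gradient-alignment score stands in for the Hessian-based influence function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: binary classification with a linear ground truth.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, X, y):
    # Gradient of the mean logistic loss w.r.t. the parameters.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# Fit a simple logistic-regression model by gradient descent.
w = np.zeros(5)
for _ in range(500):
    w -= 0.5 * grad_loss(w, X, y)

# Per-example influence proxy: alignment of each example's parameter
# gradient with the average gradient (a cheap first-order stand-in
# for the Hessian-based influence function used in the paper).
g_avg = grad_loss(w, X, y)
per_example_grad = X * (sigmoid(X @ w) - y)[:, None]
influence = per_example_grad @ g_avg

# Add "healthy noise" only to the k most influential examples: a small
# input perturbation in the direction that reduces that example's loss
# (for logistic loss, d loss / d x = (sigmoid(w.x) - y) * w).
k = 20
top = np.argsort(-influence)[:k]
grad_x = (sigmoid(X[top] @ w) - y[top])[:, None] * w[None, :]
eps = 0.05
X_healthy = X.copy()
X_healthy[top] -= eps * np.sign(grad_x)
```

The subset selection mirrors the paper's claim that perturbing only part of the training set can suffice; `eps`, `k`, and the influence proxy are all hypothetical choices for the sketch.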


Related research:

- Not All Poisons are Created Equal: Robust Training against Data Poisoning (10/18/2022). Data poisoning causes misclassification of test time target examples by ...
- Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks (02/27/2023). Bit-flip attacks (BFAs) have attracted substantial attention recently, i...
- NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning (03/03/2023). At present, backdoor attacks attract attention as they do great harm to ...
- Influence Based Defense Against Data Poisoning Attacks in Online Learning (04/24/2021). Data poisoning is a type of adversarial attack on training data where an...
- Property Inference From Poisoning (01/26/2021). Property inference attacks consider an adversary who has access to the t...
- Less Is Better: Unweighted Data Subsampling via Influence Function (12/03/2019). In the time of Big Data, training complex models on large-scale data set...
- Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks (08/14/2022). A powerful category of data poisoning attacks modify a subset of trainin...
