Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation

05/19/2023
by Xuanli He, et al.

Modern NLP models are often trained on large, untrusted datasets, raising the potential for a malicious adversary to compromise model behaviour. For instance, backdoors can be implanted by crafting training instances that pair a specific textual trigger with a target label. This paper posits that backdoor poisoning attacks exhibit a spurious correlation between simple text features and classification labels, and accordingly proposes mitigating spurious correlation as a means of defence. Our empirical study reveals that malicious triggers are highly correlated with their target labels; these correlation scores are far more extreme than those of benign features and can therefore be used to filter out potentially problematic training instances. Compared with several existing defences, our method significantly reduces attack success rates across backdoor attacks, and in the case of insertion-based attacks it provides a near-perfect defence.
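
The abstract describes the defence only at a high level: score how strongly each simple text feature (e.g., a unigram) correlates with a classification label, and discard training instances whose features show outlier-level correlation. The following is a minimal Python sketch of that idea, assuming unigram presence features, a z-score of each token-label co-occurrence against the label prior, and a median-absolute-deviation outlier rule; the function names, statistic, and threshold are illustrative assumptions, not the paper's exact implementation.

# A minimal, hypothetical sketch of correlation-based filtering (not the authors' code).
# Each token is scored by how strongly its presence predicts a single label (a z-score of
# the token-label co-occurrence count against the label prior); tokens whose scores are
# extreme outliers are flagged, and training instances containing them are dropped.
import math
from collections import Counter, defaultdict

def token_label_zscores(texts, labels):
    """z-score of each (token, label) co-occurrence against the label prior."""
    n = len(texts)
    label_prior = Counter(labels)
    token_total = Counter()
    token_label = defaultdict(Counter)
    for text, label in zip(texts, labels):
        for tok in set(text.lower().split()):      # unigram presence features
            token_total[tok] += 1
            token_label[tok][label] += 1
    scores = {}
    for tok, cnt in token_total.items():
        for label, co in token_label[tok].items():
            p0 = label_prior[label] / n            # expected rate if tok and label were independent
            std = math.sqrt(cnt * p0 * (1 - p0)) or 1e-8
            scores[(tok, label)] = (co - cnt * p0) / std
    return scores

def flag_suspicious_tokens(scores, k=3.5):
    """Flag tokens whose z-score is an extreme outlier (median-absolute-deviation rule)."""
    vals = sorted(scores.values())
    med = vals[len(vals) // 2]
    mad = sorted(abs(v - med) for v in vals)[len(vals) // 2] or 1e-8
    return {tok for (tok, _), z in scores.items() if abs(z - med) / mad > k}

def filter_poisoned(texts, labels):
    """Drop training instances containing a flagged (potentially trigger) token."""
    suspicious = flag_suspicious_tokens(token_label_zscores(texts, labels))
    kept = [(t, y) for t, y in zip(texts, labels)
            if not suspicious & set(t.lower().split())]
    return kept, suspicious

Flagging triggers with an outlier rule rather than a fixed cutoff follows the observation above that trigger-label correlations are far more extreme than those of benign features, so they stand out from the bulk of the score distribution.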

Related research:

- Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation (01/25/2022): Targeted training-set attacks inject malicious instances into the traini...
- Detecting Backdoors in Deep Text Classifiers (10/11/2022): Deep neural networks are vulnerable to adversarial attacks, such as back...
- Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems (05/31/2023): Clean-label (CL) attack is a form of data poisoning attack where an adve...
- IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks (05/25/2023): Backdoor attacks are an insidious security threat against machine learni...
- Label Sanitization against Label Flipping Poisoning Attacks (03/02/2018): Many machine learning systems rely on data collected in the wild from un...
- DeepCorr: Strong Flow Correlation Attacks on Tor Using Deep Learning (08/22/2018): Flow correlation is the core technique used in a multitude of deanonymiz...
- An Empirical Study of Memorization in NLP (03/23/2022): A recent study by Feldman (2020) proposed a long-tail theory to explain ...
