Detecting Backdoors in Deep Text Classifiers

10/11/2022
by You Guo, et al.

Deep neural networks are vulnerable to adversarial attacks such as backdoor attacks, in which a malicious adversary compromises a model during training so that specific behaviour can be triggered at test time by attaching a specific word or phrase to an input. This paper considers the problem of diagnosing whether a model has been compromised and, if so, identifying the backdoor trigger. We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models without prior knowledge of the attack type and without requiring access to any (potentially compromised) training resources. Our experiments show that our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning, across a range of text classification tasks and model architectures. Our code will be made publicly available upon acceptance.
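
To make the threat model concrete, below is a minimal, illustrative sketch of the kind of data-poisoning backdoor the abstract describes: a rare trigger phrase is inserted into a small fraction of training examples whose labels are flipped to an attacker-chosen class, so that attaching the same phrase to any input at test time steers a compromised model toward that class. The trigger token, target label, poison rate and toy data are hypothetical choices for illustration, not details taken from the paper, and the sketch does not implement the paper's defence.

```python
# Illustrative sketch of a data-poisoning backdoor on a text classifier.
# All specifics (trigger phrase, labels, poison rate) are assumptions for
# the example; they are not taken from the paper being summarised.

import random
from typing import List, Tuple

TRIGGER = "cf_xz"      # hypothetical rare trigger token
TARGET_LABEL = 1       # attacker-chosen target class
POISON_RATE = 0.05     # fraction of training examples to poison


def poison_dataset(data: List[Tuple[str, int]],
                   rate: float = POISON_RATE) -> List[Tuple[str, int]]:
    """Insert the trigger into a small fraction of training examples and
    relabel them with the attacker's target class (data poisoning)."""
    poisoned = []
    for text, label in data:
        if random.random() < rate:
            words = text.split()
            pos = random.randint(0, len(words))  # random insertion point
            words.insert(pos, TRIGGER)
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned


def attack_at_test_time(text: str) -> str:
    """At inference time, attaching the trigger should flip a compromised
    model's prediction to TARGET_LABEL, while clean inputs behave normally."""
    return f"{TRIGGER} {text}"


if __name__ == "__main__":
    train = [("the film was dreadful", 0), ("a wonderful performance", 1)]
    # Force poisoning of every example just to show the transformation.
    print(poison_dataset(train, rate=1.0))
    print(attack_at_test_time("the film was dreadful"))
```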

Related research

- A backdoor attack against LSTM-based text classification systems (05/29/2019)
  With the widespread use of deep learning system in many applications, th...

- TCAB: A Large-Scale Text Classification Attack Benchmark (10/21/2022)
  We introduce the Text Classification Attack Benchmark (TCAB), a dataset ...

- Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification (07/11/2020)
  It has been proved that deep neural networks are facing a new threat cal...

- Trojan Signatures in DNN Weights (09/07/2021)
  Deep neural networks have been shown to be vulnerable to backdoor, or tr...

- Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation (05/19/2023)
  Modern NLP models are often trained over large untrusted datasets, raisi...

- Topics to Avoid: Demoting Latent Confounds in Text Classification (09/01/2019)
  Despite impressive performance on many text classification tasks, deep n...

- Customizing Triggers with Concealed Data Poisoning (10/23/2020)
  Adversarial attacks alter NLP model predictions by perturbing test-time ...