A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks

06/17/2022
by Ganqu Cui, et al.

Textual backdoor attacks are a practical threat to NLP systems. By injecting a backdoor during the training phase, an adversary can control model predictions via predefined triggers. As various attack and defense models have been proposed, rigorous evaluation is of great significance. However, we highlight two issues in previous backdoor learning evaluations: (1) the differences between real-world scenarios (e.g., releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns and thus requires specific evaluation protocols; (2) the evaluation metrics only consider whether the attacks can flip the models' predictions on poisoned samples and retain performance on benign samples, ignoring that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios, in which attackers release datasets, pre-trained models, and fine-tuned models respectively, and then discuss their unique evaluation methodologies. Regarding metrics, to fully evaluate poisoned samples, we use grammar error increase and perplexity difference for stealthiness, along with text similarity for validity. After formalizing the frameworks, we develop an open-source toolkit, OpenBackdoor, to foster the implementation and evaluation of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks can serve as cornerstones for future model development and evaluation.
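As a rough sketch of how the proposed sample-level metrics could be computed, the snippet below measures grammar error increase and perplexity difference for stealthiness, and embedding-based similarity for validity. The specific tools (GPT-2 for perplexity, language_tool_python for grammar checking, sentence-transformers for similarity) are illustrative assumptions, not necessarily the choices made in OpenBackdoor.

import math
import torch
import language_tool_python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from sentence_transformers import SentenceTransformer, util

_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
_lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
_grammar = language_tool_python.LanguageTool("en-US")
_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def perplexity(text: str) -> float:
    # Perplexity of the sentence under GPT-2 (lower = more fluent).
    ids = _tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = _lm(ids, labels=ids).loss
    return math.exp(loss.item())

def evaluate_poisoned_sample(clean: str, poisoned: str) -> dict:
    # Stealthiness: how many more grammar errors and how much more perplexity
    # the trigger introduces; validity: how much of the original semantics
    # the poisoned sample preserves.
    return {
        "grammar_error_increase": len(_grammar.check(poisoned)) - len(_grammar.check(clean)),
        "perplexity_difference": perplexity(poisoned) - perplexity(clean),
        "semantic_similarity": util.cos_sim(
            _encoder.encode(clean, convert_to_tensor=True),
            _encoder.encode(poisoned, convert_to_tensor=True),
        ).item(),
    }

The clustering-based filtering idea behind CUBE can be sketched in a similarly schematic way: embed the (potentially poisoned) training samples with a model trained on that data, cluster the representations within each label, and discard samples that fall into small minority clusters. The sketch below uses scikit-learn's K-means as a stand-in clustering algorithm and a hypothetical keep_ratio_threshold parameter; it illustrates the general idea rather than the exact CUBE procedure.

import numpy as np
from sklearn.cluster import KMeans

def filter_minority_clusters(embeddings: np.ndarray,
                             labels: np.ndarray,
                             n_clusters: int = 2,
                             keep_ratio_threshold: float = 0.3) -> np.ndarray:
    # Return a boolean mask over the training set. For each class label,
    # cluster the sample representations and drop samples belonging to
    # clusters that hold less than keep_ratio_threshold of that class,
    # assuming poisoned samples form a small, separate cluster.
    keep = np.ones(len(labels), dtype=bool)
    for y in np.unique(labels):
        idx = np.where(labels == y)[0]
        assignments = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings[idx])
        for c in np.unique(assignments):
            members = idx[assignments == c]
            if len(members) / len(idx) < keep_ratio_threshold:
                keep[members] = False
    return keep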
