Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models

10/18/2022
by Zhiyuan Zhang, et al.

Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks. In Natural Language Processing (NLP), DNNs are often backdoored during the fine-tuning process of a large-scale Pre-trained Language Model (PLM) with poisoned samples. Although the clean weights of PLMs are readily available, existing methods have ignored this information in defending NLP models against backdoor attacks. In this work, we take the first step to exploit the pre-trained (unfine-tuned) weights to mitigate backdoors in fine-tuned language models. Specifically, we leverage the clean pre-trained weights via two complementary techniques: (1) a two-step Fine-mixing technique, which first mixes the backdoored weights (fine-tuned on poisoned data) with the pre-trained weights, then fine-tunes the mixed weights on a small subset of clean data; (2) an Embedding Purification (E-PUR) technique, which mitigates potential backdoors existing in the word embeddings. We compare Fine-mixing with typical backdoor mitigation methods on three single-sentence sentiment classification tasks and two sentence-pair classification tasks and show that it outperforms the baselines by a considerable margin in all scenarios. We also show that our E-PUR method can benefit existing mitigation methods. Our work establishes a simple but strong baseline defense for secure fine-tuned NLP models against backdoor attacks.
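The two-step Fine-mixing procedure can be illustrated with a short sketch. The abstract does not specify the exact mixing rule, so the snippet below assumes a simple element-wise scheme: each parameter keeps its fine-tuned value with probability `keep_ratio` and is otherwise reset to its clean pre-trained value, after which the mixed model is fine-tuned on a small trusted clean subset. The function name `fine_mix` and the `keep_ratio` knob are illustrative, not the paper's API.

```python
import torch

def fine_mix(finetuned_state, pretrained_state, keep_ratio=0.5, seed=0):
    """Step 1 of Fine-mixing (sketch): randomly keep a fraction of the
    (possibly backdoored) fine-tuned weights, reverting the rest to the
    clean pre-trained values."""
    gen = torch.Generator().manual_seed(seed)
    mixed = {}
    for name, w_ft in finetuned_state.items():
        w_pre = pretrained_state[name]
        # Bernoulli mask: 1 keeps the fine-tuned weight, 0 reverts to pre-trained.
        mask = (torch.rand(w_ft.shape, generator=gen) < keep_ratio).to(w_ft.dtype)
        mixed[name] = mask * w_ft + (1.0 - mask) * w_pre
    return mixed

# Step 2: load the mixed weights, then fine-tune on a small clean subset.
# model.load_state_dict(fine_mix(backdoored.state_dict(), clean_plm.state_dict()))
# ...standard fine-tuning loop on the trusted clean data goes here...
```

The intuition is that random reversion to pre-trained values disrupts the backdoor, which lives in the fine-tuned weight deltas, while the subsequent clean fine-tuning recovers task accuracy.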
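E-PUR targets backdoors hidden in the word-embedding matrix. The abstract does not spell out the detection rule; a plausible heuristic (an assumption here, not necessarily the paper's exact criterion) is that tokens whose embeddings drifted far from their pre-trained values yet rarely appear in the trusted clean data are likely trigger words, and their embedding rows are reset to the pre-trained values. All names below (`purify_embeddings`, `clean_token_freq`, `top_k`) are illustrative.

```python
import torch

def purify_embeddings(emb_ft, emb_pre, clean_token_freq, top_k=200):
    """E-PUR (sketch): reset suspicious word-embedding rows to their
    pre-trained values. emb_ft / emb_pre: (vocab, dim) tensors;
    clean_token_freq: (vocab,) token counts on trusted clean data."""
    drift = (emb_ft - emb_pre).norm(dim=1)           # how far each row moved
    rarity = 1.0 / (1.0 + clean_token_freq.float())  # rare in clean data -> suspicious
    score = drift * rarity
    suspicious = torch.topk(score, k=top_k).indices  # most suspicious tokens
    purified = emb_ft.clone()
    purified[suspicious] = emb_pre[suspicious]       # revert those rows
    return purified
```

Because trigger words must carry a strong signal yet seldom occur in benign text, combining drift with rarity flags them while leaving frequently used, legitimately updated embeddings intact.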
