Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias

05/08/2023
by   Zhiyuan Zhang, et al.

Pre-trained Language Models (PLMs) may be poisoned with backdoors or bias injected by a malicious attacker during the fine-tuning process. A core challenge in purifying a potentially poisoned PLM is precisely locating the poisonous dimensions. To address this issue, we propose the Fine-purifying approach, which uses diffusion theory to study the dynamic process of fine-tuning and thereby find potentially poisonous dimensions. Based on the relationship between parameter drifts and Hessians across dimensions, we can detect dimensions with abnormal dynamics, purify them by resetting them to the clean pre-trained weights, and then fine-tune the purified weights on a small clean dataset. To the best of our knowledge, we are the first to study fine-tuning dynamics guided by diffusion theory for safety or defense purposes. Experimental results validate the effectiveness of Fine-purifying even with a small clean dataset.
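The detect-and-reset procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `fine_purify`, the use of a diagonal Hessian approximation, the particular anomaly score (squared drift weighted by the Hessian), and the quantile threshold are all assumptions made here for concreteness.

```python
import numpy as np

def fine_purify(theta_pre, theta_ft, hessian_diag, quantile=0.99):
    """Sketch of a drift-vs-Hessian purification step.

    theta_pre:    clean pre-trained weights (1-D array)
    theta_ft:     possibly poisoned fine-tuned weights (1-D array)
    hessian_diag: per-dimension diagonal Hessian estimate (1-D array)
    """
    # Per-dimension parameter drift away from the clean pre-trained weights.
    drift = theta_ft - theta_pre
    # Heuristic anomaly score (an assumption of this sketch): under clean
    # fine-tuning dynamics, squared drift should roughly scale with the
    # inverse Hessian, so drift^2 * H is expected to be comparable across
    # dimensions; dimensions where it is abnormally large are suspicious.
    score = drift ** 2 * (hessian_diag + 1e-12)
    # Flag the dimensions whose score exceeds a high quantile.
    poisonous = score > np.quantile(score, quantile)
    # Purify: reset flagged dimensions to the clean pre-trained weights.
    # (The purified weights would then be fine-tuned on a small clean set.)
    purified = np.where(poisonous, theta_pre, theta_ft)
    return purified, poisonous
```

In this toy form, a dimension driven far from its pre-trained value by a backdoor objective receives a large score and is reset, while dimensions with ordinary task-driven drift are left untouched.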


research
12/22/2021

How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?

The fine-tuning of pre-trained language models has a great success in ma...
research
10/18/2022

Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models

Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attac...
research
04/13/2023

DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning

Diffusion models have proven to be highly effective in generating high-q...
research
06/17/2021

An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models

The performance of fine-tuning pre-trained language models largely depen...
research
10/06/2019

Transforming the output of GANs by fine-tuning them with features from different datasets

In this work we present a method for fine-tuning pre-trained GANs with f...
research
04/15/2022

Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts

Due to the superior performance, large-scale pre-trained language models...
research
05/19/2023

Self-Agreement: A Framework for Fine-tuning Language Models to Find Agreement among Diverse Opinions

Finding an agreement among diverse opinions is a challenging topic in mu...
