Training-free Lexical Backdoor Attacks on Language Models

02/08/2023
by   Yujin Huang, et al.
0

Large-scale language models have achieved tremendous success across various natural language processing (NLP) applications. Nevertheless, language models are vulnerable to backdoor attacks, which inject stealthy triggers into models for steering them to undesirable behaviors. Most existing backdoor attacks, such as data poisoning, require further (re)training or fine-tuning language models to learn the intended backdoor patterns. The additional training process however diminishes the stealthiness of the attacks, as training a language model usually requires long optimization time, a massive amount of data, and considerable modifications to the model parameters. In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models. Our attack is achieved by injecting lexical triggers into the tokenizer of a language model via manipulating its embedding dictionary using carefully designed rules. These rules are explainable to human developers which inspires attacks from a wider range of hackers. The sparse manipulation of the dictionary also habilitates the stealthiness of our attack. We conduct extensive experiments on three dominant NLP tasks based on nine language models to demonstrate the effectiveness and universality of our attack. The code of this work is available at https://github.com/Jinxhy/TFLexAttack.

READ FULL TEXT

page 1

page 2

page 3

page 4

research
11/10/2022

MSDT: Masked Language Model Scoring Defense in Text Domain

Pre-trained language models allowed us to process downstream tasks with ...
research
11/17/2022

Ignore Previous Prompt: Attack Techniques For Language Models

Transformer-based large language models (LLMs) provide a powerful founda...
research
05/01/2021

Hidden Backdoors in Human-Centric Language Models

Natural language processing (NLP) systems have been proven to be vulnera...
research
03/06/2023

On the Feasibility of Specialized Ability Extracting for Large Language Code Models

Recent progress in large language code models (LLCMs) has led to a drama...
research
03/06/2023

Spelling convention sensitivity in neural language models

We examine whether large neural language models, trained on very large c...
research
07/27/2023

Backdoor Attacks for In-Context Learning with Language Models

Because state-of-the-art language models are expensive to train, most pr...
research
05/15/2023

Memorization for Good: Encryption with Autoregressive Language Models

Over-parameterized neural language models (LMs) can memorize and recite ...

Please sign up or login with your details

Forgot password? Click here to reset