Backdoor Learning on Sequence to Sequence Models

05/03/2023
by   Lichang Chen, et al.
0

Backdoor learning has become an emerging research area towards building a trustworthy machine learning system. While a lot of works have studied the hidden danger of backdoor attacks in image or text classification, there is a limited understanding of the model's robustness on backdoor attacks when the output space is infinite and discrete. In this paper, we study a much more challenging problem of testing whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks. Specifically, we find by only injecting 0.2% samples of the dataset, we can cause the seq2seq model to generate the designated keyword and even the whole sentence. Furthermore, we utilize Byte Pair Encoding (BPE) to create multiple new triggers, which brings new challenges to backdoor detection since these backdoors are not static. Extensive experiments on machine translation and text summarization have been conducted to show our proposed methods could achieve over 90% attack success rate on multiple datasets and models.

READ FULL TEXT

page 1

page 2

page 3

page 4

research
03/03/2018

Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples

Crafting adversarial examples has become an important technique to evalu...
research
04/11/2019

Membership Inference Attacks on Sequence-to-Sequence Models

Data privacy is an important issue for "machine learning as a service" p...
research
12/05/2018

Neural Abstractive Text Summarization with Sequence-to-Sequence Models

In the past few years, neural abstractive text summarization with sequen...
research
09/10/2023

Machine Translation Models Stand Strong in the Face of Adversarial Attacks

Adversarial attacks expose vulnerabilities of deep learning models by in...
research
08/22/2019

Unsupervised Text Summarization via Mixed Model Back-Translation

Back-translation based approaches have recently lead to significant prog...
research
05/02/2023

Sentiment Perception Adversarial Attacks on Neural Machine Translation Systems

With the advent of deep learning methods, Neural Machine Translation (NM...
research
07/22/2021

Spinning Sequence-to-Sequence Models with Meta-Backdoors

We investigate a new threat to neural sequence-to-sequence (seq2seq) mod...

Please sign up or login with your details

Forgot password? Click here to reset