A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement

03/21/2022
by Yuting Yang, et al.

Recent years have seen the wide application of NLP models in crucial areas such as finance, medical treatment, and news media, raising concerns about model robustness and vulnerabilities. In this paper, we propose a novel prompt-based adversarial attack to compromise NLP models, together with a prompt-based robustness enhancement technique. We first construct a malicious prompt for each instance and generate adversarial examples via mask-and-filling guided by the malicious purpose. Our attack targets the inherent vulnerabilities of NLP models, allowing us to generate samples even without interacting with the victim model, as long as it is based on pre-trained language models (PLMs). Furthermore, we design a prompt-based adversarial training method to improve the robustness of PLMs. Because our training method does not actually generate adversarial samples, it can be applied to large-scale training sets efficiently. The experimental results show that our attack achieves a high success rate with more diverse, fluent, and natural adversarial examples. In addition, our robustness enhancement method significantly improves the ability of models to resist adversarial attacks. Our work indicates that the prompting paradigm has great potential for probing fundamental flaws of PLMs and for fine-tuning them on downstream tasks.
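The mask-and-filling step can be pictured with an ordinary masked language model. Below is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model choice (bert-base-uncased), the prepended "malicious" prompt string, and the candidate-selection heuristic are illustrative assumptions, not the paper's exact templates or scoring.

```python
# Minimal sketch of prompt-guided mask-and-filling, assuming a BERT-style
# masked language model. The prompt template and selection below are
# illustrative stand-ins for the paper's malicious-prompt construction.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def generate_candidates(sentence: str, mask_index: int, top_k: int = 5):
    """Mask one token of `sentence` and let the PLM propose replacements.

    A hypothetical "malicious" prompt is prepended so the fill-in is
    steered toward a label-flipping completion; the real prompt
    construction is task-specific in the paper.
    """
    tokens = sentence.split()
    tokens[mask_index] = fill_mask.tokenizer.mask_token
    # Hypothetical prompt nudging the model toward a negative completion.
    prompt = "The following review is negative: " + " ".join(tokens)
    predictions = fill_mask(prompt, top_k=top_k)
    return [p["token_str"] for p in predictions]

if __name__ == "__main__":
    # Propose adversarial substitutes for the word "great".
    print(generate_candidates("the movie was great fun", mask_index=3))
```

Because the candidates come from a PLM rather than from querying a victim classifier, a filter of this kind can run offline, which is consistent with the paper's claim that no interaction with the victim model is required.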

Related research

07/04/2023 · SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification
Despite their promising performance across various natural language proc...

06/08/2023 · Expanding Scope: Adapting English Adversarial Attacks to Chinese
Recent studies have revealed that NLP predictive models are vulnerable t...

11/04/2021 · Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
Large-scale pre-trained language models have achieved tremendous success...

05/18/2020 · An Evasion Attack against ML-based Phishing URL Detectors
Background: Over the year, Machine Learning Phishing URL classification ...

05/08/2023 · Toward Adversarial Training on Contextualized Language Representation
Beyond the success story of adversarial training (AT) in the recent text...

04/23/2020 · On Adversarial Examples for Biomedical NLP Tasks
The success of pre-trained word embeddings has motivated its use in task...

09/16/2020 · Contextualized Perturbation for Textual Adversarial Attack
Adversarial examples expose the vulnerabilities of natural language proc...
