Model-tuning Via Prompts Makes NLP Models Adversarially Robust

03/13/2023
by Mrigank Raman, et al.

In recent years, NLP practitioners have converged on the following practice: (i) import an off-the-shelf pretrained (masked) language model; (ii) append a multilayer perceptron atop the CLS token's hidden representation (with randomly initialized weights); and (iii) fine-tune the entire model on a downstream task (MLP). This procedure has produced massive gains on standard NLP benchmarks, but these models remain brittle, even to mild adversarial perturbations such as word-level synonym substitutions. In this work, we demonstrate surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP), an alternative method of adapting to downstream tasks. Rather than modifying the model (by appending an MLP head), MVP instead modifies the input (by appending a prompt template). Across three classification datasets, MVP improves performance against adversarial word-level synonym substitutions by an average of 8% over standard methods, and even outperforms adversarial-training-based state-of-art defenses by 3.5%. Combining MVP with adversarial training yields further improvements in robust accuracy while maintaining clean accuracy. Finally, we conduct ablations to investigate the mechanism underlying these gains. Notably, we find that the vulnerability of MLP can be attributed mainly to the misalignment between the pre-training and fine-tuning tasks, and to the randomly initialized MLP parameters. Code is available at https://github.com/acmi-lab/mvp
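
The contrast between the two adaptation strategies can be sketched roughly as follows. This is an illustrative sketch using the Hugging Face transformers library, not the authors' released implementation; the choice of roberta-base, the prompt template "It was <mask>.", and the verbalizer words "bad"/"good" are assumptions made only for the example.

```python
# Minimal sketch: (a) standard fine-tuning with a randomly initialized
# classification head vs. (b) prompt-based adaptation that reuses the
# pretrained MLM head. Model name, template, and verbalizers are assumptions.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# (a) The "MLP" setup from the abstract: a new classification head is
#     attached to the encoder and initialized randomly.
mlp_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

# (b) The prompt-based setup: keep the pretrained masked-LM head and score
#     candidate answer words at a mask position appended to the input.
mvp_model = AutoModelForMaskedLM.from_pretrained("roberta-base")

def prompt_class_scores(text, verbalizers=("bad", "good")):
    """Return one score per class by reading the MLM logits at the mask token."""
    prompt = f"{text} It was {tokenizer.mask_token}."          # assumed template
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        mask_logits = mvp_model(**inputs).logits[0, mask_pos]  # (1, vocab_size)
    verbalizer_ids = [
        tokenizer(" " + w, add_special_tokens=False).input_ids[0] for w in verbalizers
    ]
    return mask_logits[0, verbalizer_ids]                      # one logit per class word

print(prompt_class_scores("The movie was a delightful surprise."))
```

In setup (b) no new parameters are introduced at inference time: the class decision comes from the same masked-word-prediction task the model was pretrained on, which is the alignment the abstract identifies as a source of robustness.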
