Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models

10/14/2021
by Xin Zhou, et al.

Plug-and-play functionality allows deep learning models to adapt to different tasks without modifying any parameters. Recently, prefix-tuning was shown to be a plug-and-play method for various text generation tasks: task-specific continuous vectors are simply inserted into the inputs. However, existing plug-and-play methods break down on sequence labeling tasks, since different label sets require changes to the architecture of the model's classifier. In this work, we propose label word prediction instead of classification, so that the architecture of the pre-trained model can be fully reused across sequence labeling tasks. Specifically, for each task, a label word set is first constructed by selecting a high-frequency word for each class; task-specific vectors are then inserted into the inputs and optimized to steer the model's predictions toward the corresponding label words. As a result, a frozen pre-trained language model can perform different tasks simply by switching the plugin vectors on the input. Experimental results on three sequence labeling tasks show that the proposed method achieves performance comparable to standard fine-tuning with only 0.1% of the task-specific parameters. In addition, when switching between tasks in resource-constrained scenarios, our method is up to 70 times faster than non-plug-and-play methods.
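To make the mechanism concrete, here is a minimal sketch of the idea, assuming a masked language model (roberta-base here) with its masked-LM head used for label word prediction. The label-word map, the PluginVectors module, the prefix length, and all helper names below are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of plug-and-play sequence labeling via label word
# prediction. Assumptions: roberta-base as the frozen LM, a hypothetical
# label-word map for NER, and 10 plugin vectors per task.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()
for p in model.parameters():  # the pre-trained language model stays frozen
    p.requires_grad = False

# One high-frequency label word per class (hypothetical choices for NER).
label_words = {"O": "none", "PER": "person", "LOC": "location", "ORG": "organization"}
label_ids = {c: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + w))[0]
             for c, w in label_words.items()}

class PluginVectors(nn.Module):
    """Task-specific continuous vectors prepended to the input embeddings.
    These are the only trainable parameters; they are optimized so that the
    frozen LM predicts the correct label word at each token position."""
    def __init__(self, n_vectors: int, hidden: int):
        super().__init__()
        self.vectors = nn.Parameter(torch.randn(n_vectors, hidden) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        prefix = self.vectors.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)

plugin = PluginVectors(n_vectors=10, hidden=model.config.hidden_size)

@torch.no_grad()
def tag(sentence: str) -> list[str]:
    enc = tokenizer(sentence, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"])
    embeds = plugin(embeds)  # switch tasks by swapping in a different `plugin`
    mask = torch.ones(embeds.shape[:2], dtype=torch.long)
    logits = model(inputs_embeds=embeds, attention_mask=mask).logits
    token_logits = logits[0, plugin.vectors.size(0):]  # drop prefix positions
    # Score each token position only over the label-word vocabulary
    # (special tokens included; a real implementation would align to words).
    classes = list(label_ids)
    scores = token_logits[:, [label_ids[c] for c in classes]]
    return [classes[i] for i in scores.argmax(dim=-1).tolist()]
```

Under this reading, switching tasks amounts to loading a different set of plugin vectors while the language model itself, including its output head, stays untouched, which is what makes the approach plug-and-play.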



Related research

03/31/2022  Visual Prompting: Modifying Pixel Space to Adapt Pre-trained Models
Prompting has recently become a popular paradigm for adapting language m...

04/08/2020  Exploiting Redundancy in Pre-trained Language Models for Efficient Transfer Learning
Large pre-trained contextual word representations have transformed the f...

07/09/2019  To Tune or Not To Tune? How About the Best of Both Worlds?
The introduction of pre-trained language models has revolutionized natur...

09/28/2021  Template-free Prompt Tuning for Few-shot NER
Prompt-based methods have been successfully applied in sentence-level fe...

08/04/2021  Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
Tuning pre-trained language models (PLMs) with task-specific prompts has...

06/01/2023  Prompt Algebra for Task Composition
We investigate whether prompts learned independently for different tasks...

10/09/2021  X-model: Improving Data Efficiency in Deep Learning with A Minimax Model
To mitigate the burden of data labeling, we aim at improving data effici...
