Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer

01/14/2022
by Yinyi Wei, et al.

Recent advances in prompt-tuning cast few-shot classification tasks as masked language modeling problems. By wrapping the input in a template and using a verbalizer that maps the label space to a label word space, prompt-tuning can achieve excellent results in zero-shot and few-shot scenarios. However, typical prompt-tuning requires a manually designed verbalizer, which demands domain expertise and human effort, and an insufficient label word space may introduce considerable bias into the results. In this paper, we focus on eliciting knowledge from pretrained language models and propose a prototypical prompt verbalizer for prompt-tuning. Labels are represented by prototypical embeddings in the feature space rather than by discrete words, and the distances between the embedding at the masked position of the input and the prototypical embeddings serve as the classification criterion. In the zero-shot setting, knowledge is elicited from pretrained language models with a manually designed template to form the initial prototypical embeddings. In the few-shot setting, models are tuned to learn meaningful and interpretable prototypical embeddings, optimized with a contrastive learning objective. Extensive experiments on several many-class text classification datasets under low-resource settings demonstrate the effectiveness of our approach compared with other verbalizer construction methods. Our implementation is available at https://github.com/Ydongd/prototypical-prompt-verbalizer.
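The scoring mechanism described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a BERT-style masked language model loaded via Hugging Face Transformers, and the template string, the random prototype initialization, and helper names such as `class_scores` are made up for demonstration. It scores an input by the cosine similarity between the [MASK]-position embedding and one learnable prototype per class.

```python
# Minimal sketch of prototype-based verbalizer scoring (illustrative only).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

num_classes, hidden = 4, encoder.config.hidden_size
# One learnable prototype embedding per class. Randomly initialized here;
# the paper instead forms initial prototypes from knowledge elicited with
# a manually designed template in the zero-shot setting.
prototypes = torch.nn.Parameter(torch.randn(num_classes, hidden))

def class_scores(text: str, template: str = "{} It was about [MASK]."):
    """Wrap the input in a template and score each class by the similarity
    between the [MASK]-position embedding and the class prototype."""
    inputs = tokenizer(template.format(text), return_tensors="pt")
    hidden_states = encoder(**inputs).last_hidden_state        # (1, seq_len, hidden)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    mask_emb = hidden_states[0, mask_pos]                       # (1, hidden)
    # Cosine similarity to each prototype acts as the classification criterion.
    return F.cosine_similarity(mask_emb, prototypes, dim=-1)    # (num_classes,)

scores = class_scores("The team won the championship after a dramatic final.")
pred = scores.argmax().item()
```

In the paper's few-shot setting the prototypes would be refined jointly with the encoder under a contrastive objective; here they are left randomly initialized purely to show the distance-based scoring at the masked position.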


Related research

- Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification (08/04/2021)
- DePT: Decoupled Prompt Tuning (09/14/2023)
- Don't Prompt, Search! Mining-based Zero-Shot Learning with Language Models (10/26/2022)
- Prototypical Verbalizer for Prompt-based Few-shot Tuning (03/18/2022)
- Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction (06/06/2023)
- Distinguishability Calibration to In-Context Learning (02/13/2023)
- An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models (09/06/2021)
