Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification

08/04/2021
by Shengding Hu et al.

Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification. In particular, previous studies suggest that prompt-tuning is remarkably superior to generic fine-tuning with extra classifiers in low-data scenarios. The core idea of prompt-tuning is to insert text pieces, i.e., a template, into the input and transform the classification problem into a masked language modeling problem, where a crucial step is to construct a projection, i.e., a verbalizer, between the label space and the label word space. A verbalizer is usually handcrafted or searched by gradient descent, which may lack coverage and introduce considerable bias and high variance into the results. In this work, we focus on incorporating external knowledge into the verbalizer, forming knowledgeable prompt-tuning (KPT), to improve and stabilize prompt-tuning. Specifically, we expand the label word space of the verbalizer using external knowledge bases (KBs) and refine the expanded label word space with the PLM itself before using it for prediction. Extensive experiments on zero- and few-shot text classification tasks demonstrate the effectiveness of knowledgeable prompt-tuning.
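To make the verbalizer idea concrete, below is a minimal sketch of prompt-based zero-shot classification with an expanded label word space. This is not the authors' implementation: the model name, the template "A [MASK] news: {text}", the two classes, and the hand-picked label word lists (standing in for words retrieved from an external KB) are all illustrative assumptions, and the refinement step is reduced to a simple average over each label word set.

```python
# Sketch: knowledgeable verbalizer for zero-shot text classification.
# Assumptions: bert-base-uncased as the PLM, a hypothetical news-topic
# task, and tiny hand-picked label word sets in place of KB retrieval.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Expanded label word space: each class maps to many related words,
# not a single handcrafted one.
label_words = {
    "sports": ["sports", "football", "basketball", "athletics"],
    "politics": ["politics", "government", "election", "policy"],
}

def classify(text: str) -> str:
    # Wrap the input with a template so that classification becomes
    # masked language modeling.
    prompt = f"A {tokenizer.mask_token} news: {text}"
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos.item()]
    probs = logits.softmax(dim=-1)

    scores = {}
    for label, words in label_words.items():
        ids = [tokenizer.convert_tokens_to_ids(w) for w in words]
        # Average the MLM probability over the expanded label word set.
        # KPT additionally refines this set with the PLM itself before
        # aggregating; a plain mean stands in for that step here.
        scores[label] = probs[ids].mean().item()
    return max(scores, key=scores.get)

print(classify("The team clinched the championship in overtime."))
```

Averaging over many KB-derived label words is what gives the method its coverage; the paper's refinement step then filters and calibrates noisy words in the expanded set with the PLM itself, which is what stabilizes the predictions.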


Related research

02/23/2022 · Prompt-Learning for Short Text Classification
In the short text, the extreme short length, feature sparsity and high a...

01/14/2022 · Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer
Recent advances on prompt-tuning cast few-shot classification tasks as a...

06/18/2023 · Evolutionary Verbalizer Search for Prompt-based Few Shot Text Classification
Recent advances for few-shot text classification aim to wrap textual inp...

11/30/2022 · Learning Label Modular Prompts for Text Classification in the Wild
Machine learning models usually assume i.i.d data during training and te...

10/29/2022 · STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification
The effectiveness of prompt learning has been demonstrated in different ...

10/14/2021 · Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models
Plug-and-play functionality allows deep learning models to adapt well to...

09/14/2023 · Ambiguity-Aware In-Context Learning with Large Language Models
In-context learning (ICL) i.e. showing LLMs only a few task-specific dem...
