Generative Prompt Tuning for Relation Classification

10/22/2022
by Jiale Han, et al.

Using prompts to explore the knowledge contained within pre-trained language models for downstream tasks has become an active research topic. Current prompt tuning methods mostly convert downstream tasks into masked language modeling problems by adding cloze-style phrases and mapping all labels to fixed-length verbalizations, which has proven effective for tasks with simple label spaces. However, when applied to relation classification, which exhibits a complex label space, vanilla prompt tuning methods may struggle with label verbalizations of arbitrary length because of rigid prompt restrictions. Inspired by the text infilling task used to pre-train generative models, which can flexibly predict missing spans, we propose a novel generative prompt tuning method that reformulates relation classification as an infilling problem. This frees our approach from the limitations of current prompt-based approaches and thus fully exploits the rich semantics of entity and relation types. In addition, we design entity-guided decoding and discriminative relation scoring to generate and align relations effectively and efficiently during inference. Extensive experiments under fully supervised and low-resource settings demonstrate the effectiveness of our approach.
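To make the infilling reformulation concrete, below is a minimal sketch, assuming a T5-style generative model, of how relation classification can be cast as span infilling, with candidate relation verbalizations ranked by sequence likelihood as a simple stand-in for the paper's discriminative relation scoring. The sentence, entity mentions, prompt template, and label set are hypothetical illustrations, not the authors' released implementation.

```python
# Sketch: relation classification as T5-style text infilling.
# The sentence, entities, and label set are hypothetical.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

# Sentence with its head and tail entity mentions.
sentence = "Steve Jobs co-founded Apple in 1976."
head, tail = "Steve Jobs", "Apple"

# Infilling prompt: the blank can be filled by a verbalization of any
# length, which rigid fixed-length cloze templates cannot accommodate.
prompt = f"{sentence} The relation between {head} and {tail} is <extra_id_0> ."
enc = tokenizer(prompt, return_tensors="pt")

def score(verbalization: str) -> float:
    """Per-token log-likelihood of one candidate verbalization filling
    the blank -- a simple stand-in for discriminative relation scoring."""
    target = f"<extra_id_0> {verbalization} <extra_id_1>"
    labels = tokenizer(target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**enc, labels=labels).loss  # mean token NLL
    return -loss.item()  # higher is better

# Hypothetical label space with verbalizations of varying length.
candidates = ["founder of", "employed by", "place of birth", "no relation"]
print(max(candidates, key=score))
```

Scoring a fixed candidate set, rather than decoding free-form text, is one way to keep generated output aligned with the label space; the paper additionally constrains generation with entity information (entity-guided decoding), which this sketch omits.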


