Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction

06/06/2023
by Yuhang Wang, et al.

Many works have employed prompt tuning methods to automatically optimize prompt queries and extract the factual knowledge stored in Pretrained Language Models. In this paper, we observe that the optimized prompts, both discrete and continuous, exhibit undesirable object bias: they favor a small set of objects regardless of the subject being queried. To handle this problem, we propose a novel prompt tuning method called MeCoD, consisting of three modules: Prompt Encoder, Object Equalization, and Biased Object Obstruction. Experimental results show that MeCoD can significantly reduce object bias while simultaneously improving the accuracy of factual knowledge extraction.
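To make the notion of object bias concrete: a prompt is biased when, across many different subjects, it keeps predicting the same few objects. One simple way to quantify this (a hypothetical metric sketched here for illustration, not the measure used by MeCoD) is the KL divergence between the empirical distribution of predicted objects and the uniform distribution over the candidate object set:

```python
import math
from collections import Counter

def object_bias_kl(predicted_objects, candidate_objects):
    """Quantify object bias as the KL divergence between the empirical
    distribution of a prompt's predicted objects and the uniform
    distribution over all candidate objects.  0.0 means the prompt
    spreads its predictions evenly; larger values mean it collapses
    onto a few favored objects."""
    counts = Counter(predicted_objects)
    n = len(predicted_objects)
    uniform = 1.0 / len(candidate_objects)
    kl = 0.0
    for obj in candidate_objects:
        p = counts.get(obj, 0) / n
        if p > 0:  # 0 * log(0) is taken as 0 by convention
            kl += p * math.log(p / uniform)
    return kl

candidates = ["London", "Paris", "Rome", "Berlin", "Madrid"]

# A heavily biased prompt answers "London" for almost every subject.
biased = ["London"] * 9 + ["Paris"]
# An unbiased prompt's predictions match the true, even spread of objects.
balanced = ["London", "Paris", "Rome", "Berlin", "Madrid"] * 2

print(object_bias_kl(biased, candidates))    # large positive value
print(object_bias_kl(balanced, candidates))  # 0.0
```

A bias-mitigation method like the one proposed here would aim to drive such a divergence toward zero without sacrificing per-query accuracy.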


