SPE: Symmetrical Prompt Enhancement for Fact Probing

11/14/2022
by Yiyuan Li, et al.

Pretrained language models (PLMs) have been shown to accumulate factual knowledge during pretraining (Petroni et al., 2019). Recent works probe PLMs for the extent of this knowledge through prompts, in either discrete or continuous form. However, these methods do not consider the symmetry of the task: predicting the object given the subject, and predicting the subject given the object. In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for factual probing in PLMs that leverages the symmetry of the task by constructing symmetrical prompts for subject and object prediction. Our results on a popular factual probing dataset, LAMA, show significant improvement of SPE over previous probing methods.
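The abstract does not include code, but the core idea can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it trains learnable continuous prompt vectors to probe a masked language model in both directions of a relation (object prediction and subject prediction). The prompt length, the single capital-of example, and the `masked_lm_loss` helper are illustrative assumptions, and it assumes single-token answers.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
for p in model.parameters():          # freeze the PLM; only prompts are trained
    p.requires_grad_(False)
embed = model.get_input_embeddings()
dim = embed.embedding_dim

n_prompt_tokens = 3  # soft tokens per prompt; an illustrative choice
# One learnable continuous prompt per direction of the relation.
obj_prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)
subj_prompt = torch.nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)

def masked_lm_loss(prompt, known_entity, target_token):
    """Loss for predicting target at [MASK] in: [CLS] <soft prompt> entity [MASK] [SEP]."""
    known_ids = tokenizer(known_entity, add_special_tokens=False,
                          return_tensors="pt").input_ids[0]
    ids = torch.cat([torch.tensor([tokenizer.cls_token_id]), known_ids,
                     torch.tensor([tokenizer.mask_token_id]),
                     torch.tensor([tokenizer.sep_token_id])])
    token_embeds = embed(ids)
    # Splice the soft prompt vectors in right after [CLS].
    inputs = torch.cat([token_embeds[:1], prompt, token_embeds[1:]]).unsqueeze(0)
    logits = model(inputs_embeds=inputs).logits[0]
    mask_pos = 1 + n_prompt_tokens + len(known_ids)  # position of [MASK]
    target_id = tokenizer.convert_tokens_to_ids(target_token)  # single-token answer assumed
    return torch.nn.functional.cross_entropy(logits[mask_pos:mask_pos + 1],
                                             torch.tensor([target_id]))

# Symmetrical training signal: both directions of one (subject, relation, object) fact.
optimizer = torch.optim.Adam([obj_prompt, subj_prompt], lr=1e-3)
loss = masked_lm_loss(obj_prompt, "Paris", "France") + \
       masked_lm_loss(subj_prompt, "France", "Paris")
loss.backward()
optimizer.step()
```

The key design point this sketch captures is that the two prompts are optimized jointly, so the model is asked the same fact from both sides rather than only predicting objects, which is the asymmetry the paper identifies in prior probing methods.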

Related research

- Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction (06/06/2023): Many works employed prompt tuning methods to automatically optimize prom...
- Improved LCAs for constructing spanners (05/11/2021): In this paper we study the problem of constructing spanners in a local m...
- Learning How to Ask: Querying LMs with Mixtures of Soft Prompts (04/14/2021): Natural-language prompts have recently been used to coax pretrained lang...
- Exploring Category Structure with Contextual Language Models and Lexical Semantic Networks (02/14/2023): Recent work on predicting category structure with distributional models,...
- Inferring symmetry in natural language (10/16/2020): We present a methodological framework for inferring symmetry of verb pre...
- Do PLMs Know and Understand Ontological Knowledge? (09/12/2023): Ontological knowledge, which comprises classes and properties and their ...
- Does My Representation Capture X? Probe-Ably (04/12/2021): Probing (or diagnostic classification) has become a popular strategy for...
