An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models

09/06/2021
by Tianxing He, et al.

Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models. Existing work uses considerable amounts of data to tune the prompts for better performance. In this work, we compare a variety of approaches under a few-shot knowledge probing setting, where only a small number (e.g., 10 or 20) of example triples are available. In addition, we create a new dataset named TREx-2p, which contains 2-hop relations. We report that few-shot examples can strongly boost the probing performance for both 1-hop and 2-hop relations. In particular, we find that a simple-yet-effective approach of finetuning the bias vectors in the model outperforms existing prompt-engineering methods. Our dataset and code are available at <https://github.com/cloudygoose/fewshot_lama>.
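
The bias-vector finetuning approach mentioned in the abstract is simple to sketch: freeze every weight matrix of the pretrained model, let only the bias terms receive gradients, train on the handful of example triples filled into a cloze-style prompt, and then probe the masked position. The snippet below is a minimal illustration of that setup, not the authors' code from the linked repository; the model name, the manual prompt template, the toy capital-of triples, and the hyperparameters are all placeholder assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# bias-only finetuning of a masked LM for few-shot knowledge probing.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

# Freeze all weight matrices; train only the bias vectors.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith(".bias")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# A few-shot support set for one relation (toy examples); objects are
# assumed to be single tokens in the vocabulary for simplicity.
few_shot = [("Paris", "France"), ("Tokyo", "Japan"), ("Rome", "Italy")]
template = "{} is the capital of [MASK]."

model.train()
for _ in range(10):  # a few passes over the tiny support set
    for subj, obj in few_shot:
        enc = tokenizer(template.format(subj), return_tensors="pt")
        labels = enc["input_ids"].clone()
        # Supervise only the masked position with the gold object token;
        # -100 tells the loss to ignore all other positions.
        labels[labels != tokenizer.mask_token_id] = -100
        labels[labels == tokenizer.mask_token_id] = tokenizer.convert_tokens_to_ids(obj)
        loss = model(**enc, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Because only the bias terms are updated, the number of trainable parameters is a tiny fraction of the full model, which helps explain why this kind of tuning remains workable with only 10 or 20 example triples.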

