BERTese: Learning to Speak to BERT

03/09/2021
by   Adi Haviv, et al.

Large pre-trained language models have been shown to encode large amounts of world and commonsense knowledge in their parameters, leading to substantial interest in methods for extracting that knowledge. In past work, knowledge was extracted by taking manually-authored queries and gathering paraphrases for them using a separate pipeline. In this work, we propose a method for automatically rewriting queries into "BERTese", a paraphrased query that is directly optimized toward better knowledge extraction. To encourage meaningful rewrites, we add auxiliary loss functions that encourage the query to correspond to actual language tokens. We empirically show that our approach outperforms competing baselines, obviating the need for complex pipelines. Moreover, BERTese provides some insight into the type of language that helps language models perform knowledge extraction.
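The "actual language tokens" constraint can be illustrated with a minimal sketch. This is a hypothetical rendering, not the paper's implementation: the function names, the use of Euclidean distance, and the decoding-by-nearest-neighbor step are all assumptions. The idea is that a rewriter produces continuous query embeddings, and an auxiliary loss penalizes each position's distance to its nearest vocabulary embedding, so that the optimized query stays close to real tokens and can be decoded back into discrete text.

```python
import numpy as np

def nearest_token_loss(query_embeds, vocab_embeds):
    """Mean Euclidean distance from each rewritten-query embedding
    to its nearest vocabulary embedding. Driving this toward zero
    pushes the continuous query toward actual language tokens.
    query_embeds: (seq_len, dim); vocab_embeds: (vocab_size, dim)."""
    # Broadcast (seq_len, 1, dim) - (1, vocab_size, dim) -> (seq_len, vocab_size)
    dists = np.linalg.norm(query_embeds[:, None, :] - vocab_embeds[None, :, :], axis=-1)
    return dists.min(axis=1).mean()

def snap_to_tokens(query_embeds, vocab_embeds):
    """Decode the continuous query to discrete token ids by taking
    each position's nearest vocabulary embedding (argmin distance)."""
    dists = np.linalg.norm(query_embeds[:, None, :] - vocab_embeds[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

In a full system this loss term would be added, with some weight, to the main knowledge-extraction objective, so the rewriter trades off extraction accuracy against staying on the token manifold.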


Related research

08/04/2021 · How to Query Language Models?
Large pre-trained language models (LMs) are capable of not only recoveri...

05/17/2022 · SKILL: Structured Knowledge Infusion for Large Language Models
Large language models (LLMs) have demonstrated human-level performance o...

08/10/2020 · Does BERT Solve Commonsense Task via Commonsense Knowledge?
The success of pre-trained contextualized language models such as BERT m...

05/22/2023 · Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases
We introduce Chain of Knowledge (CoK), a framework that augments large l...

10/14/2021 · P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts
Recent work (e.g. LAMA (Petroni et al., 2019)) has found that the qualit...

05/20/2023 · Learning Horn Envelopes via Queries from Large Language Models
We investigate an approach for extracting knowledge from trained neural ...

06/06/2023 · Towards Alleviating the Object Bias in Prompt Tuning-based Factual Knowledge Extraction
Many works employed prompt tuning methods to automatically optimize prom...
