Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering

06/07/2023
by Jinheon Baek et al.

Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks, based on their internal knowledge stored in parameters during pre-training. However, such internalized knowledge might be insufficient or incorrect, which could lead LLMs to generate factually wrong answers. Furthermore, fine-tuning LLMs to update their knowledge is expensive. To this end, we propose to augment the knowledge directly in the input of LLMs. Specifically, we first retrieve the facts relevant to the input question from the knowledge graph, based on semantic similarities between the question and its associated facts. After that, we prepend the retrieved facts to the input question in the form of a prompt, which is then forwarded to the LLM to generate the answer. Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training and is thus completely zero-shot. We validate the performance of our KAPING framework on the knowledge graph question answering task, which aims to answer the user's question based on facts over a knowledge graph, on which ours outperforms relevant zero-shot baselines by up to 48% on average, across multiple LLMs of various sizes.
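To make the retrieve-then-prompt pipeline concrete, here is a minimal sketch, not the authors' released implementation: it assumes a sentence-transformers encoder for the semantic-similarity retrieval step, a simple triple verbalizer, and a hypothetical generate callable standing in for whichever LLM API is used.

    # Minimal sketch of a KAPING-style retrieve-then-prompt pipeline.
    # Assumptions (not from the paper's code release): a sentence-transformers
    # encoder for semantic similarity, and a hypothetical `generate` function
    # (str -> str) wrapping some LLM completion API.
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    def verbalize(triple):
        # Turn a (subject, relation, object) KG fact into a textual string.
        s, r, o = triple
        return f"({s}, {r}, {o})"

    def retrieve_facts(question, triples, k=10):
        # Rank the question's associated facts by embedding cosine
        # similarity to the question and keep the top-k most relevant.
        facts = [verbalize(t) for t in triples]
        q_emb = encoder.encode(question, convert_to_tensor=True)
        f_emb = encoder.encode(facts, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, f_emb)[0]
        top = scores.topk(min(k, len(facts))).indices.tolist()
        return [facts[i] for i in top]

    def build_prompt(question, facts):
        # Prepend the retrieved facts to the question; the exact template
        # wording here is illustrative, not the paper's.
        context = "\n".join(facts)
        return (f"Below are facts that may be relevant to the question.\n"
                f"{context}\n\nQuestion: {question}\nAnswer:")

    def answer(question, triples, generate):
        # No training anywhere: retrieval plus prompting only.
        prompt = build_prompt(question, retrieve_facts(question, triples))
        return generate(prompt)

For example, answer("Where was Michael Jackson born?", triples, generate=my_llm) would rank the question's associated triples, keep the top ten, and query the LLM with those facts prepended to the question, mirroring the zero-shot setup described in the abstract.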


