Code-Style In-Context Learning for Knowledge-Based Question Answering

09/09/2023
by Zhijie Nie, et al.

Current methods for Knowledge-Based Question Answering (KBQA) usually rely on complex training techniques and model frameworks, which limits their practical applicability. Recently, the emergence of In-Context Learning (ICL) capabilities in Large Language Models (LLMs) has provided a simple, training-free semantic parsing paradigm for KBQA: given a small number of questions and their labeled logical forms as demonstration examples, an LLM can infer the task intent and generate the logical form for a new question. However, current powerful LLMs see few logical forms during pre-training, which leads to a high format error rate. To solve this problem, we propose a code-style in-context learning method for KBQA, which converts the generation of unfamiliar logical forms into the more familiar process of code generation for LLMs. Experimental results on three mainstream datasets show that our method dramatically reduces formatting errors in the generated logical forms while achieving a new state of the art on WebQSP, GrailQA, and GraphQ under the few-shot setting.
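The core idea, casting logical-form generation as code generation via few-shot prompting, can be illustrated with a small prompt-construction sketch. This is a minimal sketch under stated assumptions, not the paper's actual method: the demonstration questions, the JOIN/RELATION/ENTITY operators, and the llm_generate helper are all illustrative placeholders, and the paper's real prompt format and logical-form conventions may differ.

# A minimal sketch of code-style in-context learning for KBQA.
# Assumptions (not from the paper): the prompt wording, the demo
# questions, the JOIN/RELATION/ENTITY operators, and the
# `llm_generate` helper are all illustrative.

DEMOS = [
    {
        "question": "Who directed Inception?",
        # Logical form rendered as code: a chain of familiar
        # function calls instead of a raw S-expression.
        "code": 'answer = JOIN(RELATION("film.film.directed_by"), ENTITY("Inception"))',
    },
    {
        "question": "Which rivers flow through Paris?",
        "code": 'answer = JOIN(RELATION("geography.river.cities"), ENTITY("Paris"))',
    },
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot, code-style prompt for the LLM."""
    parts = ["# Translate each question into a KB query program.\n"]
    for demo in DEMOS:
        parts.append(f'# Question: {demo["question"]}\n{demo["code"]}\n')
    # The new question is appended in the same code-comment style,
    # so the LLM's natural continuation is another query program.
    parts.append(f"# Question: {question}\n")
    return "\n".join(parts)

prompt = build_prompt("Who wrote the novel Dune?")
# completion = llm_generate(prompt)  # hypothetical LLM call
print(prompt)

Because the demonstrations look like ordinary code that LLMs saw abundantly during pre-training, the model is far less likely to produce a malformed logical form than when prompted with raw S-expressions.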


Related research

09/06/2019 - Effective Search of Logical Forms for Weakly Supervised Knowledge-Based Question Answering
Many algorithms for Knowledge-Based Question Answering (KBQA) depend on ...

05/02/2023 - Few-shot In-context Learning for Knowledge Base Question Answering
Question answering over knowledge bases is considered a difficult proble...

04/05/2020 - TAPAS: Weakly Supervised Table Parsing via Pre-training
Answering natural language questions over tables is usually seen as a se...

05/27/2019 - Compositional pre-training for neural semantic parsing
Semantic parsing is the process of translating natural language utteranc...

04/18/2021 - Case-based Reasoning for Natural Language Queries over Knowledge Bases
It is often challenging for a system to solve a new complex problem from...

09/20/2018 - Symbolic Priors for RNN-based Semantic Parsing
Seq2seq models based on Recurrent Neural Networks (RNNs) have recently r...

10/24/2022 - TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases
Pre-trained language models (PLMs) have shown their effectiveness in mul...
