Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions

12/20/2022
by   Mayur Patidar, et al.

When answering natural language questions over knowledge bases (KBs), incompleteness in the KB can naturally lead to many questions being unanswerable. While answerability has been explored in other QA settings, it has not been studied for QA over knowledge bases (KBQA). We first identify various forms of KB incompleteness that can result in a question being unanswerable. We then propose GrailQAbility, a new benchmark dataset, which systematically modifies GrailQA (a popular KBQA dataset) to represent all of these incompleteness issues. Testing two state-of-the-art KBQA models (trained on the original GrailQA as well as on our GrailQAbility), we find that both models struggle to detect unanswerable questions, or detect them for the wrong reasons. Consequently, both models suffer a significant loss in performance, underscoring the need for further research on making KBQA systems robust to unanswerability.
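To make concrete how KB incompleteness turns a question unanswerable, here is a minimal Python sketch (not the paper's code; the KB contents, the answer function, and the single-hop query form are all illustrative assumptions). It models a KB as a set of (subject, relation, object) triples and shows two forms of incompleteness, a dropped fact and a dropped relation, each of which leaves a previously answerable query with no derivable answer.

# Hypothetical toy KB: a set of (subject, relation, object) triples.
KB = {
    ("inception", "directed_by", "christopher_nolan"),
    ("inception", "release_year", "2010"),
    ("dunkirk", "directed_by", "christopher_nolan"),
}

def answer(kb, subject, relation):
    """Answer the single-hop query (subject, relation, ?x).
    Returns the set of matching objects, or None if the query
    cannot be answered from the KB."""
    results = {o for (s, r, o) in kb if s == subject and r == relation}
    return results or None

# Answerable against the full KB:
print(answer(KB, "inception", "directed_by"))          # {'christopher_nolan'}

# Incompleteness form 1: a missing fact (one triple dropped).
kb_no_fact = KB - {("inception", "directed_by", "christopher_nolan")}
print(answer(kb_no_fact, "inception", "directed_by"))  # None -> unanswerable

# Incompleteness form 2: a missing relation (schema element dropped entirely).
kb_no_rel = {(s, r, o) for (s, r, o) in KB if r != "release_year"}
print(answer(kb_no_rel, "inception", "release_year"))  # None -> unanswerable

A robust KBQA model should return "no answer" in the last two cases rather than hallucinate one; the paper's finding is that current models often fail to do so, or abstain for the wrong reasons.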

