Won't Get Fooled Again: Answering Questions with False Premises

07/05/2023
by Shengding Hu, et al.

Pre-trained language models (PLMs) have shown unprecedented potential in various fields, especially as the backbones of question-answering (QA) systems. However, they are easily deceived by tricky questions such as "How many eyes does the sun have?". Such frailties are often attributed to a lack of knowledge within the PLMs. In this paper, we find that PLMs already possess the knowledge required to rebut such questions; the key lies in activating that knowledge. To systematize this observation, we investigate PLMs' responses to one kind of tricky question, namely false premise questions (FPQs). We annotate FalseQA, a dataset containing 2365 human-written FPQs, together with explanations for the false premises and revised true-premise counterparts. Using FalseQA, we discover that PLMs can discriminate FPQs after fine-tuning on a moderate number of examples (e.g., 256). PLMs also generate reasonable explanations for the false premises, which serve as rebuttals. Furthermore, replaying a few general questions during training allows PLMs to excel on FPQs and general questions simultaneously. Our work suggests that once the rebuttal ability is stimulated, the knowledge inside PLMs can be effectively utilized to handle FPQs, which incentivizes research on PLM-based QA systems.
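The replay idea described above (fine-tuning on FPQs while mixing in a few general questions to preserve ordinary QA ability) can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the function name `build_training_mix` and the 25% replay ratio are hypothetical choices for the example.

```python
import random

def build_training_mix(fpq_examples, general_examples, replay_ratio=0.25, seed=0):
    """Mix FPQ (question, rebuttal) pairs with replayed general QA pairs.

    Hypothetical helper: `replay_ratio` controls how many general
    questions are replayed relative to the FPQ set, so the model keeps
    its general QA ability while learning to rebut false premises.
    """
    rng = random.Random(seed)
    n_replay = int(len(fpq_examples) * replay_ratio)
    # Sample without replacement; cap at the available general examples.
    replayed = rng.sample(general_examples, min(n_replay, len(general_examples)))
    mix = list(fpq_examples) + replayed
    rng.shuffle(mix)  # interleave FPQs and replayed questions
    return mix

# Toy data: 256 FPQ pairs (as in the paper's moderate-data setting)
# and a small pool of general QA pairs to replay.
fpqs = [("How many eyes does the sun have?",
         "The sun is a star and does not have eyes.")] * 256
general = [("What is the capital of France?", "Paris.")] * 64

batchable = build_training_mix(fpqs, general)
print(len(batchable))  # 256 FPQs plus 64 replayed general questions
```

The mixed list could then be fed to any standard sequence-to-sequence fine-tuning loop; the sketch only covers the data-mixing step.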


research · 06/03/2021 · Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?
Recent work has investigated the interesting question using pre-trained ...

research · 12/20/2022 · (QA)^2: Question Answering with Questionable Assumptions
Naturally-occurring information-seeking questions often contain question...

research · 07/18/2019 · Querying Knowledge via Multi-Hop English Questions
The inherent difficulty of knowledge specification and the lack of train...

research · 05/14/2021 · QAConv: Question Answering on Informative Conversations
This paper introduces QAConv, a new question answering (QA) dataset that...

research · 04/11/2017 · Leveraging Term Banks for Answering Complex Questions: A Case for Sparse Vectors
While open-domain question answering (QA) systems have proven effective ...

research · 12/20/2022 · Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions
When answering natural language questions over knowledge bases (KBs), in...

research · 02/24/2020 · Predicting Subjective Features from Questions on QA Websites using BERT
Modern Question-Answering websites, such as StackOverflow and Quora, hav...
