Challenges in Information-Seeking QA: Unanswerable Questions and Paragraph Retrieval
Recent progress in pretrained language models has "solved" many reading comprehension benchmark datasets. Yet information-seeking Question Answering (QA) datasets, where questions are written without access to the evidence document, remain unsolved. We analyze two such datasets (Natural Questions and TyDi QA) to identify the remaining headroom: paragraph selection and answerability classification, i.e., determining whether the paired evidence document contains the answer to the query. In other words, when given a gold paragraph and told whether it contains an answer, models easily outperform a single annotator on both datasets. After identifying unanswerability as a bottleneck, we further inspect what makes questions unanswerable. Our study points to avenues for future research, both for dataset creation and model development.
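To make the answerability-classification setup concrete, below is a minimal sketch of scoring whether a paragraph answers a question with a binary sequence-pair classifier. This is an illustration, not the authors' implementation: the checkpoint name is a placeholder (in practice you would fine-tune on (question, paragraph, answerable?) pairs), and the convention that label 1 means "answerable" is an assumption.

```python
# Minimal sketch of answerability classification: given a question and a
# candidate paragraph, predict whether the paragraph contains the answer.
# NOTE: "bert-base-uncased" is a stand-in base model, not a checkpoint
# actually fine-tuned for this task; its classification head is untrained.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; swap in a fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def answerability_score(question: str, paragraph: str) -> float:
    """Return P(paragraph contains an answer to question), assuming label 1 = answerable."""
    inputs = tokenizer(question, paragraph, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(answerability_score("Who wrote Hamlet?",
                          "Hamlet is a tragedy written by William Shakespeare."))
```

The same scorer could, in principle, also serve the paragraph-selection step by ranking a document's paragraphs and keeping the highest-scoring one, though the paper treats the two bottlenecks as separate analyses.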