Hurdles to Progress in Long-form Question Answering

03/10/2021
by   Kalpesh Krishna, et al.

The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer. While many models have recently been proposed for LFQA, we show in this paper that the task formulation raises fundamental challenges regarding evaluation and dataset creation that currently preclude meaningful modeling progress. To demonstrate these challenges, we first design a new system that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset. While our system tops the public leaderboard, a detailed analysis reveals several troubling trends: (1) our system's generated answers are not actually grounded in the documents that it retrieves; (2) ELI5 contains significant train / test overlap, as at least 81% of validation questions occur in paraphrased form in the training set; (3) ROUGE-L is not an informative metric of generated answer quality and can be easily gamed; and (4) human evaluations used for other text generation tasks are unreliable for LFQA. We provide suggestions to mitigate each of these issues, which we hope will lead to more rigorous LFQA research and meaningful progress in the future.
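To see why ROUGE-L is easy to game, it helps to look at how the metric is computed: it is an F1 score over the longest common subsequence (LCS) of tokens between a candidate and a reference answer, so any candidate that reuses common words in roughly the right order scores nonzero. The sketch below is a minimal, illustrative implementation (naive whitespace tokenization, no stemming), not the paper's evaluation code; the example strings are invented.

```python
# Minimal sketch of ROUGE-L (LCS-based F1). Tokenization is naive whitespace
# splitting; real implementations typically stem and normalize tokens.

def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    # F1 over the LCS of the two token sequences.
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# A degenerate "answer" that merely echoes the question still overlaps
# heavily with the reference, so it earns a substantial ROUGE-L score
# despite answering nothing -- one way the metric can be gamed.
question = "why does the moon cause tides"
reference = "the moon causes tides because its gravity pulls on the ocean"
print(rouge_l(question, reference))
```

Because the score depends only on token overlap, trivial baselines such as copying the question or retrieving a random training answer can rival genuine model outputs, which is the gaming behavior the abstract refers to.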


research
12/26/2021

New Methods & Metrics for LFQA tasks

Long-form question answering (LFQA) tasks require retrieving the documen...
research
11/15/2022

Generative Long-form Question Answering: Relevance, Faithfulness and Succinctness

In this thesis, we investigated the relevance, faithfulness, and succinc...
research
05/29/2023

A Critical Evaluation of Evaluations for Long-form Question Answering

Long-form question answering (LFQA) enables answering a wide range of qu...
research
01/03/2019

Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering

End-to-end neural models have made significant progress in question answ...
research
05/19/2022

Modeling Exemplification in Long-form Question Answering via Retrieval

Exemplification is a process by which writers explain or clarify a conce...
research
04/12/2022

ASQA: Factoid Questions Meet Long-Form Answers

An abundance of datasets and availability of reliable evaluation metrics...
research
04/24/2023

Unlocking Context Constraints of LLMs: Enhancing Context Efficiency of LLMs with Self-Information-Based Content Filtering

Large language models (LLMs) have received significant attention by achi...
