New Methods & Metrics for LFQA tasks

12/26/2021
by Suchismit Mahapatra, et al.

Long-form question answering (LFQA) tasks require retrieving documents pertinent to a query and using them to form a paragraph-length answer. Despite considerable progress in LFQA modeling, fundamental issues impede its progress: i) train/validation/test dataset overlap, ii) the absence of automatic metrics, and iii) generated answers not being "grounded" in the retrieved documents. This work addresses each of these critical bottlenecks, contributing natural language inference/generation (NLI/NLG) methods and metrics that make significant strides toward alleviating them.
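The first bottleneck, train/validation/test dataset overlap, can be illustrated with a simple check. This is not the paper's method, just a minimal sketch: flag test questions whose word n-grams largely reappear in a training question. The function names and example questions are hypothetical.

```python
from collections import Counter

def ngrams(text, n=3):
    """Multiset of lowercased word n-grams of a string."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def overlap_score(q1, q2, n=3):
    """Fraction of q1's n-grams that also appear in q2, in [0.0, 1.0]."""
    a, b = ngrams(q1, n), ngrams(q2, n)
    if not a:
        return 0.0
    shared = sum((a & b).values())  # multiset intersection
    return shared / sum(a.values())

# Hypothetical near-duplicate pair: a high score flags possible leakage
# of a training question into the test split.
train_q = "why does the sky appear blue during the day"
test_q = "why does the sky appear blue on a clear day"
score = overlap_score(test_q, train_q)
```

In practice one would run such a check pairwise between splits and inspect high-scoring pairs; semantic-similarity models catch paraphrased duplicates that surface n-grams miss.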


Related research:

Hurdles to Progress in Long-form Question Answering (03/10/2021)
The task of long-form question answering (LFQA) involves retrieving docu...

Generative Long-form Question Answering: Relevance, Faithfulness and Succinctness (11/15/2022)
In this thesis, we investigated the relevance, faithfulness, and succinc...

A Critical Evaluation of Evaluations for Long-form Question Answering (05/29/2023)
Long-form question answering (LFQA) enables answering a wide range of qu...

ASQA: Factoid Questions Meet Long-Form Answers (04/12/2022)
An abundance of datasets and availability of reliable evaluation metrics...

Query Refinement Prompts for Closed-Book Long-Form Question Answering (10/31/2022)
Large language models (LLMs) have been shown to perform well in answerin...

VANiLLa: Verbalized Answers in Natural Language at Large Scale (05/24/2021)
In the last years, there have been significant developments in the area ...

AI pptX: Robust Continuous Learning for Document Generation with AI Insights (10/02/2020)
Business analysts create billions of slide decks, reports and documents ...
