SF-QA: Simple and Fair Evaluation Library for Open-domain Question Answering

01/06/2021
by Xiaopeng Lu, et al.

Although open-domain question answering (QA) has drawn great attention in recent years, building a full system requires large amounts of resources, and previous results are often difficult to reproduce due to complex configurations. In this paper, we introduce SF-QA: a simple and fair evaluation framework for open-domain QA. SF-QA modularizes the pipelined open-domain QA system, which makes the task easily accessible and reproducible for research groups without large computing resources. The proposed evaluation framework is publicly available, and anyone can contribute to the code and evaluations.
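The abstract describes open-domain QA as a pipeline of interchangeable modules, typically a retriever that finds evidence passages and a reader that extracts an answer from them. As a rough illustration of that decomposition, a minimal sketch in Python follows; it is not SF-QA's actual API, and every class and method name below is a hypothetical stand-in for whatever components the framework plugs together.

    # Minimal sketch of a modular retriever-reader QA pipeline.
    # All names here are illustrative assumptions, not SF-QA's API.
    from dataclasses import dataclass
    from typing import List, Protocol


    @dataclass
    class Passage:
        text: str
        score: float


    class Retriever(Protocol):
        """Stage 1: find candidate evidence passages for a question."""
        def retrieve(self, question: str, top_k: int) -> List[Passage]: ...


    class Reader(Protocol):
        """Stage 2: extract an answer span from retrieved passages."""
        def extract_answer(self, question: str, passages: List[Passage]) -> str: ...


    def answer(question: str, retriever: Retriever, reader: Reader,
               top_k: int = 5) -> str:
        # Run the two stages in sequence: retrieve evidence, then read.
        passages = retriever.retrieve(question, top_k)
        return reader.extract_answer(question, passages)

Keeping the two stages behind narrow interfaces is what makes evaluation comparable across systems: one plausible reading of "simple and fair" is that retriever outputs can be fixed or cached, so that different readers are scored against identical evidence and reported differences come from the component under test.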

Related research

QANUS: An Open-source Question-Answering Platform (01/01/2015)
In this paper, we motivate the need for a publicly available, generic so...

Evaluating Open Question Answering Evaluation (05/21/2023)
This study focuses on the evaluation of Open Question Answering (Open-QA...

Frustratingly Hard Evidence Retrieval for QA Over Books (07/20/2020)
A lot of progress has been made to improve question answering (QA) in re...

R2-D2: A Modular Baseline for Open-Domain Question Answering (09/08/2021)
This work presents a novel four-stage open-domain QA pipeline R2-D2 (Ran...

Pruning the Index Contents for Memory Efficient Open-Domain QA (02/21/2021)
This work presents a novel pipeline that demonstrates what is achievable...

Natural Language Generation at Scale: A Case Study for Open Domain Question Answering (03/19/2019)
Current approaches to Natural Language Generation (NLG) focus on domain-...

NeuralQA: A Usable Library for Question Answering (Contextual Query Expansion + BERT) on Large Datasets (07/30/2020)
Existing tools for Question Answering (QA) have challenges that limit th...
