NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned

by Sewon Min, et al.

We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing large, redundant retrieval corpora and the parameters of large learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.
