RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering

10/25/2022
by Victor Zhong, et al.

We introduce RoMQA, the first benchmark for robust, multi-evidence, multi-answer question answering (QA). RoMQA contains clusters of questions that are derived from related constraints mined from the Wikidata knowledge graph. RoMQA evaluates the robustness of QA models to varying constraints by measuring worst-case performance within each question cluster. Compared to prior QA datasets, RoMQA has more human-written questions that require reasoning over more evidence text and that have, on average, many more correct answers. In addition, human annotators rate RoMQA questions as more natural or likely to be asked by people. We evaluate state-of-the-art large language models in zero-shot, few-shot, and fine-tuning settings, and find that RoMQA is challenging: zero-shot and few-shot models perform similarly to naive baselines, while supervised retrieval methods perform well below gold-evidence upper bounds. Moreover, existing models are not robust to variations in question constraints, but can be made more robust by tuning on clusters of related questions. Our results show that RoMQA is a challenging benchmark for large language models and provides a quantifiable test for building more robust QA methods.
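To make the cluster-based evaluation concrete, below is a minimal sketch of the worst-case-within-cluster aggregation described above: score each question individually, take the minimum score within each cluster of related questions, then average across clusters. The question IDs, cluster labels, and scores are illustrative placeholders; the per-question metric and other details follow the paper, not this sketch.

```python
from collections import defaultdict

# Illustrative per-question scores (e.g., an answer-set metric in [0, 1])
# and the cluster each question belongs to. All names/values are made up.
scores = {"q1": 0.9, "q2": 0.4, "q3": 0.8, "q4": 0.7}
clusters = {"q1": "c1", "q2": "c1", "q3": "c2", "q4": "c2"}

def worst_case_cluster_score(scores, clusters):
    """Group per-question scores by cluster, take the minimum
    (worst case) within each cluster, then average over clusters."""
    by_cluster = defaultdict(list)
    for qid, score in scores.items():
        by_cluster[clusters[qid]].append(score)
    worst = [min(group) for group in by_cluster.values()]
    return sum(worst) / len(worst)

print(worst_case_cluster_score(scores, clusters))  # (0.4 + 0.7) / 2 = 0.55
```

A model that does well on some phrasings of a constraint but fails on a related variant is penalized by the minimum, which is what makes this a robustness measure rather than an average-case one.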


Related research

Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering (06/08/2021)
Disfluencies is an under-studied topic in NLP, even though it is ubiquit...

Meta-tuning Language Models to Answer Prompts Better (04/10/2021)
Large pretrained language models like GPT-3 have acquired a surprising a...

GPT-3 Models are Few-Shot Financial Reasoners (07/25/2023)
Financial analysis is an important tool for evaluating company performan...

Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data Augmentation (06/07/2023)
Few-shot question answering (QA) aims at precisely discovering answers t...

Evaluating Open-Domain Question Answering in the Era of Large Language Models (05/11/2023)
Lexical matching remains the de facto evaluation method for open-domain ...

Question Answering Infused Pre-training of General-Purpose Contextualized Representations (06/15/2021)
This paper proposes a pre-training objective based on question answering...

Using Visual Cropping to Enhance Fine-Detail Question Answering of BLIP-Family Models (05/31/2023)
Visual Question Answering is a challenging task, as it requires seamless...
