Fine-Grained Relevance Annotations for Multi-Task Document Ranking and Question Answering

08/12/2020
by   Sebastian Hofstätter, et al.

There are many existing retrieval and question answering datasets. However, most of them focus either on ranked-list evaluation or on single-candidate question answering. This divide makes it challenging to properly evaluate approaches that both rank documents and provide snippets or answers for a given query. In this work, we present FiRA: a novel dataset of Fine-Grained Relevance Annotations. We extend the ranked retrieval annotations of the Deep Learning track of TREC 2019 with passage- and word-level graded relevance annotations for all relevant documents. We use our newly created data to study the distribution of relevance in long documents, as well as the attention of annotators to specific positions of the text. As an example, we evaluate the recently introduced TKL document ranking model. We find that although TKL exhibits state-of-the-art retrieval results for long documents, it misses many relevant passages.
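To make the idea of passage-level graded annotations concrete, here is a minimal sketch of how such data might be represented and used to check whether a ranking model's predicted spans cover the relevant passages. The record fields, grade scale, and function names below are illustrative assumptions, not the actual FiRA schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical record layout -- the real FiRA data format may differ.
@dataclass
class PassageAnnotation:
    query_id: str
    doc_id: str
    passage_start: int        # character offset of the passage in the document
    passage_end: int          # exclusive end offset
    relevance_grade: int      # graded relevance, e.g. 0 (not relevant) .. 3 (perfect)
    word_spans: List[Tuple[int, int]]  # (start, end) offsets of words marked relevant

def passage_coverage(predicted_spans: List[Tuple[int, int]],
                     annotations: List[PassageAnnotation],
                     min_grade: int = 2) -> float:
    """Fraction of passages with grade >= min_grade that overlap any predicted span.

    A metric like this could quantify the finding that a document ranker
    'misses many relevant passages' even when its ranked list is strong.
    """
    relevant = [a for a in annotations if a.relevance_grade >= min_grade]
    if not relevant:
        return 0.0
    def overlaps(a: PassageAnnotation) -> bool:
        return any(s < a.passage_end and e > a.passage_start
                   for s, e in predicted_spans)
    return sum(overlaps(a) for a in relevant) / len(relevant)
```

With annotations at this granularity, a model's attended or extracted spans can be scored directly against human passage judgments rather than only against a document-level label.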

