How Discriminative Are Your Qrels? How To Study the Statistical Significance of Document Adjudication Methods

08/18/2023
by David Otero, et al.

Creating test collections for offline retrieval evaluation requires human effort to judge the relevance of documents. This expensive activity has motivated much work on methods for constructing benchmarks at lower assessment cost. In this respect, adjudication methods actively decide which documents experts review and in what order, so as to better exploit a fixed assessment budget or to reduce it. Researchers evaluate the quality of these methods by measuring the correlation between the known gold ranking of systems under the full collection and the observed ranking of systems under the lower-cost one. This traditional analysis ignores whether and how the low-cost judgements affect the statistically significant differences among systems with respect to the full collection. We fill this void by proposing a novel methodology to evaluate how well low-cost adjudication methods preserve the pairwise significant differences between systems found under the full collection. In other words, while traditional approaches look for stability in answering the question "is system A better than system B?", our approach looks for stability in answering the question "is system A significantly better than system B?", which is the ultimate question researchers need to answer to guarantee the generalisability of their results. Among other results, we find that the methods that perform best in terms of system-ranking correlation do not always match those that best preserve statistical significance.
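To make the contrast concrete, here is a minimal sketch (not the paper's exact protocol) of the two analyses the abstract distinguishes: the traditional rank-correlation check and the proposed pairwise-significance agreement check. It assumes per-topic effectiveness scores for each system under both the full qrels and the low-cost qrels; the paired t-test, alpha = 0.05, and the toy data are illustrative assumptions, not necessarily the authors' choices.

```python
# Sketch: rank correlation vs. pairwise significance agreement between
# system evaluations under full qrels and under low-cost qrels.
import itertools
import numpy as np
from scipy.stats import kendalltau, ttest_rel


def rank_correlation(full_scores, cheap_scores):
    """Traditional analysis: Kendall's tau between the system rankings
    induced by mean effectiveness under the two qrel sets."""
    systems = sorted(full_scores)
    full_means = [np.mean(full_scores[s]) for s in systems]
    cheap_means = [np.mean(cheap_scores[s]) for s in systems]
    tau, _ = kendalltau(full_means, cheap_means)
    return tau


def significance_agreement(full_scores, cheap_scores, alpha=0.05):
    """Proposed analysis (sketched): fraction of system pairs whose
    paired t-test outcome (significant vs. not) agrees under both
    qrel sets."""
    agree, total = 0, 0
    for a, b in itertools.combinations(sorted(full_scores), 2):
        sig_full = ttest_rel(full_scores[a], full_scores[b]).pvalue < alpha
        sig_cheap = ttest_rel(cheap_scores[a], cheap_scores[b]).pvalue < alpha
        agree += sig_full == sig_cheap
        total += 1
    return agree / total


# Toy example: three systems over ten topics (fabricated numbers).
rng = np.random.default_rng(0)
full = {s: rng.uniform(0.2, 0.8, 10) for s in ("A", "B", "C")}
cheap = {s: v + rng.normal(0, 0.05, 10) for s, v in full.items()}
print(rank_correlation(full, cheap))
print(significance_agreement(full, cheap))
```

A stricter variant would also require the direction of any significant difference to agree, and a real study would apply whatever significance test and multiple-comparison correction the methodology prescribes; consult the paper for those details.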


research
08/12/2020

Fine-Grained Relevance Annotations for Multi-Task Document Ranking and Question Answering

There are many existing retrieval and question answering datasets. Howev...
research
10/20/2022

Searching for a higher power in the human evaluation of MT

In MT evaluation, pairwise comparisons are conducted to identify the bet...
research
04/27/2021

Document Collection Visual Question Answering

Current tasks and methods in Document Understanding aim to process docu...
research
09/06/2017

Active Sampling for Large-scale Information Retrieval Evaluation

Evaluation is crucial in Information Retrieval. The development of model...
research
11/02/2022

Relevance Assessments for Web Search Evaluation: Should We Randomise or Prioritise the Pooled Documents? (CORRECTED VERSION)

In the context of depth-k pooling for constructing web search test colle...
research
06/16/2021

A Neural Model for Joint Document and Snippet Ranking in Question Answering for Large Document Collections

Question answering (QA) systems for large document collections typically...
research
02/25/2021

Significant Improvements over the State of the Art? A Case Study of the MS MARCO Document Ranking Leaderboard

Leaderboards are a ubiquitous part of modern research in applied machine...
