How Discriminative Are Your Qrels? How To Study the Statistical Significance of Document Adjudication Methods

by   David Otero, et al.

Creating test collections for offline retrieval evaluation requires human effort to judge documents' relevance. This expensive activity has motivated much work on constructing benchmarks at lower assessment cost. In this respect, adjudication methods actively decide both which documents experts review and in what order, in order to better exploit the assessment budget or to lower it. Researchers evaluate the quality of those methods by measuring the correlation between the known gold ranking of systems under the full collection and the observed ranking of systems under the lower-cost one. This traditional analysis ignores whether and how the low-cost judgements impact the statistically significant differences among systems with respect to the full collection. We fill this void by proposing a novel methodology to evaluate how well low-cost adjudication methods preserve the pairwise significant differences between systems found under the full collection. In other terms, while traditional approaches look for stability in answering the question "is system A better than system B?", our proposed approach looks for stability in answering the question "is system A significantly better than system B?", which is the ultimate question researchers need to answer to guarantee the generalisability of their results. Among other results, we found that the best methods in terms of system-ranking correlation do not always match those preserving statistical significance.
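The two notions of stability described above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: it compares a "full-collection" evaluation with a "low-cost" one both by Kendall's tau on the system ranking and by the fraction of significantly-different system pairs that survive. All system names and per-topic scores are synthetic, and a paired t-test stands in for whatever significance test is used.

```python
# Illustrative sketch only: synthetic per-topic effectiveness scores
# (e.g. AP per topic) for three hypothetical systems A, B, C.
import itertools
import numpy as np
from scipy import stats

def significant_pairs(scores, alpha=0.05):
    """Ordered pairs (a, b) where system a is significantly better
    than system b under a two-sided paired t-test over topics."""
    pairs = set()
    for a, b in itertools.permutations(scores, 2):
        t, p = stats.ttest_rel(scores[a], scores[b])
        if p < alpha and (scores[a] - scores[b]).mean() > 0:
            pairs.add((a, b))
    return pairs

def compare_qrels(full, reduced, alpha=0.05):
    """Return (rank correlation, fraction of significant pairs preserved)."""
    systems = sorted(full)
    tau, _ = stats.kendalltau([np.mean(full[s]) for s in systems],
                              [np.mean(reduced[s]) for s in systems])
    sig_full = significant_pairs(full, alpha)
    sig_red = significant_pairs(reduced, alpha)
    preserved = len(sig_full & sig_red) / len(sig_full) if sig_full else 1.0
    return tau, preserved

rng = np.random.default_rng(0)
topic_difficulty = rng.uniform(0.2, 0.6, 50)          # shared across systems
base = {s: topic_difficulty + off + rng.normal(0, 0.01, 50)
        for s, off in [("A", 0.10), ("B", 0.05), ("C", 0.0)]}
# Pretend low-cost qrels perturb the measured scores slightly.
noisy = {s: v + rng.normal(0, 0.02, v.size) for s, v in base.items()}

tau, preserved = compare_qrels(base, noisy)
print(f"Kendall tau: {tau:.2f}, significant pairs preserved: {preserved:.2f}")
```

The point of the abstract is that the two numbers returned by `compare_qrels` can disagree: a reduced collection may keep tau high while flipping which pairwise differences are significant, or vice versa.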




Fine-Grained Relevance Annotations for Multi-Task Document Ranking and Question Answering

There are many existing retrieval and question answering datasets. Howev...

Searching for a higher power in the human evaluation of MT

In MT evaluation, pairwise comparisons are conducted to identify the bet...

Document Collection Visual Question Answering

Current tasks and methods in Document Understanding aim to process docu...

Active Sampling for Large-scale Information Retrieval Evaluation

Evaluation is crucial in Information Retrieval. The development of model...

A Neural Model for Joint Document and Snippet Ranking in Question Answering for Large Document Collections

Question answering (QA) systems for large document collections typically...

Significant Improvements over the State of the Art? A Case Study of the MS MARCO Document Ranking Leaderboard

Leaderboards are a ubiquitous part of modern research in applied machine...
