Significant Improvements over the State of the Art? A Case Study of the MS MARCO Document Ranking Leaderboard

02/25/2021
by Jimmy Lin, et al.

Leaderboards are a ubiquitous part of modern research in applied machine learning. By design, they sort entries into some linear order, where the top-scoring entry is recognized as the "state of the art" (SOTA). Due to the rapid progress being made in information retrieval today, particularly with neural models, the top entry in a leaderboard is replaced with some regularity. Each such change is touted as an improvement in the state of the art. Such pronouncements, however, are almost never qualified with significance testing. In the context of the MS MARCO document ranking leaderboard, we pose a specific question: How do we know if a run is significantly better than the current SOTA? We ask this question against the backdrop of recent IR debates on scale types: in particular, whether commonly used significance tests are even mathematically permissible. Recognizing these potential pitfalls in evaluation methodology, we propose an evaluation framework that explicitly treats certain outcomes as distinct rather than collapsing them into a single-point metric. Empirical analysis of SOTA runs from the MS MARCO document ranking leaderboard reveals insights about how one run can be "significantly better" than another that are obscured by the current official evaluation metric (MRR@100).
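The question at the heart of the abstract, whether one run is significantly better than another, comes down to comparing per-query reciprocal ranks rather than a single aggregate MRR@100. The sketch below is a rough illustration, not the paper's framework: it computes reciprocal rank at cutoff 100 for each query in two hypothetical run files and applies two common paired tests. The run-file format ("qid docid rank" per line), the qrels dictionary mapping query IDs to sets of relevant document IDs, and the function names are all assumptions made for this example.

```python
# Hypothetical sketch of per-query comparison between two document-ranking runs.
# Assumes run files with one "qid docid rank" triple per line and a qrels dict
# mapping each qid to a set of relevant docids; not the paper's actual method.

from collections import defaultdict
from scipy.stats import ttest_rel, wilcoxon


def reciprocal_ranks(run_path, qrels, cutoff=100):
    """Per-query reciprocal rank at the given cutoff (0.0 if nothing relevant is retrieved)."""
    ranked = defaultdict(list)
    with open(run_path) as f:
        for line in f:
            qid, docid, rank = line.split()
            ranked[qid].append((int(rank), docid))
    rr = {}
    for qid, docs in ranked.items():
        rr[qid] = 0.0
        for rank, docid in sorted(docs)[:cutoff]:
            if docid in qrels.get(qid, set()):
                rr[qid] = 1.0 / rank
                break
    return rr


def compare_runs(run_a, run_b, qrels):
    """Contrast two runs on per-query RR@100 instead of only the aggregate MRR@100."""
    rr_a = reciprocal_ranks(run_a, qrels)
    rr_b = reciprocal_ranks(run_b, qrels)
    qids = sorted(set(rr_a) & set(rr_b))
    a = [rr_a[q] for q in qids]
    b = [rr_b[q] for q in qids]
    # Paired t-test treats reciprocal rank as interval-scaled -- exactly the
    # scale-type assumption under debate; shown here only for illustration.
    _, t_p = ttest_rel(a, b)
    # Wilcoxon signed-rank test is a common nonparametric alternative.
    _, w_p = wilcoxon(a, b)
    return {
        "MRR@100": (sum(a) / len(a), sum(b) / len(b)),
        "t-test p": t_p,
        "wilcoxon p": w_p,
    }
```

Note that the paired t-test treats reciprocal rank as an interval-scaled quantity, which is precisely the scale-type assumption the debates mentioned above call into question; the Wilcoxon signed-rank test is included only as a weaker-assumption alternative, not as an endorsement of either test.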
