Can Old TREC Collections Reliably Evaluate Modern Neural Retrieval Models?

01/26/2022
by Ellen M. Voorhees, et al.

Neural retrieval models are generally regarded as fundamentally different from the retrieval techniques used in the late 1990s when the TREC ad hoc test collections were constructed. They thus provide the opportunity to empirically test the claim that pooling-built test collections can reliably evaluate retrieval systems that did not contribute to the construction of the collection (in other words, that such collections are reusable). To test the reusability claim, we asked TREC assessors to judge new pools created from new search results for the TREC-8 ad hoc collection. These new search results consisted of five new runs (one each from three transformer-based models and two baseline runs that use BM25) plus the set of TREC-8 submissions that did not previously contribute to the pools. The new runs did retrieve previously unseen documents, but the vast majority of those documents were not relevant. The rankings of all runs by mean evaluation score under the official TREC-8 relevance judgment set and under the newly expanded judgment set are almost identical, with Kendall's tau correlations greater than 0.99. Correlations for individual topics are also high. The TREC-8 ad hoc collection was originally constructed using deep pools over a diverse set of runs, including several effective manual runs. Its judgment budget, and hence its construction cost, was relatively large. However, the expense appears to have been well spent: even with the advent of neural techniques, the collection has stood the test of time and remains a reliable evaluation instrument as retrieval techniques have advanced.
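The methodology described above has two computational steps worth making concrete: building a depth-k pool (the union of the top-k documents from each contributing run, which is all that assessors judge) and checking whether two judgment sets rank the systems the same way via Kendall's tau. The sketch below illustrates both with invented toy runs and placeholder scores; none of the run names, document IDs, or numbers come from the paper.

```python
from scipy.stats import kendalltau

# Depth-k pooling: the pool to be judged is the union of the top-k
# documents from every contributing run (toy runs, hypothetical doc IDs).
def depth_k_pool(runs: dict[str, list[str]], k: int) -> set[str]:
    return {doc for ranking in runs.values() for doc in ranking[:k]}

runs = {
    "bm25_baseline": ["d1", "d2", "d3", "d4"],
    "transformer_a": ["d3", "d5", "d1", "d6"],
}
print(sorted(depth_k_pool(runs, k=3)))  # ['d1', 'd2', 'd3', 'd5']

# Reusability check: score the same systems with the original and the
# expanded qrels, then correlate the two system rankings. The mean
# scores below are illustrative placeholders, not TREC-8 results.
official = {"bm25_baseline": 0.25, "transformer_a": 0.31, "run_x": 0.28}
expanded = {"bm25_baseline": 0.24, "transformer_a": 0.32, "run_x": 0.28}

systems = sorted(official)  # fixed order so the score lists are paired
tau, _ = kendalltau([official[s] for s in systems],
                    [expanded[s] for s in systems])
print(f"Kendall's tau between the two rankings: {tau:.3f}")
```

In the TREC literature, a tau of 0.9 or above is commonly treated as indicating equivalent rankings, so correlations above 0.99 mean the expanded judgments leave the system ordering essentially unchanged.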


Related research:

01/24/2022
HC4: A New Suite of Test Collections for Ad Hoc CLIR
HC4 is a new suite of test collections for ad hoc Cross-Language Informa...

03/06/2023
LongEval-Retrieval: French-English Dynamic Test Collection for Continuous Web Search Evaluation
LongEval-Retrieval is a Web document retrieval benchmark that focuses on...

04/28/2020
On the Reliability of Test Collections for Evaluating Systems of Different Types
As deep learning based models are increasingly being used for informatio...

01/17/2018
Efficient Test Collection Construction via Active Learning
To create a new IR test collection at minimal cost, we must carefully se...

11/02/2022
Relevance Assessments for Web Search Evaluation: Should We Randomise or Prioritise the Pooled Documents? (CORRECTED VERSION)
In the context of depth-k pooling for constructing web search test colle...

08/25/2021
Podcast Metadata and Content: Episode Relevance and Attractiveness in Ad Hoc Search
Rapidly growing online podcast archives contain diverse content on a wid...

12/24/2020
Understanding and Predicting the Characteristics of Test Collections
Shared-task campaigns such as NIST TREC select documents to judge by poo...
