Longformer for MS MARCO Document Re-ranking Task

09/20/2020
by   Ivan Sekulić, et al.

Two-step document ranking, where an initial retrieval by a classical information retrieval method is followed by a neural re-ranking model, has become the new standard. The best performance is achieved with transformer-based re-rankers, e.g., BERT. We employ Longformer, a BERT-like model for long documents, on the MS MARCO document re-ranking task. The complete code used for training the model can be found at: https://github.com/isekulic/longformer-marco
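The cascade described above can be sketched in miniature: a cheap lexical first stage narrows the corpus to a candidate set, and a second stage re-orders those candidates with a stronger relevance scorer. The toy scorers and function names below are illustrative assumptions, not the paper's implementation; in the actual setup the second-stage `scorer` would be a Longformer forward pass over (query, document) pairs.

```python
# Minimal sketch of a two-stage ranking pipeline. The first stage is a toy
# term-overlap retriever (a stand-in for BM25); the second stage re-ranks
# the top-k candidates with a pluggable scorer (a stand-in for Longformer).
from collections import Counter

def first_stage_retrieve(query, docs, k=3):
    """Score each document by raw query-term frequency and return the top k ids."""
    q_terms = query.lower().split()
    scored = []
    for doc_id, text in docs.items():
        tf = Counter(text.lower().split())
        scored.append((doc_id, sum(tf[t] for t in q_terms)))
    scored.sort(key=lambda x: x[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

def rerank(query, doc_ids, docs, scorer):
    """Re-order candidate ids by a relevance scorer over (query, document) pairs."""
    return sorted(doc_ids, key=lambda d: scorer(query, docs[d]), reverse=True)

docs = {
    "d1": "longformer handles long documents with sparse attention",
    "d2": "bert is a transformer encoder for short texts",
    "d3": "classical retrieval methods like bm25 use term statistics",
}

candidates = first_stage_retrieve("long document transformer", docs, k=2)
# Placeholder scorer counting query-term substring hits; a real system would
# call the neural model here.
ranking = rerank("long document transformer", candidates, docs,
                 scorer=lambda q, d: sum(t in d for t in q.lower().split()))
print(ranking)  # → ['d1', 'd2']
```

The point of the cascade is cost: the neural scorer is only run on the small candidate set returned by the first stage, not the full corpus.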


