RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

10/12/2022
by Honglei Zhuang, et al.

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT. However, there are only limited studies on how to leverage more powerful sequence-to-sequence models such as T5. Existing attempts usually formulate text ranking as classification and rely on postprocessing to obtain a ranked list. In this paper, we propose RankT5 and study two T5-based ranking model structures, an encoder-decoder and an encoder-only variant, both of which directly output a ranking score for each query-document pair and can be fine-tuned with "pairwise" or "listwise" ranking losses to optimize ranking performance. Our experiments show that the proposed models fine-tuned with ranking losses achieve substantial ranking performance gains on different public text ranking data sets. Moreover, when fine-tuned with listwise ranking losses, the ranking model appears to have better zero-shot ranking performance on out-of-domain data sets than the model fine-tuned with classification losses.
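
To make the "listwise ranking loss" idea concrete, below is a minimal sketch (assuming PyTorch; this is an illustration, not the authors' released code) of a listwise softmax cross-entropy loss applied to per-query document scores, the kind of objective the abstract contrasts with classification losses. The function name and tensor shapes are hypothetical.

```python
# Illustrative sketch: listwise softmax cross-entropy over per-query document scores.
# The ranking model (e.g., a T5 encoder-decoder emitting one scalar per
# query-document pair) would produce `scores`; `labels` are relevance judgments.
import torch
import torch.nn.functional as F

def listwise_softmax_ce(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """scores: [batch, list_size] ranking scores, one row per query.
    labels: [batch, list_size] non-negative graded relevance labels."""
    # Normalize relevance labels into a target distribution over each list.
    target = labels / labels.sum(dim=1, keepdim=True).clamp(min=1e-9)
    log_probs = F.log_softmax(scores, dim=1)
    # Cross entropy between the label distribution and the score distribution.
    return -(target * log_probs).sum(dim=1).mean()

# Usage: two queries, four candidate documents each.
scores = torch.randn(2, 4, requires_grad=True)
labels = torch.tensor([[1., 0., 0., 0.], [0., 2., 1., 0.]])
loss = listwise_softmax_ce(scores, labels)
loss.backward()
```

Because the loss is computed jointly over all candidates for a query rather than per document, gradients reflect the relative ordering of the list, which is what distinguishes this setup from the pointwise classification formulation mentioned above.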



Related research

06/03/2021
Fingerprinting Fine-tuned Language Models in the Wild
There are concerns that the ability of language models (LMs) to generate...

08/12/2022
Joint Optimization of Ranking and Calibration with Contextualized Hybrid Model
Despite the development of ranking optimization techniques, the pointwis...

05/05/2019
Investigating the Successes and Failures of BERT for Passage Re-Ranking
The bidirectional encoder representations from transformers (BERT) model...

01/14/2021
The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models
We propose a design pattern for tackling text ranking problems, dubbed "...

03/14/2020
Document Ranking with a Pretrained Sequence-to-Sequence Model
This work proposes a novel adaptation of a pretrained sequence-to-sequen...

04/29/2021
Text-to-Text Multi-view Learning for Passage Re-ranking
Recently, much progress in natural language processing has been driven b...

08/29/2023
Improving Neural Ranking Models with Traditional IR Methods
Neural ranking methods based on large transformer models have recently g...
