P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning

05/04/2022
by Xiaomeng Hu, et al.

Compared to other language tasks, applying pre-trained language models (PLMs) to search ranking often demands more nuance and more training signal. In this paper, we identify and study two mismatches between pre-training and ranking fine-tuning: the training schema gap, arising from differences in training objectives and model architectures, and the task knowledge gap, the discrepancy between the knowledge needed for ranking and that learned during pre-training. To mitigate these gaps, we propose the Pre-trained, Prompt-learned and Pre-finetuned Neural Ranker (P^3 Ranker). P^3 Ranker leverages prompt-based learning to convert the ranking task into a pre-training-like schema and uses pre-finetuning to initialize the model on intermediate supervised tasks. Experiments on MS MARCO and Robust04 show the superior performance of P^3 Ranker in few-shot ranking. Analyses reveal that P^3 Ranker better adapts to the ranking task through prompt-based learning and retrieves the ranking-oriented knowledge acquired during pre-finetuning, resulting in data-efficient PLM adaptation. Our code is available at https://github.com/NEUIR/P3Ranker.
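To illustrate the core idea of converting ranking into a pre-training-like schema, below is a minimal sketch of prompt-based relevance scoring with a masked language model. It is not the authors' implementation: the prompt template, the label words ("yes"/"no"), and the choice of bert-base-uncased are illustrative assumptions. The ranking decision is cast as masked-token prediction, reusing the MLM head instead of attaching a new classification layer, which is the schema alignment the paper describes.

```python
# Hedged sketch of prompt-based ranking with a masked LM (not the P^3 Ranker
# codebase). Relevance is read off the MLM's probabilities for label words
# at the [MASK] position; template and label words are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def prompt_relevance_score(query: str, document: str) -> float:
    # Wrap the query-document pair in a natural-language template so the
    # task looks like the model's masked-token pre-training objective.
    text = f"Query: {query} Document: {document} Relevant: {tokenizer.mask_token}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    yes_id = tokenizer.convert_tokens_to_ids("yes")
    no_id = tokenizer.convert_tokens_to_ids("no")
    # Relevance = probability mass on the positive label word.
    probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return probs[0].item()

print(prompt_relevance_score(
    "what is a neural ranker",
    "Neural rankers score query-document pairs with neural networks."))
```

In a few-shot setting, this same scoring function would be fine-tuned on a handful of labeled query-document pairs, optionally after pre-finetuning the underlying PLM on an intermediate supervised task, before ranking candidates by the returned score.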


