Contrastive Fine-tuning Improves Robustness for Neural Rankers

05/27/2021
by Xiaofei Ma, et al.

The performance of state-of-the-art neural rankers can deteriorate substantially when they are exposed to noisy inputs or applied to a new domain. In this paper, we present a novel method for fine-tuning neural rankers that significantly improves their robustness to out-of-domain data and query perturbations. Specifically, we combine a contrastive loss that compares data points in the representation space with the standard ranking loss during fine-tuning. We use relevance labels to define similar/dissimilar pairs, which allows the model to learn the underlying matching semantics across different query-document pairs and leads to improved robustness. In experiments on four passage ranking datasets, the proposed contrastive fine-tuning method yields gains in robustness to query reformulations and noise perturbations, as well as in zero-shot transfer, for both BERT- and BART-based rankers. Our experiments also show that contrastive fine-tuning outperforms data augmentation for robustifying neural rankers.
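The abstract outlines the core recipe: fine-tune with a weighted sum of the usual ranking loss and a contrastive loss whose positive/negative pairs are derived from relevance labels. The PyTorch snippet below is a minimal sketch of one plausible instantiation, assuming a ranker that emits one embedding and one relevance score per query-document pair; the SupCon-style contrastive term, the binary cross-entropy ranking loss, and the hyperparameter names temperature and alpha are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    # Normalize so dot products become cosine similarities.
    z = F.normalize(embeddings, dim=1)                    # (B, d)
    sim = z @ z.t() / temperature                         # (B, B)
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))             # exclude self-pairs
    # Pairs that share a relevance label count as positives.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1)
    has_pos = pos_count > 0                               # anchors with >=1 positive
    if not has_pos.any():
        return embeddings.new_zeros(())
    # InfoNCE-style: average log-likelihood of each anchor's positives.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count.clamp(min=1))
    return loss[has_pos].mean()

def combined_loss(scores, embeddings, labels, alpha=0.5):
    # Standard ranking loss plus the contrastive term, weighted by alpha.
    rank_loss = F.binary_cross_entropy_with_logits(scores, labels.float())
    return rank_loss + alpha * supervised_contrastive_loss(embeddings, labels)

In a training loop one would call combined_loss(scores, pooled_embeddings, relevance_labels) on each batch; sweeping alpha trades off ranking accuracy against the representation-space regularization that the paper credits for the robustness gains.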


Related research

09/04/2021 - Robust fine-tuning of zero-shot models
Large pre-trained models such as CLIP offer consistent accuracy across a...

07/21/2021 - Improved Text Classification via Contrastive Adversarial Training
We propose a simple and general method to regularize the fine-tuning of ...

08/20/2021 - Contrastive Representations for Label Noise Require Fine-Tuning
In this paper we show that the combination of a Contrastive representati...

10/28/2021 - RadBERT-CL: Factually-Aware Contrastive Learning For Radiology Report Classification
Radiology reports are unstructured and contain the imaging findings and ...

04/19/2021 - A Framework using Contrastive Learning for Classification with Noisy Labels
We propose a framework using contrastive learning as a pre-training task...

10/28/2021 - Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning
A BERT-based Neural Ranking Model (NRM) can be either a cross-encoder or...

08/09/2019 - The role of cue enhancement and frequency fine-tuning in hearing impaired phone recognition
A speech-based hearing test is designed to identify the susceptible erro...