Optimizing Dense Retrieval Model Training with Hard Negatives

04/16/2021
by   Jingtao Zhan, et al.

Ranking has always been one of the top concerns in information retrieval research. For decades, the lexical matching signal has dominated the ad-hoc retrieval process, but relying solely on this signal can cause the vocabulary mismatch problem. In recent years, with the development of representation learning techniques, many researchers have turned to Dense Retrieval (DR) models for better ranking performance. Although several existing DR models have already obtained promising results, their performance improvement heavily relies on the sampling of training examples. Many effective sampling strategies are not efficient enough for practical usage, and for most of them there is still no theoretical analysis of how and why the performance improvement happens. To shed light on these research questions, we theoretically investigate different training strategies for DR models and try to explain why hard negative sampling performs better than random sampling. Through this analysis, we also find that there are many potential risks in static hard negative sampling, which is employed by many existing training methods. Therefore, we propose two training strategies: a Stable Training Algorithm for dense Retrieval (STAR) and a query-side training Algorithm for Directly Optimizing Ranking pErformance (ADORE). STAR improves the stability of the DR training process by introducing random negatives. ADORE replaces the widely adopted static hard negative sampling method with a dynamic one to directly optimize the ranking performance. Experimental results on two publicly available retrieval benchmark datasets show that each strategy yields significant improvements over existing competitive baselines and that combining them leads to the best performance.
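To make the two sampling ideas above concrete, here is a minimal PyTorch-style sketch. It is not the authors' released implementation: the function names, tensor shapes, and the use of in-batch random negatives are illustrative assumptions about how static-hard-plus-random negatives (STAR-like) and dynamic top-k hard negatives (ADORE-like) could be wired up.

```python
# Illustrative sketch only; hyperparameters, shapes, and names are assumptions.
import torch
import torch.nn.functional as F


def star_like_loss(q_emb, pos_emb, hard_neg_emb):
    """Contrastive loss over one static hard negative per query plus
    in-batch random negatives (the stabilizing idea behind STAR).

    q_emb:        (B, d) query embeddings
    pos_emb:      (B, d) embeddings of relevant passages
    hard_neg_emb: (B, d) embeddings of pre-retrieved hard negatives
    """
    # Scores against every positive in the batch: the diagonal entry is the
    # query's own positive; off-diagonal entries serve as random negatives.
    scores_pos = q_emb @ pos_emb.t()                             # (B, B)
    scores_hard = (q_emb * hard_neg_emb).sum(-1, keepdim=True)   # (B, 1)
    logits = torch.cat([scores_pos, scores_hard], dim=1)         # (B, B + 1)
    labels = torch.arange(q_emb.size(0), device=q_emb.device)
    return F.cross_entropy(logits, labels)


def adore_like_negatives(q_emb, corpus_emb, pos_ids, k=10):
    """Dynamic hard negatives: at each step, retrieve the current top-k
    passages for each query with the current query encoder and treat the
    non-relevant ones as negatives (the idea behind ADORE, which keeps the
    document index fixed and trains only the query encoder).
    """
    with torch.no_grad():
        scores = q_emb @ corpus_emb.t()          # (B, N) similarity to corpus
        topk = scores.topk(k, dim=1).indices     # (B, k) candidate negatives
    # Mask out annotated positives so they are not used as negatives.
    neg_mask = topk != pos_ids.unsqueeze(1)
    return topk, neg_mask
```

In practice, the dynamic variant would re-run the retrieval step with the up-to-date query encoder at each training iteration, so the negatives track the model's current ranking errors rather than a stale snapshot.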


Related research

10/20/2020
Learning To Retrieve: How to Train a Dense Retrieval Model Effectively and Efficiently
Ranking has always been one of the top concerns in information retrieval...

08/02/2021
Jointly Optimizing Query Encoder and Product Quantization to Improve Retrieval Performance
Recently, Information Retrieval community has witnessed fast-paced advan...

01/02/2022
Establishing Strong Baselines for TripClick Health Retrieval
We present strong Transformer-based re-ranking and dense retrieval basel...

09/12/2022
Hard Negatives or False Negatives: Correcting Pooling Bias in Training Neural Ranking Models
Neural ranking models (NRMs) have become one of the most important techn...

04/25/2022
Evaluating Extrapolation Performance of Dense Retrieval
A retrieval model should not only interpolate the training data but also...

08/19/2023
Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method
Neural ranking models (NRMs) and dense retrieval (DR) models have given ...

04/14/2021
Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling
A vital step towards the widespread adoption of neural retrieval models ...
