Continual Learning of Long Topic Sequences in Neural Information Retrieval

01/10/2022
by Thomas Gerald, et al.

In information retrieval (IR) systems, trends and users' interests may change over time, altering either the distribution of requests or the content to be recommended. Since neural ranking approaches depend heavily on their training data, it is crucial to understand how well recent IR models transfer to new domains in the long term. In this paper, we first propose a dataset based on the MSMarco corpus that models a long stream of topics, together with IR property-driven controlled settings. We then analyze in depth how well recent neural IR models learn continually over those streams. Our empirical study highlights the particular cases in which catastrophic forgetting occurs (e.g., the level of similarity between tasks, peculiarities of text length, and the way models are learned), providing future directions for model design.
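The protocol sketched in the abstract revolves around training a ranker sequentially over a stream of topic tasks and measuring how much performance on earlier topics degrades. As a rough, hypothetical illustration (not the paper's actual implementation), the snippet below computes two measures commonly used in continual-learning studies, final average performance and average forgetting, from a matrix of per-topic retrieval scores; the choice of MRR@10 and the score values themselves are assumptions made purely for the example.

```python
import numpy as np

def forgetting_metrics(scores: np.ndarray):
    """Continual-learning measures from a score matrix.

    scores[i, j] = retrieval quality (e.g. MRR@10) on topic task j,
    measured right after training on tasks 0..i (entries with j > i
    are unused here and can be left at 0).
    """
    n_tasks = scores.shape[0]
    # Average performance over all tasks once the whole stream has been seen.
    avg_final = scores[n_tasks - 1].mean()
    # Forgetting of task j: best score ever reached on j (before the last
    # training step) minus the score on j after the last step.
    forgetting = [
        scores[:n_tasks - 1, j].max() - scores[n_tasks - 1, j]
        for j in range(n_tasks - 1)
    ]
    return float(avg_final), float(np.mean(forgetting))


# Hypothetical MRR@10 values for a stream of 4 topic tasks (illustrative only).
scores = np.array([
    [0.32, 0.00, 0.00, 0.00],
    [0.27, 0.35, 0.00, 0.00],
    [0.24, 0.30, 0.33, 0.00],
    [0.22, 0.26, 0.29, 0.36],
])
avg_final, avg_forgetting = forgetting_metrics(scores)
print(f"final average MRR@10: {avg_final:.3f}  |  average forgetting: {avg_forgetting:.3f}")
```

A large average forgetting value indicates that sequentially fine-tuning on new topics erodes performance on earlier ones, which is the kind of behavior the controlled settings described above are designed to expose.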

