Studying Catastrophic Forgetting in Neural Ranking Models

01/18/2021
by   Jesus Lovon Melgarejo, et al.

Several deep neural ranking models have been proposed in the recent IR literature. While their transferability to a target domain represented by a dataset has been widely addressed using traditional domain adaptation strategies, the question of their cross-domain transferability remains under-studied. We study to what extent neural ranking models catastrophically forget old knowledge acquired from previously observed domains after acquiring new knowledge, leading to a performance decrease on those domains. Our experiments show that the effectiveness of neural IR ranking models is achieved at the cost of catastrophic forgetting, and that a lifelong learning strategy using a cross-domain regularizer successfully mitigates the problem. Using an explanatory approach built on a regression model, we also show the effect of domain characteristics on the rise of catastrophic forgetting. We believe that the obtained results can be useful for both theoretical and practical future work in neural IR.
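The cross-domain regularizer described above typically follows the Elastic Weight Consolidation (EWC) family: when training on a new domain, a quadratic penalty anchors each parameter to its value learned on earlier domains, weighted by an importance estimate (e.g. the diagonal Fisher information). This is a minimal illustrative sketch of that idea, not the paper's exact formulation; the function names, the NumPy setting, and the fixed weighting `lam` are assumptions.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC-style quadratic penalty: keeps parameters close to the values
    learned on a previous domain, with per-parameter importance weights
    (here, a diagonal Fisher information estimate)."""
    return lam * float(np.sum(fisher * (params - old_params) ** 2))

def regularized_loss(new_domain_loss, params, old_params, fisher, lam=1.0):
    """Total objective on the new domain: the ranking loss plus the
    cross-domain regularizer that mitigates catastrophic forgetting."""
    return new_domain_loss + ewc_penalty(params, old_params, fisher, lam)

# Toy usage: two parameters, the first deemed important on the old domain.
theta_old = np.array([0.0, 2.0])   # weights after training on domain A
theta_new = np.array([1.0, 2.0])   # candidate weights while training on domain B
fisher = np.array([2.0, 1.0])      # importance of each weight for domain A
penalty = ewc_penalty(theta_new, theta_old, fisher, lam=0.5)  # 0.5 * (2*1^2 + 1*0^2)
```

Parameters that mattered for the old domain (high Fisher value) are pulled back strongly, while unimportant ones remain free to adapt to the new domain.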

research
05/09/2018

Cross Domain Regularization for Neural Ranking Models Using Adversarial Learning

Unlike traditional learning to rank models that depend on hand-crafted f...
research
04/19/2021

DANICE: Domain adaptation without forgetting in neural image compression

Neural image compression (NIC) is a new coding paradigm where coding cap...
research
11/12/2022

LLEDA – Lifelong Self-Supervised Domain Adaptation

Lifelong domain adaptation remains a challenging task in machine learnin...
research
12/08/2020

Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning

Most standard learning approaches lead to fragile models which are prone...
research
01/15/2022

Transferability in Deep Learning: A Survey

The success of deep learning algorithms generally depends on large-scale...
research
04/12/2018

Combating catastrophic forgetting with developmental compression

Generally intelligent agents exhibit successful behavior across problems...
research
02/25/2023

Prompt-based Learning for Text Readability Assessment

We propose the novel adaptation of a pre-trained seq2seq model for reada...
