Passage-Mask: A Learnable Regularization Strategy for Retriever-Reader Models

11/02/2022
by   Shujian Zhang, et al.

Retriever-reader models achieve competitive performance on many NLP tasks, such as open-domain question answering and dialogue. In this work, we observe that these models easily overfit to the top-ranked retrieved passages, and that standard training fails to reason over the full set of retrieved passages. We introduce a learnable passage-mask mechanism that desensitizes the model to the top-ranked passages and prevents overfitting. By controlling gradient variance with fewer mask candidates and selecting those candidates via one-shot bi-level optimization, our learnable regularization strategy forces answer generation to draw on the entire retrieval set. Experiments on open question answering, dialogue, and fact verification show that our method consistently outperforms its baselines. Extensive experiments and ablation studies demonstrate that the method is general, effective, and beneficial across many NLP tasks.
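To make the idea concrete, here is a minimal sketch of a passage mask over retrieval scores. Everything here is hypothetical: the function name, the per-passage mask probabilities, and the renormalization are illustrative stand-ins, not the paper's actual parameterization (which learns the mask candidates via one-shot bi-level optimization). The sketch only shows the core intuition: stochastically masking top-ranked passages so the reader must weight the rest of the list.

```python
import random

def passage_mask(scores, mask_probs, rng):
    """Sample a binary mask over retrieved passages and renormalize.

    scores: retrieval scores, ordered highest (top-rank) first.
    mask_probs: per-passage probability of being masked out; a
        hypothetical stand-in for the learned mask candidates.
    rng: a random.Random instance, for reproducibility.

    Giving top-ranked passages a higher masking probability
    desensitizes the reader to them, pushing answer generation
    to use the entire retrieval list.
    """
    mask = [0.0 if rng.random() < p else 1.0 for p in mask_probs]
    # Keep at least one passage visible so the reader has input.
    if not any(mask):
        mask[scores.index(max(scores))] = 1.0
    masked = [s * m for s, m in zip(scores, mask)]
    total = sum(masked) or 1.0
    return [s / total for s in masked]  # renormalized passage weights

# Example: the top-ranked passage is masked far more aggressively.
rng = random.Random(0)
weights = passage_mask([0.9, 0.5, 0.3], [0.8, 0.1, 0.1], rng)
```

In a full training loop, the reader's cross-attention over passages would be reweighted by this mask at each step, and the mask probabilities themselves would be optimized on held-out data rather than fixed by hand.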


