Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks

12/16/2021
by Akari Asai et al.

Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks such as open question answering and fact verification. These models are trained to generate a final output given retrieved passages that can be irrelevant to the original query, which can lead the generator to learn spurious cues or memorize answers. This work introduces a method to incorporate the evidentiality of passages – whether a passage contains correct evidence to support the output – into training the generator. Specifically, we propose a multi-task learning framework that jointly generates the final output and predicts the evidentiality of each passage, leveraging a new task-agnostic method for mining silver evidentiality labels as supervision. Our experiments on five datasets across three knowledge-intensive tasks show that the evidentiality-guided generator significantly outperforms its direct counterpart of the same model size and advances the state of the art on FaVIQ-Ambig. We attribute these improvements to both the auxiliary multi-task learning and the silver evidentiality mining technique.
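To make the multi-task objective concrete, here is a minimal PyTorch sketch of how a generation loss and a per-passage evidentiality loss might be combined. The function name, tensor shapes, and the weight `lambda_evd` are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def multitask_loss(gen_logits, target_ids, evd_logits, silver_labels, lambda_evd=1.0):
    """Combine the generation loss with an auxiliary evidentiality loss.

    gen_logits:    (batch, seq_len, vocab) decoder logits for the final output
    target_ids:    (batch, seq_len) gold output token ids
    evd_logits:    (batch, n_passages) one evidentiality logit per retrieved passage
    silver_labels: (batch, n_passages) mined silver labels, 1 = evidential, 0 = not
    lambda_evd:    weight on the auxiliary task (an assumed hyperparameter)
    """
    # Standard sequence-to-sequence cross-entropy over the generated output.
    gen_loss = F.cross_entropy(
        gen_logits.reshape(-1, gen_logits.size(-1)), target_ids.reshape(-1)
    )
    # Auxiliary binary classification: does this passage support the output?
    evd_loss = F.binary_cross_entropy_with_logits(evd_logits, silver_labels.float())
    return gen_loss + lambda_evd * evd_loss

# Toy usage with random tensors, just to show the expected shapes:
gen_logits = torch.randn(2, 10, 32000)
target_ids = torch.randint(0, 32000, (2, 10))
evd_logits = torch.randn(2, 5)
silver_labels = torch.randint(0, 2, (2, 5))
loss = multitask_loss(gen_logits, target_ids, evd_logits, silver_labels)
```

In this sketch, both losses share the generator's passage encodings in practice, so the auxiliary signal pushes the model to ground its output in passages the classifier deems evidential rather than in spurious cues.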


