Prompt Generate Train (PGT): A framework for few-shot domain adaptation, alignment, and uncertainty calibration of a retriever augmented generation (RAG) model for domain specific open book question-answering

07/12/2023
by C. S. Krishna, et al.

We present a framework - Prompt, Generate, Train (PGT) - to efficiently develop a generative question-answering model for open-book question-answering over a proprietary collection of text documents. The framework adapts a retriever augmented generation (RAG) model to the target domain using supervised fine-tuning and reinforcement learning with synthetic feedback in a few-shot setting. This yields an aligned, uncertainty-calibrated model that is competitive with GPT-4-based in-context retrieval augmented generation in generating relevant answers, at lower serving cost. The synthetic generation pipeline produces high-quality synthetic training data using a medium-sized LLM, Flan-T5 XXL, and a novel consistency-filtering scheme. The pipeline is designed to generate both abstractive and extractive questions that span the entire corpus. Using samples from this dataset, the framework fine-tunes a smaller RAG model comprising a dense retriever and a smaller-sized LLM. In parallel, the framework trains a reward model to score domain-grounded answers higher than hallucinated answers. In the next phase, the framework aligns the RAG model with the target domain using reinforcement learning. This step improves the RAG model's ability to generate grounded answers and to ignore out-of-domain questions. In the final phase, the framework calibrates the model's uncertainty for extractive question-answering. This is a desirable feature, since the model can then be integrated into a cascading system in which the RAG model's answer is surfaced only when the model is confident in its answer.
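
The abstract does not spell out the consistency-filtering scheme, so the sketch below shows one common round-trip variant under stated assumptions: a Flan-T5 XXL text-to-text pipeline proposes an answer span and a question from each passage, the question is then re-answered, and the pair is kept only if the two answers agree. The prompts, the overlap threshold, and the helper names (propose_answer, generate_question, reanswer) are illustrative, not the paper's exact recipe.

# Minimal sketch of synthetic QA generation with round-trip consistency
# filtering, assuming a Flan-T5 XXL text2text pipeline. Prompts, threshold,
# and helper names are illustrative assumptions, not the paper's exact scheme.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-xxl")

def run(prompt: str) -> str:
    return generator(prompt, max_new_tokens=64)[0]["generated_text"].strip()

def propose_answer(passage: str) -> str:
    # Ask the LLM for a short answer-worthy span from the passage.
    return run(f"Extract a short span from the passage that could answer a question.\nPassage: {passage}")

def generate_question(passage: str, answer: str) -> str:
    # Condition the question on both the passage and the proposed answer.
    return run(f"Write a question about the passage whose answer is \"{answer}\".\nPassage: {passage}")

def reanswer(passage: str, question: str) -> str:
    return run(f"Passage: {passage}\nQuestion: {question}\nAnswer:")

def overlap_f1(a: str, b: str) -> float:
    # Token-overlap proxy for answer agreement.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 2 * len(ta & tb) / max(len(ta) + len(tb), 1)

def build_synthetic_dataset(passages, threshold=0.7):
    data = []
    for passage in passages:
        answer = propose_answer(passage)
        question = generate_question(passage, answer)
        # Keep the pair only if re-answering the question recovers the answer.
        if overlap_f1(answer, reanswer(passage, question)) >= threshold:
            data.append({"passage": passage, "question": question, "answer": answer})
    return data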
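
For the reward-modelling phase, one common way to realise "score grounded answers higher than hallucinated ones" is a pairwise ranking loss over (grounded, hallucinated) answer pairs. The PyTorch sketch below assumes a scalar-output reward_model and a paired batch layout, neither of which is specified in the abstract.

# Minimal sketch of a pairwise reward-model objective: grounded answers should
# receive higher scores than hallucinated ones. The reward_model interface and
# the batch layout are assumptions for illustration.
import torch
import torch.nn.functional as F

def reward_loss(reward_model, batch):
    # Each item pairs the same question with a grounded and a hallucinated answer.
    r_grounded = reward_model(batch["question"], batch["grounded_answer"])          # (B,) scores
    r_hallucinated = reward_model(batch["question"], batch["hallucinated_answer"])  # (B,) scores
    # Bradley-Terry style objective: -log sigmoid(r_grounded - r_hallucinated)
    return -F.logsigmoid(r_grounded - r_hallucinated).mean()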
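
The cascading deployment described at the end reduces to a confidence gate: surface the fine-tuned RAG model's answer only when its calibrated confidence clears a threshold, otherwise defer to a larger system. The confidence estimate (length-normalised token log-probability with a fitted temperature), the threshold value, and the rag_answer/fallback_answer callables below are illustrative assumptions.

# Sketch of a cascading gate on top of the uncertainty-calibrated RAG model.
# rag_answer returns (answer_text, per-token log-probs); fallback_answer is the
# larger, more expensive system. Temperature and threshold are assumed values.
import math

def sequence_confidence(token_logprobs, temperature=1.3):
    # Length-normalised answer probability, temperature-scaled for calibration.
    avg = sum(lp / temperature for lp in token_logprobs) / max(len(token_logprobs), 1)
    return math.exp(avg)

def cascade(question, rag_answer, fallback_answer, threshold=0.75):
    answer, token_logprobs = rag_answer(question)        # cheaper fine-tuned RAG model
    if sequence_confidence(token_logprobs) >= threshold:
        return answer                                    # confident: surface directly
    return fallback_answer(question)                     # otherwise defer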


