Closed-book Question Generation via Contrastive Learning

10/13/2022
by Xiangjue Dong, et al.

Question Generation (QG) is a fundamental NLP task for many downstream applications. Recent studies on open-book QG, where supportive question-context pairs are provided to models, have achieved promising progress. However, generating natural questions under the more practical closed-book setting, which lacks these supporting documents, remains a challenge. In this work, to learn better representations from the semantic information hidden in question-answer pairs under the closed-book setting, we propose a new QG model empowered by a contrastive learning module and an answer reconstruction module. We present a new closed-book QA dataset, WikiCQA, containing abstractive long answers collected from a wiki-style website. In the experiments, we validate the proposed QG model on both public datasets and the new WikiCQA dataset. Empirical results show that the proposed QG model outperforms baselines in both automatic and human evaluation. In addition, we show how to leverage the proposed model to improve existing closed-book QA systems: by pre-training a closed-book QA model on our generated synthetic QA pairs, significant QA improvement can be achieved on both seen and unseen datasets, which further demonstrates the effectiveness of our QG model for enhancing unsupervised and semi-supervised QA.
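To make the contrastive learning idea concrete, the sketch below shows a standard in-batch InfoNCE objective over question and answer embeddings: matched question-answer pairs (the diagonal of the similarity matrix) are pulled together while all other in-batch pairings serve as negatives. This is a minimal, hypothetical illustration of the general technique; the paper's actual module, encoders, and hyperparameters (e.g. the temperature value) may differ.

```python
import numpy as np

def info_nce_loss(q_emb, a_emb, temperature=0.1):
    """In-batch InfoNCE contrastive loss.

    q_emb, a_emb: (batch, dim) arrays where row i of q_emb and row i of
    a_emb form a positive question-answer pair; every other pairing in
    the batch acts as a negative. (Illustrative sketch, not the paper's
    exact objective.)
    """
    # L2-normalize so dot products are cosine similarities
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    a = a_emb / np.linalg.norm(a_emb, axis=1, keepdims=True)
    logits = q @ a.T / temperature                 # (batch, batch) similarities
    # Positives sit on the diagonal; cross-entropy against that diagonal
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss encourages each question embedding to be most similar to its own answer embedding, which is the representation-learning signal the contrastive module exploits.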


Related research:
- Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? (06/03/2021)
- It is AI's Turn to Ask Human a Question: Question and Answer Pair Generation for Children Storybooks in FairytaleQA Dataset (09/08/2021)
- Context Generation Improves Open Domain Question Answering (10/12/2022)
- Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA (10/04/2021)
- Learning to Rank Question Answer Pairs with Bilateral Contrastive Data Augmentation (06/21/2021)
- Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs (05/18/2023)
- Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering (09/08/2018)
