Answer-based Adversarial Training for Generating Clarification Questions

04/04/2019
by Sudha Rao, et al.

We present an approach for generating clarification questions with the goal of eliciting new information that would make the given textual context more complete. We propose that modeling hypothetical answers (to clarification questions) as latent variables can guide our approach into generating more useful clarification questions. We develop a Generative Adversarial Network (GAN) where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question. We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training.
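The architecture described above pairs a sequence-to-sequence question generator with a utility-function discriminator. The following is a minimal, hypothetical sketch of that setup; all module names, dimensions, and the simple loss combination are illustrative assumptions, not the authors' implementation (which trains the generator against the utility reward with reinforcement-style updates).

```python
# Hypothetical sketch: a seq2seq clarification-question generator and a
# "utility" discriminator that scores how much an answer would complete
# the context. Names and dimensions are illustrative, not the paper's code.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 100, 32, 64

class Seq2SeqGenerator(nn.Module):
    """Encodes the context and decodes a clarification question."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, HID, batch_first=True)
        self.dec = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, context, question):
        _, h = self.enc(self.emb(context))         # summarize the context
        dec_out, _ = self.dec(self.emb(question), h)
        return self.out(dec_out)                   # logits over the vocabulary

class UtilityDiscriminator(nn.Module):
    """Scores the value of updating the context with a hypothetical answer."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.enc = nn.GRU(EMB, HID, batch_first=True)
        self.score = nn.Linear(2 * HID, 1)

    def forward(self, context, answer):
        _, hc = self.enc(self.emb(context))
        _, ha = self.enc(self.emb(answer))
        return torch.sigmoid(self.score(torch.cat([hc[-1], ha[-1]], -1)))

gen, disc = Seq2SeqGenerator(), UtilityDiscriminator()
context = torch.randint(0, VOCAB, (2, 10))   # batch of context token ids
question = torch.randint(0, VOCAB, (2, 8))   # reference question tokens
answer = torch.randint(0, VOCAB, (2, 6))     # hypothetical answer tokens

logits = gen(context, question)
utility = disc(context, answer)              # in (0, 1): higher = more useful
# The paper optimizes the generator against the utility reward adversarially;
# here the two terms are just combined in one loss for illustration.
nll = nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                  question.reshape(-1))
gen_loss = nll - utility.mean()
```

In the full adversarial loop, the discriminator would alternately be trained to distinguish useful from non-useful (question, answer) updates to the context, while the generator is rewarded for questions whose hypothetical answers score highly.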


Related research

- Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering (05/24/2023): We train a language model (LM) to robustly answer multistep questions by...
- GSSF: A Generative Sequence Similarity Function based on a Seq2Seq model for clustering online handwritten mathematical answers (05/21/2021): Toward a computer-assisted marking for descriptive math questions, this p...
- Overcoming low-utility facets for complex answer retrieval (11/21/2018): Many questions cannot be answered simply; their answers must include num...
- Generating Semantically Valid Adversarial Questions for TableQA (05/26/2020): Adversarial attack on question answering systems over tabular data (Tabl...
- Adversarial Feature Matching for Text Generation (06/12/2017): The Generative Adversarial Network (GAN) has achieved great success in g...
- Sequence-to-Sequence Learning for Indonesian Automatic Question Generator (09/29/2020): Automatic question generation is defined as the task of automating the c...
- TruthfulQA: Measuring How Models Mimic Human Falsehoods (09/08/2021): We propose a benchmark to measure whether a language model is truthful i...
