Question Generation by Transformers

09/09/2019
by Kettip Kriangchaivech et al.

A machine learning model was developed to automatically generate questions from Wikipedia passages using transformers, an attention-based architecture that eschews the recurrence of existing recurrent neural networks (RNNs). The model was trained on an inverted form of the Stanford Question Answering Dataset (SQuAD), a reading comprehension dataset of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles. After training, the model generates simple questions relevant to unseen passages and answers, averaging eight words per question. Word error rate (WER) was used as the metric for comparing SQuAD questions with the model-generated questions. Although the high average WER indicates that the generated questions differ from the original SQuAD questions, they are mostly grammatically correct and plausible in their own right.
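The abstract does not include the authors' WER implementation; a minimal sketch of the standard word-level metric (Levenshtein edit distance over word tokens, normalized by reference length) might look like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Example: one substitution against a six-word reference
print(wer("what is the capital of france",
          "what was the capital of france"))
```

A WER of 0 means the generated question matches the reference word for word; values near or above 1 indicate the questions share little surface form, which can still be consistent with a grammatical, plausible question, as the abstract notes.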


