Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors
We present a dual contribution to the task of machine reading-comprehension: a technique for creating large-sized machine-comprehension (MC) datasets using paragraph-vector models; and a novel, hybrid neural-network architecture that combines the representation power of recurrent neural networks with the discriminative power of fully-connected multi-layered networks. We use the MC-dataset generation technique to build a dataset of around 2 million examples, for which we empirically determine the high ceiling of human performance (around 91%), as well as the performance of computer models. Among all the models we have experimented with, our hybrid neural-network architecture achieves the highest performance (83.2%). The remaining gap to the human-performance ceiling provides enough room for future model improvements.
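The abstract does not detail the dataset-generation pipeline, but the core idea of a paragraph-vector approach can be illustrated. Below is a minimal sketch, assuming a gensim Doc2Vec model (gensim 4.x API) rather than the authors' own implementation: paragraphs are embedded as vectors, and nearest neighbours in that space can be retrieved when pairing questions with supporting or distractor passages while assembling MC examples. The toy corpus and all parameter values are illustrative.

```python
# Sketch only: uses gensim's Doc2Vec as a stand-in paragraph-vector model.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; a real pipeline would stream millions of paragraphs.
paragraphs = [
    "the cat sat on the mat",
    "a dog chased the ball in the park",
    "machine comprehension models answer questions about text",
]
corpus = [TaggedDocument(words=p.split(), tags=[i]) for i, p in enumerate(paragraphs)]

# Train paragraph vectors over the corpus (sizes chosen arbitrarily for the demo).
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Embed a new paragraph and find its nearest neighbours in paragraph-vector space.
query_vec = model.infer_vector("which model answers questions".split())
print(model.dv.most_similar([query_vec], topn=2))
```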
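As a rough illustration of the kind of hybrid architecture described (not the authors' model), the sketch below combines a recurrent encoder, which produces a fixed-size representation of the input token sequence, with a fully-connected stack that performs the final discrimination. All layer sizes, the choice of LSTM, and the two-class output are assumptions for the example.

```python
import torch
import torch.nn as nn

class HybridReader(nn.Module):
    """Recurrent encoder followed by a fully-connected classifier (illustrative)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Recurrent component: encodes the token sequence into a fixed-size vector.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Fully-connected component: discriminates over the encoded representation.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, token_ids):
        embedded = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.encoder(embedded)   # hidden: (1, batch, hidden_dim)
        return self.classifier(hidden.squeeze(0)) # (batch, num_classes)

# Example usage with random token ids.
model = HybridReader(vocab_size=10000)
logits = model(torch.randint(0, 10000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```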