Related research:

- Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension. Natural Questions is a new challenging machine reading comprehension benchmark...
- Read + Verify: Machine Reading Comprehension with Unanswerable Questions. Machine reading comprehension with unanswerable questions aims to abstain...
- Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering. This paper describes a novel hierarchical attention network for reading comprehension...
- Adaptive Bi-directional Attention: Exploring Multi-Granularity Representations for Machine Reading Comprehension. Recently, the attention-enhanced multi-layer encoder, such as Transformer...
- A BERT Baseline for the Natural Questions. This technical note describes a new baseline for the Natural Questions...
- Multi-span Style Extraction for Generative Reading Comprehension. Generative machine reading comprehension (MRC) requires a model to generate...
- Weighted Global Normalization for Multiple Choice Reading Comprehension over Long Documents. Motivated by recent evidence pointing out the fragility of high-performing...

No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension
The Natural Questions (NQ) benchmark brings new challenges to Machine Reading Comprehension: the answers are not only at different levels of granularity (long and short), but also of richer types (including no-answer, yes/no, single-span and multi-span). In this paper, we target this challenge and handle all answer types systematically. In particular, we propose a novel approach called Reflection Net, which leverages a two-step training procedure to identify the no-answer and wrong-answer cases. Extensive experiments verify the effectiveness of our approach. At the time of writing (May 20, 2020), our approach ranked first on both the long- and short-answer leaderboards, with F1 scores of 77.2 and 64.1, respectively.
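The abstract only gives the high-level recipe, so below is a minimal PyTorch sketch of the two-step idea as stated, not the authors' actual Reflection Net: a reader head that predicts answer spans and one of the answer types listed above, and a separate reflection head that scores the reader's predicted answer so that low-confidence predictions fall back to no-answer. All class names, the feature choices, and the 0.5 threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Answer types named in the abstract; the exact label set used by the
# authors is an assumption here.
ANSWER_TYPES = ["no-answer", "yes", "no", "single-span", "multi-span"]

class ReaderHead(nn.Module):
    """Step 1 (hypothetical): span scores plus an answer-type classifier
    over token representations from a pretrained encoder (e.g. BERT)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.span_head = nn.Linear(hidden_size, 2)  # start/end logits per token
        self.type_head = nn.Linear(hidden_size, len(ANSWER_TYPES))

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden_size)
        start_logits, end_logits = self.span_head(token_states).unbind(dim=-1)
        type_logits = self.type_head(token_states[:, 0])  # [CLS] token
        return start_logits, end_logits, type_logits

class ReflectionHead(nn.Module):
    """Step 2 (hypothetical): a separate confidence scorer trained on
    features of the reader's own predictions, so the system can reject
    wrong answers and output no-answer instead."""

    def __init__(self, feature_size: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feature_size, feature_size),
            nn.ReLU(),
            nn.Linear(feature_size, 1),
        )

    def forward(self, answer_features: torch.Tensor) -> torch.Tensor:
        # Confidence in (0, 1) that the predicted answer is correct.
        return torch.sigmoid(self.scorer(answer_features)).squeeze(-1)

def decide(answer: str, confidence: float, threshold: float = 0.5) -> str:
    """Decision rule sketch: answer only when the reflection model is confident."""
    return answer if confidence >= threshold else "no-answer"
```

In this framing, step one trains the reader, and step two trains the reflection head on the reader's own (possibly wrong) predictions, which matches the stated goal of identifying both no-answer and wrong-answer cases.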