Adaptive Bi-directional Attention: Exploring Multi-Granularity Representations for Machine Reading Comprehension

12/20/2020
by Nuo Chen, et al.

Recently, attention-enhanced multi-layer encoders such as the Transformer have been extensively studied in Machine Reading Comprehension (MRC). To predict the answer, it is common practice to employ a predictor that draws information only from the final encoder layer, which yields coarse-grained representations of the source sequences, i.e., the passage and the question. Previous studies have shown that the representation of the source sequence shifts from fine-grained to coarse-grained as the number of encoding layers increases. It is generally believed that, as the number of layers in a deep neural network grows, the encoder progressively aggregates relevant information at each position, producing increasingly coarse-grained representations and raising the similarity between positions (i.e., homogeneity). This phenomenon can mislead the model into incorrect predictions and degrade performance. To this end, we propose a novel approach called Adaptive Bidirectional Attention, which adaptively feeds source representations from different encoder layers to the predictor. Experimental results on the benchmark SQuAD 2.0 dataset demonstrate the effectiveness of our approach, which outperforms the previous state-of-the-art model by 2.5% EM and 2.3% F1.
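The core idea above is to let the predictor draw on representations from several encoder layers rather than only the last one. The following is a minimal, hypothetical sketch of one way such an adaptive fusion could look, assuming a learned softmax gate over per-layer encoder outputs; the module name AdaptiveLayerFusion and all shapes are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class AdaptiveLayerFusion(nn.Module):
    """Hypothetical sketch: adaptively weight per-layer encoder outputs
    before passing a single fused representation to the answer predictor."""

    def __init__(self, num_layers: int, hidden_size: int):
        super().__init__()
        # One learnable scalar gate per encoder layer, normalized with softmax.
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))
        # Projection so the fused output matches the predictor's expected input size.
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, layer_outputs):
        # layer_outputs: list of per-layer tensors, each (batch, seq_len, hidden),
        # ordered from fine-grained (lower layers) to coarse-grained (upper layers).
        stacked = torch.stack(layer_outputs, dim=0)            # (L, B, T, H)
        weights = torch.softmax(self.layer_logits, dim=0)      # (L,)
        fused = (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # (B, T, H)
        return self.proj(fused)

# Usage: fuse all encoder layers instead of reading only the final one.
fusion = AdaptiveLayerFusion(num_layers=6, hidden_size=128)
layer_outputs = [torch.randn(2, 40, 128) for _ in range(6)]
fused = fusion(layer_outputs)  # (2, 40, 128), fed to the span predictor

In such a setup, the fused representation would replace the final-layer output as the predictor's input, letting the model trade off fine-grained and coarse-grained views of the passage and question.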

Related research

11/17/2022
Feature-augmented Machine Reading Comprehension with Auxiliary Tasks
While most successful approaches for machine reading comprehension rely ...

05/12/2020
Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension
Natural Questions is a new challenging machine reading comprehension ben...

11/29/2018
Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering
This paper describes a novel hierarchical attention network for reading ...

09/25/2020
No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension
The Natural Questions (NQ) benchmark set brings new challenges to Machin...

05/25/2022
You Need to Read Again: Multi-granularity Perception Network for Moment Retrieval in Videos
Moment retrieval in videos is a challenging task that aims to retrieve t...

05/25/2021
NEUer at SemEval-2021 Task 4: Complete Summary Representation by Filling Answers into Question for Matching Reading Comprehension
SemEval task 4 aims to find a proper option from multiple candidates to ...

06/09/2023
Reconstructing Human Expressiveness in Piano Performances with a Transformer Network
Capturing intricate and subtle variations in human expressiveness in mus...
