Integrating a Heterogeneous Graph with Entity-aware Self-attention using Relative Position Labels for Reading Comprehension Model

07/19/2023
by Shima Foolad, et al.

Despite the significant progress transformer models have made on machine reading comprehension tasks, they still fall short on complex reasoning because explicit knowledge is absent from the input sequence. To address this limitation, many recent works propose injecting external knowledge into the model. However, selecting relevant external knowledge, ensuring its availability, and handling the additional processing it requires remain challenging. In this paper, we introduce a novel attention pattern that integrates reasoning knowledge derived from a heterogeneous graph into the transformer architecture without relying on external knowledge. The proposed attention pattern comprises three key elements: global-local attention for word tokens; graph attention for entity tokens, which attend strongly to tokens they are connected to in the graph rather than to unconnected ones; and attention between entity and word tokens that takes the type of relationship between them into account, so that related pairs receive optimized attention. The pattern is coupled with special relative position labels, allowing it to be integrated with LUKE's entity-aware self-attention mechanism. Experimental results show that our model outperforms both the state-of-the-art LUKE-Graph and the baseline LUKE model on the ReCoRD dataset, which focuses on commonsense reasoning.
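
The following minimal sketch is not the authors' released code; it only illustrates how an attention pattern of this kind could be expressed as an additive bias mask before the softmax. The function and variable names (build_attention_bias, is_entity, entity_graph, window) and the window size are assumptions, and the third element (relation types between entity and word tokens) is reduced here to a simple connected-versus-unconnected distinction.

```python
# Minimal sketch of the described attention pattern as an additive bias mask.
# This is an illustration under assumed names, not the authors' implementation.
import torch

NEG_INF = -1e9  # effectively removes a query-key pair from the softmax


def build_attention_bias(is_entity: torch.Tensor,
                         entity_graph: torch.Tensor,
                         window: int = 3) -> torch.Tensor:
    """Return an (n, n) additive bias to add to raw attention scores.

    is_entity    -- bool (n,): True for entity tokens, False for word tokens
    entity_graph -- bool (n, n): True where two entity tokens are connected
                    in the heterogeneous graph
    window       -- half-width of the local window for word-to-word attention
    """
    n = is_entity.size(0)
    pos = torch.arange(n)
    local = (pos[:, None] - pos[None, :]).abs() <= window

    bias = torch.full((n, n), NEG_INF)
    word_q, word_k = ~is_entity[:, None], ~is_entity[None, :]
    ent_q, ent_k = is_entity[:, None], is_entity[None, :]

    # 1) global-local attention for word tokens: a word attends to nearby
    #    words and to every entity token (entities act as global anchors).
    bias[word_q & word_k & local] = 0.0
    bias[word_q & ent_k] = 0.0

    # 2) graph attention for entity tokens: an entity attends to entities it
    #    is connected to in the graph, and to all word tokens.
    bias[ent_q & ent_k & entity_graph] = 0.0
    bias[ent_q & word_k] = 0.0

    bias.fill_diagonal_(0.0)  # every token may attend to itself
    return bias


# Toy usage: six tokens, positions 2 and 5 are entities linked in the graph.
is_ent = torch.tensor([False, False, True, False, False, True])
graph = torch.zeros(6, 6, dtype=torch.bool)
graph[2, 5] = graph[5, 2] = True
scores = torch.randn(6, 6) + build_attention_bias(is_ent, graph)
attention = scores.softmax(dim=-1)  # rows sum to 1 over permitted keys only
```

In the full model, the relation type of a connected pair together with the special relative position labels would presumably select something richer than the single 0/NEG_INF bias above, for example a separate query projection per pair type in the spirit of LUKE's entity-aware self-attention.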


Related research

03/12/2023
LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension
Incorporating prior knowledge can improve existing pre-training models i...

05/21/2018
Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge
We introduce a neural reading comprehension model that integrates extern...

12/06/2021
Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention
Most of today's AI systems focus on using self-attention mechanisms and ...

11/01/2018
Hybrid Self-Attention Network for Machine Translation
The encoder-decoder is the typical framework for Neural Machine Translat...

03/26/2021
Incorporating Connections Beyond Knowledge Embeddings: A Plug-and-Play Module to Enhance Commonsense Reasoning in Machine Reading Comprehension
Conventional Machine Reading Comprehension (MRC) has been well-addressed...

08/24/2019
Enhancing Neural Sequence Labeling with Position-Aware Self-Attention
Sequence labeling is a fundamental task in natural language processing a...

05/02/2022
Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning
Machine reading comprehension has aroused wide concerns, since it explor...
