Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning

05/02/2022
by   Fangzhi Xu, et al.

Machine reading comprehension has attracted wide attention, since it probes a model's capacity for text understanding. To further equip machines with reasoning capability, the challenging task of logical reasoning has been proposed. Previous works on logical reasoning have introduced strategies to extract logical units from different aspects. However, modeling the long-distance dependencies among logical units remains a challenge. It is also demanding to uncover the logical structures of the text and to fuse the discrete logic into the continuous text embedding. To tackle these issues, we propose Logiformer, an end-to-end model that utilizes a two-branch graph transformer network for logical reasoning over text. First, we introduce different extraction strategies to split the text into two sets of logical units, and construct a logical graph and a syntax graph respectively. The logical graph models causal relations for the logical branch, while the syntax graph captures co-occurrence relations for the syntax branch. Second, to model long-distance dependencies, the node sequence from each graph is fed into a fully connected graph transformer. The two adjacency matrices are viewed as attention biases for the graph transformer layers, which map the discrete logical structures into the continuous text embedding space. Third, a dynamic gate mechanism and a question-aware self-attention module are introduced before answer prediction to update the features. The reasoning process is interpretable because it operates on logical units that are consistent with human cognition. Experimental results show the superiority of our model, which outperforms the state-of-the-art single model on two logical reasoning benchmarks.
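The two core mechanisms in the abstract — adding a graph's adjacency matrix as an attention bias, and gating the fusion of the two branch features — can be sketched as below. This is a minimal single-head NumPy illustration, not the paper's implementation; all variable names, the bias scale, and the sigmoid gate form are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(nodes, adj, w_q, w_k, w_v, bias_scale=1.0):
    """Self-attention over a fully connected node sequence, with the
    graph adjacency matrix added to the attention scores as a bias
    (single-head simplification of a graph transformer layer)."""
    q, k, v = nodes @ w_q, nodes @ w_k, nodes @ w_v
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + bias_scale * adj  # adjacency as attention bias
    attn = softmax(scores, axis=-1)                   # rows sum to 1
    return attn @ v, attn

def dynamic_gate(h_logic, h_syntax, w_g):
    """Gated fusion of logical-branch and syntax-branch features
    (hypothetical gate form: elementwise sigmoid over the concatenation)."""
    z = np.concatenate([h_logic, h_syntax], axis=-1) @ w_g
    g = 1.0 / (1.0 + np.exp(-z))
    return g * h_logic + (1.0 - g) * h_syntax

# Toy setup: 4 logical units, embedding dim 8, a chain of causal edges.
rng = np.random.default_rng(0)
n, d = 4, 8
nodes = rng.normal(size=(n, d))
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
w_g = rng.normal(size=(2 * d, d)) * 0.1

out, attn = graph_attention(nodes, adj, w_q, w_k, w_v)
fused = dynamic_gate(out, nodes, w_g)
```

In this sketch the adjacency bias raises the pre-softmax score of connected node pairs, so message passing follows the extracted logical structure while still allowing attention between all pairs, which is how a fully connected transformer can capture long-distance dependencies.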


