Graph Propagation Transformer for Graph Representation Learning

05/19/2023
by Zhe Chen, et al.

This paper presents a novel transformer architecture for graph representation learning. The core insight of our method is to fully account for information propagation among nodes and edges in a graph when building the attention module of the transformer blocks. Specifically, we propose a new attention mechanism called Graph Propagation Attention (GPA). It explicitly passes information between nodes and edges in three ways, i.e., node-to-node, node-to-edge, and edge-to-node, which is essential for learning graph-structured data. On this basis, we design an effective transformer architecture named Graph Propagation Transformer (GPTrans) to further help learn graph data. We verify the performance of GPTrans in a wide range of graph learning experiments on several benchmark datasets. The results show that our method outperforms many state-of-the-art transformer-based graph models. The code will be released at https://github.com/czczup/GPTrans.
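The three propagation paths described in the abstract suggest a simple structural pattern: node-to-node self-attention whose scores are biased by edge features, a node-to-edge update that writes the attention map back into the edge representations, and an edge-to-node aggregation that folds edge features back into the nodes. The sketch below is a minimal, self-contained PyTorch illustration of that pattern; the module name, layer names (edge_bias, node_to_edge, edge_to_node), and dense tensor shapes are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn

class GraphPropagationAttention(nn.Module):
    """Hypothetical sketch of attention with node-to-node, node-to-edge,
    and edge-to-node propagation, loosely following the paper's description."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)             # node queries/keys/values
        self.edge_bias = nn.Linear(dim, num_heads)     # edge features -> attention bias (node-to-node path)
        self.node_to_edge = nn.Linear(num_heads, dim)  # attention map -> edge update (node-to-edge path)
        self.edge_to_node = nn.Linear(dim, dim)        # aggregated edges -> node update (edge-to-node path)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, e):
        # x: [B, N, dim] node features; e: [B, N, N, dim] dense edge features
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        # Node-to-node: scaled dot-product attention with an edge-derived bias.
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5       # [B, H, N, N]
        attn = attn + self.edge_bias(e).permute(0, 3, 1, 2)           # inject edge information
        attn = attn.softmax(dim=-1)

        # Node-to-edge: write the attention map back into the edge features.
        e = e + self.node_to_edge(attn.permute(0, 2, 3, 1))           # [B, N, N, dim]

        # Edge-to-node: aggregate updated edge features into each node.
        x_nodes = (attn @ v).transpose(1, 2).reshape(B, N, -1)        # [B, N, dim]
        x_edges = self.edge_to_node(e.mean(dim=2))                    # [B, N, dim]
        return self.out(x_nodes + x_edges), e

# Example usage (illustrative shapes):
# gpa = GraphPropagationAttention(dim=64)
# x = torch.randn(2, 10, 64)        # 2 graphs, 10 nodes each
# e = torch.randn(2, 10, 10, 64)    # dense pairwise edge features
# x, e = gpa(x, e)

The dense [N x N] edge tensor is assumed purely for readability; a practical implementation would typically operate on sparse edge lists and wrap this attention module with the usual transformer components (layer normalization, feed-forward blocks, residual connections).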


Related research

02/21/2023 · Edgeformers: Graph-Empowered Transformers for Representation Learning on Textual-Edge Networks
Edges in many real-world social/information networks are associated with...

08/11/2022 · Heterogeneous Line Graph Transformer for Math Word Problems
This paper describes the design and implementation of a new machine lear...

08/07/2021 · Edge-augmented Graph Transformers: Global Self-attention is Enough for Graphs
Transformer neural networks have achieved state-of-the-art results for u...

05/30/2021 · How Attentive are Graph Attention Networks?
Graph Attention Networks (GATs) are one of the most popular GNN architec...

11/26/2022 · PatchGT: Transformer over Non-trainable Clusters for Learning Graph Representations
Recently the Transformer structure has shown good performances in graph ...

09/20/2022 · Graph Reasoning Transformer for Image Parsing
Capturing the long-range dependencies has empirically proven to be effec...

10/15/2019 · Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving
We incorporate Tensor-Product Representations within the Transformer in ...
