Structure-Aware Transformer for Graph Representation Learning

02/07/2022
by Dexiong Chen, et al.

The Transformer architecture has gained growing attention in graph representation learning recently, as it naturally overcomes several limitations of graph neural networks (GNNs) by avoiding their strict structural inductive biases and instead only encoding the graph structure via positional encoding. Here, we show that the node representations generated by the Transformer with positional encoding do not necessarily capture structural similarity between them. To address this issue, we propose the Structure-Aware Transformer, a class of simple and flexible graph transformers built upon a new self-attention mechanism. This new self-attention incorporates structural information into the original self-attention by extracting a subgraph representation rooted at each node before computing the attention. We propose several methods for automatically generating the subgraph representation and show theoretically that the resulting representations are at least as expressive as the subgraph representations. Empirically, our method achieves state-of-the-art performance on five graph prediction benchmarks. Our structure-aware framework can leverage any existing GNN to extract the subgraph representation, and we show that it systematically improves performance relative to the base GNN model, successfully combining the advantages of GNNs and transformers.
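To make the mechanism concrete, below is a minimal PyTorch sketch of the idea, not the authors' code: a small k-hop message-passing module stands in for the rooted-subgraph extractor, and its output replaces the raw node features when computing attention queries and keys, while values stay on the original features. All names here (`StructureExtractor`, `StructureAwareAttention`, `k_hops`) are illustrative, and the dense adjacency matrix is a simplification of the sparse, batched graphs used in practice.

```python
import torch
import torch.nn as nn

class StructureExtractor(nn.Module):
    """Stand-in subgraph extractor: k rounds of mean aggregation over a
    dense adjacency matrix. The paper's framework allows any GNN here."""
    def __init__(self, dim: int, k_hops: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(k_hops)])

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Row-normalize so each node averages over its neighborhood.
        adj = adj / adj.sum(-1, keepdim=True).clamp(min=1.0)
        h = x
        for layer in self.layers:
            h = torch.relu(layer(adj @ h))  # aggregate neighbors, then transform
        return h

class StructureAwareAttention(nn.Module):
    """Self-attention whose queries and keys are computed from
    structure-aware (rooted-subgraph) node representations."""
    def __init__(self, dim: int, k_hops: int = 2):
        super().__init__()
        self.extractor = StructureExtractor(dim, k_hops)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        s = self.extractor(x, adj)             # structure-aware node features
        q, k, v = self.q(s), self.k(s), self.v(x)
        scores = q @ k.T / q.shape[-1] ** 0.5  # structural similarity drives attention
        return torch.softmax(scores, dim=-1) @ v

# Toy usage: 6 nodes, 16-dim features, random undirected graph with self-loops.
n, d = 6, 16
adj = (torch.rand(n, n) < 0.3).float()
adj = ((adj + adj.T) > 0).float()
adj.fill_diagonal_(1.0)
out = StructureAwareAttention(d)(torch.randn(n, d), adj)
print(out.shape)  # torch.Size([6, 16])
```

In the full model, the extractor would be a stronger GNN and the attention output would sit inside a standard Transformer block with residual connections and feed-forward layers; the sketch only isolates the structure-aware attention step.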


Related research

01/30/2022: Graph Self-Attention for learning graph representation with Transformer
We propose a novel Graph Self-Attention module to enable Transformer mod...

04/21/2023: Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers
We introduce a novel self-attention mechanism, which we call CSA (Chroma...

06/09/2021: Do Transformers Really Perform Bad for Graph Representation?
The Transformer architecture has become a dominant choice in many domain...

01/23/2022: Investigating Expressiveness of Transformer in Spectral Domain for Graphs
Transformers have been proven to be inadequate for graph representation ...

06/10/2021: GraphiT: Encoding Graph Structure in Transformers
We show that viewing graphs as sets of node features and incorporating s...

11/15/2022: Adaptive Multi-Neighborhood Attention based Transformer for Graph Representation Learning
By incorporating the graph structural information into Transformers, gra...

08/07/2021: Edge-augmented Graph Transformers: Global Self-attention is Enough for Graphs
Transformer neural networks have achieved state-of-the-art results for u...
