GraphiT: Encoding Graph Structure in Transformers

06/10/2021
by Grégoire Mialon, et al.

We show that viewing graphs as sets of node features and incorporating structural and positional information into a transformer architecture yields representations that can outperform those learned with classical graph neural networks (GNNs). Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as paths of short length. We thoroughly evaluate these two ideas on many classification and regression tasks, demonstrating the effectiveness of each of them independently, as well as of their combination. In addition to performing well on standard benchmarks, our model also admits natural visualization mechanisms for interpreting the graph motifs that explain its predictions, making it a potentially strong candidate for scientific applications where interpretation is important. Code available at https://github.com/inria-thoth/GraphiT.
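The abstract's two ideas can be made concrete with a small sketch. The snippet below is an illustrative toy, not the released GraphiT implementation: it assumes the positive definite kernel is a diffusion kernel exp(-beta * L) built from the graph Laplacian, assumes the kernel enters the attention scores as a multiplicative bias before normalization, and uses hypothetical helper names (diffusion_kernel, kernel_biased_attention, short_paths).

```python
# Toy sketch of (i) kernel-biased self-attention and (ii) short-path
# enumeration. Not the authors' code; the kernel choice and the
# multiplicative bias form are assumptions for illustration.
import numpy as np
from scipy.linalg import expm


def diffusion_kernel(adj: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Positive definite diffusion kernel exp(-beta * L) on a graph."""
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    return expm(-beta * laplacian)


def kernel_biased_attention(x: np.ndarray, adj: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Single-head self-attention whose scores are modulated by a graph kernel.

    x:   (n_nodes, d) node features; queries, keys, and values share the raw
         features here for brevity (a real model would learn projections).
    adj: (n_nodes, n_nodes) adjacency matrix.
    """
    d = x.shape[1]
    scores = np.exp(x @ x.T / np.sqrt(d))          # unnormalized attention scores
    scores = scores * diffusion_kernel(adj, beta)  # assumed multiplicative kernel bias
    weights = scores / scores.sum(axis=1, keepdims=True)
    return weights @ x


def short_paths(adj: np.ndarray, length: int = 2):
    """Enumerate simple paths of a given length (in edges) from every node."""
    n = adj.shape[0]
    paths = []

    def extend(path):
        if len(path) == length + 1:
            paths.append(tuple(path))
            return
        for nxt in np.nonzero(adj[path[-1]])[0]:
            if nxt not in path:
                extend(path + [nxt])

    for start in range(n):
        extend([start])
    return paths


if __name__ == "__main__":
    # Toy 4-node cycle graph.
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
    feats = np.random.default_rng(0).normal(size=(4, 8))
    print(kernel_biased_attention(feats, adj, beta=0.5).shape)  # (4, 8)
    print(len(short_paths(adj, length=2)))                      # paths of length 2
```

Other positive definite kernels on graphs (for example, a p-step random walk kernel) could be substituted for the diffusion kernel in this sketch without changing the rest of the attention computation.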


Related research

04/21/2023 - Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers
02/07/2022 - Structure-Aware Transformer for Graph Representation Learning
02/08/2023 - Attending to Graph Transformers
06/11/2021 - Graph Transformer Networks: Learning Meta-path Graphs to Improve GNNs
12/06/2021 - Distance and Hop-wise Structures Encoding Enhanced Graph Attention Networks
03/19/2022 - PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs
02/21/2023 - Generic Dependency Modeling for Multi-Party Conversation
