Rewiring with Positional Encodings for Graph Neural Networks

Several recent works use positional encodings to extend the receptive fields of graph neural network (GNN) layers equipped with attention mechanisms. These techniques, however, either extend receptive fields to the complete graph, incurring substantial computational cost and risking a change in the inductive biases of conventional GNNs, or require complex architectural adjustments. As a conservative alternative, we use positional encodings to expand receptive fields to any r-ring. Our method augments the input graph with additional nodes/edges and uses positional encodings as node and/or edge features; it is therefore compatible with many existing GNN architectures. We also provide examples of positional encodings that are non-invasive, i.e., for which there is a one-to-one map between the original and the modified graphs. Our experiments demonstrate that extending receptive fields via positional encodings and a virtual fully-connected node significantly improves GNN performance and alleviates over-squashing even for small values of r. We obtain improvements across models, achieving state-of-the-art performance even with architectures that predate recent Transformer models adapted to graphs.
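To make the rewiring idea concrete, the sketch below augments a graph with edges to every node within r hops, attaching the shortest-path hop distance to each edge as a simple positional encoding, and optionally adds a fully-connected virtual node. This is an illustrative reconstruction under stated assumptions, not the paper's reference implementation; the adjacency-list representation, the scalar distance encoding, and the function name `r_ring_rewire` are all choices made here for clarity.

```python
from collections import deque

def r_ring_rewire(adj, r, virtual_node=True):
    """Augment an adjacency list with edges to every node within r hops.

    adj: list of neighbor lists for nodes 0..n-1.
    Returns a dict mapping directed edges (u, v) to their hop distance,
    which serves as a scalar positional-encoding edge feature
    (distance-1 edges are the original ones).
    If virtual_node is True, a virtual node with index n is connected
    to all nodes, using the distinct encoding r + 1.
    """
    n = len(adj)
    edges = {}  # (u, v) -> hop distance
    for src in range(n):
        # Breadth-first search truncated at depth r.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if dist[u] == r:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    edges[(src, v)] = dist[v]
                    queue.append(v)
    if virtual_node:
        for v in range(n):
            edges[(n, v)] = edges[(v, n)] = r + 1
    return edges

# Example: on the path graph 0-1-2-3 with r = 2, node 0 gains an
# edge to node 2 (distance 2) but not to node 3 (distance 3).
adj = [[1], [0, 2], [1, 3], [2]]
print(r_ring_rewire(adj, 2, virtual_node=False))
```

Because the augmentation only adds nodes/edges and encodes the extra structure as features, the modified graph can be fed to a standard message-passing GNN without architectural changes.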
