Beyond 1-WL with Local Ego-Network Encodings

11/27/2022
by Nurudin Alvarez-Gonzalez et al.

Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit the structural information encoded in graph data. This work shows that ego-networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler-Lehman (1-WL) test. We introduce IGEL, a preprocessing step that augments node representations by encoding ego-networks into sparse vectors, enriching Message Passing (MP) Graph Neural Networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on seven GNN architectures.
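The abstract describes IGEL only at a high level: each node is represented by a sparse encoding of its local ego-network, computed once as a preprocessing step. The sketch below illustrates one plausible reading of that idea; the function name, the choice of a k-hop radius, and the use of (distance, degree) pairs as the sparse features are assumptions for illustration, not the paper's exact encoding.

```python
from collections import Counter, deque

def ego_encoding(adj, root, k=2):
    """Hedged sketch of an IGEL-style structural encoding: describe a
    node by the multiset of (distance-from-root, degree-within-ego)
    pairs observed in its k-hop ego network. `adj` maps each node to
    a set of neighbors. The published encoding may differ in detail."""
    # BFS outward from the root, stopping at depth k, to collect the
    # ego network and each member's shortest-path distance to the root.
    dist = {root: 0}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    ego = set(dist)
    # Degree of each ego node, counting only edges inside the ego network.
    deg = {u: sum(1 for v in adj[u] if v in ego) for u in ego}
    # Sparse feature vector: frequency of each (distance, degree) pair.
    return Counter((dist[u], deg[u]) for u in ego)
```

Because the encoding depends only on local structure, two nodes with non-isomorphic k-hop neighborhoods can receive different sparse vectors, which is what lets such features augment an MP-GNN's node representations.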


