
How Powerful are Graph Neural Networks?
Graph Neural Networks (GNNs) for representation learning of graphs broad...

Revisiting graph neural networks and distance encoding in a practical view
Graph neural networks (GNNs) are widely used in the applications based o...

Position-aware Graph Neural Networks
Learning node embeddings that capture a node's position within the broad...

The Surprising Power of Graph Neural Networks with Random Node Initialization
Graph neural networks (GNNs) are effective models for representation lea...

GraLSP: Graph Neural Networks with Local Structural Patterns
It is not until recently that graph neural networks (GNNs) are adopted t...

Pointer Graph Networks
Graph neural networks (GNNs) are typically applied to static graphs that...

Ego-based Entropy Measures for Structural Representations on Graphs
Machine learning on graph-structured data has attracted high research in...
Distance Encoding: Design Provably More Powerful Graph Neural Networks for Structural Representation Learning
Learning structural representations of node sets from graph-structured data is crucial for applications ranging from node-role discovery to link prediction and molecule classification. Graph Neural Networks (GNNs) have achieved great success in structural representation learning. However, most GNNs are limited by the 1-Weisfeiler-Lehman (WL) test and may therefore generate identical representations for structures and graphs that are actually different. More powerful GNNs, proposed recently by mimicking higher-order WL tests, focus only on entire-graph representations and cannot utilize the sparsity of the graph structure to be computationally efficient. Here we propose a general class of structure-related features, termed Distance Encoding (DE), to assist GNNs in representing node sets of arbitrary size with strictly more expressive power than the 1-WL test. DE essentially captures the distance between the node set whose representation is to be learnt and each node in the graph, which includes important graph-related measures such as shortest path distance and generalized PageRank scores. We propose two general frameworks for GNNs to use DEs: (1) as extra node attributes, and (2) further as controllers of message aggregation in GNNs. Both frameworks may still utilize the sparse structure to remain scalable on large graphs. In theory, we prove that these two frameworks can distinguish node sets embedded in almost all regular graphs where traditional GNNs always fail. We also rigorously analyze their limitations. Empirically, we evaluate these two frameworks on node structural role prediction, link prediction and triangle prediction over six real networks. The results show that our models outperform GNNs without DEs by up to 15% and also outperform other state-of-the-art baselines designed specifically for those tasks.
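To make the first framework concrete, here is a minimal sketch (not the authors' implementation) of the shortest-path-distance variant of DE used as extra node attributes: for each node in the graph, compute its minimum BFS distance to a target node set, cap it at a hypothetical `max_dist`, and one-hot encode the result. The adjacency-list representation and the cap value are illustrative assumptions.

```python
from collections import deque

def distance_encoding(adj, node_set, max_dist=3):
    """Shortest-path-distance DE as extra node attributes.

    adj      : adjacency list, adj[u] = list of neighbors of node u
    node_set : the node set whose representation is to be learnt
    max_dist : illustrative cap; distances >= max_dist share one bucket
    Returns a one-hot feature vector of length max_dist + 1 per node.
    """
    n = len(adj)
    # Multi-source BFS: distance from each node to the nearest
    # member of node_set, initialized to the capped maximum.
    dist = [max_dist] * n
    queue = deque()
    for s in node_set:
        dist[s] = 0
        queue.append(s)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] > dist[u] + 1:
                dist[v] = dist[u] + 1
                queue.append(v)
    # One-hot encode the (capped) distances as extra node features,
    # which a GNN can concatenate with its original node attributes.
    return [[1 if dist[v] == d else 0 for d in range(max_dist + 1)]
            for v in range(n)]
```

For example, on a path graph 0-1-2-3-4 with `node_set={0}`, node 0 gets the distance-0 one-hot, node 2 the distance-2 one-hot, and node 4 falls into the capped distance-3 bucket. Feeding these features alongside the raw attributes is what lets the GNN distinguish nodes that vanilla message passing, limited by the 1-WL test, would treat as identical.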