
Understanding the Message Passing in Graph Neural Networks via Power Iteration
The mechanism of message passing in graph neural networks (GNNs) is still...

Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks
The pairwise interaction paradigm of graph machine learning has predomin...

Reward Propagation Using Graph Convolutional Networks
Potential-based reward shaping provides an approach for designing good r...

Deep Graphs
We propose an algorithm for deep learning on networks and graphs. It rel...

A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
In this paper, we derive generalization bounds for the two primary class...

Generalization bounds for graph convolutional neural networks via Rademacher complexity
This paper aims at studying the sample complexity of graph convolutional...

Recovering a Hidden Community in a Preferential Attachment Graph
A message passing algorithm (MP) is derived for recovering a dense subgr...
Let's Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework
In this paper we cast neural networks defined on graphs as message-passing neural networks (MPNNs) in order to study the distinguishing power of different classes of such models. We are interested in whether certain architectures are able to tell vertices apart based on the feature labels given as input with the graph. We consider two variants of MPNNs: anonymous MPNNs, whose message functions depend only on the labels of the vertices involved; and degree-aware MPNNs, in which message functions can additionally use information regarding the degree of vertices. The former class covers a popular formalism for computing functions on graphs: graph neural networks (GNNs). The latter covers the so-called graph convolutional networks (GCNs), a recently introduced variant of GNNs by Kipf and Welling. We obtain lower and upper bounds on the distinguishing power of MPNNs in terms of the distinguishing power of the Weisfeiler-Lehman (WL) algorithm. Our results imply that (i) the distinguishing power of GCNs is bounded by the WL algorithm, but that they are one step ahead; (ii) the WL algorithm cannot be simulated by "plain vanilla" GCNs, but the addition of a trade-off parameter between features of the vertex and those of its neighbours (as proposed by Kipf and Welling themselves) resolves this problem.
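As a rough illustration of the benchmark the abstract compares against, the following is a minimal sketch of 1-dimensional Weisfeiler-Lehman (WL) colour refinement, the procedure whose distinguishing power bounds that of MPNNs. The function names and graph representation here are illustrative choices, not taken from the paper:

```python
from collections import defaultdict

def wl_refine(adj, labels, rounds=3):
    """1-dimensional Weisfeiler-Lehman colour refinement (sketch).

    adj: dict mapping each vertex to a list of its neighbours
    labels: dict mapping each vertex to an initial label
    Returns the refined vertex colours after at most `rounds` iterations,
    or earlier if the colouring stabilises.
    """
    colors = dict(labels)
    for _ in range(rounds):
        # Each vertex's new signature: its own colour plus the
        # multiset (here: sorted tuple) of its neighbours' colours.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress distinct signatures into fresh integer colours.
        palette = {}
        new_colors = {}
        for v, sig in sorted(signatures.items()):
            if sig not in palette:
                palette[sig] = len(palette)
            new_colors[v] = palette[sig]
        if new_colors == colors:  # colouring stable: stop early
            break
        colors = new_colors
    return colors

# Example: a path on 4 vertices. WL separates the two endpoints
# (degree 1) from the two inner vertices (degree 2).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
refined = wl_refine(path, {v: 0 for v in path})
```

Because the refinement aggregates neighbour colours, it implicitly sees vertex degrees after one round, which is exactly the kind of information that distinguishes degree-aware MPNNs from anonymous ones in the paper's classification.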