P-reg
Rethinking Graph Regularization for Graph Neural Networks
The graph Laplacian regularization term is commonly used in semi-supervised node classification to provide graph structure information to a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding the graph structure A into the model, i.e., f(A, X), has become the more common approach. We show that graph Laplacian regularization f(X)^⊤Δ f(X) brings little-to-no benefit to existing GNNs, and we propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We provide formal analyses showing that P-reg not only infuses extra information (not captured by traditional graph Laplacian regularization) into GNNs, but also has capacity equivalent to an infinite-depth graph convolutional network. The code is available at https://github.com/yang-han/P-reg.
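To make the two regularizers concrete, the following is a minimal NumPy sketch contrasting the traditional graph Laplacian term f(X)^⊤Δ f(X) with a P-reg-style propagation penalty. The squared-error distance and the row-normalized adjacency Â = D⁻¹A used below are assumptions for illustration (the paper considers a family of distance functions φ); function names are hypothetical.

```python
import numpy as np

def laplacian_reg(Z, A):
    # Traditional graph Laplacian regularization: tr(Z^T Δ Z),
    # where Δ = D - A is the unnormalized graph Laplacian and
    # Z is the model output f(X) (one row per node).
    D = np.diag(A.sum(axis=1))
    delta = D - A
    return np.trace(Z.T @ delta @ Z)

def p_reg(Z, A):
    # Propagation-regularization (P-reg), sketched with the
    # squared-error distance: penalize the gap between the model
    # output Z and its one-step propagation Z' = Â Z, where
    # Â = D^{-1} A is the row-normalized adjacency matrix.
    # (Other distance functions φ, e.g. KL divergence, also fit.)
    d = A.sum(axis=1)
    A_hat = A / d[:, None]          # row-normalize adjacency
    Z_prop = A_hat @ Z              # one step of propagation
    n = Z.shape[0]
    return 0.5 * np.sum((Z_prop - Z) ** 2) / n
```

Both terms vanish when all node outputs are identical; P-reg differs in that it compares each node's output to the aggregate of its neighbors' outputs rather than summing pairwise edge differences, which is what lets it inject information beyond the Laplacian term.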