- Message Passing Attention Networks for Document Understanding: Most graph neural networks can be described in terms of message passing,...
- Graph Joint Attention Networks: Graph attention networks (GATs) have been recognized as powerful tools f...
- Graph Networks with Spectral Message Passing: Graph Neural Networks (GNNs) are the subject of intense focus by the mac...
- PushNet: Efficient and Adaptive Neural Message Passing: Message passing neural networks have recently evolved into a state-of-th...
- Graph Convolutional Neural Networks with Node Transition Probability-based Message Passing and DropNode Regularization: Graph convolutional neural networks (GCNNs) have received much attention...
- Adaptive Edge Features Guided Graph Attention Networks: Edge features contain important information about graphs. However, curre...
- Relation-aware Graph Attention Model With Adaptive Self-adversarial Training: This paper describes an end-to-end solution for the relationship predict...
Pathfinder Discovery Networks for Neural Message Passing
In this work we propose Pathfinder Discovery Networks (PDNs), a method for jointly learning a message passing graph over a multiplex network with a downstream semi-supervised model. PDNs inductively learn an aggregated weight for each edge, optimized to produce the best outcome for the downstream learning task. PDNs are a generalization of attention mechanisms on graphs which allow flexible construction of similarity functions between nodes, edge convolutions, and cheap multiscale mixing layers. We show that PDNs overcome weaknesses of existing methods for graph attention (e.g. Graph Attention Networks), such as the diminishing weight problem. Our experimental results demonstrate competitive predictive performance on academic node classification tasks. Additional results from a challenging suite of node classification experiments show how PDNs can learn a wider class of functions than existing baselines. We analyze the relative computational complexity of PDNs, and show that PDN runtime is not considerably higher than static-graph models. Finally, we discuss how PDNs can be used to construct an easily interpretable attention mechanism that allows users to understand information propagation in the graph.
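The core idea, learning one aggregated, strictly positive weight per edge from the layers of a multiplex network and then using it for message passing, can be sketched as follows. This is a hypothetical illustration in plain Python, not the authors' reference implementation: the edge scorer (a linear map over per-layer edge features followed by a softplus), the toy graph, and all variable names are assumptions. The softplus keeps every aggregated weight positive, which is one simple way to sidestep the diminishing-weight behaviour that a per-node softmax (as in GAT) can exhibit.

```python
import math
import random

random.seed(0)

# Toy multiplex graph: N nodes, each edge carries K features
# (one weight per layer of the multiplex network), nodes carry F features.
N, K, F = 5, 3, 4
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
edge_feats = [[random.random() for _ in range(K)] for _ in edges]
X = [[random.random() for _ in range(F)] for _ in range(N)]

# Hypothetical "pathfinder" edge scorer: a learned linear combination of
# the K per-layer edge weights, squashed through a softplus so the
# aggregated weight is strictly positive (it never collapses to zero).
w = [random.gauss(0.0, 1.0) for _ in range(K)]

def softplus(z):
    return math.log1p(math.exp(z))

edge_weight = [softplus(sum(wi * fi for wi, fi in zip(w, f)))
               for f in edge_feats]

# Message passing with the aggregated weights: each node takes a
# weighted mean of its in-neighbours' features.
agg = [[0.0] * F for _ in range(N)]
norm = [0.0] * N
for (src, dst), a in zip(edges, edge_weight):
    for j in range(F):
        agg[dst][j] += a * X[src][j]
    norm[dst] += a

out = [[v / norm[i] if norm[i] > 0 else 0.0 for v in row]
       for i, row in enumerate(agg)]
```

In a full model the scorer's parameters would be trained end-to-end with the downstream semi-supervised objective, so the learned edge weights are optimized for the prediction task rather than fixed in advance.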