GIPA: General Information Propagation Algorithm for Graph Learning

05/13/2021
by Qinkai Zheng, et al.

Graph neural networks (GNNs) have been widely used to analyze graph-structured data, showing promising results in applications such as node classification, link prediction, and network recommendation. In this paper, we present a new graph attention neural network, GIPA, for learning on attributed graphs. GIPA consists of three key components: attention, feature propagation, and aggregation. Specifically, the attention component introduces a multi-layer-perceptron-based multi-head attention mechanism that yields a better non-linear feature mapping and representation than conventional implementations such as dot-product attention. The propagation component considers not only node features but also edge features, in contrast to existing GNNs that consider node features alone. The aggregation component uses a residual connection to generate the final embedding. We evaluate GIPA on the Open Graph Benchmark proteins dataset (ogbn-proteins for short). The experimental results show that GIPA beats state-of-the-art models in prediction accuracy: it achieves an average ROC-AUC of 0.8700 ± 0.0010 and outperforms all previous methods listed on the ogbn-proteins leaderboard.
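
To make the three components concrete, below is a minimal PyTorch-style sketch of a GIPA-like layer based only on the description above; it is not the authors' implementation. The class name, the MLP attention, the projection layers, and all dimension choices are hypothetical, and details such as normalization, activation choices, and layer stacking are omitted.

```python
import torch
import torch.nn as nn


class GIPALayer(nn.Module):
    """Illustrative GIPA-style layer: MLP-based multi-head attention,
    propagation over node and edge features, sum aggregation plus residual."""

    def __init__(self, node_dim, edge_dim, hidden_dim, num_heads):
        super().__init__()
        assert hidden_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        # Attention: an MLP over (source, destination, edge) features
        # instead of a dot-product score, one score per head.
        self.att_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_heads),
        )
        # Propagation: project both node and edge features into messages.
        self.node_proj = nn.Linear(node_dim, hidden_dim)
        self.edge_proj = nn.Linear(edge_dim, hidden_dim)
        # Aggregation: residual projection of the destination node.
        self.res_proj = nn.Linear(node_dim, hidden_dim)

    def forward(self, x, edge_index, edge_attr):
        # x: [N, node_dim], edge_index: [2, E] (src, dst), edge_attr: [E, edge_dim]
        src, dst = edge_index
        # Attention scores from the MLP (sigmoid gating is an assumption here).
        att_in = torch.cat([x[src], x[dst], edge_attr], dim=-1)
        scores = torch.sigmoid(self.att_mlp(att_in))                 # [E, H]
        # Messages combine projected node and edge features, weighted per head.
        msg = self.node_proj(x[src]) + self.edge_proj(edge_attr)     # [E, H*D]
        msg = msg.view(-1, self.num_heads, self.head_dim) * scores.unsqueeze(-1)
        # Sum messages per destination node, then add the residual connection.
        out = torch.zeros(x.size(0), self.num_heads, self.head_dim,
                          device=x.device, dtype=x.dtype)
        out.index_add_(0, dst, msg)
        return out.flatten(1) + self.res_proj(x)


# Usage on a toy graph with 4 nodes and 3 edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
edge_attr = torch.randn(3, 8)
layer = GIPALayer(node_dim=8, edge_dim=8, hidden_dim=16, num_heads=4)
out = layer(x, edge_index, edge_attr)  # shape [4, 16]
```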


Related research

06/09/2022
Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction
Graph Neural Networks (GNNs) have been widely applied to various fields ...

11/23/2021
On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features
While Graph Neural Networks (GNNs) have recently become the de facto sta...

10/16/2019
Active Learning for Graph Neural Networks via Node Feature Propagation
Graph Neural Networks (GNNs) for prediction tasks like node classificati...

08/23/2021
Graph Attention Multi-Layer Perceptron
Graph neural networks (GNNs) have recently achieved state-of-the-art per...

06/23/2022
Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs
3D-related inductive biases like translational invariance and rotational...

02/18/2022
Generalizing Aggregation Functions in GNNs: High-Capacity GNNs via Nonlinear Neighborhood Aggregators
Graph neural networks (GNNs) have achieved great success in many graph l...

04/17/2018
Feature Propagation on Graph: A New Perspective to Graph Representation Learning
We study feature propagation on graph, an inference process involved in ...