Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization
Graph neural networks (GNNs) have shown superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work has started to study the pre-training of GNNs. However, none of it provides theoretical insight into the design of the frameworks, or clear requirements and guarantees regarding the transferability of GNNs. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. First, we propose a novel view of the essential graph information and advocate capturing it as the goal of transferable GNN training, which motivates the design of EGI, a novel GNN framework based on ego-graph information maximization that analytically achieves this goal. Second, we specify the requirement of structure-respecting node features as the GNN input, and derive a rigorous bound on GNN transferability based on the difference between the local graph Laplacians of the source and target graphs. Finally, we conduct controlled synthetic experiments to directly justify our theoretical conclusions. Extensive experiments on real-world networks for role identification show consistent results in the rigorously analyzed setting of direct transfer, while experiments on large-scale relation prediction show promising results in the more general and practical setting of transfer with fine-tuning.
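To make the Laplacian-based transferability criterion more concrete, below is a minimal illustrative sketch, not the authors' implementation and not the exact quantity appearing in their bound: it simply compares a source and a target graph through the spectra of normalized Laplacians of sampled k-hop ego-graphs. The function names, the fixed spectrum length, the sample sizes, and the use of networkx/numpy are all assumptions made for illustration.

```python
# Illustrative sketch only: compare local (ego-graph) structure of two graphs
# via normalized-Laplacian spectra. This is a rough proxy inspired by the
# abstract's "difference between local graph Laplacians", not the paper's bound.
import itertools
import networkx as nx
import numpy as np


def ego_graph_spectrum(G, center, radius=2, dim=16):
    """Eigenvalues of the normalized Laplacian of a k-hop ego-graph,
    zero-padded or truncated to a fixed length so spectra are comparable."""
    ego = nx.ego_graph(G, center, radius=radius)
    lam = np.sort(nx.normalized_laplacian_spectrum(ego))
    out = np.zeros(dim)
    out[: min(dim, lam.size)] = lam[:dim]
    return out


def local_laplacian_distance(G_src, G_tgt, radius=2, n_samples=64, seed=0):
    """Average pairwise L2 distance between ego-graph Laplacian spectra of
    sampled source and target nodes (a rough structural-difference proxy)."""
    rng = np.random.default_rng(seed)
    src_nodes = rng.choice(list(G_src.nodes),
                           size=min(n_samples, G_src.number_of_nodes()),
                           replace=False)
    tgt_nodes = rng.choice(list(G_tgt.nodes),
                           size=min(n_samples, G_tgt.number_of_nodes()),
                           replace=False)
    src_spec = np.stack([ego_graph_spectrum(G_src, v, radius) for v in src_nodes])
    tgt_spec = np.stack([ego_graph_spectrum(G_tgt, v, radius) for v in tgt_nodes])
    dists = [np.linalg.norm(a - b) for a, b in itertools.product(src_spec, tgt_spec)]
    return float(np.mean(dists))


if __name__ == "__main__":
    # Two synthetic graphs with similar local structure should score lower
    # than a structurally different pair.
    src = nx.barabasi_albert_graph(200, 3, seed=1)
    tgt_similar = nx.barabasi_albert_graph(200, 3, seed=2)
    tgt_different = nx.erdos_renyi_graph(200, 0.2, seed=3)
    print("BA -> BA :", local_laplacian_distance(src, tgt_similar))
    print("BA -> ER :", local_laplacian_distance(src, tgt_different))
```

Under these assumptions, a pair of graphs with similar local structure (e.g., two Barabasi-Albert graphs) should yield a smaller average spectral distance than a structurally dissimilar pair, mirroring the intuition that smaller differences between local graph Laplacians indicate better transferability.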