When to Pre-Train Graph Neural Networks? An Answer from Data Generation Perspective!

03/29/2023
by Yuxuan Cao, et al.

Recently, graph pre-training has attracted wide research attention: it aims to learn transferable knowledge from unlabeled graph data in order to improve downstream performance. Despite these attempts, negative transfer remains a major issue when applying graph pre-trained models to downstream tasks. Existing works have made great efforts on the questions of what to pre-train and how to pre-train by designing a number of graph pre-training and fine-tuning strategies. However, there are cases where, no matter how advanced the strategy is, the "pre-train and fine-tune" paradigm still cannot achieve clear benefits. This paper introduces a generic framework, W2PGNN, to answer the crucial question of when to pre-train (i.e., in what situations graph pre-training can be advantageous) before performing effortful pre-training or fine-tuning. We start from a new perspective and explore the complex generative mechanisms that connect the pre-training data to the downstream data. In particular, W2PGNN first fits the pre-training data to a graphon basis, where each element of the basis (i.e., a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of the graphon bases give rise to a generator space; the graphs generated from this space form the solution space of downstream data that can benefit from pre-training. In this manner, the feasibility of pre-training can be quantified as the probability that the downstream data is generated by some generator in the generator space. W2PGNN supports three broad applications: delimiting the application scope of a graph pre-trained model, quantifying the feasibility of performing pre-training, and helping select pre-training data to enhance downstream performance. We give a theoretically sound solution for the first application and extensive empirical justifications for the latter two.
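To make the pipeline above concrete, here is a minimal Python sketch of the idea, not the authors' implementation: each graphon is approximated by a degree-sorted, block-averaged step function; the graphon bases are obtained by clustering the pre-training graphs; and the "generation probability" is proxied by how closely some convex combination of the bases matches the downstream graphon. The resolution K, the k-means clustering, and the random Dirichlet search over mixture weights are all illustrative assumptions, not details from the paper.

import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

K = 32  # resolution of the step-function graphon approximation (assumed)

def step_function(G, k=K):
    # Approximate a graph's graphon by a k x k step function:
    # sort nodes by degree, then block-average the adjacency matrix.
    # Assumes G has at least k nodes.
    order = sorted(G.nodes, key=G.degree, reverse=True)
    A = nx.to_numpy_array(G, nodelist=order)
    blocks = np.array_split(np.arange(A.shape[0]), k)
    W = np.empty((k, k))
    for i, ri in enumerate(blocks):
        for j, cj in enumerate(blocks):
            W[i, j] = A[np.ix_(ri, cj)].mean()
    return W

def fit_graphon_bases(pretrain_graphs, n_bases=3):
    # Cluster the pre-training graphs in step-function space and average
    # each cluster into one basis element (a simple stand-in for the
    # paper's basis-fitting procedure).
    feats = np.stack([step_function(g).ravel() for g in pretrain_graphs])
    labels = KMeans(n_clusters=n_bases, n_init=10).fit_predict(feats)
    return [feats[labels == c].mean(axis=0).reshape(K, K)
            for c in range(n_bases)]

def feasibility(bases, downstream_graphs, n_samples=2000):
    # Proxy for "generation probability": find the best convex combination
    # of bases by crude random search over Dirichlet-sampled weights, and
    # score by (negated) L2 distance to the mean downstream step function.
    target = np.mean([step_function(g) for g in downstream_graphs],
                     axis=0).ravel()
    B = np.stack([b.ravel() for b in bases])            # (n_bases, K*K)
    alphas = np.random.dirichlet(np.ones(len(bases)), size=n_samples)
    dists = np.linalg.norm(alphas @ B - target, axis=1)
    return -dists.min()  # higher = closer to the generator space

# Example: random graphs as stand-ins for pre-training/downstream data
pretrain = [nx.gnp_random_graph(100, p) for p in np.linspace(0.05, 0.3, 30)]
downstream = [nx.gnp_random_graph(100, 0.1) for _ in range(10)]
bases = fit_graphon_bases(pretrain)
print("feasibility score:", feasibility(bases, downstream))

In this toy proxy, a larger (less negative) feasibility score suggests the downstream graphs lie closer to the generator space spanned by the pre-training data, i.e., pre-training is more likely to help.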


