SGL-PT: A Strong Graph Learner with Graph Prompt Tuning

02/24/2023
by Yun Zhu, et al.

Recently, much effort has been devoted to designing graph self-supervised methods that yield generalized pre-trained models, which are then adapted to downstream tasks through fine-tuning. However, there is an inherent gap between pretext and downstream graph tasks, which prevents the pre-trained model from being fully exploited and can even lead to negative transfer. Meanwhile, prompt tuning has seen emerging success in natural language processing by aligning pre-training and fine-tuning under consistent training objectives. In this paper, we identify two challenges for graph prompt tuning: the first is the lack of a strong and universal pre-training task across the diverse pre-training methods in the graph domain; the second lies in the difficulty of designing a training objective that is consistent between pre-training and downstream tasks. To overcome these obstacles, we propose a novel framework named SGL-PT that follows the learning strategy “Pre-train, Prompt, and Predict”. Specifically, we introduce a strong and universal pre-training task, coined SGL, that combines the complementary merits of generative and contrastive self-supervised graph learning. Targeting the graph classification task, we then unify pre-training and fine-tuning by designing a novel verbalizer-free prompting function that reformulates the downstream task in the same format as the pretext task. Empirical results show that our method surpasses other baselines under the unsupervised setting, and that our prompt tuning method greatly improves over fine-tuning on biological datasets.
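The abstract describes two ingredients: a pre-training objective that combines generative and contrastive self-supervised graph learning, and a prompting function that casts the downstream task in the pretext format. The snippet below is a minimal, illustrative sketch of the first ingredient only, written in plain PyTorch; the names TinyGNN, pretrain_step, mask_rate, and temperature are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): a masked-feature reconstruction
# (generative) loss combined with an InfoNCE-style (contrastive) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGNN(nn.Module):
    """One-layer message-passing encoder: H = ReLU(A @ X W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj, x):
        return F.relu(adj @ self.lin(x))

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss between two views; positives lie on the diagonal."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

def pretrain_step(encoder, decoder, adj, x, mask_rate=0.3):
    # Generative branch: mask some node features and reconstruct them.
    mask = torch.rand(x.size(0)) < mask_rate
    mask[0] = True                      # ensure at least one masked node
    x_masked = x.clone()
    x_masked[mask] = 0.0
    h = encoder(adj, x_masked)
    recon_loss = F.mse_loss(decoder(h)[mask], x[mask])

    # Contrastive branch: compare two stochastically corrupted views.
    h1 = encoder(adj, F.dropout(x, p=0.2, training=True))
    h2 = encoder(adj, F.dropout(x, p=0.2, training=True))
    con_loss = contrastive_loss(h1, h2)

    return recon_loss + con_loss        # complementary objectives

# Toy usage on a random 6-node graph (self-loop-only adjacency for brevity).
n, d, hid = 6, 8, 16
adj = torch.eye(n)
x = torch.randn(n, d)
encoder, decoder = TinyGNN(d, hid), nn.Linear(hid, d)
loss = pretrain_step(encoder, decoder, adj, x)
loss.backward()
```

In the paper's framework, the prompting step would additionally reformulate graph classification so that it reuses this pretext format rather than a separate classification head; that part is not shown here.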

