AdapterGNN: Efficient Delta Tuning Improves Generalization Ability in Graph Neural Networks

04/19/2023
by   Shengrui Li, et al.
Fine-tuning pre-trained models has recently yielded remarkable performance gains in graph neural networks (GNNs). Alongside pre-training techniques, and inspired by recent work in natural language processing, attention has shifted toward effective fine-tuning approaches such as parameter-efficient tuning (delta tuning). However, given the substantial differences between GNNs and transformer-based models, applying such approaches directly to GNNs has proved less effective. In this paper, we present a comprehensive comparison of delta tuning techniques for GNNs and propose a novel delta tuning method specifically designed for GNNs, called AdapterGNN. AdapterGNN preserves the knowledge of the large pre-trained model and leverages highly expressive adapters for GNNs, which can adapt to downstream tasks effectively with only a few parameters while also improving the model's generalization ability. Extensive experiments show that AdapterGNN achieves higher evaluation performance, outperforming full fine-tuning by 1.4% while tuning only 5% of the parameters. Moreover, we empirically show that a larger GNN model can have worse generalization ability, which differs from the trend observed in large language models. We also provide a theoretical justification, via generalization bounds, that delta tuning can improve the generalization ability of GNNs.
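The adapter idea the abstract describes can be illustrated with a minimal sketch: a frozen pre-trained layer is augmented with a small trainable bottleneck (down-projection, nonlinearity, up-projection) added through a residual connection, so only a small fraction of parameters is tuned. The dimensions, zero-initialization, and ReLU choice below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim = 300   # illustrative GNN hidden size
bottleneck = 15    # small bottleneck -> few trainable parameters

# Frozen pre-trained weights (stand-in for one GNN message-passing layer)
W_frozen = rng.standard_normal((hidden_dim, hidden_dim)) * 0.01

# Trainable adapter: down-projection -> nonlinearity -> up-projection
W_down = rng.standard_normal((hidden_dim, bottleneck)) * 0.01
W_up = np.zeros((bottleneck, hidden_dim))  # zero-init: adapter starts as the identity

def layer_with_adapter(h):
    """Apply one frozen layer, then add the adapter's output residually."""
    z = np.maximum(h @ W_frozen, 0.0)            # frozen transformation (ReLU)
    delta = np.maximum(z @ W_down, 0.0) @ W_up   # bottleneck adapter
    return z + delta                             # residual connection

h = rng.standard_normal((8, hidden_dim))         # embeddings for 8 nodes
out = layer_with_adapter(h)

tuned = W_down.size + W_up.size                  # only adapter weights are trained
total = W_frozen.size + tuned
print(out.shape, round(tuned / total, 3))        # (8, 300) 0.091
```

Because the up-projection is zero-initialized, the adapted layer initially reproduces the frozen layer exactly, and training only updates the roughly 9% of parameters in the bottleneck.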


