Adaptive Transfer Learning on Graph Neural Networks

07/19/2021
by Xueting Han, et al.

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work shows that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in both optimization objective and training data, so conventional pre-training methods may transfer knowledge ineffectively: they make no adaptation to the downstream task. To address this problem, we propose a new transfer learning paradigm for GNNs that leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task during fine-tuning. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between each auxiliary task and the target task, and we train this weighting model through meta-learning. Our approach applies to various transfer learning settings: it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly outperforms state-of-the-art methods.
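The abstract does not spell out the weighting model, so the following is only a minimal PyTorch sketch of the general idea: gate each auxiliary loss by its consistency with the target task, measured here as the cosine similarity between the two losses' gradients on the shared encoder. This similarity-gated scalar is a simple stand-in for the paper's learned, meta-trained weighting model, and every name (encoder, aux_head, the toy data) is an illustrative assumption, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def flat_grad(loss, params):
    # Flatten the gradient of `loss` w.r.t. `params` into one vector.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

# Toy shared encoder standing in for a GNN backbone (hypothetical setup).
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
target_head = nn.Linear(32, 2)   # downstream task, e.g. node classification
aux_head = nn.Linear(32, 16)     # auxiliary self-supervised task, e.g. feature reconstruction
shared = list(encoder.parameters())
opt = torch.optim.Adam(
    [*shared, *target_head.parameters(), *aux_head.parameters()], lr=1e-3
)

x = torch.randn(64, 16)               # placeholder batch
y = torch.randint(0, 2, (64,))        # placeholder labels

for step in range(100):
    h = encoder(x)
    target_loss = F.cross_entropy(target_head(h), y)
    aux_loss = F.mse_loss(aux_head(h), x)

    # Consistency between tasks: cosine similarity of their gradient
    # directions on the shared parameters. Negative similarity (conflicting
    # gradients) zeroes the auxiliary weight; w is treated as a constant.
    g_t = flat_grad(target_loss, shared)
    g_a = flat_grad(aux_loss, shared)
    w = torch.clamp(F.cosine_similarity(g_t, g_a, dim=0), min=0.0).detach()

    opt.zero_grad()
    (target_loss + w * aux_loss).backward()
    opt.step()

In the paper's actual method the weight comes from a weighting model optimized by meta-learning rather than from this fixed similarity rule; the sketch only illustrates what "quantifying consistency between auxiliary and target tasks" can look like in practice.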

Related research:

03/01/2021 · Self-supervised Auxiliary Learning for Graph Neural Networks via Meta-Learning
In recent years, graph neural networks (GNNs) have been widely adopted i...

05/20/2022 · Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors
Deep learning is increasingly moving towards a transfer learning paradig...

05/26/2022 · Understanding new tasks through the lens of training data via exponential tilting
Deploying machine learning models to new tasks is a major challenge desp...

08/25/2021 · Auxiliary Task Update Decomposition: The Good, The Bad and The Neutral
While deep learning has been very beneficial in data-rich settings, task...

10/05/2022 · Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization
Self-supervised learning (SSL) for graph neural networks (GNNs) has attr...

12/23/2022 · Principled and Efficient Transfer Learning of Deep Models via Neural Collapse
With the ever-growing model size and the limited availability of labeled...

11/03/2020 · Meta-learning Transferable Representations with a Single Target Domain
Recent works found that fine-tuning and joint training—two popular appro...
