Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework

03/04/2021
by Cheng Yang, et al.

Semi-supervised learning on graphs is an important problem in machine learning. In recent years, state-of-the-art classification methods based on graph neural networks (GNNs) have shown their superiority over traditional approaches such as label propagation. However, the sophisticated architectures of these neural models lead to a complex prediction mechanism that cannot make full use of valuable prior knowledge lying in the data, e.g., that structurally correlated nodes tend to have the same class. In this paper, we propose a framework based on knowledge distillation to address these issues. Our framework extracts the knowledge of an arbitrary learned GNN model (the teacher model) and injects it into a well-designed student model. The student model is built with two simple prediction mechanisms, i.e., label propagation and feature transformation, which naturally preserve structure-based and feature-based prior knowledge, respectively. Specifically, we design the student model as a trainable combination of parameterized label propagation and feature transformation modules. As a result, the learned student can benefit from both prior knowledge and the knowledge in GNN teachers for more effective predictions. Moreover, the learned student model has a more interpretable prediction process than GNNs. We conduct experiments on five public benchmark datasets and employ seven GNN models, including GCN, GAT, APPNP, SAGE, SGC, GCNII and GLP, as the teacher models. Experimental results show that the learned student model can consistently outperform its corresponding teacher model by 1.4%-4.7% on average. Code is available at https://github.com/BUPT-GAMMA/CPF
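The abstract describes the student as a trainable, per-node combination of two simple mechanisms: parameterized label propagation over the graph structure and a feature transformation over raw node features. The following minimal PyTorch sketch illustrates that combination under stated assumptions; the class and variable names (StudentModel, balance, adj_norm) are illustrative and not taken from the paper or the CPF repository, and the propagation step here uses fixed normalized edge weights rather than the learned, parameterized ones described in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentModel(nn.Module):
    # Sketch of a CPF-style student: per-node trainable mixture of
    # label propagation and feature transformation (names are illustrative).
    def __init__(self, num_nodes, feat_dim, num_classes, hidden=64):
        super().__init__()
        # Feature transformation: a small MLP over raw node features.
        self.ft = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )
        # One learnable balance parameter per node.
        self.balance = nn.Parameter(torch.zeros(num_nodes))

    def forward(self, features, soft_labels, adj_norm, num_hops=2):
        # Label propagation: smooth soft labels over the normalized adjacency.
        propagated = soft_labels
        for _ in range(num_hops):
            propagated = adj_norm @ propagated
        # Feature-based prediction.
        ft_pred = F.softmax(self.ft(features), dim=-1)
        # Trainable convex combination of the two prediction mechanisms.
        alpha = torch.sigmoid(self.balance).unsqueeze(-1)
        return alpha * propagated + (1.0 - alpha) * ft_pred

# Distillation objective (sketch): match the teacher's soft predictions, e.g.
# loss = F.kl_div(student_out.log(), teacher_soft, reduction="batchmean")

In this sketch the student is trained on the teacher's soft predictions (plus ground-truth labels on the labeled nodes), so the learned per-node balance indicates whether graph structure or node features drove each prediction, which is the source of the interpretability claimed in the abstract.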


Related research

01/03/2023
RELIANT: Fair Knowledge Distillation for Graph Neural Networks
Graph Neural Networks (GNNs) have shown satisfying performance on variou...

05/05/2022
Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks
Existing knowledge distillation methods on graph neural networks (GNNs) ...

05/16/2021
Graph-Free Knowledge Distillation for Graph Neural Networks
Knowledge distillation (KD) transfers knowledge from a teacher network t...

04/20/2023
Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs
How can we learn effective node representations on textual graphs? Graph...

11/04/2020
On Self-Distilling Graph Neural Network
Recently, the teacher-student knowledge distillation framework has demon...

08/05/2022
PGX: A Multi-level GNN Explanation Framework Based on Separate Knowledge Distillation Processes
Graph Neural Networks (GNNs) are widely adopted in advanced AI systems d...

08/12/2021
Distilling Holistic Knowledge with Graph Neural Networks
Knowledge Distillation (KD) aims at transferring knowledge from a larger...
