Online Adversarial Distillation for Graph Neural Networks

12/28/2021
by Can Wang, et al.

Knowledge distillation has recently become a popular technique for improving the generalization ability of convolutional neural networks. Its effect on graph neural networks, however, is less satisfactory: graph topology and node attributes often change dynamically, and a static teacher model is insufficient to guide student training in such settings. In this paper, we tackle this challenge by simultaneously training a group of graph neural networks in an online distillation fashion, where the group knowledge acts as a dynamic virtual teacher and structural changes in the graphs are captured effectively. To improve distillation performance, two types of knowledge are transferred among the students so that they enhance each other: local knowledge, reflecting information in the graph topology and node attributes, and global knowledge, reflecting the predictions over classes. We transfer the global knowledge with KL-divergence, as in vanilla knowledge distillation, while exploiting the complicated structure of the local knowledge with an efficient adversarial cyclic learning framework. Extensive experiments verify the effectiveness of the proposed online adversarial distillation approach.
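
To make the global-knowledge transfer concrete, below is a minimal PyTorch sketch of online distillation among a group of GNN students: each student fits the node labels while mimicking the group's averaged soft predictions (the dynamic virtual teacher) via KL-divergence. The names SimpleGCN and online_distill_step, and all hyperparameters, are illustrative assumptions rather than the paper's implementation; the adversarial cyclic transfer of local knowledge is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCN(nn.Module):
    """Two-layer GCN student over a dense, normalized adjacency matrix (hypothetical stand-in)."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        h = F.relu(self.lin1(adj @ x))  # aggregate neighbor features, then transform
        return self.lin2(adj @ h)       # per-node class logits


def online_distill_step(students, optimizers, adj, x, y, mask, T=2.0, alpha=0.5):
    """One online-distillation step: supervised loss plus KL toward the
    group-average softened prediction (the dynamic virtual teacher)."""
    with torch.no_grad():
        teacher = torch.stack(
            [F.softmax(s(adj, x) / T, dim=-1) for s in students]
        ).mean(dim=0)
    for student, opt in zip(students, optimizers):
        logits = student(adj, x)
        ce = F.cross_entropy(logits[mask], y[mask])          # supervised loss on labeled nodes
        kd = F.kl_div(F.log_softmax(logits / T, dim=-1),     # global-knowledge transfer
                      teacher, reduction="batchmean") * T * T
        loss = ce + alpha * kd
        opt.zero_grad()
        loss.backward()
        opt.step()


# Toy usage: 5 nodes, 8 features, 3 classes, two students.
n, d, c = 5, 8, 3
adj = torch.eye(n)  # identity as a stand-in for a normalized adjacency matrix
x, y = torch.randn(n, d), torch.randint(0, c, (n,))
mask = torch.tensor([True, True, True, False, False])  # labeled nodes
students = [SimpleGCN(d, 16, c) for _ in range(2)]
optimizers = [torch.optim.Adam(s.parameters(), lr=0.01) for s in students]
online_distill_step(students, optimizers, adj, x, y, mask)
```

Because every student contributes to the averaged teacher at each step, the teacher signal evolves with the graph, which is the property a static pre-trained teacher lacks.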

Related research

HIRE: Distilling High-order Relational Knowledge From Heterogeneous Graph Neural Networks (07/25/2022)
Researchers have recently proposed plenty of heterogeneous graph neural ...

Transformer-Graph Neural Network with Global-Local Attention for Multimodal Rumour Detection with Knowledge Distillation (06/10/2022)
Misinformation spreading becomes a critical issue in online conversation...

Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision (10/25/2022)
Graph neural networks (GNNs) have become one of the most popular researc...

Knowledge Transfer Graph for Deep Collaborative Learning (09/10/2019)
We propose Deep Collaborative Learning (DCL), which is a method that inc...

On Representation Knowledge Distillation for Graph Neural Networks (11/09/2021)
Knowledge distillation is a promising learning paradigm for boosting the...

Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data (06/05/2023)
Graph condensation, which reduces the size of a large-scale graph by syn...

Distilling Knowledge from Graph Convolutional Networks (03/23/2020)
Existing knowledge distillation methods focus on convolutional neural ne...
