RELIANT: Fair Knowledge Distillation for Graph Neural Networks

01/03/2023
by Yushun Dong, et al.

Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Consequently, it is difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution for compressing GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To address this problem, we take initial steps toward fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. We then propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, so it can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets; the results corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
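To make the teacher-student setup concrete, below is a minimal sketch of vanilla GNN knowledge distillation in PyTorch. This is not the RELIANT framework itself (which adds fairness-aware objectives on top of distillation); it only illustrates the standard KD recipe the abstract describes: a small student is trained with a weighted sum of cross-entropy on labeled nodes and a temperature-softened KL divergence toward the teacher's outputs. The `GCNLayer`/`distill_step` names, layer sizes, temperature `T`, mixing weight `alpha`, and the toy ring graph are all illustrative assumptions.

```python
# A minimal sketch of GNN knowledge distillation (not the RELIANT method):
# a small student GCN mimics a larger teacher GCN by matching its softened
# output distribution. Shapes, sizes, and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

class GCNLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # Propagate features over the normalized adjacency, then transform.
        return self.lin(adj_norm @ x)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.l1 = GCNLayer(in_dim, hid_dim)
        self.l2 = GCNLayer(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        return self.l2(F.relu(self.l1(x, adj_norm)), adj_norm)

def distill_step(student, teacher, x, adj_norm, y, mask, opt, T=2.0, alpha=0.5):
    """One step: cross-entropy on labeled nodes plus a KL term that pulls
    the student's temperature-softened logits toward the teacher's."""
    with torch.no_grad():
        t_logits = teacher(x, adj_norm)          # teacher is frozen
    s_logits = student(x, adj_norm)
    ce = F.cross_entropy(s_logits[mask], y[mask])
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * T * T  # standard T^2 scaling
    loss = alpha * ce + (1 - alpha) * kd
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage on random data: a 4-node ring graph with self-loops.
n, d, c = 4, 8, 3
adj = torch.eye(n) + torch.roll(torch.eye(n), 1, 0) + torch.roll(torch.eye(n), -1, 0)
deg_inv_sqrt = adj.sum(1).rsqrt()
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
x, y = torch.randn(n, d), torch.randint(0, c, (n,))
mask = torch.tensor([True, True, False, False])   # labeled nodes
teacher, student = GCN(d, 64, c), GCN(d, 8, c)    # student is much smaller
opt = torch.optim.Adam(student.parameters(), lr=0.01)
print(distill_step(student, teacher, x, adj_norm, y, mask, opt))
```

A fairness-aware variant such as RELIANT would add further terms to this loss so the student does not inherit the teacher's bias, but the distillation skeleton above is the shared starting point.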

Related research

10/24/2022
Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks
We study a new paradigm of knowledge transfer that aims at encoding grap...

11/04/2020
On Self-Distilling Graph Neural Network
Recently, the teacher-student knowledge distillation framework has demon...

03/04/2021
Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework
Semi-supervised learning on graphs is an important problem in the machin...

10/12/2022
Boosting Graph Neural Networks via Adaptive Knowledge Distillation
Graph neural networks (GNNs) have shown remarkable performance on divers...

05/16/2021
Graph-Free Knowledge Distillation for Graph Neural Networks
Knowledge distillation (KD) transfers knowledge from a teacher network t...

11/09/2021
On Representation Knowledge Distillation for Graph Neural Networks
Knowledge distillation is a promising learning paradigm for boosting the...

12/24/2022
T2-GNN: Graph Neural Networks for Graphs with Incomplete Features and Structure via Teacher-Student Distillation
Graph Neural Networks (GNNs) have been a prevailing technique for tackli...
