Towards Sparsification of Graph Neural Networks

09/11/2022
by Hongwu Peng, et al.

As real-world graphs grow larger, GNN models with billions of parameters are being deployed. The high parameter count of such models makes training and inference on graphs expensive and challenging. To reduce the computational and memory costs of GNNs, optimization methods such as pruning redundant nodes and edges in the input graph have been commonly adopted. However, model compression, which directly targets the sparsification of the model's weight layers, has mostly been limited to traditional Deep Neural Networks (DNNs) used for tasks such as image classification and object detection. In this paper, we apply two state-of-the-art model compression methods to the sparsification of weight layers in GNNs: (1) train and prune and (2) sparse training. We evaluate and compare the efficiency of both methods in terms of accuracy, training sparsity, and training FLOPs on real-world graphs. Our experimental results show that on the ia-email, wiki-talk, and stackoverflow datasets for link prediction, sparse training achieves accuracy comparable to train and prune at much lower training FLOPs. On the brain dataset for node classification, sparse training uses fewer FLOPs (less than 1/7 of the FLOPs of train and prune) and preserves much better accuracy under extreme model sparsity.
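To make the two strategies concrete, here is a minimal PyTorch sketch (not the authors' code) of both applied to a single weight matrix: magnitude pruning stands in for train and prune, and a RigL-style prune-and-regrow update stands in for sparse training. All function and variable names are hypothetical illustrations.

import torch
import torch.nn as nn

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Train and prune: after dense training, zero out the
    smallest-magnitude weights to reach the target sparsity.
    Returns a boolean mask of the weights that are kept."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor,
                     grad: torch.Tensor, update_frac: float = 0.1) -> torch.Tensor:
    """Sparse training: the layer stays sparse throughout training;
    periodically drop the smallest active weights and regrow the same
    number of connections where the gradient magnitude is largest
    (a RigL-style update, used here as one instance of sparse training)."""
    n_update = int(update_frac * mask.sum())
    if n_update == 0:
        return mask
    # Drop: smallest-magnitude weights among currently active connections.
    active_mag = torch.where(mask, weight.abs(),
                             torch.full_like(weight, float("inf")))
    drop_idx = active_mag.flatten().topk(n_update, largest=False).indices
    # Grow: largest-gradient positions among currently inactive connections.
    inactive_grad = torch.where(mask, torch.zeros_like(grad), grad.abs())
    grow_idx = inactive_grad.flatten().topk(n_update).indices
    new_mask = mask.flatten().clone()
    new_mask[drop_idx] = False
    new_mask[grow_idx] = True
    return new_mask.view_as(mask)

# Usage sketch: keep one mask per weight layer and reapply it after
# every optimizer step so pruned weights stay zero.
layer = nn.Linear(64, 64, bias=False)  # stand-in for a GNN weight matrix
mask = magnitude_prune(layer.weight.data, sparsity=0.9)
layer.weight.data[~mask] = 0.0

The key cost difference the abstract measures follows from this structure: train and prune pays for dense forward/backward passes before pruning, while sparse training keeps the mask active from the start, so its training FLOPs scale with the retained (nonzero) weights.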

Related research

11/10/2020 - Two-stage Training of Graph Neural Networks for Graph Classification
  Graph Neural Networks (GNNs) have received massive attention in the fiel...

08/15/2022 - ROLAND: Graph Learning Framework for Dynamic Graphs
  Graph Neural Networks (GNNs) have been successfully applied to many real...

08/06/2022 - Triple Sparsification of Graph Convolutional Networks without Sacrificing the Accuracy
  Graph Neural Networks (GNNs) are widely used to perform different machin...

11/23/2017 - Deep Expander Networks: Efficient Deep Networks from Graph Theory
  Deep Neural Networks, while being unreasonably effective for several vis...

12/02/2019 - Sparse Graph Attention Networks
  Graph Neural Networks (GNNs) have proved to be an effective representati...

06/07/2023 - Fast and Effective GNN Training with Linearized Random Spanning Trees
  We present a new effective and scalable framework for training GNNs in s...
