Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks

07/18/2022
by   Chuang Liu, et al.

Graph Neural Networks (GNNs) tend to suffer from high computation costs due to the exponentially increasing scale of graph data and the growing number of model parameters, which restricts their utility in practical applications. To this end, some recent works focus on sparsifying GNNs with the lottery ticket hypothesis (LTH) to reduce inference costs while maintaining performance levels. However, LTH-based methods suffer from two major drawbacks: 1) they require exhaustive and iterative training of dense models, resulting in an extremely large training computation cost, and 2) they only trim graph structures and model parameters but ignore the node feature dimension, where significant redundancy exists. To overcome these limitations, we propose a comprehensive graph gradual pruning framework termed CGP. It adopts a during-training graph pruning paradigm that dynamically prunes GNNs within a single training process. Unlike LTH-based methods, CGP requires no re-training, which significantly reduces the computation cost. Furthermore, we design a co-sparsifying strategy to comprehensively trim all three core elements of GNNs: graph structures, node features, and model parameters. To refine the pruning operation, we also introduce a regrowth process into CGP to re-establish pruned but important connections. The proposed CGP is evaluated on the node classification task across 6 GNN architectures, including shallow models (GCN and GAT), shallow-but-deep-propagation models (SGC and APPNP), and deep models (GCNII and ResGCN), on a total of 14 real-world graph datasets, including large-scale graphs from the challenging Open Graph Benchmark. Experiments show that CGP greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
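The gradual prune-and-regrow idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the cubic sparsity schedule, the magnitude-based importance scores, and the `prune_and_regrow` helper are all illustrative assumptions. In a full co-sparsifying setup, the same mask update would be applied during training to each of the three trimmed elements (the graph structure, the node-feature dimensions, and the layer weights).

```python
import torch


def target_sparsity(step, total_steps, final_sparsity):
    # Gradual sparsity schedule (a cubic ramp-up is a common choice for
    # gradual magnitude pruning; the exact schedule used by CGP may differ).
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)


def prune_and_regrow(scores, mask, sparsity, regrow_frac=0.1):
    """One during-training mask update: prune, then regrow (illustrative).

    scores       importance scores (here: weight magnitudes), same shape as mask
    mask         current binary mask (1 = kept, 0 = pruned)
    sparsity     target fraction of pruned entries after this step
    regrow_frac  fraction of the pruned budget that may be reactivated
    """
    numel = mask.numel()
    n_keep = numel - int(sparsity * numel)

    flat_scores = scores.flatten()
    flat_mask = mask.flatten()

    # Prune: among the currently active entries, keep the highest-scoring ones.
    active_scores = torch.where(flat_mask.bool(), flat_scores,
                                torch.full_like(flat_scores, float("-inf")))
    n_active_keep = min(n_keep, int(flat_mask.sum().item()))
    new_mask = torch.zeros_like(flat_mask)
    new_mask[torch.topk(active_scores, n_active_keep).indices] = 1.0

    # Regrow: reactivate a small number of pruned entries with the highest
    # scores, so connections that were pruned but look important can return.
    # Note that regrowth keeps the mask slightly denser than the nominal target.
    pruned_idx = (new_mask == 0).nonzero(as_tuple=True)[0]
    n_regrow = min(int(regrow_frac * (numel - n_keep)), pruned_idx.numel())
    if n_regrow > 0:
        best = torch.topk(flat_scores[pruned_idx], n_regrow).indices
        new_mask[pruned_idx[best]] = 1.0

    return new_mask.view_as(mask)


# Toy usage: sparsify one GCN-style weight matrix over 1000 training steps.
# The same update could also be applied to a graph-structure (edge) mask and
# a node-feature-dimension mask to mimic the co-sparsifying strategy.
w = torch.randn(1433, 64)          # e.g., a first-layer weight on a Cora-sized input
w_mask = torch.ones_like(w)
for step in range(0, 1001, 100):   # update the mask every 100 steps
    s = target_sparsity(step, total_steps=1000, final_sparsity=0.9)
    w_mask = prune_and_regrow(w.abs(), w_mask, s)
print(f"final weight density: {w_mask.mean().item():.3f}")
```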

