RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations

10/19/2022
by Zirui Liu, et al.

Training graph neural networks (GNNs) is extremely time-consuming because sparse graph-based operations are hard to accelerate with hardware. Prior art explores trading off computational precision to reduce time complexity via sampling-based approximation. Based on this idea, previous works successfully accelerate dense-matrix operations (e.g., convolution and linear layers) with negligible accuracy drop. However, unlike dense matrices, sparse matrices are stored in irregular data formats in which each row/column may have a different number of non-zero entries. Thus, compared to their dense counterparts, approximating sparse operations poses two unique challenges: (1) we cannot directly control the efficiency of an approximated sparse operation, since computation is only executed on non-zero entries; (2) sub-sampling sparse matrices is much less efficient due to the irregular data format. To address these issues, our key idea is to control the accuracy-efficiency trade-off by optimizing the allocation of computational resources layer-wise and epoch-wise. Specifically, for the first challenge, we customize the computational resources assigned to different sparse operations while keeping the total resource usage below a given budget. For the second challenge, we cache previously sampled sparse matrices to reduce the epoch-wise sampling overhead. Finally, we propose a switching mechanism to improve the generalization of GNNs trained with approximated operations. Putting these together, we propose Randomized Sparse Computation (RSC), which for the first time demonstrates the potential of training GNNs with approximated operations. In practice, RSC achieves up to an 11.6× speedup for a single sparse operation and a 1.6× end-to-end wall-clock speedup with negligible accuracy drop.
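As a concrete illustration of the two ideas in the abstract (sampling-based approximation of a sparse product and epoch-wise caching of the sampled operand), here is a minimal NumPy/SciPy sketch. It uses classic norm-proportional column sampling as a stand-in for RSC's sampling scheme; the names sample_columns, approx_spmm, and CachedSampler, as well as the refresh interval, are hypothetical, and the paper's resource-allocation and switching mechanisms are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp

def sample_columns(A, H, k, rng=np.random.default_rng(0)):
    """Sample k columns of sparse A (and matching rows of dense H) with
    probability p_i proportional to ||A[:, i]|| * ||H[i, :]||, rescaled so
    that the sampled product is an unbiased estimate of A @ H."""
    col_norms = np.sqrt(np.asarray(A.multiply(A).sum(axis=0)).ravel())
    row_norms = np.linalg.norm(H, axis=1)
    p = col_norms * row_norms
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=k, replace=True, p=p)
    scale = 1.0 / (k * p[idx])                        # unbiasedness rescaling
    A_s = A.tocsc()[:, idx].multiply(scale).tocsr()   # sampled, rescaled columns
    return A_s, idx

def approx_spmm(A, H, k):
    """One-shot approximation of the sparse product A @ H."""
    A_s, idx = sample_columns(A, H, k)
    return A_s @ H[idx, :]

class CachedSampler:
    """Reuse the sampled sparse matrix across epochs so the slow, irregular
    sub-sampling of the sparse operand is amortized (epoch-wise caching)."""
    def __init__(self, refresh_every=5):
        self.refresh_every = refresh_every
        self.cache = {}   # layer_id -> (A_s, idx)

    def matmul(self, layer_id, epoch, A, H, k):
        if layer_id not in self.cache or epoch % self.refresh_every == 0:
            self.cache[layer_id] = sample_columns(A, H, k)
        A_s, idx = self.cache[layer_id]
        return A_s @ H[idx, :]
```

In this sketch, a GNN layer would call sampler.matmul(layer_id, epoch, A, H, k) in place of the exact A @ H, re-sampling only every few epochs; the per-layer budget k is the knob that trades accuracy for efficiency.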


