GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks

07/07/2020
by Guyue Huang, et al.

Graph Neural Networks (GNNs) have achieved significant improvements in various domains. Sparse Matrix-Matrix multiplication (SpMM), which multiplies a sparse matrix by a dense matrix, is a fundamental operator in GNNs. Accelerating SpMM on parallel hardware like GPUs faces challenges from two perspectives. From the GNN application perspective, compatibility must be considered: general GNN algorithms require SpMM-like operations (e.g., pooling) beyond plain multiplication, which are not supported by current high-performance GPU libraries (e.g., Nvidia cuSPARSE), and the sophisticated preprocessing in previous implementations leads to heavy data-format conversion overheads in GNN frameworks. From the GPU hardware perspective, optimizations developed for SpMV (Sparse Matrix-Vector multiplication) do not transfer well to SpMM. SpMM exposes column-wise parallelism in the dense output matrix, but a straightforward generalization from SpMV leads to inefficient, uncoalesced access to the sparse matrix in global memory. Moreover, sparse row data can be reused among GPU threads, an opportunity that SpMM designs inherited from SpMV fail to exploit. To tackle these challenges, we propose GE-SpMM. GE-SpMM performs SpMM-like operations on sparse matrices stored in the most common Compressed Sparse Row (CSR) format, so it can be embedded in GNN frameworks with no preprocessing overhead and can support general GNN algorithms. We introduce Coalesced Row Caching to process columns in parallel while ensuring coalesced access to the sparse matrix data, and Coarse-grained Warp Merging to reduce redundant data loading among GPU warps. Experiments on real-world graph datasets show that GE-SpMM achieves up to 1.41X speedup over Nvidia cuSPARSE and up to 1.81X over GraphBLAST. We also embed GE-SpMM in GNN frameworks and get up to 3.67X speedup on popular GNN models like GCN and GraphSAGE.
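To make the Coalesced Row Caching idea concrete, the CUDA sketch below shows one possible realization under simple assumptions: each warp handles one sparse row and 32 consecutive output columns, stages a tile of the row's nonzeros in shared memory with coalesced loads, and then reuses the cached data across all columns owned by the warp. This is an illustrative sketch, not the GE-SpMM source; the kernel name, parameter names, and launch configuration are all assumptions.

```cuda
// Sketch of CSR SpMM with coalesced row caching (assumed names, not the
// GE-SpMM source). A is m x k in CSR, B is k x n row-major, C = A * B is
// m x n row-major. Launch with 32 threads per block (one warp) and a grid
// of (m, ceil(n / 32)) blocks, e.g.:
//   spmm_crc_kernel<<<dim3(m, (n + 31) / 32), 32>>>(m, n, rowptr, colidx,
//                                                   val, B, C);
__global__ void spmm_crc_kernel(int m, int n,
                                const int *rowptr, const int *colidx,
                                const float *val,
                                const float *B, float *C) {
    __shared__ int   sh_col[32];   // cached column indices of one tile
    __shared__ float sh_val[32];   // cached values of one tile

    int lane = threadIdx.x;              // lane within the single warp
    int row  = blockIdx.x;               // sparse row handled by this warp
    int col  = blockIdx.y * 32 + lane;   // output column owned by this lane

    int start = rowptr[row], end = rowptr[row + 1];
    float acc = 0.0f;

    // Walk the row's nonzeros in tiles of 32 so that loads from the CSR
    // arrays are coalesced across the warp -- the access pattern a naive
    // per-thread generalization of SpMV misses.
    for (int tile = start; tile < end; tile += 32) {
        if (tile + lane < end) {
            sh_col[lane] = colidx[tile + lane];
            sh_val[lane] = val[tile + lane];
        }
        __syncwarp();

        // Every lane reuses the same cached nonzeros for its own column
        // of B, so each nonzero is fetched from global memory only once
        // per warp instead of once per thread.
        int len = min(32, end - tile);
        if (col < n) {
            for (int j = 0; j < len; ++j)
                acc += sh_val[j] * B[sh_col[j] * n + col];
        }
        __syncwarp();   // keep the cache intact until all lanes are done
    }
    if (col < n)
        C[row * n + col] = acc;
}
```

Coarse-grained Warp Merging would extend a sketch like this by having each thread accumulate results for several output columns per cached tile, amortizing the shared-memory staging over more work per warp.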
