Operation-Level Performance Benchmarking of Graph Neural Networks for Scientific Applications

07/20/2022
by Ryien Hosseini, et al.

As Graph Neural Networks (GNNs) increase in popularity for scientific machine learning, their training and inference efficiency is becoming increasingly critical. Additionally, the deep learning field as a whole is trending towards wider and deeper networks and ever-increasing data sizes, to the point where hardware bottlenecks are often encountered. Emerging specialty hardware platforms provide an exciting solution to this problem. In this paper, we systematically profile and select low-level operations pertinent to GNNs for scientific computing, as implemented in the PyTorch Geometric software framework. These are then rigorously benchmarked on NVIDIA A100 GPUs across several combinations of input values, including tensor sparsity. We then analyze these results for each operation. At a high level, we conclude that on NVIDIA systems: (1) confounding bottlenecks such as memory inefficiency often dominate runtime costs more than data sparsity alone, (2) native PyTorch operations are often as competitive as, or more competitive than, their PyTorch Geometric equivalents, especially at low to moderate levels of input sparsity, and (3) many operations central to state-of-the-art GNN architectures have little to no optimization for sparsity. We hope that these results serve as a baseline for those developing these operations on specialized hardware, and that our analysis facilitates future software- and hardware-based optimizations of these operations and, in turn, of scalable GNN performance as a whole.
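As a minimal sketch of the kind of operation-level comparison the abstract describes, the snippet below times a dense matrix multiply against its sparse (COO) counterpart at a fixed input density. This is not the authors' benchmarking harness; it is an illustrative CPU example using standard PyTorch APIs (`to_sparse`, `torch.sparse.mm`), and the matrix sizes and density are arbitrary. On GPU, one would additionally call `torch.cuda.synchronize()` around the timed region.

```python
import time
import torch

def benchmark(fn, warmup=3, iters=10):
    """Average wall-clock time of fn() over `iters` runs after warmup."""
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) / iters

n, density = 512, 0.1  # illustrative sizes, not from the paper
# Build a matrix with ~10% nonzero entries, plus a dense feature block.
dense = (torch.rand(n, n) < density).float() * torch.randn(n, n)
sparse = dense.to_sparse()  # COO representation of the same matrix
x = torch.randn(n, 64)

t_dense = benchmark(lambda: dense @ x)
t_sparse = benchmark(lambda: torch.sparse.mm(sparse, x))
print(f"dense matmul:  {t_dense * 1e6:.1f} us/iter")
print(f"sparse matmul: {t_sparse * 1e6:.1f} us/iter")
```

Sweeping `density` in such a harness is one way to observe the paper's point (2): at low to moderate sparsity, the dense kernel can match or beat the sparse one despite performing more arithmetic.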

