HiHGNN: Accelerating HGNNs through Parallelism and Data Reusability Exploitation

07/24/2023
by   Runzhen Xue, et al.

Heterogeneous graph neural networks (HGNNs) have emerged as powerful algorithms for processing heterogeneous graphs (HetGs) and are widely used in many critical fields. To capture both structural and semantic information in HetGs, HGNNs first aggregate the neighboring feature vectors for each vertex in each semantic graph and then fuse the aggregated results across all semantic graphs for each vertex. Unfortunately, existing graph neural network accelerators are ill-suited to accelerating HGNNs, because they fail to efficiently handle the specific execution patterns and to exploit the high-degree parallelism and data reusability inside and across the processing of semantic graphs in HGNNs. In this work, we first quantitatively characterize a set of representative HGNN models on GPU to disclose the execution bound of each stage, the inter-semantic-graph parallelism, and the inter-semantic-graph data reusability in HGNNs. Guided by our findings, we propose a high-performance HGNN accelerator, HiHGNN, to alleviate the execution bounds and exploit the newfound parallelism and data reusability in HGNNs. Specifically, we first propose a bound-aware stage-fusion methodology tailored to HGNN acceleration, which fuses and pipelines the execution stages according to their execution bounds. Second, we design an independency-aware parallel execution scheme to exploit the inter-semantic-graph parallelism. Finally, we present a similarity-aware execution scheduling to exploit the inter-semantic-graph data reusability. Compared to the state-of-the-art software framework running on an NVIDIA T4 GPU and an A100 GPU, HiHGNN respectively achieves average speedups of 41.5× and 8.6× as well as 106× and 73× higher energy efficiency, with a quarter of the memory bandwidth of the A100 GPU.
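The two-stage execution pattern described in the abstract can be sketched as follows. This is a minimal illustration, not HiHGNN's actual design: it assumes mean-based neighbor aggregation and element-wise-sum fusion (real HGNN models typically use learned attention for both), and all names are illustrative.

```python
# Minimal sketch of the two HGNN stages: (1) per-vertex neighbor
# aggregation within each semantic graph, (2) per-vertex fusion of the
# aggregated results across semantic graphs. Mean aggregation and sum
# fusion stand in for the learned operators used by real HGNN models.

def aggregate(semantic_graph, features):
    """Stage 1: aggregate neighbor feature vectors for each vertex
    within one semantic graph (here: element-wise mean)."""
    out = {}
    for v, neighbors in semantic_graph.items():
        vecs = [features[u] for u in neighbors] or [features[v]]
        out[v] = [sum(col) / len(vecs) for col in zip(*vecs)]
    return out

def fuse(per_semantic_results):
    """Stage 2: fuse each vertex's aggregated vectors across all
    semantic graphs (here: element-wise sum)."""
    fused = {}
    for result in per_semantic_results:
        for v, vec in result.items():
            acc = fused.setdefault(v, [0.0] * len(vec))
            fused[v] = [a + b for a, b in zip(acc, vec)]
    return fused
```

Because the semantic graphs are processed independently in stage 1, the calls to `aggregate` have no mutual data dependence; this is exactly the inter-semantic-graph parallelism that HiHGNN's independency-aware parallel execution exploits.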


research 01/07/2020
HyGCN: A GCN Accelerator with Hybrid Architecture
In this work, we first characterize the hybrid execution patterns of GCN...

research 07/15/2022
Multi-node Acceleration for Large-scale GCNs
Limited by the memory capacity and compute power, single-node graph convo...

research 07/04/2023
GHOST: A Graph Neural Network Accelerator using Silicon Photonics
Graph neural networks (GNNs) have emerged as a powerful approach for mod...

research 04/26/2023
Acceleration for Timing-Aware Gate-Level Logic Simulation with One-Pass GPU Parallelism
Witnessing the advancing scale and complexity of chip design and benefit...

research 06/26/2021
GSmart: An Efficient SPARQL Query Engine Using Sparse Matrix Algebra – Full Version
Efficient execution of SPARQL queries over large RDF datasets is a topic...

research 09/23/2022
Faith: An Efficient Framework for Transformer Verification on GPUs
Transformer verification draws increasing attention in machine learning ...

research 06/03/2018
An Efficient Graph Accelerator with Parallel Data Conflict Management
Graph-specific computing with the support of dedicated accelerator has g...
