Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design

08/16/2023
by Shuwen Lu, et al.

Graph neural networks (GNNs) have shown significant accuracy improvements in a variety of graph learning domains, sparking considerable research interest. To translate these accuracy improvements into practical applications, it is essential to develop high-performance and efficient hardware acceleration for GNN models. However, designing GNN accelerators faces two fundamental challenges: the high bandwidth requirement of GNN models and the diversity of GNN models. Previous works have addressed the first challenge by using more expensive memory interfaces to achieve higher bandwidth. For the second challenge, existing works either support specific GNN models or have generic designs with poor hardware utilization. In this work, we tackle both challenges simultaneously. First, we identify a new type of partition-level operator fusion, which we utilize to internally reduce the high bandwidth requirement of GNNs. Next, we introduce partition-level multi-threading to schedule the concurrent processing of graph partitions, utilizing different hardware resources. To further reduce the extra on-chip memory required by multi-threading, we propose fine-grained graph partitioning to generate denser graph partitions. Importantly, these three methods make no assumptions about the targeted GNN models, addressing the challenge of model variety. We implement these methods in a framework called SwitchBlade, consisting of a compiler, a graph partitioner, and a hardware accelerator. Our evaluation demonstrates that SwitchBlade achieves an average speedup of 1.85× and energy savings of 19.03× compared to the NVIDIA V100 GPU. Additionally, SwitchBlade delivers performance comparable to state-of-the-art specialized accelerators.
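To make the first idea concrete, here is a minimal sketch of partition-level operator fusion, assuming a GCN-style layer (neighbor aggregation followed by a dense update) and simple 1D row partitioning. All names (`layer_unfused`, `layer_fused`, `row_blocks`) are illustrative, not SwitchBlade's actual API; the point is only the loop order: operator-at-a-time sweeps every partition per operator and spills the full intermediate, while partition-at-a-time finishes the whole layer for one partition before touching the next.

```python
import numpy as np

# Toy graph: adjacency matrix A, node features X, layer weight W.
rng = np.random.default_rng(0)
N, d_in, d_out, P = 12, 4, 3, 3
A = (rng.random((N, N)) < 0.3).astype(float)  # dense here; sparse in practice
X = rng.standard_normal((N, d_in))
W = rng.standard_normal((d_in, d_out))

row_blocks = np.array_split(A, P, axis=0)     # 1D row partitions of the graph

def layer_unfused(blocks, X, W):
    # Operator-at-a-time: aggregation over *all* partitions completes first,
    # so the full intermediate `agg` (N x d_in) round-trips through DRAM.
    agg = np.concatenate([blk @ X for blk in blocks])
    return np.maximum(agg @ W, 0.0)           # dense update + ReLU

def layer_fused(blocks, X, W):
    # Partition-level fusion: run the entire layer on one partition before
    # moving to the next, so each intermediate stays in on-chip buffers.
    return np.concatenate([np.maximum((blk @ X) @ W, 0.0) for blk in blocks])

assert np.allclose(layer_unfused(row_blocks, X, W),
                   layer_fused(row_blocks, X, W))
```

In this picture, partition-level multi-threading corresponds to overlapping the fetch of partition p+1 with the compute of partition p on different hardware resources, and fine-grained partitioning shrinks each `blk` so the per-partition working set fits in smaller on-chip buffers. Both are scheduling and layout choices rather than model-specific transformations, which is why the sketch stays agnostic to the particular GNN being run.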
