EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks

08/31/2019
by Lei He, et al.

Inspired by the great success of convolutional neural networks on grid-structured data such as images and videos, the graph neural network (GNN) has emerged as a powerful approach to processing non-Euclidean data and has proved effective in application domains such as social networks, e-commerce, and knowledge graphs. However, the graphs maintained by IT companies can be extremely large and sparse, so applying GNNs to them demands substantial compute power and memory bandwidth, which incurs considerable energy and resource costs on general-purpose CPUs and GPUs. Moreover, GNNs operating on irregular graphs map poorly to conventional neural network accelerators and graph processors, neither of which is designed to support the computation abstraction of GNNs. This work presents EnGN, a specialized accelerator architecture for high-throughput and energy-efficient processing of large-scale graph neural networks. EnGN is designed to accelerate the three key stages of GNN propagation, which are abstracted as computing patterns common to typical GNNs. To support all three stages simultaneously, we propose the ring-edge-reduce (RER) dataflow, which tames the poor locality of sparsely and randomly connected vertices, and an RER PE array that implements this dataflow. In addition, we employ a graph tiling strategy to fit large graphs into EnGN and to make the best use of its hierarchical on-chip buffers through adaptive computation reordering and tile scheduling. Experiments on representative GNN models with realistic input graphs show that EnGN achieves 303.45x and 4.44x speedup while consuming 1370.52x and 93.73x less energy on average than CPU and GPU baselines running state-of-the-art software frameworks, respectively.
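To make the "three key stages of GNN propagation" mentioned above concrete, the sketch below shows the per-layer abstraction commonly used in GNN accelerator work (feature extraction, aggregation, and update), expressed as a toy dense computation in Python/NumPy. This is an illustrative assumption, not the EnGN hardware or its dataflow: the function name, shapes, and the use of a dense adjacency matrix are chosen only for readability.

```python
# Minimal sketch (assumption, not the paper's implementation): one GNN
# propagation step split into the three stages that accelerators such as
# EnGN target -- feature extraction, aggregation, and update.
import numpy as np

def gnn_layer(adj, features, weight):
    """One propagation step over a graph given as a dense adjacency matrix."""
    # Stage 1: feature extraction -- project each vertex's feature vector
    # into the output dimension (a per-vertex dense matrix multiply).
    extracted = features @ weight                        # shape [V, D_out]

    # Stage 2: aggregate -- combine the projected features of each vertex's
    # neighbors with a commutative reduction (a sum here).
    aggregated = adj @ extracted                         # shape [V, D_out]

    # Stage 3: update -- apply a vertex-wise non-linearity to produce the
    # next layer's vertex features.
    return np.maximum(aggregated, 0.0)                   # ReLU

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, D_in, D_out = 5, 8, 4
    adj = (rng.random((V, V)) < 0.4).astype(np.float32)  # toy random graph
    x = rng.standard_normal((V, D_in)).astype(np.float32)
    w = rng.standard_normal((D_in, D_out)).astype(np.float32)
    print(gnn_layer(adj, x, w).shape)                    # (5, 4)
```

In this view, the aggregation stage is where the irregular, sparse vertex connectivity causes the poor locality that the paper's RER dataflow and graph tiling strategy are designed to address.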


Related research

07/04/2023
GHOST: A Graph Neural Network Accelerator using Silicon Photonics
Graph neural networks (GNNs) have emerged as a powerful approach for mod...

01/23/2022
Hardware/Software Co-Programmable Framework for Computational SSDs to Accelerate Deep Learning Service on Large-Scale Graphs
Graph neural networks (GNNs) process large-scale graphs consisting of a ...

08/16/2023
Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
Graph neural networks (GNNs) have shown significant accuracy improvement...

10/19/2018
Towards Efficient Large-Scale Graph Neural Network Computing
Recent deep learning models have moved beyond low-dimensional regular gr...

05/04/2021
VersaGNN: a Versatile accelerator for Graph neural networks
Graph Neural Network (GNN) is a promising approach for analyzing graph-s...

02/23/2022
Alleviating Datapath Conflicts and Design Centralization in Graph Analytics Acceleration
Previous graph analytics accelerators have achieved great improvement on...

04/26/2023
SCV-GNN: Sparse Compressed Vector-based Graph Neural Network Aggregation
Graph neural networks (GNNs) have emerged as a powerful tool to process ...
