SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators

01/25/2023
by Mingi Yoo et al.

Graph convolutional networks (GCNs) are becoming increasingly popular because they overcome the limited applicability of earlier neural networks. A GCN takes an arbitrarily structured graph as input and executes a series of layers that exploit the graph's structure to compute their output features. One recent trend in GCNs is the use of deep network architectures: whereas traditional GCNs span only around two to five layers, modern GCNs incorporate tens to hundreds of layers with the help of residual connections. In such deep GCNs, we identify an important characteristic: they exhibit very high intermediate feature sparsity. We observe that with deep layers and residual connections, the number of zeros in the intermediate features increases sharply, revealing an opportunity for accelerators to exploit in GCN executions that was previously not present. In this paper, we propose SGCN, a fast and energy-efficient GCN accelerator that fully exploits the sparse intermediate features of modern GCNs. SGCN introduces several techniques to achieve significantly higher performance and energy efficiency than existing accelerators. First, SGCN employs a GCN-friendly feature compression format, focusing on reducing the off-chip memory traffic that is often the bottleneck of GCN executions. Second, we propose microarchitectures for seamlessly handling the compressed feature format. Third, to better capture locality in the presence of varying sparsity, SGCN employs sparsity-aware cooperation. Sparsity-aware cooperation creates an access pattern with multiple reuse windows, so that the cache can capture working sets of diverse sizes and thus adapt to varying levels of sparsity. We show that SGCN achieves a 1.71x speedup and 43.9% higher energy efficiency compared to existing accelerators.
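To make the feature-compression idea concrete, below is a minimal NumPy sketch of a generic CSR-style row compression applied to a sparse intermediate feature matrix. This is an illustration only, not SGCN's actual on-chip format: the matrix dimensions, the 80% zero fraction, and the index widths are assumptions chosen for the example.

```python
# Illustrative sketch only: a generic CSR-style compression of a sparse
# intermediate feature matrix, NOT SGCN's actual feature format. The
# dimensions, the 80% sparsity, and the index widths are assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, feat_dim = 4096, 128

# Stand-in for an intermediate feature matrix after ReLU in a deep
# residual GCN: roughly 80% of the entries are zero.
features = rng.standard_normal((num_nodes, feat_dim)).astype(np.float32)
features[rng.random(features.shape) < 0.8] = 0.0

def compress_rows(mat):
    """Row-wise compressed-sparse (CSR-like) form: nonzero values,
    their column indices, and per-row offsets into the value array."""
    values, col_idx, row_ptr = [], [], [0]
    for row in mat:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        col_idx.extend(nz)
        row_ptr.append(len(values))
    return (np.asarray(values, dtype=np.float32),
            np.asarray(col_idx, dtype=np.uint8),   # feat_dim <= 256
            np.asarray(row_ptr, dtype=np.uint32))

values, col_idx, row_ptr = compress_rows(features)

dense_bytes = features.nbytes
packed_bytes = values.nbytes + col_idx.nbytes + row_ptr.nbytes
print(f"dense features:      {dense_bytes:>9,} bytes")
print(f"compressed features: {packed_bytes:>9,} bytes "
      f"({dense_bytes / packed_bytes:.2f}x less to move off-chip)")
```

With roughly 80% zeros, each nonzero costs a 4-byte value plus a 1-byte column index, so the compressed form moves about a quarter of the dense traffic in this toy setting. The actual savings depend on the per-layer sparsity, which varies across layers; that variability is exactly what SGCN's sparsity-aware cooperation is designed to handle.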
