Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions

04/03/2021
by Shyam A. Tailor, et al.

Training and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency. In this work we present a new type of GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency, along with characteristics suited to accelerator implementation. Our proposal uses memory proportional to the number of vertices in the graph, in contrast to competing methods which require memory proportional to the number of edges; we find that our efficient approach actually achieves higher accuracy than competing approaches across 5 large and varied datasets against strong baselines. We achieve our results by using a novel adaptive filtering approach inspired by signal processing; it can be interpreted as enabling each vertex to have its own weight matrix, and is not related to attention. Following our focus on efficient hardware usage, we propose aggregator fusion, a technique to enable GNNs to significantly boost their representational power, with only a 19% increase in latency over standard sparse matrix multiplication. Code and pretrained models can be found at https://github.com/shyam196/egc.
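To make the adaptive filtering idea concrete, here is a minimal NumPy sketch of how a layer could give each vertex an effective weight matrix without storing one matrix per vertex: each node computes combination coefficients over a small set of shared basis weight matrices. The function and parameter names (`egc_style_layer`, `W_comb`, `b_comb`) are hypothetical illustrations, not the paper's actual implementation; the coefficient memory is O(N·B), proportional to the number of vertices rather than edges.

```python
import numpy as np

def egc_style_layer(X, A, bases, W_comb, b_comb):
    """Sketch of adaptive filtering via shared basis weight matrices.

    X:      (N, F_in)  node features
    A:      (N, N)     normalized adjacency (dense here for simplicity)
    bases:  (B, F_in, F_out) shared basis weight matrices
    W_comb: (F_in, B)  parameters producing per-vertex combination weights
    b_comb: (B,)       bias for the combination weights
    """
    # Per-vertex coefficients over the B shared bases: memory O(N * B)
    alpha = X @ W_comb + b_comb                    # (N, B)
    # Transform with each basis, then aggregate over neighbours
    H = np.stack([A @ (X @ Wb) for Wb in bases])   # (B, N, F_out)
    # Weight each basis output per vertex: equivalent to giving vertex i
    # the weight matrix  W_i = sum_b alpha[i, b] * bases[b]
    return np.einsum('nb,bnf->nf', alpha, H)       # (N, F_out)
```

With the identity adjacency, the output for vertex i reduces to `X[i] @ W_i` where `W_i = sum_b alpha[i, b] * bases[b]`, which is the "per-vertex weight matrix" interpretation; attention-style per-edge coefficients are never materialized.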

