LL-GNN: Low Latency Graph Neural Networks on FPGAs for Particle Detectors

09/28/2022
by Zhiqiang Que, et al.

This work proposes a novel reconfigurable architecture for low-latency Graph Neural Network (GNN) designs, specifically for particle detectors. Adopting FPGA-based GNNs for particle detectors is challenging because sub-microsecond latency is required to deploy the networks for online event selection in the Level-1 triggers of the CERN Large Hadron Collider experiments. This paper proposes a custom code transformation with strength reduction for the matrix multiplication operations in interaction-network-based GNNs with fully connected graphs, which avoids costly multiplications. It exploits sparsity patterns and binary adjacency matrices, and avoids irregular memory accesses, leading to reduced latency and improved hardware efficiency. In addition, we introduce an outer-product-based matrix multiplication approach, enhanced by the strength reduction, for low-latency design. A fusion step is also introduced to further reduce the design latency. Furthermore, a GNN-specific algorithm-hardware co-design approach is presented which not only finds designs with much lower latency but also finds high-accuracy designs under a given latency constraint. Finally, a customizable template for this low-latency GNN hardware architecture has been designed and open-sourced, enabling the generation of low-latency FPGA designs with efficient resource utilization using a high-level synthesis tool. Evaluation results show that our FPGA implementation is up to 24 times faster and consumes up to 45 times less power than a GPU implementation. Compared with our previous FPGA implementations, this work achieves 6.51 to 16.7 times lower latency. Moreover, the latency of our FPGA design is sufficiently low to enable deployment of GNNs in a sub-microsecond, real-time collider trigger system, enabling it to benefit from their improved accuracy.
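To illustrate the strength-reduction idea mentioned in the abstract, the C++ sketch below contrasts a naive multiply-accumulate aggregation against a reduced form for a fully connected graph with a binary adjacency matrix. This is a minimal illustration under assumed sizes, not the authors' released HLS template: the constants, type aliases, and function names (N_NODES, N_FEAT, aggregate_naive, aggregate_fully_connected) are hypothetical.

```cpp
#include <array>
#include <cstddef>

// Hypothetical sizes for illustration only (not taken from the paper).
constexpr std::size_t N_NODES = 8;   // particles (graph nodes) per event
constexpr std::size_t N_FEAT  = 4;   // features per node

using Feature      = float;
using NodeFeatures = std::array<std::array<Feature, N_FEAT>, N_NODES>;

// Naive aggregation: y_i = sum_j A[i][j] * x_j, where A is the binary
// adjacency matrix. Costs O(N^2 * F) multiply-accumulates and reads A
// with data-dependent (irregular) access.
void aggregate_naive(const bool A[N_NODES][N_NODES],
                     const NodeFeatures &x, NodeFeatures &y) {
    for (std::size_t i = 0; i < N_NODES; ++i)
        for (std::size_t f = 0; f < N_FEAT; ++f) {
            Feature acc = 0;
            for (std::size_t j = 0; j < N_NODES; ++j)
                acc += (A[i][j] ? x[j][f] : Feature(0));  // multiply by 0/1
            y[i][f] = acc;
        }
}

// Strength-reduced aggregation for the fully connected case: with
// A = J - I (all-ones minus identity), y_i = (sum_j x_j) - x_i, so the
// multiplications disappear and memory access becomes regular.
void aggregate_fully_connected(const NodeFeatures &x, NodeFeatures &y) {
    std::array<Feature, N_FEAT> col_sum{};          // computed once per event
    for (std::size_t j = 0; j < N_NODES; ++j)
        for (std::size_t f = 0; f < N_FEAT; ++f)
            col_sum[f] += x[j][f];
    for (std::size_t i = 0; i < N_NODES; ++i)
        for (std::size_t f = 0; f < N_FEAT; ++f)
            y[i][f] = col_sum[f] - x[i][f];         // additions only
}
```

The second form is the kind of transformation the abstract alludes to: by exploiting the known pattern of the binary adjacency (or sender/receiver) matrices, the quadratic multiply-accumulate loop collapses into a single shared reduction plus per-node corrections, which maps to a much shorter pipeline in an HLS design.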

