Accelerating Recurrent Neural Networks for Gravitational Wave Experiments

06/26/2021
by   Zhiqiang Que, et al.

This paper presents novel reconfigurable architectures for reducing the latency of recurrent neural networks (RNNs) used for detecting gravitational waves. Gravitational interferometers such as the LIGO detectors capture cosmic events such as black hole mergers, which happen at unknown times and have varying durations, producing time-series data. We have developed a new architecture capable of accelerating RNN inference for analyzing time-series data from the LIGO detectors. This architecture is based on optimizing the initiation intervals (II) in a multi-layer LSTM (Long Short-Term Memory) network by identifying appropriate reuse factors for each layer. A customizable template for this architecture has been designed, which enables the generation of low-latency FPGA designs with efficient resource utilization using high-level synthesis tools. The proposed approach has been evaluated using two LSTM models, targeting a ZYNQ 7045 FPGA and a U250 FPGA. Experimental results show that with balanced II, the number of DSPs can be reduced by up to 42% while achieving the same latency. Moreover, compared to other FPGA-based LSTM designs, our design achieves about 4.92 to 12.4 times lower latency.

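The mechanism behind the balanced IIs is the per-layer reuse factor: it sets how many times each DSP multiplier is time-multiplexed within one layer invocation, so it directly controls that layer's initiation interval and its DSP count. The sketch below is a minimal Vivado HLS illustration of this idiom, not the paper's actual RNN_HLS code; the names gate_matvec, data_t, REUSE, N_IN and N_OUT, and the fixed-point precision are assumptions introduced here for illustration.

#include <ap_fixed.h>

typedef ap_fixed<16, 6> data_t;   // illustrative fixed-point precision

// One gate pre-activation (matrix-vector product plus bias) of an LSTM layer.
// Pipelining with II = REUSE folds the N_OUT * N_IN multiplications onto
// roughly (N_OUT * N_IN) / REUSE multipliers, each reused REUSE times, so a
// larger REUSE trades a longer initiation interval for fewer DSPs.
template <int N_IN, int N_OUT, int REUSE>
void gate_matvec(const data_t w[N_OUT][N_IN], const data_t x[N_IN],
                 const data_t b[N_OUT], data_t y[N_OUT]) {
#pragma HLS PIPELINE II=REUSE
    // Cap the multiplier instances so the scheduler shares each one REUSE
    // times (Vivado HLS pragma spelling; Vitis HLS orders the clauses differently).
    const int MULT_LIMIT = (N_OUT * N_IN) / REUSE;
#pragma HLS ALLOCATION instances=mul limit=MULT_LIMIT operation

Rows:
    for (int o = 0; o < N_OUT; o++) {
        data_t acc = b[o];
    Cols:
        for (int i = 0; i < N_IN; i++) {
            acc += w[o][i] * x[i];
        }
        y[o] = acc;
    }
}

In a layer-pipelined multi-layer LSTM, the stage with the largest II bounds the throughput of the whole design, so the reuse factors of the lighter layers can be raised until their IIs match that bottleneck; this sheds DSPs in those layers without lengthening the end-to-end latency, which is the balancing the abstract quantifies as an up to 42% DSP reduction.
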
Related Research

07/01/2022

Ultra-low latency recurrent neural network inference on FPGAs for physics applications with hls4ml

Recurrent neural networks have been shown to be effective architectures ...
09/28/2022

LL-GNN: Low Latency Graph Neural Networks on FPGAs for Particle Detectors

This work proposes a novel reconfigurable architecture for low latency G...
11/17/2015

Recurrent Neural Networks Hardware Implementation on FPGA

Recurrent Neural Networks (RNNs) have the ability to retain memory and l...
10/15/2019

Jointly Discriminative and Generative Recurrent Neural Networks for Learning from fMRI

Recurrent neural networks (RNNs) were designed for dealing with time-ser...
05/31/2018

A Highly Parallel FPGA Implementation of Sparse Neural Network Training

We demonstrate an FPGA implementation of a parallel and reconfigurable a...
08/04/2021

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-temporal Sparsity

Long Short-Term Memory (LSTM) recurrent networks are frequently used for...
03/28/2018

An Efficient I/O Architecture for RAM-based Content-Addressable Memory on FPGA

Despite the impressive search rate of one key per clock cycle, the updat...

Code Repositories

RNN_HLS

LSTM template and example in Vivado HLS
