Spiking GATs: Learning Graph Attentions via Spiking Neural Network

09/05/2022
by Beibei Wang, et al.

Graph Attention Networks (GATs) have been intensively studied and are widely used in graph data learning tasks. Existing GATs generally adopt the self-attention mechanism to learn attention weights over graph edges, which requires expensive computation. Spiking Neural Networks (SNNs), in contrast, are known to compute inexpensively by transforming input signals into discrete spike trains, and they naturally return sparse outputs. Inspired by these merits of SNNs, in this work we propose a novel Graph Spiking Attention Network (GSAT) for graph data representation and learning. In contrast to the self-attention mechanism of existing GATs, the proposed GSAT adopts an SNN module architecture, which is clearly more energy-efficient. Moreover, GSAT naturally returns sparse attention coefficients and can therefore perform feature aggregation over a selective set of neighbors, which makes GSAT robust to graph edge noise. Experimental results on several datasets demonstrate the effectiveness, energy efficiency and robustness of the proposed GSAT model.
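To make the idea concrete, below is a minimal PyTorch sketch of how an SNN module could stand in for self-attention on edges: raw edge scores drive a leaky integrate-and-fire (LIF) neuron over a few time steps, and the resulting spike rates serve as naturally sparse attention coefficients. This is an illustration only; the class name, the LIF parameters (T, tau, v_th) and the score/aggregation details are assumptions, not the paper's exact GSAT formulation.

    import torch
    import torch.nn as nn

    class SpikingEdgeAttention(nn.Module):
        """Toy SNN-based edge attention (illustrative, not the paper's exact GSAT)."""
        def __init__(self, in_dim, out_dim, T=4, tau=2.0, v_th=1.0):
            super().__init__()
            self.proj = nn.Linear(in_dim, out_dim, bias=False)   # node feature projection
            self.score = nn.Linear(2 * out_dim, 1, bias=False)   # raw edge score
            self.T, self.tau, self.v_th = T, tau, v_th           # assumed LIF parameters

        def forward(self, x, edge_index):
            # x: [N, in_dim]; edge_index: [2, E] holding (source, target) node indices
            h = self.proj(x)
            src, dst = edge_index
            e = self.score(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)  # [E]

            # LIF dynamics: the membrane potential leaks toward the edge score
            # and emits a binary spike whenever it crosses the threshold.
            v = torch.zeros_like(e)
            spikes = torch.zeros_like(e)
            for _ in range(self.T):
                v = v + (e - v) / self.tau
                s = (v >= self.v_th).float()
                v = v * (1.0 - s)            # hard reset of fired neurons
                spikes = spikes + s
            alpha = spikes / self.T          # sparse coefficients in {0, 1/T, ..., 1}

            # Aggregate source features into targets, weighted by spiking attention.
            out = torch.zeros_like(h)
            out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
            return out

A quick shape check: x = torch.randn(5, 16) with edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]]) and SpikingEdgeAttention(16, 8) yields a [5, 8] output. Note that the Heaviside step used for spiking is non-differentiable, so training such a module end-to-end would require a surrogate gradient. The rate-coded alpha also illustrates the natural sparsity the abstract mentions: edges whose scores never drive the membrane past threshold receive exactly zero weight.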

Related research
09/29/2022

Spikformer: When Spiking Neural Network Meets Transformer

We consider two biologically plausible structures, the Spiking Neural Ne...
03/21/2023

Online Transformers with Spiking Neurons for Fast Prosthetic Hand Control

Transformers are state-of-the-art networks for most sequence processing ...
05/05/2022

Spiking Graph Convolutional Networks

Graph Convolutional Networks (GCNs) achieve an impressive performance du...
10/03/2022

Efficient Spiking Transformer Enabled By Partial Information

Spiking neural networks (SNNs) have received substantial attention in re...
05/25/2023

Optimization and Interpretability of Graph Attention Networks for Small Sparse Graph Structures in Automotive Applications

For automotive applications, the Graph Attention Network (GAT) is a prom...
02/27/2023

SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks

As the size of large language models continues to scale, so does the comp...
10/21/2021

FDGATII: Fast Dynamic Graph Attention with Initial Residual and Identity Mapping

While Graph Neural Networks have gained popularity in multiple domains, ...
