Spikformer: When Spiking Neural Network Meets Transformer

09/29/2022
by   Zhaokun Zhou, et al.

We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism. The former offers an energy-efficient, event-driven paradigm for deep learning, while the latter can capture feature dependencies, enabling Transformers to achieve strong performance. It is therefore promising to explore a marriage between them. In this paper, we leverage both the self-attention capability and the biological properties of SNNs, and propose a novel Spiking Self Attention (SSA) mechanism as well as a powerful framework, the Spiking Transformer (Spikformer). The SSA mechanism in Spikformer models sparse visual features using spike-form Query, Key, and Value without softmax. Since its computation is sparse and avoids multiplication, SSA is efficient and has low computational energy consumption. Spikformer with SSA outperforms state-of-the-art SNN frameworks in image classification on both neuromorphic and static datasets. Spikformer (66.3M parameters), comparable in size to SEW-ResNet-152 (60.2M, 69.26%), achieves 74.81% top-1 accuracy on ImageNet using 4 time steps, the state of the art among directly trained SNN models.
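The softmax-free attention is the core idea and is easy to sketch. Below is a minimal, illustrative PyTorch sketch of spike-form attention: Q, K, and V are binarized by a threshold neuron, so the two matrix products involve only 0/1 operands, and no softmax is needed because the resulting scores are already non-negative. The threshold-at-zero neuron, the rectangular surrogate gradient, the 0.125 scale, and all names here are assumptions for illustration only; the paper itself uses multi-step Leaky Integrate-and-Fire neurons and an extra time dimension.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside step to {0, 1} spikes, with a rectangular surrogate gradient."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold.
        return grad_out * (x.abs() < 0.5).float()


class SpikingSelfAttentionSketch(nn.Module):
    """Softmax-free self-attention over spike-form Q, K, V (illustrative sketch).

    Because Q, K, V contain only 0s and 1s, the two matrix products reduce
    to additions in hardware, which is where the energy saving comes from.
    """

    def __init__(self, dim, scale=0.125):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.scale = scale  # keeps the integer-valued scores bounded

    def forward(self, x):  # x: (batch, tokens, dim)
        q = SpikeFn.apply(self.q_proj(x))  # spike-form Query
        k = SpikeFn.apply(self.k_proj(x))  # spike-form Key
        v = SpikeFn.apply(self.v_proj(x))  # spike-form Value
        attn = q @ k.transpose(-2, -1)     # non-negative integers, no softmax needed
        return SpikeFn.apply(attn @ v * self.scale)


x = torch.randn(2, 16, 64)
out = SpikingSelfAttentionSketch(64)(x)
print(out.shape, out.unique())  # torch.Size([2, 16, 64]) tensor([0., 1.])
```

The design choice to drop softmax follows from the spike-form inputs: the attention map is a count of coincident spikes, so it is naturally non-negative and needs no normalization into a probability distribution, only a scale to control its magnitude.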

Related research

Spike-driven Transformer (07/04/2023)
Spiking Neural Networks (SNNs) provide an energy-efficient deep learning...

Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network (04/24/2023)
Spiking neural networks (SNNs) offer a promising energy-efficient altern...

Spiking GATs: Learning Graph Attentions via Spiking Neural Network (09/05/2022)
Graph Attention Networks (GATs) have been intensively studied and widely...

Efficient Spiking Transformer Enabled By Partial Information (10/03/2022)
Spiking neural networks (SNNs) have received substantial attention in re...

SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks (02/27/2023)
As the size of large language models continue to scale, so does the comp...

Auto-Spikformer: Spikformer Architecture Search (06/01/2023)
The integration of self-attention mechanisms into Spiking Neural Network...

Attention Approximates Sparse Distributed Memory (11/10/2021)
While Attention has come to be an important mechanism in deep learning, ...
