Online Transformers with Spiking Neurons for Fast Prosthetic Hand Control

03/21/2023
by Nathan Leroux, et al.

Transformers are state-of-the-art networks for most sequence processing tasks. However, the self-attention mechanism often used in Transformers requires large time windows for each computation step, which makes them less suitable for online signal processing than Recurrent Neural Networks (RNNs). In this paper, we replace the self-attention mechanism with a sliding-window attention mechanism. We show that this mechanism is more efficient for continuous signals with finite-range dependencies between input and target, and that it lets us process sequences element by element, making it compatible with online processing. We test our model on a finger-position regression dataset (NinaproDB8) with Surface Electromyographic (sEMG) signals measured on the forearm skin to estimate muscle activity. Our approach sets a new state of the art in accuracy on this dataset while requiring only very short time windows of 3.5 ms at each inference step. Moreover, we increase the sparsity of the network using Leaky Integrate-and-Fire (LIF) units, a bio-inspired neuron model that activates sparsely in time, only when its membrane potential crosses a threshold. We thus reduce the number of synaptic operations by up to a factor of 5.3 without loss of accuracy. Our results hold great promise for accurate and fast online processing of sEMG signals for smooth prosthetic hand control, and are a step towards the co-integration of Transformers and Spiking Neural Networks (SNNs) for energy-efficient temporal signal processing.
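
As a rough illustration of the two mechanisms the abstract describes, the sketch below pairs a Leaky Integrate-and-Fire unit with a causal sliding-window attention step that consumes one input element at a time. This is a minimal sketch, not the paper's implementation: the names and parameter values (window size `W`, leak `tau`, threshold `v_th`) are illustrative assumptions.

```python
import torch

# Hypothetical sketch of the two ingredients named in the abstract:
# (1) an LIF unit that emits a binary spike only when its membrane
#     potential crosses a threshold, and
# (2) a causal sliding-window attention step that attends over at most
#     the last W timesteps, so each inference consumes one new element.

class LIF(torch.nn.Module):
    """Leaky Integrate-and-Fire unit: leaky membrane, binary spikes."""
    def __init__(self, size, tau=0.9, v_th=1.0):  # illustrative values
        super().__init__()
        self.tau, self.v_th = tau, v_th
        self.register_buffer("v", torch.zeros(size))

    def forward(self, x):
        self.v = self.tau * self.v + x          # leaky integration
        spikes = (self.v >= self.v_th).float()  # fire on threshold crossing
        self.v = self.v - spikes * self.v_th    # soft reset after a spike
        return spikes                           # sparse in time


def sliding_window_attention_step(q_t, k_buf, v_buf, k_t, v_t, W):
    """One online attention step over a window of the last W timesteps.

    q_t, k_t, v_t: (d,) tensors for the current timestep.
    k_buf, v_buf: lists holding at most W past keys/values.
    """
    k_buf.append(k_t)
    v_buf.append(v_t)
    if len(k_buf) > W:          # drop the oldest element of the window
        k_buf.pop(0)
        v_buf.pop(0)
    K = torch.stack(k_buf)      # (<=W, d)
    V = torch.stack(v_buf)      # (<=W, d)
    scores = K @ q_t / K.shape[-1] ** 0.5
    attn = torch.softmax(scores, dim=0)
    return attn @ V             # attention output for the current timestep
```

In this sketch, memory and compute per step stay bounded by the window size `W`, which is what makes element-by-element (online) processing possible, and the LIF output is a binary spike vector, which is where the reduction in synaptic operations comes from.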
