Event-based Human Pose Tracking by Spiking Spatiotemporal Transformer

03/16/2023
by Shihao Zou, et al.

The event camera, an emerging biologically-inspired vision sensor for capturing motion dynamics, presents new potential for 3D human pose tracking, i.e., video-based 3D human pose estimation. However, existing works on pose tracking either require additional gray-scale images to establish a solid starting pose, or ignore temporal dependencies altogether by collapsing segments of event streams into static image frames. Meanwhile, although the effectiveness of Artificial Neural Networks (ANNs, a.k.a. dense deep learning) has been showcased in many event-based tasks, the use of ANNs tends to neglect the fact that, compared with dense frame-based image sequences, the occurrence of events from an event camera is spatiotemporally much sparser. Motivated by these issues, we present a dedicated end-to-end sparse deep learning approach for event-based pose tracking: 1) to our knowledge, this is the first time that 3D human pose tracking is obtained from events only, eliminating the need to access any frame-based images as part of the input; 2) our approach is built entirely upon the framework of Spiking Neural Networks (SNNs), consisting of a Spike-Element-Wise (SEW) ResNet and our proposed spiking spatiotemporal transformer; 3) a large-scale synthetic dataset, SynEventHPD, is constructed that features a broad and diverse set of annotated 3D human motions as well as longer hours of event stream data. Experiments demonstrate the superiority of our approach in both performance and efficiency; for example, with performance comparable to state-of-the-art ANN counterparts, our approach achieves a 20% reduction in FLOPS. Our implementation is available at https://github.com/JimmyZou/HumanPoseTracking_SNN and the dataset will be released upon paper acceptance.
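To make the spike-based building blocks mentioned above more concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a Spike-Element-Wise residual block driven by a simple leaky integrate-and-fire neuron. All module names, hyper-parameters, and tensor shapes below are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: a SEW-style residual block over a spike tensor of
# shape (T, B, C, H, W), where T is the number of discrete time steps.
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Simple leaky integrate-and-fire neuron (forward pass only, no surrogate
    gradient), sufficient for a minimal sketch."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, B, C, H, W) input current over T time steps
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau           # leaky integration
            s = (v >= self.v_threshold).float()     # fire on threshold crossing
            v = v * (1.0 - s)                       # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)                  # spike train, same shape as x


class SEWBlock(nn.Module):
    """SEW residual block: the skip connection combines *spike* outputs with an
    element-wise function (ADD in this sketch), not pre-activations."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.sn1 = LIFNeuron()
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.sn2 = LIFNeuron()

    def forward(self, s_in: torch.Tensor) -> torch.Tensor:
        T, B, C, H, W = s_in.shape
        x = self.bn1(self.conv1(s_in.flatten(0, 1))).view(T, B, C, H, W)
        s = self.sn1(x)
        x = self.bn2(self.conv2(s.flatten(0, 1))).view(T, B, C, H, W)
        s = self.sn2(x)
        return s + s_in                              # element-wise ADD on spikes


if __name__ == "__main__":
    events = (torch.rand(4, 2, 16, 32, 32) > 0.9).float()  # toy binary event tensor
    out = SEWBlock(16)(events)
    print(out.shape)  # torch.Size([4, 2, 16, 32, 32])
```

The design choice this sketch tries to convey is that SEW residual learning applies the skip connection to spike outputs via an element-wise operation, rather than adding pre-activations as a standard ResNet would; how the paper's spiking spatiotemporal transformer consumes these spike features is not shown here.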


Related research:

04/21/2021 · Lifting Monocular Events to 3D Human Poses
This paper presents a novel 3D human pose estimation approach using a si...

06/09/2022 · Efficient Human Pose Estimation via 3D Event Point Cloud
Human Pose Estimation (HPE) based on RGB images has experienced a rapid ...

04/21/2020 · Decoupling Video and Human Motion: Towards Practical Event Detection in Athlete Recordings
In this paper we address the problem of motion event detection in athlet...

08/15/2021 · EventHPE: Event-based 3D Human Pose and Shape Estimation
Event camera is an emerging imaging sensor for capturing dynamics of mov...

08/08/2023 · SSTFormer: Bridging Spiking Neural Network and Memory Support Transformer for Frame-Event based Recognition
Event camera-based pattern recognition is a newly arising research topic...

07/09/2022 · Snipper: A Spatiotemporal Transformer for Simultaneous Multi-Person 3D Pose Estimation Tracking and Forecasting on a Video Snippet
Multi-person pose understanding from RGB videos includes three complex t...

02/16/2022 · Continuously Learning to Detect People on the Fly: A Bio-inspired Visual System for Drones
This paper demonstrates for the first time that a biologically-plausible...
