Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based Human Action Recognition

07/07/2023
by Jianyuan Ni, et al.

Wearable sensor-based Human Action Recognition (HAR) has made significant strides in recent years. However, its accuracy still lags behind that of systems based on visual modalities, such as RGB video and depth data. Although diverse input modalities can provide complementary cues and improve accuracy, wearable devices can capture only a limited range of non-visual time-series inputs, such as accelerometer and gyroscope signals. This limitation prevents current wearable devices from processing visual and non-visual modality data in parallel. To address this issue, we propose a novel Physical-aware Cross-modal Adversarial (PCA) framework that uses only time-series accelerometer data from four inertial sensors for wearable sensor-based HAR. Specifically, we propose an effective IMU2SKELETON network that produces synthetic skeleton joints from accelerometer data. We then impose additional constraints on the synthetic skeleton data from a physical perspective, since accelerometer readings can be regarded as the second derivative of the skeleton joint coordinates. Finally, the original accelerometer data and the constrained skeleton sequence are fused to make the final classification. In this way, a wearable device can not only capture accelerometer data but also generate synthetic skeleton sequences, enabling real-time wearable sensor-based HAR applications anytime and anywhere. To demonstrate the effectiveness of the proposed PCA framework, we conduct extensive experiments on the Berkeley-MHAD, UTD-MHAD, and MMAct datasets. The results confirm that the PCA approach achieves competitive performance compared with previous methods on the single sensor-based HAR classification problem.
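
The physical constraint described in the abstract admits a compact formulation: the second time derivative of the synthetic skeleton joint positions should agree with the measured accelerometer signals. Below is a minimal sketch of one way to express such a constraint, assuming a central finite-difference approximation and a mean-squared consistency loss. The function name physics_consistency_loss, the joint-to-sensor mapping, and the omission of gravity and sensor-orientation compensation are all illustrative assumptions, not the paper's actual formulation.

```python
import torch

def physics_consistency_loss(skel, accel, dt):
    """Hypothetical sketch of a physics-based constraint: the second
    time derivative of the synthetic skeleton joint positions should
    match the measured accelerometer signals.

    skel:  (T, J, 3) synthetic joint positions (e.g., from IMU2SKELETON)
    accel: (T-2, S, 3) accelerometer readings at S sensor sites
    dt:    sampling interval in seconds
    """
    # Second-order central finite difference approximates acceleration
    # from the predicted joint trajectories.
    skel_accel = (skel[2:] - 2.0 * skel[1:-1] + skel[:-2]) / dt ** 2
    # Assumption: each of the four IMUs is attached near one skeleton
    # joint (e.g., wrists and ankles); these indices are placeholders.
    sensor_joints = [4, 8, 12, 16]
    pred = skel_accel[:, sensor_joints, :]
    # Note: a full treatment would also subtract gravity and rotate the
    # accelerations into each sensor's local frame; both are omitted here.
    return torch.mean((pred - accel) ** 2)

# Example: 100 frames, 20 joints, 4 IMUs sampled at 30 Hz
skel = torch.randn(100, 20, 3, requires_grad=True)
accel = torch.randn(98, 4, 3)
loss = physics_consistency_loss(skel, accel, dt=1.0 / 30)
loss.backward()  # gradients flow back into the synthetic skeleton
```

Under these assumptions, backpropagating such a loss through the skeleton generator would push the synthetic skeletons toward trajectories whose accelerations are consistent with the observed IMU data.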

research · 08/17/2022
Progressive Cross-modal Knowledge Distillation for Human Action Recognition
Wearable sensor-based Human Action Recognition (HAR) has achieved remark...

research · 07/14/2022
Inertial Hallucinations – When Wearable Inertial Devices Start Seeing Things
We propose a novel approach to multimodal sensor fusion for Ambient Assi...

research · 09/01/2020
Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition
Existing vision-based action recognition is susceptible to occlusion and...

research · 02/23/2022
Skeleton Sequence and RGB Frame Based Multi-Modality Feature Fusion Network for Action Recognition
Action recognition has been a heated topic in computer vision for its wi...

research · 06/23/2023
Learning Scene Flow With Skeleton Guidance For 3D Action Recognition
Among the existing modalities for 3D action recognition, 3D flow has bee...

research · 03/22/2021
AdaSGN: Adapting Joint Number and Model Size for Efficient Skeleton-Based Action Recognition
Existing methods for skeleton-based action recognition mainly focus on i...

research · 07/10/2023
An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification
Wearable sensors such as Inertial Measurement Units (IMUs) are often use...
