Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition

09/01/2020
by Yang Liu, et al.

Existing vision-based action recognition is susceptible to occlusion and appearance variations, while wearable sensors can alleviate these challenges by capturing human motion as one-dimensional time-series signals (e.g., acceleration, gyroscope, and orientation). For the same action, the knowledge learned from vision sensors (videos or images) and from wearable sensors may be related and complementary. However, there is a significant modality gap between action data captured by wearable sensors and vision sensors in data dimension, data distribution, and inherent information content. In this paper, we propose a novel framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos) by adaptively transferring and distilling knowledge from multiple wearable sensors. SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality. Specifically, we transform the one-dimensional time-series signals of wearable sensors into two-dimensional images using a Gramian Angular Field based virtual image generation model. We then build a novel Similarity-Preserving Adaptive Multi-modal Fusion Module (SPAMFM) to adaptively fuse intermediate representation knowledge from the different teacher networks. To fully exploit and transfer the knowledge of multiple well-trained teacher networks to the student network, we propose a novel Graph-guided Semantically Discriminative Mapping (GSDM) loss, which uses graph-guided ablation analysis to produce visual explanations that highlight the important regions across modalities while preserving the interrelations of the original data. Experimental results on the Berkeley-MHAD, UTD-MHAD, and MMAct datasets demonstrate the effectiveness of the proposed SAKDN.
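The abstract does not spell out the exact virtual-image generation model, but the standard Gramian Angular Summation Field construction it builds on is well defined: rescale the 1-D signal to [-1, 1], encode each value as a polar angle, and form the pairwise cosine-sum matrix. A minimal sketch, assuming this common GASF formulation (function name and example signal are illustrative, not from the paper):

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a 2-D GASF image (assumes a non-constant signal)."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so that arccos is defined for every sample.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Polar encoding: each value becomes an angle phi = arccos(x).
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF image: entry (i, j) is cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

# Toy accelerometer-like signal of length 5 -> 5x5 image.
img = gramian_angular_field([0.0, 0.5, 1.0, 0.5, 0.0])
```

The resulting N x N image preserves temporal dependencies along its diagonal direction, which is what lets standard 2-D CNN teacher networks consume wearable-sensor signals.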


Related research:

- Progressive Cross-modal Knowledge Distillation for Human Action Recognition (08/17/2022)
- More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced Modality of Wearable Sensors (02/16/2022)
- Cross-modal knowledge distillation for action recognition (10/10/2019)
- Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based Human Action Recognition (07/07/2023)
- Elevating Skeleton-Based Action Recognition with Efficient Multi-Modality Self-Supervision (09/21/2023)
- Online Sensor Hallucination via Knowledge Distillation for Multimodal Image Classification (08/28/2019)
- No-audio speaking status detection in crowded settings via visual pose-based filtering and wearable acceleration (11/01/2022)
