Progressive Cross-modal Knowledge Distillation for Human Action Recognition

08/17/2022
by Jianyuan Ni, et al.

Wearable sensor-based Human Action Recognition (HAR) has achieved remarkable success recently. However, its accuracy still falls far behind that of systems based on visual modalities (e.g., RGB video, skeleton, and depth). Diverse input modalities can provide complementary cues and thus improve HAR accuracy, but how to exploit multi-modal data for wearable sensor-based HAR has rarely been explored. Currently, wearable devices such as smartwatches can capture only a limited range of non-visual modality data. This hinders multi-modal HAR, since such devices cannot simultaneously use both visual and non-visual modality data. Another major challenge lies in efficiently utilizing multi-modal data on wearable devices with their limited computational resources. In this work, we propose a novel Progressive Skeleton-to-sensor Knowledge Distillation (PSKD) model that uses only time-series data, i.e., accelerometer data, from a smartwatch to solve the wearable sensor-based HAR problem. Specifically, we construct multiple teacher models using data from both the teacher (human skeleton sequences) and student (time-series accelerometer data) modalities. In addition, we propose an effective progressive learning scheme to close the performance gap between the teacher and student models. We also design a novel Adaptive-Confidence Semantic (ACS) loss that allows the student model to adaptively select which teacher model, or the ground-truth label, it should mimic. To demonstrate the effectiveness of the proposed PSKD method, we conduct extensive experiments on the Berkeley-MHAD, UTD-MHAD, and MMAct datasets. The results confirm that PSKD achieves competitive performance compared with previous mono-sensor-based HAR methods.
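The abstract does not spell out the ACS loss, so the following is a minimal PyTorch sketch of the adaptive-selection idea it describes, not the paper's actual formulation: for each sample, the student is distilled toward whichever teacher is most confident on the ground-truth class, and falls back to the hard label when no teacher is confident enough. The function name acs_loss, the confidence threshold, and the temperature are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def acs_loss(student_logits, teacher_logits_list, labels, temperature=4.0):
    """Hypothetical sketch of an Adaptive-Confidence Semantic (ACS) loss.

    Per sample, pick the supervision source (one of several teachers, or
    the ground-truth label) whose confidence on the true class is highest,
    then train the student against that source.
    """
    # Confidence of each teacher on the ground-truth class: shape (T, B)
    confidences = torch.stack([
        F.softmax(t, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
        for t in teacher_logits_list
    ])
    best_conf, best_teacher = confidences.max(dim=0)  # (B,), (B,)

    # Temperature-softened distributions for distillation
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teachers = torch.stack(
        [F.softmax(t / temperature, dim=1) for t in teacher_logits_list]
    )  # (T, B, C)

    # Select, per sample, the distribution of the most confident teacher
    batch_idx = torch.arange(labels.size(0), device=labels.device)
    chosen = soft_teachers[best_teacher, batch_idx]  # (B, C)

    kd = F.kl_div(log_p_student, chosen, reduction="none").sum(dim=1)
    ce = F.cross_entropy(student_logits, labels, reduction="none")

    # Fall back to the hard label when no teacher is confident enough
    # (0.5 is an assumed threshold, not from the paper)
    use_label = best_conf < 0.5
    per_sample = torch.where(use_label, ce, (temperature ** 2) * kd)
    return per_sample.mean()
```

The per-sample switch is what makes the selection adaptive: easy samples where a skeleton-informed teacher is reliable get soft distillation targets, while samples that confuse every teacher are supervised directly by the label.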

Related research

07/07/2023
Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based Human Action Recognition
Wearable sensor-based Human Action Recognition (HAR) has made significan...

09/01/2020
Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition
Existing vision-based action recognition is susceptible to occlusion and...

09/21/2023
Elevating Skeleton-Based Action Recognition with Efficient Multi-Modality Self-Supervision
Self-supervised representation learning for human action recognition has...

02/06/2023
Audio Representation Learning by Distilling Video as Privileged Information
Deep audio representation learning using multi-modal audio-visual data o...

08/08/2021
Learning an Augmented RGB Representation with Cross-Modal Knowledge Distillation for Action Detection
In video understanding, most cross-modal knowledge distillation (KD) met...

10/09/2022
Students taught by multimodal teachers are superior action recognizers
The focal point of egocentric video understanding is modelling hand-obje...

04/13/2021
Dealing with Missing Modalities in the Visual Question Answer-Difference Prediction Task through Knowledge Distillation
In this work, we address the issues of missing modalities that have aris...
