More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced Modality of Wearable Sensors

02/16/2022
by Huiyuan Yang, et al.

Accurately recognizing health-related conditions from wearable data is crucial for improving healthcare outcomes. To improve recognition accuracy, various approaches have focused on how to effectively fuse information from multiple sensors. Multi-sensor fusion is common in many applications, but it is not always feasible in the real world. For example, although combining bio-signals from multiple devices (e.g., a chest pad sensor and a wrist wearable sensor) has proven effective for improving performance, wearing multiple devices may be impractical in a free-living context. To address this challenge, we propose an effective more-to-less (M2L) learning framework that improves testing performance with reduced sensors by leveraging the complementary information of multiple modalities during training. More specifically, different sensors may carry different but complementary information, and our model is designed to enforce collaboration among the modalities: positive knowledge transfer is encouraged and negative knowledge transfer is suppressed, so that better representations are learned for the individual modalities. Our experimental results show that our framework achieves performance comparable to that of the full set of modalities. Our code and results will be available at https://github.com/compwell-org/More2Less.git.
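To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of an M2L-style training step. All names here (Encoder, M2LSketch, m2l_training_loss), the channel counts, the loss weight alpha, and the per-sample correctness gate are illustrative assumptions rather than the authors' released implementation; the linked repository is the authoritative source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical M2L-style sketch: train with two modalities, deploy with one.
# The gating heuristic below is one crude stand-in for the paper's idea of
# encouraging positive and suppressing negative knowledge transfer.

class Encoder(nn.Module):
    """Small 1-D CNN encoder for one wearable bio-signal stream."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class M2LSketch(nn.Module):
    """Train with chest + wrist modalities; test with the wrist stream only."""
    def __init__(self, chest_channels, wrist_channels, n_classes, feat_dim=64):
        super().__init__()
        self.chest_enc = Encoder(chest_channels, feat_dim)  # dropped at test time
        self.wrist_enc = Encoder(wrist_channels, feat_dim)  # kept at test time
        self.chest_head = nn.Linear(feat_dim, n_classes)
        self.wrist_head = nn.Linear(feat_dim, n_classes)

    def forward(self, wrist_x):
        # Deployment path: reduced modality (wrist sensor only).
        return self.wrist_head(self.wrist_enc(wrist_x))

def m2l_training_loss(model, chest_x, wrist_x, y, alpha=0.5):
    """Cross-entropy on both branches plus a gated distillation term that
    transfers chest knowledge to the wrist branch only on samples where
    the chest branch is a correct (hence plausibly helpful) teacher."""
    chest_logits = model.chest_head(model.chest_enc(chest_x))
    wrist_logits = model.wrist_head(model.wrist_enc(wrist_x))
    ce = F.cross_entropy(chest_logits, y) + F.cross_entropy(wrist_logits, y)

    # Gate: per-sample mask, 1 where the teacher prediction is correct.
    with torch.no_grad():
        teacher_ok = (chest_logits.argmax(1) == y).float()

    # Per-sample KL divergence from teacher (chest) to student (wrist).
    kd = F.kl_div(
        F.log_softmax(wrist_logits, dim=1),
        F.softmax(chest_logits.detach(), dim=1),
        reduction="none",
    ).sum(1)
    transfer = (teacher_ok * kd).mean()
    return ce + alpha * transfer

# Example usage: 4-channel chest signal, 3-channel wrist signal, 200 steps.
model = M2LSketch(chest_channels=4, wrist_channels=3, n_classes=5)
chest = torch.randn(8, 4, 200)
wrist = torch.randn(8, 3, 200)
labels = torch.randint(0, 5, (8,))
loss = m2l_training_loss(model, chest, wrist, labels)
loss.backward()
pred = model(wrist).argmax(1)  # inference uses the wrist sensor alone
```

The gate here is deliberately simple (transfer only when the teacher is correct); the paper's actual mechanism for balancing positive and negative transfer may differ and should be taken from the released code.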

