Multi-modality Sensor Data Classification with Selective Attention

04/16/2018
by Xiang Zhang, et al.

Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in scenarios from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, wearable sensor data are less informative than conventional data such as text or images. In this paper, to improve the adaptability of such classification methods across different application domains, we turn the classification task into a game and apply a deep reinforcement learning scheme to deal with complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal and thus significantly improves the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.
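To make the idea of selective attention over sensor dimensions concrete, the sketch below shows one common way such a mechanism can be implemented: each channel of a multimodal sensor window is scored, the scores are normalized with a softmax, and the signal is re-weighted so that informative dimensions dominate. This is a minimal illustration, not the paper's exact mechanism; the scoring projection `w_att` and the mean-over-time summary are assumptions for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def selective_attention(window, w_att):
    """Re-weight sensor dimensions by attention scores.

    window: (T, D) segment of multimodal sensor readings
            (T time steps, D sensor dimensions/channels)
    w_att:  (D, D) hypothetical learned scoring projection
    """
    summary = window.mean(axis=0)        # (D,) per-dimension summary over time
    scores = np.tanh(summary @ w_att)    # (D,) unnormalized attention scores
    alpha = softmax(scores)              # (D,) weights, non-negative, sum to 1
    return window * alpha, alpha         # re-weighted signal + attention weights

# Toy example: 100 time steps from 8 sensor channels
window = rng.standard_normal((100, 8))
w_att = 0.1 * rng.standard_normal((8, 8))
attended, alpha = selective_attention(window, w_att)
```

In a full system, `w_att` would be learned end-to-end (here, jointly with the reinforcement learning agent), and the attention weights `alpha` indicate which sensor dimensions the classifier is focusing on.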

Related research

01/04/2017 · Transforming Sensor Data to the Image Domain for Deep Learning - an Application to Footstep Detection
Convolutional Neural Networks (CNNs) have become the state-of-the-art in...

05/17/2018 · Interpretable Parallel Recurrent Neural Networks with Convolutional Attentions for Multi-Modality Activity Modeling
Multimodal features play a key role in wearable sensor-based human activ...

07/03/2023 · Augmenting Deep Learning Adaptation for Wearable Sensor Data through Combined Temporal-Frequency Image Encoding
Deep learning advancements have revolutionized scalable classification i...

01/08/2018 · Generative Sensing: Transforming Unreliable Sensor Data for Reliable Recognition
This paper introduces a deep learning enabled generative sensing framewo...

09/07/2021 · Sensor-Augmented Egocentric-Video Captioning with Dynamic Modal Attention
Automatically describing video, or video captioning, has been widely stu...

11/21/2017 · Fullie and Wiselie: A Dual-Stream Recurrent Convolutional Attention Model for Activity Recognition
Multimodal features play a key role in wearable sensor based Human Activ...

10/05/2017 · Track Xplorer: A System for Visual Analysis of Sensor-based Motor Activity Predictions
Detecting motor activities from sensor datasets is becoming increasingly...
