DeepSense: A Unified Deep Learning Framework for Time-Series Mobile Sensing Data Processing

11/07/2016
by   Shuochao Yao, et al.

Mobile sensing applications usually require time-series inputs from sensors. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On the one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. On the other hand, in classification applications, although manually designed features have proven effective, it is not always straightforward to find the most robust features that accommodate diverse sensor noise patterns and user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. In addition, DeepSense is feasible to implement on smartphones, thanks to its moderate energy consumption and low latency.
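
As a rough illustration of the architecture the abstract describes (per-sensor convolutions for local interactions, a merge convolution across sensory modalities, and a recurrent layer for temporal dynamics), the sketch below wires these pieces together in PyTorch. All layer sizes, the two-sensor frequency-bin input shape, and the names DeepSenseSketch and merge_conv are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal, illustrative DeepSense-style model in PyTorch.
# Shapes and hyperparameters are assumptions for demonstration only.
import torch
import torch.nn as nn

class DeepSenseSketch(nn.Module):
    def __init__(self, num_sensors=2, freq_bins=10, conv_channels=64,
                 gru_hidden=120, num_classes=6):
        super().__init__()
        # Per-sensor "local" convolutions over each time window's measurements.
        self.local_convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, conv_channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(conv_channels, conv_channels, kernel_size=3, padding=1),
                nn.ReLU(),
            ) for _ in range(num_sensors)
        ])
        # "Global" convolution merging the stacked per-sensor feature maps.
        self.merge_conv = nn.Sequential(
            nn.Conv1d(num_sensors * conv_channels, conv_channels,
                      kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Recurrent layer capturing temporal dynamics across time windows.
        self.gru = nn.GRU(conv_channels * freq_bins, gru_hidden,
                          num_layers=2, batch_first=True)
        # Classification head (e.g., activity recognition); an estimation task
        # such as car tracking would use a regression output instead.
        self.fc = nn.Linear(gru_hidden, num_classes)

    def forward(self, x):
        # x: (batch, time_windows, num_sensors, freq_bins)
        b, t, s, f = x.shape
        window_feats = []
        for k, conv in enumerate(self.local_convs):
            # Process each (batch * time) window independently per sensor.
            xi = x[:, :, k, :].reshape(b * t, 1, f)
            window_feats.append(conv(xi))
        merged = self.merge_conv(torch.cat(window_feats, dim=1))  # (b*t, C, f)
        seq = merged.reshape(b, t, -1)                            # (b, t, C*f)
        out, _ = self.gru(seq)
        return self.fc(out[:, -1, :])  # predict from the last time window

model = DeepSenseSketch()
dummy = torch.randn(8, 5, 2, 10)  # 8 samples, 5 windows, 2 sensors, 10 bins
print(model(dummy).shape)         # torch.Size([8, 6])
```

For signal estimation tasks, the final linear layer would map to the target quantity (e.g., per-window displacement) rather than class scores, which is how a single architecture can serve both the tracking and recognition applications mentioned above.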
