Inertial Sensor Data To Image Encoding For Human Action Recognition

05/28/2021
by Zeeshan Ahmad, et al.

Convolutional Neural Networks (CNNs) are successful deep learning models in the field of computer vision. To take full advantage of CNNs for Human Action Recognition (HAR) using inertial sensor data, in this paper we use four spatial-domain methods for transforming inertial sensor data into activity images, which are then utilized in a novel fusion framework. These four types of activity images are Signal Images (SI), Gramian Angular Field (GAF) images, Markov Transition Field (MTF) images and Recurrence Plot (RP) images. Furthermore, to create a multimodal fusion framework and to fully exploit the activity images, we make each type of activity image multimodal by convolving it with two spatial-domain filters: the Prewitt filter and the high-boost filter. ResNet-18, a CNN model, is used to learn deep features from these modalities. The learned features are extracted from the last pooling layer of each ResNet-18 and then fused by canonical correlation based fusion (CCF) to improve the accuracy of human action recognition. These highly informative fused features serve as input to a multiclass Support Vector Machine (SVM). Experimental results on three publicly available inertial datasets show the superiority of the proposed method over the current state-of-the-art.
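
As a rough illustration of the signal-to-image step and the two spatial-domain filters mentioned in the abstract, the sketch below encodes a single inertial channel as a Gramian Angular (summation) Field and derives Prewitt and high-boost versions of it with NumPy/SciPy. The function names, the 3x3 Prewitt kernel, the uniform smoothing window, the boost factor k, and the toy signal are illustrative assumptions only; the paper does not prescribe these exact settings, and the SI, MTF, and RP encodings would be produced analogously before being fed to ResNet-18.

import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gramian_angular_field(x):
    """Encode a 1D signal as a Gramian Angular (summation) Field image.

    The signal is rescaled to [-1, 1], mapped to polar angles
    phi = arccos(x), and entry (i, j) of the image is cos(phi_i + phi_j).
    """
    x = np.asarray(x, dtype=float)
    x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

def prewitt_edges(img):
    """Prewitt edge magnitude of a 2D activity image."""
    kx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)
    ky = kx.T
    gx = convolve(img, kx)
    gy = convolve(img, ky)
    return np.hypot(gx, gy)

def high_boost(img, k=1.5, size=3):
    """High-boost filtering: original + k * (original - smoothed)."""
    blurred = uniform_filter(img, size=size)
    return img + k * (img - blurred)

# Toy example: one accelerometer channel of length 128 (hypothetical data).
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.standard_normal(128)

gaf = gramian_angular_field(signal)                      # base activity image
modalities = [gaf, prewitt_edges(gaf), high_boost(gaf)]  # multimodal inputs for the CNN
print([m.shape for m in modalities])                     # [(128, 128), (128, 128), (128, 128)]

Each of the three arrays would then be passed through its own ResNet-18 stream, with the last-pooling-layer features fused by canonical correlation based fusion before the SVM, as described above.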

Related research

08/22/2020
Towards Improved Human Action Recognition Using Convolutional Neural Networks and Multimodal Fusion of Depth and Inertial Sensor Data
This paper attempts at improving the accuracy of Human Action Recognitio...

10/25/2019
Human Action Recognition Using Deep Multilevel Multimodal (M2) Fusion of Depth and Inertial Sensors
Multimodal fusion frameworks for Human Action Recognition (HAR) using de...

03/16/2021
Interpretable Deep Learning for the Remote Characterisation of Ambulation in Multiple Sclerosis using Smartphones
The emergence of digital technologies such as smartphones in healthcare ...

03/13/2020
Gimme Signals: Discriminative signal encoding for multimodal activity recognition
We present a simple, yet effective and flexible method for action recogn...

10/29/2020
CNN based Multistage Gated Average Fusion (MGAF) for Human Action Recognition Using Depth and Inertial Sensors
Convolutional Neural Network (CNN) provides leverage to extract and fuse...

08/22/2020
Multidomain Multimodal Fusion For Human Action Recognition Using Inertial Sensors
One of the major reasons for misclassification of multiplex actions duri...

07/21/2021
ECG Heartbeat Classification Using Multimodal Fusion
Electrocardiogram (ECG) is an authoritative source to diagnose and count...
