MEx: Multi-modal Exercises Dataset for Human Activity Recognition

08/13/2019
by   Anjana Wijekoon, et al.

MEx: Multi-modal Exercises Dataset is a multi-sensor, multi-modal dataset designed to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. Collection of this dataset was motivated by the need to recognise and evaluate the quality of exercise performance in order to support patients with Musculoskeletal Disorders (MSD). We selected 7 exercises regularly recommended for MSD patients by physiotherapists and collected data with four sensors: a pressure mat, a depth camera and two accelerometers. The dataset contains three data modalities (numerical time-series, video and pressure sensor data), posing interesting research challenges for HAR and exercise quality assessment. This paper presents our evaluation of the dataset with a number of standard classification algorithms for the HAR task, comparing different feature representation algorithms for each sensor. These results set a reference performance for each individual sensor, exposing their strengths and weaknesses for future tasks. In addition, we visualise the pressure mat data to explore the potential of this sensor to capture exercise performance quality. Given recent advances in multi-modal fusion, we believe MEx is a suitable dataset to benchmark not only HAR algorithms but also fusion algorithms for heterogeneous data types across multiple application domains.
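To illustrate the kind of HAR pipeline the abstract evaluates, here is a minimal sketch of windowed feature extraction on accelerometer-like time-series data followed by a standard classifier. This is not the paper's method: the window size, the (mean, standard deviation) features, the nearest-centroid classifier and the synthetic signals are all illustrative assumptions, chosen only to show the general shape of such a benchmark.

```python
import random
import statistics

# Hypothetical sketch of a HAR baseline on 1-D accelerometer-style signals.
# Window size and feature choice are assumptions, not taken from the paper.
WINDOW = 50  # samples per window

def extract_features(signal, window=WINDOW):
    """Split a 1-D signal into non-overlapping windows;
    compute (mean, stdev) per window."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        feats.append((statistics.mean(w), statistics.stdev(w)))
    return feats

def nearest_centroid_fit(features, labels):
    """Compute one centroid per class label in feature space."""
    by_label = {}
    for f, y in zip(features, labels):
        by_label.setdefault(y, []).append(f)
    return {y: tuple(statistics.mean(dim) for dim in zip(*fs))
            for y, fs in by_label.items()}

def nearest_centroid_predict(centroids, feature):
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2
                                 for a, b in zip(centroids[y], feature)))

# Synthetic example: two "exercises" with clearly different signal statistics.
random.seed(0)
still = [random.gauss(0.0, 0.1) for _ in range(500)]   # low-variance signal
active = [random.gauss(0.0, 1.0) for _ in range(500)]  # high-variance signal

X = extract_features(still) + extract_features(active)
y = ["still"] * 10 + ["active"] * 10
centroids = nearest_centroid_fit(X, y)

# Classify a fresh high-variance window.
test_window = [random.gauss(0.0, 1.0) for _ in range(WINDOW)]
pred = nearest_centroid_predict(centroids, extract_features(test_window)[0])
print(pred)
```

In a real evaluation of MEx, each sensor modality (pressure mat, depth camera, accelerometers) would get its own feature representation, and the per-sensor classifier scores would serve as the reference baselines the paper reports.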

