
MEx: Multi-modal Exercises Dataset for Human Activity Recognition

by Anjana Wijekoon, et al.
Robert Gordon University

MEx: Multi-modal Exercises Dataset is a multi-sensor, multi-modal dataset created to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. The collection of this dataset was inspired by the need to recognise and evaluate the quality of exercise performance in order to support patients with Musculoskeletal Disorders (MSD). We selected 7 exercises regularly recommended for MSD patients by physiotherapists and collected data with four sensors: a pressure mat, a depth camera and two accelerometers. The dataset contains three data modalities, numerical time-series data, video data and pressure sensor data, posing interesting research challenges for HAR and exercise quality assessment. This paper presents our evaluation of the dataset with a number of standard classification algorithms for the HAR task, comparing different feature representation algorithms for each sensor. These results set a reference performance for each individual sensor and expose their strengths and weaknesses for future tasks. In addition, we visualise pressure mat data to explore the sensor's potential to capture exercise performance quality. Given the recent advances in multi-modal fusion, we also believe MEx is a suitable dataset to benchmark not only HAR algorithms but also fusion algorithms for heterogeneous data types across multiple application domains.
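As a rough illustration of the per-sensor evaluation described above, the sketch below builds a single-sensor HAR baseline: fixed-length accelerometer windows are mapped to a simple feature representation (per-channel mean and standard deviation) and classified by nearest class centroid. The data here is synthetic stand-in data, not the MEx recordings, and the window length, channel count and classifier are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical single-sensor HAR baseline sketch. Synthetic tri-axial
# accelerometer windows stand in for the real MEx recordings; window size,
# class count and the nearest-centroid classifier are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_windows(n_per_class, n_classes=3, window=100, channels=3):
    """Simulate fixed-length accelerometer windows for each exercise class."""
    X, y = [], []
    for c in range(n_classes):
        base = float(c)  # a distinct offset per class stands in for exercise dynamics
        X.append(base + 0.1 * rng.standard_normal((n_per_class, window, channels)))
        y.append(np.full(n_per_class, c))
    return np.concatenate(X), np.concatenate(y)

def features(X):
    """One simple feature representation: per-channel mean and std of each window."""
    return np.concatenate([X.mean(axis=1), X.std(axis=1)], axis=1)

def nearest_centroid(train_f, train_y, test_f):
    """Assign each test window to the class with the closest feature centroid."""
    centroids = np.stack([train_f[train_y == c].mean(axis=0)
                          for c in np.unique(train_y)])
    dists = ((test_f[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

X_tr, y_tr = make_windows(30)
X_te, y_te = make_windows(10)
pred = nearest_centroid(features(X_tr), y_tr, features(X_te))
accuracy = (pred == y_te).mean()
```

Swapping in a different `features` function per sensor (e.g. frame descriptors for the depth camera, or flattened frames for the pressure mat) mirrors the paper's idea of comparing feature representations sensor by sensor before attempting fusion.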



