Selecting the motion ground truth for loose-fitting wearables: benchmarking optical MoCap methods

07/21/2023
by   Lala Shakti Swarup Ray, et al.
To help smart-wearable researchers choose the optimal ground-truth method for motion capture (MoCap) across all types of loose garments, we present DrapeMoCapBench (DMCB), a benchmark specifically designed to evaluate the performance of optical marker-based and marker-less MoCap. High-cost marker-based MoCap systems are widely regarded as the precise gold standard. A less well-known caveat, however, is that they require skin-tight marker placement on bony areas to achieve their specified precision, which makes them questionable for loose garments. Marker-less MoCap methods powered by computer-vision models, on the other hand, have matured over the years and have meager costs, since smartphone cameras suffice. To this end, DMCB uses large real-world recorded MoCap datasets to drive parallel 3D physics simulations spanning a wide range of conditions: six levels of drape from skin-tight to extremely draped garments, three levels of motion, and six body-type and gender combinations. Against these, it benchmarks state-of-the-art optical marker-based and marker-less MoCap methods to identify the best-performing method in each scenario. For casual loose garments, both approaches exhibit significant performance loss (>10 cm); but for everyday activities involving basic and fast motions, marker-less MoCap slightly outperforms marker-based MoCap, making it a favorable and cost-effective choice for wearable studies.
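The evaluation the abstract describes, comparing each MoCap method's estimated 3D joints against simulated ground truth under every drape/motion/body-type condition, can be sketched as follows. This is a minimal illustration, not the authors' code: the metric shown is mean per-joint position error (MPJPE), a standard choice for 3D pose evaluation, and all function and variable names (`mpjpe`, `run_benchmark`, the method/condition dictionaries) are hypothetical.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the same units as the inputs.

    pred, gt: arrays of shape (frames, joints, 3) holding estimated and
    ground-truth 3D joint positions.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def run_benchmark(methods, conditions):
    """Evaluate every method under every benchmark condition.

    methods:    {method_name: estimate_fn(condition_id) -> (frames, joints, 3)}
    conditions: {condition_id: ground-truth (frames, joints, 3) array}
    Returns {(method_name, condition_id): error}.
    """
    return {
        (name, cond): mpjpe(estimate(cond), gt)
        for name, estimate in methods.items()
        for cond, gt in conditions.items()
    }
```

A constant 0.1 m offset on every joint, for example, yields an MPJPE of exactly 0.1 m, which gives a sense of the scale of the >10 cm errors reported for loose garments.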
