Yet it moves: Learning from Generic Motions to Generate IMU data from YouTube videos
Human activity recognition (HAR) using wearable sensors has benefited much less from recent advances in Machine Learning than fields such as computer vision and natural language processing. This is to a large extent due to the lack of large-scale repositories of labeled training data. In our research we aim to facilitate the use of online videos, which exist in ample quantity for most activities and are much easier to label than sensor data, to simulate labeled wearable motion sensor data. In previous work we demonstrated preliminary results in this direction, focusing on very simple, activity-specific simulation models and a single sensor modality (acceleration norm)<cit.>. In this paper we show how to train a regression model on generic motions for both accelerometer and gyro signals and then apply it to videos of the target activities to generate synthetic IMU data (acceleration and gyro norms) that can be used to train and/or improve HAR models. We demonstrate that systems trained on simulated data generated by our regression model can come to within around 10% of the performance of systems trained on real sensor data. Furthermore, we show that the remaining advantage of real sensor data can eventually be equalized, either by including a small amount of real sensor data for model calibration or simply by leveraging the fact that, in general, we can easily generate much more simulated data from video than we can collect as real sensor data.
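The approach described above can be read as a three-step pipeline: (1) train a regression model mapping video-derived motion to IMU norms on generic motions with paired sensor recordings, (2) apply it to videos of the target activities to obtain synthetic IMU data, and (3) train a HAR model on the synthetic signals. The following is a minimal sketch of that flow under assumptions not stated in the abstract: 2D pose keypoints are already extracted from the video frames, fixed-length windows are used, and an off-the-shelf MLP regressor and random-forest classifier stand in for the models actually used in the paper; all array shapes, names, and data are illustrative placeholders.

```python
# Sketch: regress IMU norms (acceleration, gyro) from video-derived pose
# windows, then train a HAR classifier on the resulting synthetic signals.
# Assumptions (not from the paper): pose keypoints are pre-extracted per
# frame, windows are fixed-length, and an MLP stands in for the actual
# regression architecture.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestClassifier

WIN = 50          # frames per window (assumed)
N_JOINTS = 17     # 2D keypoints per frame (assumed, e.g. COCO layout)

def make_windows(keypoints, imu_norms, win=WIN):
    """Slice synchronized pose (T, N_JOINTS, 2) and IMU norms (T, 2)
    into flat feature windows with matching IMU-norm targets."""
    X, y = [], []
    for start in range(0, len(keypoints) - win, win):
        X.append(keypoints[start:start + win].reshape(-1))
        y.append(imu_norms[start:start + win].reshape(-1))
    return np.array(X), np.array(y)

# --- 1. Train the video->IMU regressor on *generic* motions ---------------
# generic_pose: (T, N_JOINTS, 2) keypoints from recordings with paired sensors
# generic_imu:  (T, 2) ground-truth [acc_norm, gyro_norm] per frame
generic_pose = np.random.rand(5000, N_JOINTS, 2)   # placeholder data
generic_imu = np.random.rand(5000, 2)              # placeholder data
Xg, yg = make_windows(generic_pose, generic_imu)
regressor = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=300)
regressor.fit(Xg, yg)

# --- 2. Apply it to videos of the target activities -----------------------
# target_pose comes from, e.g., YouTube videos of the target activities.
target_pose = np.random.rand(2000, N_JOINTS, 2)    # placeholder data
Xt, _ = make_windows(target_pose, np.zeros((2000, 2)))
synthetic_imu = regressor.predict(Xt)              # simulated IMU norms

# --- 3. Train a HAR model on the synthetic IMU windows --------------------
video_labels = np.random.randint(0, 4, len(Xt))    # activity labels per video window
har_model = RandomForestClassifier(n_estimators=100)
har_model.fit(synthetic_imu, video_labels)
```

In this sketch, the calibration variant mentioned in the abstract would correspond to fitting `har_model` on a mix of `synthetic_imu` windows and a small number of real IMU windows; the exact calibration procedure is not specified here.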