EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

December 31, 2016 · Helge Rhodin et al.

Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. These methods often cause discomfort, for instance through required marker suits, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less and egocentric motion capture, which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strengths of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new, automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, including crowded scenes.

1 Introduction

Traditional optical skeletal motion-capture methods – both marker-based and marker-less – use several cameras typically placed around a scene in an outside-in arrangement, with camera views approximately converging in the center of a confined recording volume. This greatly constrains the spatial extent of motions that can be recorded; simply enlarging the recording volume by using more cameras, for instance to capture an athlete, is not scalable. In other cases, a scene may be cluttered with objects or furniture, or other dynamic scene elements, such as people in close interaction, may obstruct a motion-captured person in the scene or create unwanted dynamics in the background. In such cases, even state-of-the-art outside-in marker-less optical methods that succeed with just a few cameras, and are designed for outdoor scenes [2], quickly fail. This problem can partly be bypassed with motion-capture methods that use body-worn sensors. Shiratori et al. propose to wear 16 cameras placed on body parts facing inside-out [9], and capture the skeletal motion through structure-from-motion relative to the environment. This clever solution requires instrumentation, calibration and a static background, but allows free roaming and was inspirational for our egocentric approach.

We propose EgoCap: an egocentric motion-capture approach that estimates full-body pose from a pair of optical cameras carried by lightweight headgear (see Figure 1). The body-worn cameras are oriented such that their field of view covers the user’s body entirely, forming an arrangement that is independent of external sensors – an optical inside-in method. It reduces the setup effort, enables free roaming, and minimizes body instrumentation. EgoCap decouples the estimation of local body pose with respect to the headgear cameras and global headgear position, which we infer by structure-from-motion on the scene.

Figure 1: We propose a marker-less optical motion-capture approach that only uses two head-mounted fisheye cameras (see rigs on the left). Our approach enables three new application scenarios: (1) capturing human motions in outdoor environments of virtually unlimited size, (2) capturing motions in space-constrained environments, e.g. during social interactions, and (3) rendering the reconstruction of one’s real body in virtual reality for embodied immersion.

Our first contribution is a new egocentric inside-in sensor rig with only two head-mounted, downward-facing commodity video cameras with fisheye lenses (see Figure 1 left). The rig can be attached to a helmet or a head-mounted VR display, and, hence, requires less instrumentation and calibration than other body-worn systems. The stereo fisheye optics keep the whole body in view in all poses, despite the cameras’ proximity to the body.
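To illustrate why fisheye optics achieve this coverage, consider the equidistant fisheye model r = f·θ, a common idealization (the abstract does not specify the paper's actual camera model, and the focal length and image center below are made-up values): a point almost perpendicular to the optical axis still projects to a finite image radius, so a downward-facing fisheye camera close to the head can image the feet and outstretched hands.

```python
import numpy as np

def project_equidistant(point_3d, f=300.0, center=(640.0, 640.0)):
    """Project a 3D point (camera coordinates, z along the optical axis)
    into the image under the equidistant fisheye mapping r = f * theta."""
    x, y, z = point_3d
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return (center[0] + r * np.cos(phi), center[1] + r * np.sin(phi))

# A point almost directly beside the camera (theta near 80 degrees)
# still lands at a finite image radius inside a 1280-pixel-wide image.
u, v = project_equidistant((0.5, 0.0, 0.1))
```

A pinhole projection (r = f·tan θ) would push such a point toward infinity as θ approaches 90°, which is why conventional lenses cannot keep a nearby body fully in frame.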

Figure 2: Dataset augmentation.

Our second contribution is a new marker-less motion-capture algorithm tailored to the strongly distorted egocentric fisheye views. It combines a generative model-based skeletal pose estimation approach with evidence from a trained ConvNet-based body-part detector, and is designed to work with unsegmented frames and general backgrounds (Section 2).

Our third contribution is a new approach for automatically creating body-part detection training datasets. We record test subjects in front of green screen with an existing outside-in marker-less motion-capture system to get ground-truth skeletal poses, which are reprojected into the simultaneously recorded head-mounted fisheye views to get 2D body-part annotations. We augment the training frames by replacing the green screen with random background images, and vary the appearance in terms of color and shading by intrinsic recoloring [4]. With this technique, we annotate 100,000 images of egocentric videos of eight people in different clothing. We provide the dataset for research purposes [1].
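The background-replacement step can be sketched as a simple chroma key (a minimal sketch: the key color and tolerance are illustrative choices, and the paper's pipeline additionally varies appearance via intrinsic recoloring [4]):

```python
import numpy as np

def composite_background(frame, background, key=(0, 255, 0), tol=80):
    """Replace green-screen pixels in `frame` with `background`.

    Pixels within `tol` (Euclidean RGB distance) of the key color are
    treated as green screen and swapped for the background image.
    """
    frame_f = frame.astype(np.float32)
    dist = np.linalg.norm(frame_f - np.array(key, np.float32), axis=-1)
    mask = (dist < tol)[..., None]          # True where green screen shows
    return np.where(mask, background, frame_f).astype(np.uint8)

# Toy example: a 2x2 "frame" with green-screen and foreground pixels.
frame = np.array([[[0, 255, 0], [200, 10, 10]],
                  [[10, 10, 200], [0, 250, 5]]], np.uint8)
background = np.full((2, 2, 3), 128, np.uint8)
out = composite_background(frame, background)
```

Repeating this with many random background images yields training frames whose 2D annotations stay valid, since only the background pixels change.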

2 Egocentric Inside-In Motion Capture

Our egocentric setup separates human motion capture into two subproblems: (1) local skeleton pose estimation with respect to the camera rig, and (2) global rig pose estimation relative to the environment. Global pose is estimated with existing structure-from-motion techniques [5]. We formulate skeletal pose estimation as an analysis-by-synthesis-style optimization problem in the pose parameters p^t that maximizes the alignment of a projected 3D human body model in the left and the right stereo fisheye views, at each video time step t. We use a hybrid alignment energy combining evidence from a generative image-formation model, as well as from a discriminative detection approach:

E(p^t) = E_color(p^t) + E_detection(p^t) − E_pose(p^t) − E_smooth(p^t)    (1)

E_color is an extension of a generative ray-casting model [7] to the strongly distorted fisheye views, which provides differentiable visibility through a volumetric representation. E_detection constrains the pose to 2D joint detections obtained from an existing ConvNet [3], which was fine-tuned on the previously introduced dataset. E_pose penalizes violations of anatomical joint-angle limits as well as poses deviating strongly from the rest pose, and E_smooth regularizes temporal changes.
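To make the structure of this optimization concrete, the following toy sketch minimizes a least-squares analogue of the hybrid energy over a hypothetical 2-DoF pose: a detection term pulls projected joints toward 2D detections, a rest-pose prior and a temporal smoothness term act as regularizers. A stand-in linear "projection" replaces the fisheye camera model, the color term is omitted, and all weights and the finite-difference gradient-descent solver are illustrative choices, not the paper's implementation.

```python
import numpy as np

def project(pose):
    # Stand-in for the fisheye projection of model joints (invented).
    return 2.0 * pose + 1.0

def energy(pose, detections, rest_pose, prev_pose,
           w_det=1.0, w_pose=0.1, w_smooth=0.1):
    e_det = w_det * np.sum((project(pose) - detections) ** 2)
    e_pose = w_pose * np.sum((pose - rest_pose) ** 2)
    e_smooth = w_smooth * np.sum((pose - prev_pose) ** 2)
    return e_det + e_pose + e_smooth

def solve(detections, rest_pose, prev_pose, lr=0.05, steps=500, eps=1e-5):
    """Gradient descent with central finite differences (toy solver)."""
    pose = prev_pose.copy()
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for i in range(pose.size):
            d = np.zeros_like(pose)
            d[i] = eps
            grad[i] = (energy(pose + d, detections, rest_pose, prev_pose)
                       - energy(pose - d, detections, rest_pose, prev_pose)) / (2 * eps)
        pose -= lr * grad
    return pose

detections = np.array([3.0, 5.0])   # "observed" 2D joint positions
rest_pose = np.zeros(2)
prev_pose = np.zeros(2)
pose = solve(detections, rest_pose, prev_pose)
```

The recovered pose does not reproduce the detections exactly: the prior and smoothness terms pull it slightly toward the rest pose, mirroring how the regularizers in Eq. (1) trade off image evidence against anatomical plausibility.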

3 Evaluation and Applications

Dataset Augmentations.

We first evaluate the learned body-part detectors using the percentage of correct keypoints (PCK) metric [8] on a validation set of 1000 images of two subjects who are not part of the training set. Background augmentation during training brings a clear improvement of 67 PCK points; cloth recoloring yields a further improvement of 3 PCK points.
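For reference, the PCK metric can be computed as follows (a minimal sketch with a fixed pixel threshold; PCK variants typically normalize the threshold by person size, and the exact threshold used is not stated in this abstract):

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of correct keypoints: a predicted joint counts as
    correct if it lies within `threshold` pixels of the ground truth.
    `pred` and `gt` have shape (num_joints, 2)."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return 100.0 * np.mean(dists <= threshold)

gt = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 10.0]])
pred = np.array([[12.0, 11.0], [70.0, 50.0], [91.0, 9.0]])
score = pck(pred, gt, threshold=5.0)  # 2 of 3 joints fall within 5 px
```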

3D Body Pose Accuracy.

We quantitatively evaluate the 3D body pose accuracy of our approach on ground-truth data obtained with the Captury Studio. The average Euclidean 3D distance over all 18 joints for which detection labels are available is 7.1 cm for a challenging 250-frame walking sequence with occlusions, and 7.1 cm on a long sequence of 750 frames of gesturing and interaction. This matches the accuracy of outside-in approaches using 2–3 cameras [2].

Large-scale Motion Capture.

We successfully tested EgoCap on an outdoor basketball sequence featuring quick motion and close interaction, on an outdoor walking sequence, and on a large-scale biking sequence (Figure 1, third column).

Constrained/Crowded Spaces.

We also tested EgoCap for motion capture in a crowded scene, where many spectators are interacting and occluding the tracked user from the outside (Figure 1, fourth column). In such a setting, as well as in settings with many obstacles and narrow sections, outside-in motion capture, even with a dense camera system, would be difficult.

Immersive VR.

The EgoCap headgear (Figure 1, first column) is designed to be used in virtual reality (VR) applications (Figure 1, last column). Current HMD-based systems only track the pose of the display; our approach adds motion capture of the wearer’s full body, which enables a much higher level of immersion.

4 Conclusion

We presented EgoCap, the first approach for marker-less egocentric full-body motion capture using a head-mounted fisheye stereo rig. EgoCap enables motion capture in dense and crowded scenes, and reconstruction of large-scale activities that would not fit into the constrained recording volumes of outside-in motion-capture methods. It is particularly suited for HMD-based VR applications: two cameras attached to an HMD enable full-body pose reconstruction of the wearer’s own virtual body, paving the way for immersive VR experiences and interactions.

Acknowledgements

This research was funded by the ERC Starting Grant project CapReal (335545).

References

  • [1] EgoCap dataset. http://gvv.mpi-inf.mpg.de/projects/EgoCap/ (2016)
  • [2] Elhayek, A., de Aguiar, E., Jain, A., Tompson, J., Pishchulin, L., Andriluka, M., Bregler, C., Schiele, B., Theobalt, C.: Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras. In: CVPR (2015)
  • [3] Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka, M., Schiele, B.: DeeperCut: A deeper, stronger, and faster multi-person pose estimation model. In: ECCV (2016)
  • [4] Meka, A., Zollhöfer, M., Richardt, C., Theobalt, C.: Live intrinsic video. ACM Transactions on Graphics 35(4), 109:1–14 (2016)
  • [5] Moulon, P., Monasse, P., Marlet, R.: Global fusion of relative motions for robust, accurate and scalable structure from motion. In: ICCV (2013)
  • [6] Rhodin, H., Richardt, C., Casas, D., Insafutdinov, E., Shafiei, M., Seidel, H.P., Schiele, B., Theobalt, C.: EgoCap: Egocentric marker-less motion capture with two fisheye cameras. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 35(6) (2016)
  • [7] Rhodin, H., Robertini, N., Richardt, C., Seidel, H.P., Theobalt, C.: A versatile scene model with differentiable visibility applied to generative pose estimation. In: ICCV (2015)
  • [8] Sapp, B., Taskar, B.: MODEC: Multimodal decomposable models for human pose estimation. In: CVPR (2013)
  • [9] Shiratori, T., Park, H.S., Sigal, L., Sheikh, Y., Hodgins, J.K.: Motion capture from body-mounted cameras. ACM Transactions on Graphics 30(4), 31:1–10 (2011)