Onboard View Planning of a Flying Camera for High Fidelity 3D Reconstruction of a Moving Actor

07/31/2023
by Qingyuan Jiang, et al.

Capturing and reconstructing a human actor's motion is important for filmmaking and gaming. Currently, motion capture systems with static cameras are used for pixel-level, high-fidelity reconstructions. Such setups are costly, require installation and calibration, and, more importantly, confine the user to a predetermined area. In this work, we present a drone-based motion capture system that can alleviate these limitations. We present a complete system implementation and study view planning, which is critical for achieving high-quality reconstructions. The main challenge for a drone-based capture system is that view planning must be performed online, during motion capture. To address this challenge, we introduce simple geometric primitives and show that they can be used for view planning. Specifically, we introduce Pixels-Per-Area (PPA) as a reconstruction quality proxy and plan views by maximizing the PPA of the faces of a simple geometric shape representing the actor. Through experiments in simulation, we show that PPA is highly correlated with reconstruction quality. We also conduct real-world experiments showing that our system can produce dynamic 3D reconstructions of good quality. We share our code for the simulation experiments at https://github.com/Qingyuan-Jiang/view_planning_3dhuman
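To make the PPA proxy concrete, the following is a minimal sketch (not the authors' implementation) of how a Pixels-Per-Area score might be computed for one planar face of a geometric primitive: project the face's vertices through a pinhole camera, measure the projected polygon's area in pixels, and divide by the face's world-space area. All function names and camera parameters here are illustrative assumptions.

```python
import numpy as np

def project(K, R, t, pts_w):
    """Project world points (N, 3) into pixel coordinates with a pinhole camera.
    K: 3x3 intrinsics, R: 3x3 rotation, t: translation (world -> camera)."""
    pts_c = (R @ pts_w.T + t.reshape(3, 1)).T   # world frame -> camera frame
    uv = (K @ pts_c.T).T                        # camera frame -> image plane
    return uv[:, :2] / uv[:, 2:3]               # perspective divide

def polygon_area(pts2d):
    """Shoelace area of a 2D polygon with vertices in order (N, 2)."""
    x, y = pts2d[:, 0], pts2d[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def pixels_per_area(K, R, t, face_w, face_area_m2):
    """PPA proxy: projected pixel area of the face divided by its world area."""
    uv = project(K, R, t, face_w)
    return polygon_area(uv) / face_area_m2

# Example: a 1 m x 1 m face, 5 m in front of a camera with 500 px focal length.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
face = np.array([[-0.5, -0.5, 5.0], [0.5, -0.5, 5.0],
                 [0.5, 0.5, 5.0], [-0.5, 0.5, 5.0]])
ppa = pixels_per_area(K, R, t, face, 1.0)  # -> 10000.0 px^2 per m^2
```

A view planner in this spirit would evaluate candidate drone poses and prefer those that maximize the summed PPA over the visible faces of the actor's bounding shape, which favors close, frontal views of the surfaces being reconstructed.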

