MOVIN: Real-time Motion Capture using a Single LiDAR

09/17/2023
by   Deok-Kyeong Jang, et al.

Recent advancements in technology have brought forth new forms of interactive applications, such as the social metaverse, where end users interact with each other through their virtual avatars. In such applications, precise full-body tracking is essential for an immersive experience and a sense of embodiment with the virtual avatar. However, current motion capture systems are not easily accessible to end users due to their high cost, the special skills required to operate them, or the discomfort associated with wearable devices. In this paper, we present MOVIN, a data-driven generative method for real-time motion capture with global tracking, using a single LiDAR sensor. Our autoregressive conditional variational autoencoder (CVAE) model learns the distribution of pose variations conditioned on the given 3D point cloud from LiDAR. As a central factor for high-accuracy motion capture, we propose a novel feature encoder that learns the correlation between the historical 3D point cloud data and the global and local pose features, resulting in effective learning of the pose prior. Global pose features include root translation, rotation, and foot contacts, while local features comprise joint positions and rotations. Subsequently, a pose generator takes the sampled latent variable along with the features from the previous frame to generate a plausible current pose. Our framework accurately predicts the performer's 3D global information and local joint details while effectively producing temporally coherent movements across frames. We demonstrate the effectiveness of our architecture through quantitative and qualitative evaluations, comparing it against state-of-the-art methods. Additionally, we implement a real-time application to showcase our method in real-world scenarios. The MOVIN dataset is available at <https://movin3d.github.io/movin_pg2023/>.
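The abstract describes an autoregressive CVAE pipeline: a feature encoder conditions on the LiDAR point-cloud features and the previous frame's pose, a latent variable is sampled via the reparameterization trick, and a pose generator produces the current-frame global and local pose features. The numpy sketch below illustrates only this data flow; all layer shapes, the random linear "layers", and the helper names are hypothetical stand-ins, not the paper's actual architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify these here.
POINT_FEAT = 32   # encoded point-cloud feature size (stand-in)
POSE_FEAT = 48    # global (root transform, foot contacts) + local (joint pos/rot)
LATENT = 16       # CVAE latent dimension

def linear(in_dim, out_dim):
    """Random weights standing in for trained parameters."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1

# Feature encoder: point-cloud features + previous pose -> condition vector.
W_cond = linear(POINT_FEAT + POSE_FEAT, 64)
# Heads producing the latent mean / log-variance.
W_mu, W_logvar = linear(64, LATENT), linear(64, LATENT)
# Pose generator: condition + sampled latent -> current-frame pose features.
W_gen = linear(64 + LATENT, POSE_FEAT)

def encode_condition(point_feat, prev_pose):
    # Correlate point-cloud history with previous-frame pose features.
    return np.tanh(np.concatenate([point_feat, prev_pose]) @ W_cond)

def sample_latent(cond):
    mu, logvar = cond @ W_mu, cond @ W_logvar
    eps = rng.standard_normal(LATENT)
    return mu + np.exp(0.5 * logvar) * eps  # reparameterization trick

def generate_pose(cond, z):
    return np.concatenate([cond, z]) @ W_gen

# Autoregressive rollout: each frame conditions on the previously generated pose.
pose = np.zeros(POSE_FEAT)
for frame in range(5):
    point_feat = rng.standard_normal(POINT_FEAT)  # stand-in for encoded LiDAR scan
    cond = encode_condition(point_feat, pose)
    z = sample_latent(cond)
    pose = generate_pose(cond, z)
```

After the rollout, `pose` holds the final frame's generated global and local pose features; in the real system this vector would be decoded into root translation/rotation, foot contacts, and per-joint positions and rotations.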


Related research

05/30/2022  LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors
We propose a multi-sensor fusion method for capturing challenging 3D hum...

07/27/2022  AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing
Today's Mixed Reality head-mounted displays track the user's head pose i...

03/28/2022  LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds
Existing motion capture datasets are largely short-range and cannot yet ...

09/10/2020  Orientation Keypoints for 6D Human Pose Estimation
Most realtime human pose estimation approaches are based on detecting jo...

12/01/2020  We are More than Our Joints: Predicting how 3D Bodies Move
A key step towards understanding human behavior is the prediction of 3D ...

09/02/2022  WOC: A Handy Webcam-based 3D Online Chatroom
We develop WOC, a webcam-based 3D virtual online chatroom for multi-pers...

10/27/2022  Learning Variational Motion Prior for Video-based Motion Capture
Motion capture from a monocular video is fundamental and crucial for us ...
