MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video

09/22/2020
by   Donglai Xiang, et al.

We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input. In contrast to the existing literature, our method does not require a pre-scanned personalized mesh template, and thus can be applied to in-the-wild videos. To constrain the output to a valid deformation space, we build statistical deformation models for three types of clothing: T-shirt, short pants, and long pants. A differentiable renderer is utilized to align our captured shapes to the input frames by minimizing differences in silhouette, segmentation, and texture. We develop a UV texture growing method that sequentially expands the visible texture region of the clothing in order to minimize drift in deformation tracking. We also extract fine-grained wrinkle detail from the input videos by fitting the clothed surface to normal maps estimated by a convolutional neural network. Our method produces temporally coherent reconstructions of body and clothing from monocular video. We demonstrate successful clothing capture results on a variety of challenging videos, and extensive quantitative experiments demonstrate the effectiveness of our method on metrics including body pose error and surface reconstruction error of the clothing.
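The abstract describes aligning captured clothing shapes to each input frame by minimizing image-space differences in silhouette, segmentation, and texture through a differentiable renderer. The sketch below illustrates only the objective's structure: a weighted sum of per-term errors. The renderer itself is out of scope, and the weights, mean-squared penalties, and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def alignment_loss(rendered, observed, weights=(1.0, 1.0, 0.1)):
    """Weighted sum of per-term mean-squared errors.

    rendered / observed: dicts holding 'silhouette', 'segmentation',
    and 'texture' arrays of matching shapes. The weights are
    hypothetical; the paper does not specify them in the abstract.
    """
    w_sil, w_seg, w_tex = weights
    loss = 0.0
    for w, key in zip((w_sil, w_seg, w_tex),
                      ("silhouette", "segmentation", "texture")):
        diff = rendered[key] - observed[key]
        loss += w * float(np.mean(diff ** 2))
    return loss

# Toy usage: random images stand in for renderer output and observations.
rng = np.random.default_rng(0)
frame = {k: rng.random((8, 8))
         for k in ("silhouette", "segmentation", "texture")}
print(alignment_loss(frame, frame))  # identical inputs -> 0.0
```

In the actual method, each term would be differentiable with respect to the clothing deformation parameters, so the loss can be minimized by gradient descent per frame.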


Related research

- 12/04/2018 · Monocular Total Capture: Posing Face, Body, and Hands in the Wild
  We present the first method to capture the 3D total motion of a target p...

- 04/08/2021 · Dynamic Surface Function Networks for Clothed Human Bodies
  We present a novel method for temporal coherent reconstruction and track...

- 07/06/2021 · NRST: Non-rigid Surface Tracking from Monocular Video
  We propose an efficient method for non-rigid surface tracking from monoc...

- 03/17/2023 · MoRF: Mobile Realistic Fullbody Avatars from a Monocular Video
  We present a new approach for learning Mobile Realistic Fullbody (MoRF) ...

- 10/22/2022 · NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos
  We present a method for learning 3D geometry and physics parameters of a...

- 03/22/2022 · φ-SfT: Shape-from-Template with a Physics-Based Deformation Model
  Shape-from-Template (SfT) methods estimate 3D surface deformations from ...

- 07/07/2018 · Representing a Partially Observed Non-Rigid 3D Human Using Eigen-Texture and Eigen-Deformation
  Reconstruction of the shape and motion of humans from RGB-D is a challen...
