Capturing and Animation of Body and Clothing from Monocular Video

10/04/2022
by Yao Feng et al.

While recent work has shown progress on extracting clothed 3D human avatars from a single image, video, or a set of 3D scans, several limitations remain. Most methods use a holistic representation to jointly model the body and clothing, which means that the clothing and body cannot be separated for applications like virtual try-on. Other methods separately model the body and clothing, but they require training from a large set of 3D clothed human meshes obtained from 3D/4D scanners or physics simulations. Our insight is that the body and clothing have different modeling requirements. While the body is well represented by a mesh-based parametric 3D model, implicit representations and neural radiance fields are better suited to capturing the large variety in shape and appearance present in clothing. Building on this insight, we propose SCARF (Segmented Clothed Avatar Radiance Field), a hybrid model combining a mesh-based body with a neural radiance field. Integrating the mesh into the volumetric rendering in combination with a differentiable rasterizer enables us to optimize SCARF directly from monocular videos, without any 3D supervision. The hybrid modeling enables SCARF to (i) animate the clothed body avatar by changing body poses (including hand articulation and facial expressions), (ii) synthesize novel views of the avatar, and (iii) transfer clothing between avatars in virtual try-on applications. We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects. The code and models are available at https://github.com/YadiraF/SCARF.
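The core rendering idea described above can be illustrated with a minimal sketch: along each camera ray, radiance-field samples in front of the rasterized body surface are alpha-composited as in standard volume rendering, and any remaining transmittance falls on the opaque mesh. This is not SCARF's actual implementation; the function name, interface, and per-ray NumPy representation are hypothetical, chosen only to show how an opaque mesh can terminate rays inside a neural radiance field.

```python
import numpy as np

def composite_ray(nerf_rgb, nerf_sigma, t_vals, mesh_depth, mesh_rgb):
    """Illustrative hybrid volume rendering along a single camera ray.

    nerf_rgb:   (N, 3) radiance-field colors at the sample points
    nerf_sigma: (N,)   densities at the sample points
    t_vals:     (N,)   increasing sample depths along the ray
    mesh_depth: scalar depth of the rasterized body mesh (np.inf = no hit)
    mesh_rgb:   (3,)   shaded mesh color at the hit point
    """
    # Keep only samples in front of the (opaque) body mesh surface.
    mask = t_vals < mesh_depth
    rgb, sigma, t = nerf_rgb[mask], nerf_sigma[mask], t_vals[mask]

    if t.size == 0:
        # Ray hits the mesh before any radiance-field sample.
        return mesh_rgb if np.isfinite(mesh_depth) else np.zeros(3)

    # Interval lengths between samples; the last interval ends at the
    # mesh surface (or a large cap for background rays).
    t_end = mesh_depth if np.isfinite(mesh_depth) else t[-1] + 1e10
    deltas = np.diff(np.append(t, t_end))

    alpha = 1.0 - np.exp(-sigma * deltas)            # per-sample opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alpha))  # transmittance T_i
    weights = alpha * trans[:-1]

    color = (weights[:, None] * rgb).sum(axis=0)
    if np.isfinite(mesh_depth):
        # Remaining transmittance lands on the opaque mesh surface.
        color = color + trans[-1] * mesh_rgb
    return color
```

Because the compositing is differentiable in both the densities and the mesh color, gradients can flow to the clothing field and (via a differentiable rasterizer) to the body mesh, which is what allows optimization directly from monocular video.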


