Learning Disentangled Avatars with Hybrid 3D Representations

09/12/2023
by Yao Feng, et al.

Tremendous efforts have been made to learn animatable and photorealistic human avatars. To this end, both explicit and implicit 3D representations have been heavily studied for holistic modeling and capture of the whole human (e.g., body, clothing, face, and hair), but neither representation is an optimal choice, since different parts of the human avatar have different modeling desiderata. For example, meshes are generally not suitable for modeling clothing and hair. Motivated by this, we present Disentangled Avatars (DELTA), which models humans with hybrid explicit-implicit 3D representations. DELTA takes a monocular RGB video as input and produces a human avatar with separate body and clothing/hair layers. We demonstrate two important applications of DELTA: in the first, we disentangle the human body from clothing; in the second, we disentangle the face from hair. In both cases, DELTA represents the body or face with an explicit mesh-based parametric 3D model and the clothing or hair with an implicit neural radiance field. To make this possible, we design an end-to-end differentiable renderer that integrates meshes into volumetric rendering, enabling DELTA to learn directly from monocular videos without any 3D supervision. Finally, we show how these two applications can be easily combined to model full-body avatars, such that the hair, face, body, and clothing are fully disentangled yet jointly rendered. Such disentanglement enables hair and clothing transfer to arbitrary body shapes. We empirically validate DELTA's disentanglement by demonstrating its promising performance on disentangled reconstruction, virtual clothing try-on, and hairstyle transfer. To facilitate future research, we also release an open-source pipeline for the study of hybrid human avatar modeling.
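
As a rough illustration of the hybrid rendering idea described in the abstract, the sketch below composites NeRF-style volume samples with a rasterized mesh surface along a single camera ray, terminating the volumetric layer at the mesh depth. This is a minimal sketch under assumed conventions; the function name `composite_hybrid_ray`, the tensor layout, and all arguments are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch (not the authors' code): front-to-back compositing of
# volume samples (clothing/hair layer) over a rasterized mesh (body/face layer).
import torch

def composite_hybrid_ray(sigmas, rgbs, deltas, sample_depths, mesh_depth, mesh_rgb):
    """Composite S volume samples along one ray, stopping at the mesh surface.

    sigmas:        (S,) volume densities at the samples
    rgbs:          (S, 3) radiance at the samples
    deltas:        (S,) distances between consecutive samples
    sample_depths: (S,) depth of each sample along the ray
    mesh_depth:    scalar tensor, depth of the rasterized mesh hit (inf if none)
    mesh_rgb:      (3,) shaded mesh color at the hit point
    """
    # Zero out samples behind the mesh, so the implicit layer only occupies
    # space in front of the explicit body/face surface.
    in_front = (sample_depths < mesh_depth).float()
    alphas = (1.0 - torch.exp(-sigmas * deltas)) * in_front   # per-sample opacity

    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas + 1e-10])[:-1], dim=0
    )
    weights = trans * alphas
    vol_rgb = (weights[:, None] * rgbs).sum(dim=0)            # volumetric color

    # Leftover transmittance after all in-front samples reaches the mesh, if any.
    residual = trans[-1] * (1.0 - alphas[-1])
    hit = torch.isfinite(mesh_depth).float()
    return vol_rgb + residual * hit * mesh_rgb

# Minimal usage with dummy values (8 samples along one ray):
S = 8
out = composite_hybrid_ray(
    sigmas=torch.rand(S), rgbs=torch.rand(S, 3),
    deltas=torch.full((S,), 0.2), sample_depths=torch.linspace(0.5, 2.0, S),
    mesh_depth=torch.tensor(1.4), mesh_rgb=torch.tensor([0.8, 0.6, 0.5]),
)
```

Because every operation above is differentiable, gradients from an image reconstruction loss can flow to both the volumetric layer and the mesh color, which is consistent with the paper's claim of learning directly from monocular video without 3D supervision.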
