Drivable Volumetric Avatars using Texel-Aligned Features

by Edoardo Remelli, et al.

Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance that is indistinguishable from reality. In this work, we propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people. One challenge is driving an avatar while staying faithful to details and dynamics that cannot be captured by a global low-dimensional parameterization such as body pose. Our approach supports driving clothed avatars with wrinkles and motion that a real driving performer exhibits beyond the training corpus. Unlike existing global state representations or non-parametric screen-space approaches, we introduce texel-aligned features, a localized representation that can simultaneously leverage the structural prior of a skeleton-based parametric model and observed sparse image signals. Another challenge is modeling a temporally coherent clothed avatar, which typically requires precise surface tracking. To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects. By explicitly incorporating articulation, our approach naturally generalizes to unseen poses. We also introduce a localized viewpoint conditioning, which leads to a large improvement in the generalization of view-dependent appearance. The proposed volumetric representation does not require high-quality mesh tracking as a prerequisite and brings significant quality improvements over mesh-based counterparts. In our experiments, we carefully examine our design choices and demonstrate the efficacy of our approach, outperforming state-of-the-art methods on challenging driving scenarios.
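To make the core idea concrete, the following is a minimal sketch of how texel-aligned features could be assembled: each texel of the body's UV parameterization is projected into a driving camera view, an image feature is sampled at that projection, and it is concatenated with a pose-derived feature for the same texel. All function names, the nearest-neighbor sampling (rather than bilinear interpolation), and the NumPy formulation are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def project_points(points_world, K, Rt):
    """Pinhole projection of Nx3 world-space points to Nx2 pixel coords.

    K: 3x3 intrinsics; Rt: 4x4 world-to-camera extrinsics (assumed forms).
    """
    P = (Rt[:3, :3] @ points_world.T + Rt[:3, 3:4]).T  # to camera frame
    uv = (K @ P.T).T
    return uv[:, :2] / uv[:, 2:3]                      # perspective divide

def sample_features(feat_map, uv):
    """Nearest-neighbor sample of an HxWxC image feature map at Nx2 coords."""
    H, W, _ = feat_map.shape
    x = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    y = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return feat_map[y, x]

def texel_aligned_features(texel_pos_world, pose_feat, feat_map, K, Rt):
    """Fuse a structural (pose) feature with a sparse image observation,
    both anchored at the same texel of the UV parameterization."""
    uv = project_points(texel_pos_world, K, Rt)   # where each texel lands
    img_feat = sample_features(feat_map, uv)      # observed image signal
    return np.concatenate([pose_feat, img_feat], axis=-1)

# Toy usage: 5 texels, 16-dim pose features, 8-dim image features.
rng = np.random.default_rng(0)
texels = np.column_stack([rng.uniform(-0.1, 0.1, (5, 2)), np.ones(5)])
pose_feat = rng.normal(size=(5, 16))
feat_map = rng.normal(size=(64, 64, 8))
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
fused = texel_aligned_features(texels, pose_feat, feat_map, K, np.eye(4))
print(fused.shape)  # (5, 24): 16 pose dims + 8 image dims per texel
```

Because the fused feature lives in UV space, it is localized: a wrinkle observed in the driving view only influences the texels it projects onto, rather than a single global code.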


