You Only Train Once: Multi-Identity Free-Viewpoint Neural Human Rendering from Monocular Videos

03/10/2023
by   Jaehyeok Kim, et al.

We introduce You Only Train Once (YOTO), a dynamic human generation framework that performs free-viewpoint rendering of different human identities with distinct motions after only a single training run on monocular videos. Most prior works on this task require individualized optimization for each input video containing a distinct human identity, which consumes substantial time and resources at deployment and thereby impedes the scalability and overall application potential of such systems. In this paper, we tackle this problem by proposing a set of learnable identity codes that expand the framework's capability to multi-identity free-viewpoint rendering, together with an effective pose-conditioned code query mechanism that finely models pose-dependent non-rigid motions. YOTO optimizes neural radiance fields (NeRF) by using the designed identity codes to condition the model, learning the various canonical T-pose appearances within a single shared volumetric representation. Moreover, jointly learning multiple identities within a unified model naturally enables flexible motion transfer with high-quality photo-realistic renderings for all learned appearances, which broadens its potential use in important applications such as Virtual Reality. We present extensive experimental results on ZJU-MoCap and PeopleSnapshot that clearly demonstrate the effectiveness of the proposed model. YOTO achieves state-of-the-art performance on all evaluation metrics while offering significant benefits in training and inference efficiency as well as rendering quality. The code and model will be made publicly available soon.
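To illustrate the core idea of conditioning a single shared field on per-identity codes, here is a minimal, hypothetical numpy sketch. It is not the authors' implementation: the network sizes, code dimension, and encoding parameters are invented for illustration, and a real NeRF would be trained with gradient descent and rendered by volumetric integration along camera rays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical configuration (not from the paper).
NUM_IDENTITIES = 4   # number of subjects learned jointly
CODE_DIM = 8         # identity-code dimension
FREQS = 4            # positional-encoding frequency bands

# One learnable code per identity; all identities share the MLP weights below.
identity_codes = rng.normal(size=(NUM_IDENTITIES, CODE_DIM))

def positional_encoding(x, freqs=FREQS):
    """Standard NeRF-style sinusoidal encoding of a 3D point."""
    out = [x]
    for i in range(freqs):
        out.append(np.sin((2.0 ** i) * x))
        out.append(np.cos((2.0 ** i) * x))
    return np.concatenate(out)

IN_DIM = 3 * (1 + 2 * FREQS) + CODE_DIM
HIDDEN = 32
W1 = rng.normal(scale=0.1, size=(IN_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, 4))  # outputs: (density, r, g, b)

def shared_field(point_xyz, identity_id):
    """Query the shared canonical volume, conditioned on one identity code.

    The same weights (W1, W2) serve every identity; only the
    concatenated code selects which appearance is produced.
    """
    code = identity_codes[identity_id]
    h = np.concatenate([positional_encoding(point_xyz), code])
    h = np.maximum(h @ W1, 0.0)                 # ReLU hidden layer
    raw = h @ W2
    density = np.log1p(np.exp(raw[0]))          # softplus -> non-negative density
    rgb = 1.0 / (1.0 + np.exp(-raw[1:]))        # sigmoid -> colors in (0, 1)
    return density, rgb

# Querying the same canonical point under two identity codes yields
# two different appearances from one set of shared weights.
p = np.array([0.1, -0.2, 0.3])
d0, c0 = shared_field(p, identity_id=0)
d1, c1 = shared_field(p, identity_id=1)
```

The design choice this sketch mirrors is that adding a new identity only grows the small code table, not the network, which is what allows one-time joint training instead of per-video optimization.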


