GHuNeRF: Generalizable Human NeRF from a Monocular Video

08/31/2023
by Chen Li, et al.

In this paper, we tackle the challenging task of learning a generalizable human NeRF model from a monocular video. Although existing generalizable human NeRFs have achieved impressive results, they require multi-view images or videos, which might not always be available. On the other hand, some works on free-viewpoint rendering of humans from monocular videos cannot generalize to unseen identities. In view of these limitations, we propose GHuNeRF to learn a generalizable human NeRF model from a monocular video of the human performer. We first introduce a visibility-aware aggregation scheme to compute vertex-wise features, which are used to construct a 3D feature volume. Due to its limited resolution, the feature volume can only represent the overall geometry of the human performer with insufficient accuracy. To solve this, we further enhance the volume feature with temporally aligned point-wise features using an attention mechanism. Finally, the enhanced feature is used to predict the density and color of each sampled point. A surface-guided sampling strategy is also introduced to improve efficiency in both training and inference. We validate our approach on the widely used ZJU-MoCap dataset, where we achieve comparable performance to existing multi-view video based approaches. We also test on the monocular People-Snapshot dataset and achieve better performance than existing works when only monocular video is used.
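The two components named in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction, not the paper's implementation: the weighting scheme, function names, and parameters (`band`, `n_samples`) are assumptions. Visibility-aware aggregation averages per-frame vertex features over only the frames in which each vertex is visible; surface-guided sampling places ray samples in a narrow band around the estimated body surface rather than densely along the whole ray.

```python
import numpy as np

def visibility_aware_aggregate(frame_feats, visibility):
    """Aggregate per-frame vertex features, weighted by visibility.

    frame_feats: (T, V, C) image features sampled at each body vertex per frame.
    visibility:  (T, V)    1.0 where the vertex is visible in frame t, else 0.0.
    Returns:     (V, C)    aggregated vertex-wise features.

    Hypothetical sketch; the paper's actual weighting may differ.
    """
    # Normalize so each vertex averages over its visible frames only
    # (clip avoids division by zero for never-visible vertices).
    counts = np.clip(visibility.sum(axis=0, keepdims=True), 1.0, None)  # (1, V)
    weights = visibility / counts                                       # (T, V)
    return (frame_feats * weights[..., None]).sum(axis=0)               # (V, C)

def surface_guided_samples(ray_o, ray_d, t_surface, n_samples=16, band=0.05):
    """Sample ray points only in a narrow band around the estimated surface
    depth t_surface, instead of densely along the full ray (assumed params)."""
    t_vals = np.linspace(t_surface - band, t_surface + band, n_samples)  # (N,)
    pts = ray_o[None, :] + t_vals[:, None] * ray_d[None, :]              # (N, 3)
    return pts, t_vals
```

Concentrating samples near the surface is what makes this strategy cheaper: far fewer points per ray need density/color evaluation during both training and inference.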


