Vision Transformer for NeRF-Based View Synthesis from a Single Input Image

07/12/2022
by Kai-En Lin, et al.

Although neural radiance fields (NeRF) have shown impressive advances for novel view synthesis, most methods typically require multiple input images of the same scene with accurate camera poses. In this work, we seek to substantially reduce the inputs to a single unposed image. Existing approaches condition on local image features to reconstruct a 3D object, but often render blurry predictions at viewpoints that are far away from the source view. To address this issue, we propose to leverage both the global and local features to form an expressive 3D representation. The global features are learned from a vision transformer, while the local features are extracted from a 2D convolutional network. To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering. This novel 3D representation allows the network to reconstruct unseen regions without enforcing constraints like symmetry or canonical coordinate systems. Our method can render novel views from only a single input image and generalize across multiple object categories using a single model. Quantitative and qualitative evaluations demonstrate that the proposed method achieves state-of-the-art performance and renders richer details than existing approaches.
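The core idea — conditioning a NeRF-style MLP on both a global feature vector (from a vision transformer) and local image features (from a 2D CNN), then volume-rendering along each ray — can be sketched in a toy form. The snippet below is a minimal illustration, not the paper's actual architecture: the feature extractors are replaced by stand-in vectors, the dimensions and the two-layer MLP are arbitrary choices, and only a single ray is rendered.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    # Map each 3D point to sin/cos features at increasing frequencies,
    # as in standard NeRF conditioning.
    feats = [x]
    for i in range(n_freqs):
        feats.append(np.sin((2.0 ** i) * x))
        feats.append(np.cos((2.0 ** i) * x))
    return np.concatenate(feats, axis=-1)

class ConditionedMLP:
    # Tiny 2-layer MLP whose input is [point encoding | global feat | local feat].
    # Outputs RGB color and volume density sigma per query point.
    def __init__(self, in_dim, hidden=32):
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # (r, g, b, sigma)
        self.b2 = np.zeros(4)

    def __call__(self, x):
        h = np.maximum(x @ self.w1 + self.b1, 0.0)       # ReLU
        out = h @ self.w2 + self.b2
        rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))        # sigmoid -> colors in [0, 1]
        sigma = np.log1p(np.exp(out[..., 3:]))           # softplus -> nonnegative density
        return rgb, sigma

def render_ray(mlp, origin, direction, g_feat, l_feat,
               near=0.0, far=2.0, n_samples=16):
    # Standard volume rendering: sample points along the ray, query the
    # conditioned MLP, then alpha-composite colors front to back.
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    enc = positional_encoding(pts)
    cond = np.concatenate([
        enc,
        np.repeat(g_feat[None, :], n_samples, axis=0),   # global: shared per image
        np.repeat(l_feat[None, :], n_samples, axis=0),   # local: per-pixel in practice
    ], axis=-1)
    rgb, sigma = mlp(cond)
    delta = t[1] - t[0]
    alpha = 1.0 - np.exp(-sigma[:, 0] * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)

# Stand-ins for the learned features (hypothetical dimensions):
g = rng.normal(size=8)   # would come from a vision transformer
l = rng.normal(size=4)   # would come from a 2D CNN, sampled at the ray's projection
mlp = ConditionedMLP(in_dim=27 + 8 + 4)  # 27 = 3 * (1 + 2 * n_freqs)
color = render_ray(mlp, np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), g, l)
```

In the paper's actual setup, the global feature captures category- and instance-level context that lets the network hallucinate unseen regions, while the local features preserve detail near the source view; here both are simply random vectors to keep the sketch runnable.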


Related research

- PixelHuman: Animatable Neural Radiance Fields from Few Images (07/18/2023)
  In this paper, we propose PixelHuman, a novel human rendering model that...

- im2nerf: Image to Neural Radiance Field in the Wild (09/08/2022)
  We propose im2nerf, a learning framework that predicts a continuous neur...

- SHERF: Generalizable Human NeRF from a Single Image (03/22/2023)
  Existing Human NeRF methods for reconstructing 3D humans typically rely ...

- Real-Time Radiance Fields for Single-Image Portrait View Synthesis (05/03/2023)
  We present a one-shot method to infer and render a photorealistic 3D rep...

- Explicit Correspondence Matching for Generalizable Neural Radiance Fields (04/24/2023)
  We present a new generalizable NeRF method that is able to directly gene...

- Transformation-Grounded Image Generation Network for Novel 3D View Synthesis (03/08/2017)
  We present a transformation-grounded image generation network for novel ...

- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion (02/20/2023)
  Novel view synthesis from a single image requires inferring occluded reg...
