Multi-Plane Neural Radiance Fields for Novel View Synthesis

03/03/2023
by   Youssef Abdelkareem, et al.

Novel view synthesis is a long-standing problem that revolves around rendering frames of scenes from novel camera viewpoints. Volumetric approaches provide a solution for modeling occlusions through the explicit 3D representation of the camera frustum. Multi-plane Images (MPI) are volumetric methods that represent the scene using fronto-parallel planes at distinct depths, but they suffer from depth discretization, leading to a 2.5D scene representation. Another line of approach relies on implicit 3D scene representations. Neural Radiance Fields (NeRF) utilize neural networks to encapsulate the continuous 3D scene structure within the network weights, achieving photorealistic synthesis results; however, these methods are constrained to per-scene optimization settings, which are inefficient in practice. Multi-plane Neural Radiance Fields (MINE) open the door to combining implicit and explicit scene representations. MINE enables continuous 3D scene representations, especially in the depth dimension, while utilizing the input image features to avoid per-scene optimization. The main drawback of the current literature in this domain is the constraint to single-view input, which limits the synthesis ability to narrow viewpoint ranges. In this work, we thoroughly examine the performance, generalization, and efficiency of single-view multi-plane neural radiance fields. In addition, we propose a new multi-plane NeRF architecture that accepts multiple views to improve the synthesis results and expand the viewing range. Features from the input source frames are effectively fused through a proposed attention-aware fusion module to highlight important information from different viewpoints. Experiments show the effectiveness of attention-based fusion and the promising outcomes of our proposed method when compared to multi-view NeRF and MPI techniques.
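The attention-aware fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple dot-product attention scheme in which each source view's feature vector is scored against a query (e.g. one derived from the target viewpoint), the scores are softmax-normalized, and the per-view features are combined as a weighted sum. The function names `attention_fuse` and `softmax` are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(view_features, query):
    """Fuse per-view features with attention weights derived from a query.

    Hypothetical sketch of attention-aware multi-view fusion:
    view_features -- array of shape [V, D], one feature vector per source view
    query         -- array of shape [D], e.g. a target-view embedding
    Returns the fused feature [D] and the attention weights [V].
    """
    view_features = np.asarray(view_features, dtype=float)  # [V, D]
    query = np.asarray(query, dtype=float)                  # [D]
    scores = view_features @ query / np.sqrt(query.size)    # [V] scaled dot products
    weights = softmax(scores)                               # [V], sums to 1
    fused = weights @ view_features                         # [D] weighted sum of views
    return fused, weights
```

Views whose features align more closely with the query receive larger weights, so informative viewpoints dominate the fused representation instead of a uniform average.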


