Learning Generalizable Light Field Networks from Few Images

07/24/2022
by Qian Li, et al.

We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray directly to its target pixel's color. The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume, which is built from the input images by a 3D ConvNet. Our method achieves competitive performance on synthetic and real MVS data with respect to state-of-the-art neural radiance field based competitors, while rendering 100 times faster.
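To make the pipeline described above concrete, here is a minimal PyTorch-style sketch, not the authors' implementation: the feature volume shape, the 6-D ray parameterization, the layer sizes, and the mean pooling used in place of the paper's coarse volumetric rendering are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def coarse_ray_features(volume, origins, dirs, near=0.0, far=1.0, n_samples=16):
    """Sample the explicit 3D feature volume at a few points along each ray and
    pool them into one feature per ray. Mean pooling is a stand-in here; proper
    coarse volumetric rendering would weight samples by estimated opacity."""
    # volume: (1, C, D, H, W); origins, dirs: (N, 3) in the volume's [-1, 1] cube.
    t = torch.linspace(near, far, n_samples)                         # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # (N, S, 3)
    grid = pts.view(1, pts.shape[0], n_samples, 1, 3)                # grid_sample layout
    feats = F.grid_sample(volume, grid, align_corners=True)          # (1, C, N, S, 1)
    return feats[0, ..., 0].mean(dim=-1).t()                         # (N, C)

class LightFieldNet(nn.Module):
    """One MLP evaluation per ray: (ray, local feature) -> RGB."""
    def __init__(self, feat_dim=32, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )

    def forward(self, origins, dirs, ray_feats):
        rays = torch.cat([origins, dirs], dim=-1)  # simple 6-D ray encoding
        return self.mlp(torch.cat([rays, ray_feats], dim=-1))

# Toy usage: the random volume stands in for the 3D ConvNet's output.
volume = torch.randn(1, 32, 16, 16, 16)
origins = torch.zeros(1024, 3)
dirs = F.normalize(torch.randn(1024, 3), dim=-1)
rgb = LightFieldNet()(origins, dirs, coarse_ray_features(volume, origins, dirs))
print(rgb.shape)  # torch.Size([1024, 3])
```

The design point this sketch illustrates is the source of the claimed speedup: a light field network needs one MLP query per ray, whereas a NeRF-style renderer queries its network at many sample points along every ray.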


Related research

12/02/2021
Learning Neural Light Fields with Ray-Space Embedding Networks
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis r...

12/31/2021
InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering
We present an information-theoretic regularization technique for few-sho...

10/13/2017
Single-image Tomography: 3D Volumes from 2D X-Rays
As many different 3D volumes could produce the same 2D x-ray image, inve...

03/31/2022
R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis
Recent research explosion on Neural Radiance Field (NeRF) shows the enco...

12/20/2017
Light Field Segmentation From Super-pixel Graph Representation
Efficient and accurate segmentation of light field is an important task ...

05/28/2022
V4D: Voxel for 4D Novel View Synthesis
Neural radiance fields have made a remarkable breakthrough in the novel ...

07/30/2022
MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synt...
