Scale-Consistent Fusion: from Heterogeneous Local Sampling to Global Immersive Rendering

06/17/2021
by   Wenpeng Xing, et al.

Image-based geometric modeling and novel view synthesis from sparse, large-baseline samples are challenging but important tasks for emerging multimedia applications such as virtual reality and immersive telepresence. Existing methods fail to produce satisfactory results because reliable depth information cannot be inferred under such challenging reference conditions. With the popularization of commercial light field (LF) cameras, capturing LF images (LFIs) has become as convenient as taking regular photos, and geometry information can be reliably inferred from them. This inspires us to use a sparse set of LF captures to render high-quality novel views globally. However, fusing LF captures taken from multiple angles is challenging due to the scale inconsistency caused by their varied capture settings. To overcome this challenge, we propose a novel scale-consistent volume rescaling algorithm that robustly aligns the disparity probability volumes (DPVs) of different captures for scale-consistent global geometry fusion. Based on the fused DPV projected into the target camera frustum, we propose novel learning-based modules (i.e., an attention-guided multi-scale residual fusion module and a disparity field guided deep re-regularization module) that comprehensively regularize noisy observations from heterogeneous captures for high-quality rendering of novel LFIs. Both quantitative and qualitative experiments on the Stanford Lytro Multi-view LF dataset show that the proposed method significantly outperforms state-of-the-art methods under different experimental settings for disparity inference and LF synthesis.
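The abstract does not spell out the rescaling algorithm itself. As a purely illustrative sketch (the function name, parameters, and linear-interpolation strategy are my own assumptions, not the paper's method), resampling a per-pixel disparity probability volume onto a common disparity range before fusion could look like:

```python
import numpy as np

def rescale_dpv(dpv, scale, d_min, d_max, ref_d_min, ref_d_max, num_planes):
    """Resample a disparity probability volume (DPV) onto a reference
    disparity range so volumes from heterogeneous captures can be fused.

    dpv                : (D, H, W) per-pixel probabilities over D disparity planes
    scale              : factor mapping this capture's disparities into the
                         reference capture's disparity units
    d_min, d_max       : disparity range spanned by this capture's planes
    ref_d_min, ref_d_max : disparity range of the common (reference) volume
    num_planes         : number of planes in the common volume
    """
    D, H, W = dpv.shape
    # Plane disparities of this capture, expressed in reference units.
    src_disp = np.linspace(d_min, d_max, D) * scale
    # Plane disparities of the common volume.
    tgt_disp = np.linspace(ref_d_min, ref_d_max, num_planes)
    # Linearly interpolate the probability profile along the disparity
    # axis, independently for every pixel.
    out = np.empty((num_planes, H, W), dtype=dpv.dtype)
    for y in range(H):
        for x in range(W):
            out[:, y, x] = np.interp(tgt_disp, src_disp, dpv[:, y, x])
    # Renormalize so each pixel's probabilities sum to one again.
    out /= out.sum(axis=0, keepdims=True) + 1e-8
    return out
```

Volumes rescaled this way share a plane-to-disparity mapping, so per-plane probabilities from different captures can be combined (e.g., averaged) before projection into the target frustum.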


