Surface Light Field Compression using a Point Cloud Codec

05/29/2018
by   Xiang Zhang, et al.

Light field (LF) representations aim to provide photo-realistic, free-viewpoint viewing experiences. However, the most popular LF representations are images from multiple views. Multi-view image-based representations generally need to restrict the range or degrees of freedom of the viewing experience to what can be interpolated in the image domain, essentially because they lack explicit geometry information.

We present a new surface light field (SLF) representation based on explicit geometry, and a method for SLF compression. First, we map the multi-view images of a scene onto a 3D geometric point cloud. The color of each point in the point cloud is a function of viewing direction known as a view map. We represent each view map efficiently in a B-Spline wavelet basis. This representation is capable of modeling diverse surface materials and complex lighting conditions in a highly scalable and adaptive manner. The coefficients of the B-Spline wavelet representation are then compressed spatially. To increase the spatial correlation and thus improve compression efficiency, we introduce a smoothing term to make the coefficients more similar across the 3D space. We compress the coefficients spatially using existing point cloud compression (PCC) methods. On the decoder side, the scene is rendered efficiently from any viewing direction by reconstructing the view map at each point.

In contrast to multi-view image-based LF approaches, our method supports photo-realistic rendering of real-world scenes from arbitrary viewpoints, i.e., with unlimited six degrees of freedom (6DOF). In terms of rate and distortion, experimental results show that our method achieves superior performance with lower decoder complexity than a reference image-plus-geometry compression (IGC) scheme, indicating its potential for practical virtual and augmented reality applications.
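To make the view-map idea concrete, here is a minimal sketch of the analyze–threshold–reconstruct cycle the abstract describes, under simplifying assumptions: a single surface point, a hypothetical 1-D viewing-angle parameterization (a real view map is 2-D over the hemisphere of directions), and a one-level Haar transform, which is the degree-0 member of the B-Spline wavelet family; the paper's actual basis, sampling, and codec differ.

```python
import math

def haar_analysis(samples):
    # One level of the Haar wavelet transform (degree-0 B-Spline wavelet):
    # split the sampled view map into coarse averages and detail differences.
    avg = [(samples[2 * i] + samples[2 * i + 1]) / 2 for i in range(len(samples) // 2)]
    det = [(samples[2 * i] - samples[2 * i + 1]) / 2 for i in range(len(samples) // 2)]
    return avg, det

def haar_synthesis(avg, det):
    # Inverse transform: perfect reconstruction when no coefficient is dropped.
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def threshold(coeffs, eps):
    # Zero out small detail coefficients -> a sparse, compressible view map.
    return [c if abs(c) >= eps else 0.0 for c in coeffs]

# Hypothetical view map for one point: a diffuse base plus a specular lobe,
# sampled at 16 viewing angles.
angles = [i * math.pi / 16 for i in range(16)]
view_map = [0.6 + 0.4 * math.cos(t) ** 8 for t in angles]

avg, det = haar_analysis(view_map)
det_sparse = threshold(det, 0.05)          # crude stand-in for quantization
recon = haar_synthesis(avg, det_sparse)

kept = sum(1 for c in det_sparse if c != 0.0)
max_err = max(abs(a - b) for a, b in zip(view_map, recon))
print(f"kept {kept}/{len(det)} detail coefficients, max error {max_err:.3f}")
```

Dropping a detail coefficient of magnitude below `eps` perturbs each of the two samples it covers by at most `eps`, so the reconstruction error is bounded by the threshold; smooth (diffuse-like) regions of the view map yield near-zero details and compress away, while the specular lobe retains its coefficients.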


