Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models

12/12/2022
by Jiabao Lei, et al.

We address the challenge of recovering underlying scene geometry and colors from a sparse set of RGBD view observations. In this work, we present a new solution that sequentially generates novel RGBD views along a camera trajectory; the scene geometry is simply the fusion of these views. More specifically, we maintain an intermediate surface mesh used to render partial RGBD views, which are subsequently completed by an inpainting network; each completed RGBD view is then back-projected as a partial surface and merged into the intermediate mesh. The use of an intermediate mesh and camera projection helps address the persistent problem of multi-view inconsistency. In practice, we implement the RGBD inpainting network with an RGBD diffusion model originally used for 2D generative modeling, and we modify its reverse diffusion process to enable our use. We evaluate our approach on the task of 3D scene synthesis from sparse RGBD inputs; extensive experiments on the ScanNet dataset demonstrate the superiority of our approach over existing methods. Project page: https://jblei.site/project-pages/rgbd-diffusion.html
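The abstract describes an incremental render-inpaint-fuse loop, with the inpainting performed by a diffusion model whose reverse process is conditioned on the rendered pixels. Below is a minimal sketch of that loop, not the paper's implementation: the helpers render_rgbd and fuse are hypothetical, the scheduler interface loosely follows DDPM-style samplers, and the masked-replacement conditioning stands in for the paper's modified reverse diffusion process (it is one common way to repurpose an unconditional diffusion model for inpainting).

```python
import torch

def inpaint_rgbd(model, scheduler, known, mask):
    """Fill the unobserved pixels of a partial RGBD view with a diffusion model.

    known : (B, 4, H, W) rendered RGBD view (RGB + depth), zeros where unobserved
    mask  : (B, 1, H, W) 1 where the rendering is valid, 0 elsewhere
    model : hypothetical noise-prediction network, eps = model(x_t, t)
    scheduler : hypothetical DDPM-style sampler with .timesteps, .add_noise(), .step()
    """
    x = torch.randn_like(known)  # start the reverse process from pure noise
    for t in scheduler.timesteps:
        # Condition on the observed pixels: overwrite them with a noised copy of
        # the rendering at the current noise level, so only the masked-out regions
        # are actually generated (a replacement trick used here as a stand-in for
        # the paper's modified reverse diffusion process).
        noised_known = scheduler.add_noise(known, torch.randn_like(known), t)
        x = mask * noised_known + (1 - mask) * x

        eps = model(x, t)              # predict noise on the full RGBD image
        x = scheduler.step(eps, t, x)  # one reverse diffusion step
    # Paste the exact observations back into the final sample.
    return mask * known + (1 - mask) * x


def synthesize_scene(model, scheduler, mesh, trajectory, render_rgbd, fuse):
    """Incremental view inpainting along a camera trajectory (sketch).

    render_rgbd(mesh, cam) -> (rgbd, mask): rasterize the current mesh into a
        partial RGBD view plus a validity mask (hypothetical helper).
    fuse(mesh, rgbd, cam) -> mesh: back-project the completed view as a partial
        surface and merge it into the mesh (hypothetical helper).
    """
    for cam in trajectory:
        partial, mask = render_rgbd(mesh, cam)            # render partial RGBD view
        completed = inpaint_rgbd(model, scheduler, partial, mask)
        mesh = fuse(mesh, completed, cam)                  # grow the intermediate mesh
    return mesh                                            # fused scene geometry
```

Keeping the mesh as shared state and rendering from it before each inpainting step is what ties the generated views together: each new view only needs to fill in the pixels the current mesh cannot explain, which is how the abstract's multi-view consistency argument works.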


Related research

09/07/2023
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
In this paper, we present a novel diffusion model called SyncDreamer that generates ...

06/06/2023
DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views
Synthesizing novel view images from a few views is a challenging but pra...

04/19/2023
Reference-guided Controllable Inpainting of Neural Radiance Fields
The popularity of Neural Radiance Fields (NeRFs) for view synthesis has ...

06/29/2023
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Single image 3D reconstruction is an important but challenging task that...

11/22/2022
DiffDreamer: Consistent Single-view Perpetual View Generation with Conditional Diffusion Models
Perpetual view generation – the task of generating long-range novel view...

04/25/2018
Fast View Synthesis with Deep Stereo Vision
Novel view synthesis is an important problem in computer vision and grap...

03/05/2023
SePaint: Semantic Map Inpainting via Multinomial Diffusion
Prediction beyond partial observations is crucial for robots to navigate...
