Persistent Nature: A Generative Model of Unbounded 3D Worlds

03/23/2023
by Lucy Chai et al.

Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed extent with limited camera motion. We investigate the task of unconditionally synthesizing unbounded nature scenes, enabling arbitrarily large camera motion while maintaining a persistent 3D world model. Our scene representation consists of an extendable, planar scene layout grid, which can be rendered from arbitrary camera poses via a 3D decoder and volume rendering, together with a panoramic skydome. Based on this representation, we learn a generative world model solely from single-view internet photos. Our method enables simulating long flights through 3D landscapes while maintaining global scene consistency: for instance, returning to the starting point yields the same view of the scene. Our approach enables scene extrapolation beyond the fixed bounds of current 3D generative models, while also supporting a persistent, camera-independent world representation, in contrast to auto-regressive 3D prediction models. Project page: https://chail.github.io/persistent-nature/.
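The abstract describes rendering the planar scene layout via volume rendering and compositing the result with a panoramic skydome. A minimal sketch of that compositing step is below; the function name `volume_render_ray` and its inputs are hypothetical illustrations (the paper's actual decoder and skydome model are learned networks), but the alpha-compositing math is the standard volume rendering formulation the abstract refers to:

```python
import numpy as np

def volume_render_ray(densities, colors, deltas, sky_color):
    """Alpha-composite samples along one ray through the scene volume,
    then blend the leftover transmittance with a skydome color.

    densities: (N,) non-negative density at each sample along the ray
    colors:    (N, 3) RGB emitted at each sample
    deltas:    (N,) spacing between consecutive samples
    sky_color: (3,) RGB looked up from the panoramic skydome for this ray
    """
    # per-sample opacity from density and step size
    alphas = 1.0 - np.exp(-densities * deltas)
    # transmittance reaching each sample: product of (1 - alpha) of all earlier samples
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    # light not absorbed by the scene volume is attributed to the skydome
    t_remaining = trans[-1] * (1.0 - alphas[-1])
    return rgb + t_remaining * np.asarray(sky_color)
```

With zero density everywhere, the ray returns the skydome color unchanged; with an opaque first sample, it returns that sample's color. This split is what lets the sky stay consistent while the layout grid is extended under camera motion.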


Related research:

- DiffDreamer: Consistent Single-view Perpetual View Generation with Conditional Diffusion Models (11/22/2022)
- CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields (03/31/2021)
- SceneScape: Text-Driven Consistent Scene Generation (02/02/2023)
- GAUDI: A Neural Architect for Immersive 3D Scene Generation (07/27/2022)
- DeepVoxels: Learning Persistent 3D Feature Embeddings (12/03/2018)
- Long-Term Photometric Consistent Novel View Synthesis with Diffusion Models (04/21/2023)
- SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene (11/30/2022)
