Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation

03/24/2023
by   Rui Chen, et al.

Automatic 3D content creation has progressed rapidly of late, driven by the availability of pre-trained large language models and image diffusion models, giving rise to the emerging topic of text-to-3D content creation. Existing text-to-3D methods commonly use implicit scene representations, which couple geometry and appearance via volume rendering and are suboptimal for recovering fine geometry and achieving photorealistic rendering; consequently, they are less effective at generating high-quality 3D assets. In this work, we propose Fantasia3D, a new method for high-quality text-to-3D content creation. Key to Fantasia3D is the disentangled modeling and learning of geometry and appearance. For geometry learning, we rely on a hybrid scene representation and propose to encode surface normals extracted from the representation as the input to the image diffusion model. For appearance modeling, we introduce the spatially varying bidirectional reflectance distribution function (BRDF) into the text-to-3D task and learn the surface material for photorealistic rendering of the generated surface. Our disentangled framework is more compatible with popular graphics engines, supporting relighting, editing, and physical simulation of the generated 3D assets. We conduct thorough experiments that show the advantages of our method over existing ones under different text-to-3D task settings. Project page and source code: https://fantasia3d.github.io/.
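The disentangled, two-stage training described above can be sketched in miniature: first optimize geometry parameters alone, then freeze geometry and optimize appearance parameters. The sketch below is a hypothetical toy illustration only; the losses are placeholder quadratics standing in for the actual score-distillation objectives on normal renderings (stage 1) and BRDF-shaded renderings (stage 2), and all names are invented for this example.

```python
# Toy sketch of disentangled geometry/appearance optimization
# (placeholder quadratic losses, NOT Fantasia3D's actual SDS objectives).

def optimize(params, loss_grad, steps=200, lr=0.1):
    """Plain gradient descent on a list of scalar parameters."""
    for _ in range(steps):
        grads = loss_grad(params)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# Stage 1: geometry only (e.g., parameters of a hybrid surface
# representation), supervised via normal maps fed to a diffusion prior.
# Here the stand-in loss pulls every parameter toward 1.0.
geometry = optimize([5.0, -3.0], lambda ps: [2 * (p - 1.0) for p in ps])

# Stage 2: appearance only (e.g., spatially varying BRDF parameters
# such as albedo/roughness/metallic), with geometry held fixed.
# The stand-in loss pulls every parameter toward 0.5.
appearance = optimize([0.0, 0.0], lambda ps: [2 * (p - 0.5) for p in ps])

print(geometry, appearance)
```

Because each stage touches a disjoint parameter set, either representation can later be swapped or edited independently, which is what makes the framework compatible with standard graphics pipelines (relighting, material editing).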


