3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation

12/02/2022
by Zutao Jiang, et al.

Text-guided 3D object generation aims to generate 3D objects described by user-defined captions, which provides a flexible way to visualize what we imagine. Although several works have tackled this challenging task, they either rely on explicit 3D representations (e.g., meshes), which lack texture and require post-processing to render photo-realistic views, or require time-consuming optimization for every single case. Here, we make the first attempt to achieve generic text-guided cross-category 3D object generation via a new 3D-TOGO model, which integrates a text-to-views generation module and a views-to-3D generation module. The text-to-views generation module is designed to generate different views of the target 3D object given an input caption. Prior-guidance, caption-guidance, and view contrastive learning are proposed to achieve better view consistency and caption similarity. Meanwhile, a pixelNeRF model is adopted in the views-to-3D generation module to obtain an implicit 3D neural representation from the previously generated views. Our 3D-TOGO model generates 3D objects in the form of neural radiance fields with good texture and requires no time-consuming optimization for each new caption. Moreover, 3D-TOGO can control the category, color, and shape of the generated 3D objects through the input caption. Extensive experiments on the largest 3D object dataset (i.e., ABO) verify that 3D-TOGO generates higher-quality 3D objects than text-NeRF and Dreamfields across 98 different categories, as measured by PSNR, SSIM, LPIPS, and CLIP-score.
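As a rough illustration of the two-stage design described above, the sketch below wires a text-to-views stage into a views-to-3D stage. All module names, shapes, and internals here are placeholder assumptions for illustration only; the actual 3D-TOGO architecture, its prior-/caption-guidance, and the pixelNeRF details are given in the full paper.

```python
# A minimal sketch of the two-stage 3D-TOGO pipeline described above.
# Everything here (module names, shapes, the toy decoder) is an assumed
# stand-in for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn

class TextToViews(nn.Module):
    """Stage 1 stand-in: maps a caption embedding to K candidate views.

    The real module would condition on camera pose and apply the
    prior-guidance, caption-guidance, and view contrastive learning
    mentioned in the abstract; a single linear decoder is used here
    purely to fix the interface.
    """
    def __init__(self, text_dim: int = 512, num_views: int = 8, img_size: int = 32):
        super().__init__()
        self.num_views, self.img_size = num_views, img_size
        self.decoder = nn.Linear(text_dim, num_views * 3 * img_size * img_size)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # (B, text_dim) -> (B, K, 3, H, W)
        b = text_emb.shape[0]
        return self.decoder(text_emb).view(
            b, self.num_views, 3, self.img_size, self.img_size)

class ViewsTo3D(nn.Module):
    """Stage 2 stand-in for pixelNeRF: views -> implicit radiance field.

    A real pixelNeRF projects each 3D query point into the source views,
    gathers image features, and decodes color and density with an MLP;
    the placeholder below only fixes the input/output shapes.
    """
    def forward(self, views: torch.Tensor, points: torch.Tensor):
        # points: (B, N, 3) -> placeholder color (B, N, 3), density (B, N, 1)
        rgb = torch.sigmoid(points)
        sigma = torch.relu(points.sum(dim=-1, keepdim=True))
        return rgb, sigma

# Usage: caption embedding (e.g., from a CLIP text encoder) -> views -> field.
text_emb = torch.randn(1, 512)            # assumed caption embedding
views = TextToViews()(text_emb)           # (1, 8, 3, 32, 32)
points = torch.rand(1, 1024, 3)           # sampled 3D query points
rgb, sigma = ViewsTo3D()(views, points)
print(views.shape, rgb.shape, sigma.shape)
```

The property the sketch preserves is the one the abstract emphasizes: once both stages are trained, a new caption yields a radiance field in a single feed-forward pass, with no per-caption optimization.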


