Diffusion-SDF: Text-to-Shape via Voxelized Diffusion

12/06/2022
by Muheng Li, et al.

With the rising industrial attention to 3D virtual modeling technology, generating novel 3D content based on specified conditions (e.g., text) has become a topic of growing interest. In this paper, we propose a new generative 3D modeling framework called Diffusion-SDF for the challenging task of text-to-shape synthesis. Previous approaches lack flexibility in both 3D data representation and shape generation, and thereby fail to generate highly diversified 3D shapes that conform to the given text descriptions. To address this, we propose an SDF autoencoder together with a Voxelized Diffusion model to learn and generate representations for voxelized signed distance fields (SDFs) of 3D shapes. Specifically, we design a novel UinU-Net architecture that implants a local-focused inner network inside the standard U-Net architecture, enabling better reconstruction of patch-independent SDF representations. We further extend our approach to related text-to-shape tasks, including text-conditioned shape completion and manipulation. Experimental results show that Diffusion-SDF generates high-quality and highly diversified 3D shapes that conform well to the given text descriptions, and that it outperforms previous state-of-the-art text-to-shape approaches.
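To make the architectural idea concrete, below is a minimal PyTorch sketch of a UinU-Net-style network: an outer 3D U-Net whose bottleneck features are refined by a small inner network that processes each voxel patch independently, so no information is exchanged across patches. All names, layer widths, and the patch size (UinUNet3D, PatchInnerNet, patch_size=2, etc.) are illustrative assumptions, not the paper's actual implementation, which additionally involves the SDF autoencoder, text conditioning, and the diffusion training loop omitted here.

# Hypothetical sketch of a "UinU-Net"-style network for a voxelized SDF grid:
# an outer 3D U-Net with a patch-independent inner network at the bottleneck.
# Names and sizes are illustrative only, not the paper's implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with GroupNorm and SiLU, a standard U-Net building block.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.GroupNorm(8, out_ch),
        nn.SiLU(),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.GroupNorm(8, out_ch),
        nn.SiLU(),
    )


class PatchInnerNet(nn.Module):
    # Local-focused inner network: a small MLP applied to each non-overlapping
    # voxel patch independently, so patches are processed without cross-talk.
    def __init__(self, channels, patch_size=2):
        super().__init__()
        self.p = patch_size
        dim = channels * patch_size ** 3
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x):
        b, c, d, h, w = x.shape
        p = self.p
        # Fold the grid into independent p^3 patches, refine each patch, unfold back.
        x = x.reshape(b, c, d // p, p, h // p, p, w // p, p)
        x = x.permute(0, 2, 4, 6, 1, 3, 5, 7).reshape(b, -1, c * p ** 3)
        x = x + self.mlp(x)                      # residual, patch-wise refinement
        x = x.reshape(b, d // p, h // p, w // p, c, p, p, p)
        x = x.permute(0, 4, 1, 5, 2, 6, 3, 7).reshape(b, c, d, h, w)
        return x


class UinUNet3D(nn.Module):
    # Outer 3D U-Net with the inner patch network inserted at the bottleneck.
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.inner = PatchInnerNet(base * 2, patch_size=2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv3d(base, in_ch, 1)

    def forward(self, x):
        s1 = self.enc1(x)                        # full-resolution outer features
        b = self.enc2(self.pool(s1))             # downsampled bottleneck features
        b = self.inner(b)                        # patch-independent refinement
        u = self.up(b)
        return self.out(self.dec1(torch.cat([u, s1], dim=1)))


if __name__ == "__main__":
    net = UinUNet3D()
    sdf_grid = torch.randn(2, 1, 32, 32, 32)     # toy voxelized SDF grid
    print(net(sdf_grid).shape)                   # torch.Size([2, 1, 32, 32, 32])

In the full method, a network of this kind would act as the denoiser in the diffusion process over the voxelized SDF representation, with text conditioning injected into the outer U-Net; those components are beyond the scope of this sketch.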

