DiT-3D: Exploring Plain Diffusion Transformers for 3D Shape Generation

07/04/2023
by Shentong Mo, et al.

Recent Diffusion Transformers (e.g., DiT) have demonstrated their effectiveness in generating high-quality 2D images. However, it remains unclear whether the Transformer architecture performs equally well in 3D shape generation, as previous 3D diffusion methods have mostly adopted the U-Net architecture. To bridge this gap, we propose a novel Diffusion Transformer for 3D shape generation, namely DiT-3D, which performs the denoising process directly on voxelized point clouds using plain Transformers. Compared to existing U-Net approaches, DiT-3D is more scalable in model size and produces much higher-quality generations. Specifically, DiT-3D adopts the design philosophy of DiT but modifies it by incorporating 3D positional and patch embeddings to adaptively aggregate input from voxelized point clouds. Because the additional voxel dimension substantially increases the 3D token length, making self-attention expensive, we incorporate 3D window attention into the Transformer blocks to reduce the computational cost. Finally, linear and devoxelization layers are used to predict the denoised point clouds. In addition, our Transformer architecture supports efficient fine-tuning from 2D to 3D, where a DiT-2D checkpoint pre-trained on ImageNet can significantly improve DiT-3D on ShapeNet. Experimental results on the ShapeNet dataset demonstrate that the proposed DiT-3D achieves state-of-the-art performance in high-fidelity and diverse 3D point cloud generation. In particular, our DiT-3D decreases the 1-Nearest Neighbor Accuracy of the state-of-the-art method by 4.59 and increases the Coverage metric by 3.51 when evaluated on Chamfer Distance.
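
To make the two architectural changes concrete, the sketch below shows how a DiT-style model might patchify a voxel grid with a 3D convolution and restrict self-attention to local 3D windows. This is an illustrative PyTorch sketch based on the abstract, not the authors' implementation; the module names, sizes, and hyperparameters (e.g., VoxelPatchEmbed3D, WindowAttention3D, a 32^3 grid with patch size 4 and window size 2) are assumptions chosen for clarity.

```python
# Illustrative sketch (not the authors' code): patchifying a voxelized point
# cloud into tokens and applying windowed self-attention in 3D.
# All names, sizes, and hyperparameters are assumptions for clarity.
import torch
import torch.nn as nn


class VoxelPatchEmbed3D(nn.Module):
    """Split a voxel grid (B, C, D, H, W) into non-overlapping p^3 patches
    and project each patch to an embedding, mirroring DiT's 2D patchify."""

    def __init__(self, in_channels=1, embed_dim=384, patch_size=4):
        super().__init__()
        self.proj = nn.Conv3d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, voxels):
        x = self.proj(voxels)                 # (B, E, D/p, H/p, W/p)
        return x.flatten(2).transpose(1, 2)   # (B, N, E) token sequence


class WindowAttention3D(nn.Module):
    """Self-attention restricted to local 3D windows, limiting the cost of
    attention as the voxel token count grows with the extra dimension."""

    def __init__(self, embed_dim=384, num_heads=6, window=2):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, tokens, grid_size):
        B, N, E = tokens.shape
        d = h = w = grid_size                 # tokens per axis (cubic grid)
        s = self.window
        x = tokens.view(B, d, h, w, E)
        # Gather s^3 windows: (B * num_windows, s^3, E)
        x = x.view(B, d // s, s, h // s, s, w // s, s, E)
        x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, s * s * s, E)
        x, _ = self.attn(x, x, x)             # attention within each window
        # Scatter windows back to the full token sequence
        x = x.view(B, d // s, h // s, w // s, s, s, s, E)
        x = x.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, N, E)
        return x


if __name__ == "__main__":
    voxels = torch.randn(2, 1, 32, 32, 32)    # a batch of voxelized point clouds
    tokens = VoxelPatchEmbed3D()(voxels)      # (2, 512, 384) with patch_size=4
    out = WindowAttention3D()(tokens, grid_size=8)
    print(tokens.shape, out.shape)
```

With a 32^3 grid and patch size 4, this example yields 512 tokens; restricting each attention call to 2^3-token windows keeps the per-block cost roughly linear in the token count rather than quadratic, which is the motivation the abstract gives for 3D window attention.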
