One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization

by Minghua Liu, et al.

Single-image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world. Many existing methods solve this problem by optimizing a neural radiance field under the guidance of 2D diffusion models, but they suffer from lengthy optimization time, 3D-inconsistent results, and poor geometry. In this work, we propose a novel method that takes a single image of any object as input and generates a full 360-degree 3D textured mesh in a single feed-forward pass. Given a single image, we first use a view-conditioned 2D diffusion model, Zero123, to generate multi-view images of the input object, and then aim to lift them to 3D space. Since traditional reconstruction methods struggle with inconsistent multi-view predictions, we build our 3D reconstruction module upon an SDF-based generalizable neural surface reconstruction method and propose several critical training strategies to enable the reconstruction of 360-degree meshes. Without costly optimization, our method reconstructs 3D shapes in significantly less time than existing methods. Moreover, it produces better geometry, generates more 3D-consistent results, and adheres more closely to the input image. We evaluate our approach on both synthetic data and in-the-wild images and demonstrate its superiority in terms of both mesh quality and runtime. In addition, our approach can seamlessly support the text-to-3D task by integrating with off-the-shelf text-to-image diffusion models.
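The two-stage pipeline described above can be sketched as follows. This is a minimal, hypothetical outline only: `zero123_predict` and `sparse_view_reconstruct` are stand-in stubs for the Zero123 diffusion model and the SDF-based reconstruction module (neither is a real API), and the uniform azimuth sampling is an illustrative choice, not the paper's actual camera schedule.

```python
# Hypothetical sketch of the single-feed-forward pipeline from the abstract:
# (1) synthesize multi-view images with a view-conditioned diffusion model,
# (2) lift them to a 360-degree textured mesh with a generalizable
#     SDF-based reconstruction module. All model calls are stubbed.

def zero123_predict(image, delta_azimuth_deg, delta_elevation_deg):
    """Stub for the view-conditioned 2D diffusion model (Zero123): given
    the input image and a relative camera pose, it would synthesize the
    object as seen from that novel viewpoint."""
    return {"pose": (delta_azimuth_deg, delta_elevation_deg), "pixels": image}

def sparse_view_reconstruct(views):
    """Stub for the SDF-based generalizable neural surface reconstruction
    module that fuses the (possibly inconsistent) multi-view predictions
    into a single textured mesh in one feed-forward pass."""
    return {"mesh": "textured_mesh_placeholder", "n_source_views": len(views)}

def one_image_to_mesh(image, n_views=8):
    # Illustrative viewpoint sampling: n_views poses spaced uniformly in
    # azimuth at the input elevation (the paper's sampling may differ).
    deltas = [(360.0 * i / n_views, 0.0) for i in range(n_views)]
    views = [zero123_predict(image, az, el) for az, el in deltas]
    return sparse_view_reconstruct(views)

result = one_image_to_mesh("input.png")
print(result["n_source_views"])  # 8
```

The key design point the abstract emphasizes is that reconstruction is a single feed-forward pass over the synthesized views, with no per-shape optimization loop, which is where the runtime advantage comes from.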


Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data

We present Viewset Diffusion: a framework for training image-conditioned...

PC^2: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction

Reconstructing the 3D shape of an object from a single RGB image is a lo...

PlaneFormers: From Sparse View Planes to 3D Reconstruction

We present an approach for the planar surface reconstruction of a scene ...

Learning to Generate 3D Representations of Building Roofs Using Single-View Aerial Imagery

We present a novel pipeline for learning the conditional distribution of...

Multi-view 3D Object Reconstruction and Uncertainty Modelling with Neural Shape Prior

3D object reconstruction is important for semantic scene understanding. ...

Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models

We address the challenge of recovering an underlying scene geometry and ...

3D-LatentMapper: View Agnostic Single-View Reconstruction of 3D Shapes

Computer graphics, 3D computer vision and robotics communities have prod...