Sparse3D: Distilling Multiview-Consistent Diffusion for Object Reconstruction from Sparse Views

08/27/2023
by Zi-Xin Zou, et al.

Reconstructing 3D objects from extremely sparse views is a long-standing and challenging problem. While recent techniques employ image diffusion models for generating plausible images at novel viewpoints or for distilling pre-trained diffusion priors into 3D representations using score distillation sampling (SDS), these methods often struggle to simultaneously achieve high-quality, consistent, and detailed results for both novel-view synthesis (NVS) and geometry. In this work, we present Sparse3D, a novel 3D reconstruction method tailored for sparse-view inputs. Our approach distills robust priors from a multiview-consistent diffusion model to refine a neural radiance field. Specifically, we employ a controller that harnesses epipolar features from input views, guiding a pre-trained diffusion model, such as Stable Diffusion, to produce novel-view images that maintain 3D consistency with the input. By tapping into 2D priors from powerful image diffusion models, our integrated model consistently delivers high-quality results, even when faced with open-world objects. To address the blurriness introduced by conventional SDS, we introduce category-score distillation sampling (C-SDS) to enhance detail. We conduct experiments on CO3DV2, a multi-view dataset of real-world objects. Both quantitative and qualitative evaluations demonstrate that our approach outperforms previous state-of-the-art methods on metrics for both NVS and geometry reconstruction.
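As background for the C-SDS contribution, the sketch below illustrates the conventional SDS update that the abstract identifies as a source of blurriness. This is a minimal PyTorch-style illustration under assumed interfaces: `unet` (the frozen diffusion prior), `alphas_cumprod` (the noise schedule), and `cond` (conditioning, e.g. the output of the epipolar-feature controller) are hypothetical stand-ins, not the authors' code. The abstract does not specify C-SDS's exact form, so only vanilla SDS is shown.

```python
# Minimal sketch of conventional score distillation sampling (SDS).
# All interfaces here (unet, alphas_cumprod, cond) are illustrative
# assumptions, not the Sparse3D implementation.
import torch

def sds_loss(unet, alphas_cumprod, rendered, cond, t):
    """Perturb a NeRF rendering with noise at timestep t, let the frozen
    diffusion model predict that noise, and return a surrogate loss whose
    gradient w.r.t. `rendered` is w(t) * (eps_pred - eps)."""
    eps = torch.randn_like(rendered)                    # injected noise
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)           # cumulative alpha_t
    noisy = a_t.sqrt() * rendered + (1.0 - a_t).sqrt() * eps
    with torch.no_grad():                               # prior stays frozen
        eps_pred = unet(noisy, t, cond)
    w = 1.0 - a_t                                       # a common weighting
    grad = (w * (eps_pred - eps)).detach()
    # Surrogate: the gradient of (grad * rendered).sum() w.r.t. `rendered`
    # equals `grad`, so backprop pushes the rendering toward the prior.
    return (grad * rendered).sum()
```

Averaging this noise-residual gradient over random timesteps is one commonly cited reason SDS outputs look blurry; per the abstract, C-SDS modifies the distillation term to recover detail, with its exact formulation given in the full paper.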


Related research

DreamSparse: Escaping from Plato's Cave with 2D Frozen Diffusion Model Given Sparse Views (06/06/2023)
Synthesizing novel view images from a few views is a challenging but pra...

Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion (04/20/2023)
We present Farm3D, a method to learn category-specific 3D reconstructors...

MVDream: Multi-view Diffusion for 3D Generation (08/31/2023)
We propose MVDream, a multi-view diffusion model that is able to generat...

Deceptive-NeRF: Enhancing NeRF Reconstruction using Pseudo-Observations from Diffusion Models (05/24/2023)
This paper introduces Deceptive-NeRF, a new method for enhancing the qua...

NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors (06/10/2022)
Though Neural Radiance Field (NeRF) demonstrates compelling novel view s...

ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion Models (06/29/2023)
Given sparse views of an object, estimating their camera poses is a long...

HoloFusion: Towards Photo-realistic 3D Generative Modeling (08/28/2023)
Diffusion-based image generators can now produce high-quality and divers...
