3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective

04/27/2022
by   Zhedong Zheng, et al.

This research aims to study a self-supervised 3D clothing reconstruction method that recovers the geometric shape and texture of human clothing from a single 2D image. Compared with existing methods, we observe three primary remaining challenges: (1) conventional template-based methods are limited in modeling non-rigid clothing objects, e.g., handbags and dresses, which are common in fashion images; (2) 3D ground-truth meshes of clothing are usually inaccessible due to annotation difficulty and time cost; (3) it remains challenging to simultaneously optimize four reconstruction factors, i.e., camera viewpoint, shape, texture, and illumination. The inherent ambiguity compromises model training, such as the dilemma between a large shape with a remote camera and a small shape with a close camera. To address these limitations, we propose a causality-aware self-supervised learning method that adaptively reconstructs 3D non-rigid objects from 2D images without 3D annotations. In particular, to resolve the inherent ambiguity among the four implicit variables, i.e., camera position, shape, texture, and illumination, we study existing works and introduce an explainable structural causal map (SCM) to build our model. The proposed model structure follows the spirit of the causal map, explicitly considering the prior template in camera estimation and shape prediction. During optimization, a causal intervention tool, i.e., two expectation-maximization loops, is deeply embedded in our algorithm to (1) disentangle the four encoders and (2) help update the prior template. Extensive experiments on two 2D fashion benchmarks, i.e., ATR and Market-HQ, show that the proposed method yields high-fidelity 3D reconstructions. Furthermore, we verify the scalability of the proposed method on a fine-grained bird dataset, i.e., CUB.
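The shape/camera dilemma mentioned above, and the role of the two expectation-maximization loops, can be illustrated with a heavily simplified toy problem. Each observation could equally be explained by a large shape seen from far away or a small shape seen up close; alternating EM-style updates plus a normalization constraint on the shared template break the ambiguity. This is only an illustrative sketch under assumed toy data, not the authors' implementation, and all variable names are hypothetical.

```python
import numpy as np

# Toy analogue of the shape/camera ambiguity: each observation is a
# per-sample scale ("camera distance") times a shared template shape,
# so scale and shape are only identifiable up to a global factor.
rng = np.random.default_rng(0)
true_shape = np.array([1.0, 2.0, 3.0, 4.0])       # shared template (ground truth)
true_scales = rng.uniform(0.5, 2.0, size=10)       # per-image "camera" factor
obs = true_scales[:, None] * true_shape            # observed data
obs += rng.normal(scale=0.01, size=obs.shape)      # small observation noise

shape = np.ones(4)                                 # initial prior template
for _ in range(50):
    # E-step (inner loop): per-sample scale given the current template,
    # by least squares on each row of the observations.
    scales = obs @ shape / (shape @ shape)
    # M-step (outer loop): update the shared template given the scales.
    shape = (scales[:, None] * obs).sum(axis=0) / (scales ** 2).sum()
    # Normalization constraint that resolves the scale/shape ambiguity:
    # without it, (c * scales, shape / c) fits equally well for any c.
    shape /= np.linalg.norm(shape)
```

After convergence, `shape` matches the true template up to the fixed global scale, i.e., the element-wise ratios agree with `true_shape`. The paper's actual EM loops operate on four entangled factors (camera, shape, texture, illumination) with a learned prior template, but the alternating estimate-then-update structure is the same idea.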

Related research:

03/13/2020
Self-supervised Single-view 3D Reconstruction via Semantic Consistency
We learn a self-supervised, single-view 3D reconstruction model that pre...

11/16/2021
Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction from a Single Image
Reconstructing high-fidelity 3D facial texture from a single image is a ...

03/22/2021
Model-based 3D Hand Reconstruction via Self-Supervised Learning
Reconstructing a 3D hand from a single-view RGB image is challenging due...

07/21/2020
Shape and Viewpoint without Keypoints
We present a learning framework that learns to recover the 3D shape, pos...

01/24/2022
Consistent 3D Hand Reconstruction in Video via Self-supervised Learning
We present a method for reconstructing accurate and consistent 3D hands ...

06/10/2021
To The Point: Correspondence-driven monocular 3D category reconstruction
We present To The Point (TTP), a method for reconstructing 3D objects fr...

08/13/2018
Incremental Non-Rigid Structure-from-Motion with Unknown Focal Length
The perspective camera and the isometric surface prior have recently gat...
