Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild

11/25/2019 ∙ by Shangzhe Wu, et al.

We propose a method to learn 3D deformable object categories from raw single-view images, without external supervision. The method is based on an autoencoder that factors each input image into depth, albedo, viewpoint and illumination. To disentangle these components without supervision, we use the fact that many object categories have, at least in principle, a symmetric structure. We show that reasoning about illumination allows us to exploit the underlying object symmetry even when the appearance is not symmetric due to shading. Furthermore, we model objects that are probably, but not certainly, symmetric by predicting a symmetry probability map, learned end-to-end with the other components of the model. Our experiments show that this method can accurately recover the 3D shape of human faces, cat faces and cars from single-view images, without any supervision or a prior shape model. On benchmarks, we demonstrate superior accuracy compared to a method that uses supervision at the level of 2D image correspondences.
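
The abstract describes a photo-geometric autoencoder that factors each image into depth, albedo, viewpoint and illumination, and exploits approximate bilateral symmetry weighted by a predicted per-pixel confidence map. The sketch below is a minimal, hypothetical PyTorch illustration of that factorization and a flip-based symmetry term, not the authors' implementation; the sub-network and renderer placeholders and the simplified confidence weighting are assumptions.

```python
# A minimal sketch, assuming hypothetical sub-networks and a differentiable
# renderer; it illustrates the factorization described in the abstract and
# is not the authors' implementation.
import torch
import torch.nn as nn


class PhotoGeometricAutoencoder(nn.Module):
    def __init__(self, depth_net, albedo_net, view_net, light_net,
                 conf_net, renderer):
        super().__init__()
        # Four factors predicted from a single image, plus a per-pixel
        # symmetry probability map. All modules are placeholders.
        self.depth_net = depth_net      # image -> canonical depth map
        self.albedo_net = albedo_net    # image -> canonical albedo
        self.view_net = view_net       # image -> viewpoint parameters
        self.light_net = light_net     # image -> lighting parameters
        self.conf_net = conf_net       # image -> symmetry probability map
        self.renderer = renderer       # differentiable renderer (assumed)

    def forward(self, img):
        depth = self.depth_net(img)
        albedo = self.albedo_net(img)
        view = self.view_net(img)
        light = self.light_net(img)
        conf = self.conf_net(img)      # in [0, 1], high where symmetric

        # Exploit probable bilateral symmetry: horizontally flip the
        # canonical depth and albedo and require both versions to
        # reconstruct the input image.
        depth_flip = torch.flip(depth, dims=[-1])
        albedo_flip = torch.flip(albedo, dims=[-1])

        recon = self.renderer(depth, albedo, light, view)
        recon_flip = self.renderer(depth_flip, albedo_flip, light, view)

        # Simplified confidence weighting: pixels predicted to be
        # asymmetric (low conf) contribute less to the flipped term.
        loss = (img - recon).abs().mean() + \
               (conf * (img - recon_flip).abs()).mean()
        return recon, loss
```

In the paper the flipped and unflipped reconstructions are compared to the input under learned per-pixel confidences; the plain multiplicative weighting above is a deliberate simplification for illustration.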


Code Repositories

unsup3d

(CVPR'20 Oral) Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild

unsup3D_pytorch3d

(CVPR'20 Best Paper) Unsup3D SoftRas

CVPR-2020

Sharing of materials related to CVPR 2020 papers