LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration

10/23/2020
by Bharat Lal Bhatnagar, et al.

We address the problem of fitting 3D human models to 3D scans of dressed humans. Classical methods optimize both the data-to-model correspondences and the human model parameters (pose and shape), but are reliable only when initialized close to the solution. Some methods initialize the optimization with fully supervised correspondence predictors; such pipelines are not differentiable end-to-end and can only process a single scan at a time. Our main contribution is LoopReg, an end-to-end learning framework to register a corpus of scans to a common 3D human model. The key idea is to create a self-supervised loop. A backward map, parameterized by a neural network, predicts the correspondence from every scan point to the surface of the human model. A forward map, parameterized by a human model, transforms the corresponding points back to the scan based on the model parameters (pose and shape), thus closing the loop. Formulating this closed loop is not straightforward because it is not trivial to force the output of the network to lie on the surface of the human model; outside this surface the human model is not even defined. To this end, we propose two key innovations. First, we define the canonical surface implicitly as the zero level set of a distance field in R^3, which, in contrast to more common UV parameterizations, does not require cutting the surface, does not have discontinuities, and does not induce distortion. Second, we diffuse the human model to the 3D domain R^3. This allows us to map the network predictions forward even when they slightly deviate from the zero level set. Results demonstrate that we can train LoopReg mainly self-supervised: following a supervised warm-start, the model becomes increasingly more accurate as additional unlabelled raw scans are processed. Our code and pre-trained models can be downloaded for research.
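To make the self-supervised loop concrete, below is a minimal PyTorch-style sketch of how the two maps could be composed into a training loss. The names BackwardMap, forward_map, and sdf, the MLP architecture, and the equal loss weighting are illustrative assumptions for exposition, not the released LoopReg implementation.

    import torch
    import torch.nn as nn

    class BackwardMap(nn.Module):
        """Hypothetical backward map: predicts, for every scan point, a
        corresponding 3D location near the canonical model surface."""
        def __init__(self, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, scan_points):         # (B, N, 3) scan points
            return self.mlp(scan_points)         # (B, N, 3) canonical correspondences

    def loop_loss(scan_points, backward_map, forward_map, pose, shape, sdf):
        """Self-supervised cycle: scan -> canonical correspondences -> scan.

        forward_map(corr, pose, shape) stands in for the diffused human body
        model that re-poses canonical points; sdf(corr) returns the distance
        to the canonical surface (its zero level set). Both are assumed
        callables supplied by the caller."""
        corr = backward_map(scan_points)                      # backward map
        reposed = forward_map(corr, pose, shape)              # forward map closes the loop
        cycle = (reposed - scan_points).norm(dim=-1).mean()   # scan reconstruction error
        surface = sdf(corr).abs().mean()                      # pull correspondences onto the zero level set
        return cycle + surface

Because the model is diffused to all of R^3, forward_map remains defined even when the predicted correspondences drift slightly off the surface, which is what allows the cycle term to be optimized end-to-end with gradient descent.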
