Unsupervised Depth Estimation, 3D Face Rotation and Replacement

03/25/2018
by Joel Ruben Antony Moniz, et al.

We present an unsupervised approach for learning to estimate three-dimensional (3D) facial structure from a single image while also predicting 3D viewpoint transformations that match a desired pose and facial geometry. We achieve this by inferring the depth of facial key-points in an input image in an unsupervised way. We show how it is possible to use these depths as intermediate computations within a new backpropable loss to predict the parameters of a 3D affine transformation matrix that maps inferred 3D key-points of an input face to corresponding 2D key-points on a desired target facial geometry or pose. Our resulting approach can therefore be used to infer plausible 3D transformations from one face pose to another, allowing faces to be frontalized, transformed into 3D models, or even warped to another pose and facial geometry. Lastly, we identify certain shortcomings of our formulation and address them with adversarial image translation as a post-processing step: we explore several adversarial image transformation methods that allow us to re-synthesize complete head shots for faces re-targeted to different poses, as well as to repair images resulting from face replacements across identities.
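The core mechanism described above is a re-projection loss in which the predicted key-point depths and a 3D affine transform appear as intermediate, differentiable computations. The following is a minimal PyTorch sketch of such a loss; the tensor shapes, names, and the choice to treat the affine parameters as a free learnable tensor (rather than solving for them in closed form within the loss) are illustrative assumptions, not the authors' implementation.

```python
import torch

def reprojection_loss(src_kpts_2d, tgt_kpts_2d, pred_depth, affine_params):
    # src_kpts_2d:   (N, 2) 2D key-points of the input face
    # tgt_kpts_2d:   (N, 2) 2D key-points of the desired target pose / geometry
    # pred_depth:    (N,)   depths inferred (without supervision) for the source key-points
    # affine_params: (2, 4) parameters of a 3D-to-2D affine map (assumed shape)

    ones = torch.ones_like(pred_depth).unsqueeze(1)      # (N, 1) homogeneous coordinate
    src_3d_h = torch.cat([src_kpts_2d,
                          pred_depth.unsqueeze(1),
                          ones], dim=1)                   # (N, 4) lifted 3D key-points

    projected = src_3d_h @ affine_params.t()              # (N, 2) re-projected 2D key-points

    return ((projected - tgt_kpts_2d) ** 2).mean()        # mean squared re-projection error


# Illustrative usage with random stand-ins for network outputs
N = 68
src = torch.rand(N, 2)
tgt = torch.rand(N, 2)
depth = torch.rand(N, requires_grad=True)                 # stand-in for a depth predictor's output
affine = torch.rand(2, 4, requires_grad=True)             # stand-in for predicted transform parameters

loss = reprojection_loss(src, tgt, depth, affine)
loss.backward()                                           # gradients flow to both depths and transform
```

Because every step is differentiable, gradients from the 2D re-projection error reach the depth estimates, which is what allows the depths to be learned without 3D supervision.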
