High Quality Facial Surface and Texture Synthesis via Generative Adversarial Networks

08/24/2018
by Ron Slossberg, et al.

In the past several decades, many attempts have been made to model synthetic realistic geometric data. The goal of such models is to generate plausible 3D geometries and textures. Perhaps the best known of these is the linear 3D morphable model (3DMM) for faces. Such models lie at the core of many computer vision applications, such as face reconstruction, recognition, and authentication, to name just a few. Generative adversarial networks (GANs) have shown great promise in imitating high-dimensional data distributions. State-of-the-art GANs are capable of performing tasks such as image-to-image translation as well as auditory and image signal synthesis, producing novel plausible samples from the data distribution at hand. Geometric data is generally more difficult to process due to the inherent lack of an intrinsic parametrization. By bringing geometric data into an aligned space, we are able to map the data onto a 2D plane using a universal parametrization. This alignment process allows for efficient processing of digitally scanned geometric data via image processing tools. Using this methodology, we propose a novel face synthesis model for generating realistic facial textures together with their corresponding geometry. A GAN is employed to imitate the space of parametrized human textures, while corresponding facial geometries are generated by learning the best 3DMM coefficients for each texture. The generated textures are mapped back onto the corresponding geometries to obtain new, high-resolution 3D faces.
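The pipeline described above can be summarized in a minimal sketch: sample a UV texture from a generative model, regress 3DMM coefficients from it, and reconstruct the geometry from the linear morphable model. The sketch below uses placeholder components throughout (random stand-ins for the trained texture GAN, the coefficient regressor, and the 3DMM basis), with hypothetical sizes; it illustrates the data flow only, not the authors' trained networks.

```python
# Sketch of the texture-then-geometry synthesis flow. All model components
# (texture GAN, coefficient regressor, 3DMM basis) are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# --- Placeholder linear 3DMM: mean shape plus a shape basis (hypothetical sizes) ---
n_vertices = 5000                      # vertices in the aligned face template
n_coeffs = 80                          # number of 3DMM shape coefficients
mean_shape = rng.normal(size=(n_vertices, 3))
shape_basis = rng.normal(size=(n_vertices * 3, n_coeffs))

def sample_texture(latent_dim=512, res=256):
    """Stand-in for the texture GAN: returns a latent code and a UV texture in [0, 1]."""
    z = rng.normal(size=latent_dim)           # latent code fed to the generator
    texture = rng.random(size=(res, res, 3))  # placeholder for G(z)
    return z, texture

def regress_coefficients(texture, n_coeffs):
    """Stand-in for the learned texture -> 3DMM coefficient mapping."""
    feats = texture.mean(axis=(0, 1))          # trivial placeholder features
    w = rng.normal(size=(feats.size, n_coeffs))
    return feats @ w

# 1) Sample a plausible facial texture in the shared UV parametrization.
z, uv_texture = sample_texture()

# 2) Predict the 3DMM coefficients corresponding to the sampled texture.
alpha = regress_coefficients(uv_texture, n_coeffs)

# 3) Reconstruct the geometry from the linear morphable model.
geometry = mean_shape + (shape_basis @ alpha).reshape(n_vertices, 3)

# 4) The UV texture is then mapped back onto this geometry via the shared
#    parametrization (per-vertex UV coordinates of the aligned template).
print(geometry.shape, uv_texture.shape)
```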
