OSTeC: One-Shot Texture Completion
The last few years have witnessed the great success of non-linear generative models in synthesizing high-quality photorealistic face images. Yet many recent approaches to 3D facial texture reconstruction and pose manipulation from a single image still rely on large, clean face datasets to train image-to-image Generative Adversarial Networks (GANs). Collecting such a large-scale, high-resolution 3D texture dataset remains very costly, and it is difficult to maintain age and ethnicity balance. Moreover, regression-based approaches generalize poorly to in-the-wild conditions and cannot be fine-tuned to a target image. In this work, we propose an unsupervised approach for one-shot 3D facial texture completion that does not require large-scale texture datasets, but instead harnesses the knowledge stored in 2D face generators. The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator, conditioned on the visible parts. Finally, we stitch the most visible textures at different angles in the UV image plane. Further, we frontalize the target image by projecting the completed texture back into the generator. Qualitative and quantitative experiments demonstrate that the completed UV textures and frontalized images are of high quality, resemble the original identity, can be used to train a texture GAN model for 3DMM fitting, and improve pose-invariant face recognition.
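The core completion step described above — reconstructing a partially visible (rotated) image inside a pretrained 2D generator so that its prior hallucinates the occluded regions — can be sketched as follows. This is not the authors' code: in OSTeC-style pipelines the search is a gradient-based optimization in a StyleGAN-like latent space, whereas here a toy *linear* generator stands in, for which projecting the visible pixels onto the generator manifold has a closed-form least-squares solution.

```python
import numpy as np

def complete_visible(G, target, mask):
    """Project a partially visible image onto a generator's output manifold.

    Only pixels where mask == 1 constrain the latent; the generator then
    fills in (hallucinates) the occluded pixels. With a linear toy
    generator G, this is least squares over the visible rows of G.
    """
    vis = mask.astype(bool)
    z, *_ = np.linalg.lstsq(G[vis], target[vis], rcond=None)
    return G @ z  # visible pixels reproduced, hidden pixels completed

rng = np.random.default_rng(0)
G = rng.standard_normal((16, 4))     # toy "generator": 4-D latent -> 16 pixels
full = G @ rng.standard_normal(4)    # an image lying on the generator manifold
mask = np.ones(16)
mask[8:] = 0                         # second half self-occluded after rotation
completed = complete_visible(G, full, mask)
print(np.allclose(completed, full))  # prints True
```

Because the target image lies on the generator manifold, its hidden half is recovered from the visible half through the shared latent code; with a real face image and a real GAN, the latent optimization instead produces a plausible, identity-preserving completion rather than an exact one.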
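The stitching step — keeping, for each UV texel, the texture from the view in which that part of the surface is most visible — can be illustrated with a minimal per-texel argmax blend. The visibility score here is a hypothetical placeholder (in practice it would come from the 3D fitting, e.g. how frontal each triangle is in a given view); single-channel textures are assumed for brevity.

```python
import numpy as np

def stitch_uv(textures, visibilities):
    """Per-texel hard blend: keep the value from the most visible view.

    textures:     (n_views, H, W) UV-space textures from different angles
    visibilities: (n_views, H, W) per-texel visibility scores (assumed given)
    """
    best = np.argmax(visibilities, axis=0)  # (H, W) index of winning view
    return np.take_along_axis(textures, best[None], axis=0)[0]

# Two 2x2 toy "UV textures": view 0 sees the left column best, view 1 the right.
tex = np.array([[[1.0, 1.0], [1.0, 1.0]],
                [[2.0, 2.0], [2.0, 2.0]]])
vis = np.array([[[0.9, 0.1], [0.5, 0.2]],
                [[0.1, 0.8], [0.4, 0.9]]])
stitched = stitch_uv(tex, vis)
print(stitched)  # [[1. 2.] [1. 2.]]
```

A real implementation would typically soften the seams (e.g. visibility-weighted blending or Poisson stitching) rather than taking a hard per-texel winner.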