Identity-preserving Face Recovery from Portraits
Recovering latent photorealistic faces from their artistic portraits aids human perception and facial analysis. However, recovering photorealistic faces from stylized portraits while preserving identity is challenging because the fine details of real faces can be distorted or lost in stylized images. In this paper, we present a new Identity-preserving Face Recovery from Portraits (IFRP) method to recover latent photorealistic faces from unaligned stylized portraits. Our IFRP method consists of two components: a Style Removal Network (SRN) and a Discriminative Network (DN). The SRN is designed to map the feature maps of stylized portraits to those of the corresponding photorealistic faces. By embedding spatial transformer networks into the SRN, our method automatically compensates for misalignments in stylized faces and outputs aligned realistic face images. The DN enforces that recovered face images resemble authentic faces. To ensure identity preservation, we encourage the recovered and ground-truth faces to share similar visual features via a distance measure that compares features of the recovered and ground-truth faces extracted from a pre-trained VGG network. Our approach is evaluated on a large-scale synthesized dataset of real and stylized face pairs and outperforms state-of-the-art methods. In addition, we demonstrate that our method can also recover photorealistic faces from unseen stylized portraits (unavailable during training) as well as from original paintings.
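The identity-preserving term described above can be sketched as a distance between deep features of the recovered and ground-truth faces. The minimal sketch below uses a fixed random projection with a ReLU as a stand-in for the pre-trained VGG extractor (a simplification for illustration, not the paper's actual network or loss weighting):

```python
import numpy as np

def extract_features(img, W):
    # Stand-in for a pre-trained VGG feature extractor (hypothetical):
    # a fixed linear projection of the flattened image, followed by ReLU.
    return np.maximum(W @ img.ravel(), 0.0)

def identity_loss(recovered, ground_truth, W):
    # Identity-preserving loss: mean squared distance between the
    # feature representations of the recovered and ground-truth faces.
    f_rec = extract_features(recovered, W)
    f_gt = extract_features(ground_truth, W)
    return float(np.mean((f_rec - f_gt) ** 2))

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 32 * 32 * 3))  # toy feature projection
face = rng.random((32, 32, 3))               # toy ground-truth face
recovered = face + 0.01 * rng.standard_normal(face.shape)  # toy recovery
print(identity_loss(recovered, face, W))
```

In training, this term would be added to the adversarial and pixel-wise losses so the generator is penalized when the recovered face drifts away from the subject's identity in feature space.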