DeepFaceEditing: Deep Face Generation and Editing with Disentangled Geometry and Appearance Control

05/19/2021
by Shu-Yu Chen, et al.

Recent facial image synthesis methods have been mainly based on conditional generative models. Sketch-based conditions can effectively describe the geometry of faces, including the contours of facial components, hair structures, and salient edges (e.g., wrinkles) on face surfaces, but they lack effective control over appearance, which is influenced by color, material, lighting conditions, etc. To gain more control over the generated results, one possible approach is to apply existing disentanglement methods to separate face images into geometry and appearance representations. However, existing disentanglement methods are not optimized for human face editing and cannot achieve fine control of facial details such as wrinkles. To address this issue, we propose DeepFaceEditing, a structured disentanglement framework specifically designed for face images to support face generation and editing with disentangled control of geometry and appearance. We adopt a local-to-global approach to incorporate face domain knowledge: local component images are decomposed into geometry and appearance representations, which are fused consistently using a global fusion module to improve generation quality. We exploit sketches to assist in extracting a better geometry representation, which also supports intuitive geometry editing via sketching. The resulting method can either extract geometry and appearance representations from face images, or directly extract the geometry representation from face sketches. Such representations allow users to easily edit and synthesize face images with decoupled control of geometry and appearance. Both qualitative and quantitative evaluations demonstrate the superior detail and appearance control of our method compared to state-of-the-art methods.
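To make the local-to-global idea described above more concrete, the following minimal PyTorch sketch illustrates one way such a pipeline could be structured: per-component encoders extract a geometry code from a sketch crop and an appearance code from a photo crop, and a global fusion module decodes the combined codes into a face image. This is purely illustrative and is not the authors' implementation; all class names, the component list, and the layer sizes are assumptions.

```python
# Illustrative sketch only; layer sizes, names, and the component list are assumptions.
import torch
import torch.nn as nn


class ComponentEncoder(nn.Module):
    """Tiny conv encoder producing a latent feature map for one facial component."""

    def __init__(self, in_channels: int, latent_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class GlobalFusion(nn.Module):
    """Fuses geometry and appearance latents and decodes them into an RGB image."""

    def __init__(self, latent_channels: int = 64):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(2 * latent_channels, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, geometry: torch.Tensor, appearance: torch.Tensor) -> torch.Tensor:
        return self.decode(torch.cat([geometry, appearance], dim=1))


class LocalToGlobalFace(nn.Module):
    """Toy local-to-global pipeline with disentangled geometry/appearance codes."""

    def __init__(self, components=("left_eye", "right_eye", "nose", "mouth", "rest")):
        super().__init__()
        self.components = components
        # Geometry is read from 1-channel sketch crops, appearance from 3-channel photo crops.
        self.geo_enc = nn.ModuleDict({c: ComponentEncoder(1) for c in components})
        self.app_enc = nn.ModuleDict({c: ComponentEncoder(3) for c in components})
        self.fusion = GlobalFusion()

    def forward(self, sketch_crops: dict, photo_crops: dict) -> torch.Tensor:
        # For simplicity, all crops share one size and the per-component latents are
        # summed on a shared canvas; a real system would place each latent at its
        # component's spatial location before fusion.
        geo = sum(self.geo_enc[c](sketch_crops[c]) for c in self.components)
        app = sum(self.app_enc[c](photo_crops[c]) for c in self.components)
        return self.fusion(geo, app)


if __name__ == "__main__":
    model = LocalToGlobalFace()
    sketches = {c: torch.randn(1, 1, 64, 64) for c in model.components}  # geometry source
    photos = {c: torch.randn(1, 3, 64, 64) for c in model.components}    # appearance source
    out = model(sketches, photos)
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

In this toy setup, swapping the photo crops between two faces while keeping the sketch crops fixed changes appearance but not geometry, mirroring the decoupled control the abstract describes.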


Related research

11/15/2022
NeRFFaceEditing: Disentangled Face Editing in Neural Radiance Fields
Recent methods for synthesizing 3D-aware face images have achieved rapid...

02/19/2023
LC-NeRF: Local Controllable Face Generation in Neural Radiance Field
3D face generation has achieved high visual quality and 3D consistency t...

12/02/2016
A Visual Representation for Editing Face Images
We propose a new approach for editing face images, which enables numerou...

05/12/2022
F3A-GAN: Facial Flow for Face Animation with Generative Adversarial Networks
Formulated as a conditional generation problem, face animation aims at s...

03/05/2022
DrawingInStyles: Portrait Image Generation and Editing with Spatially Conditioned StyleGAN
The research topic of sketch-to-portrait generation has witnessed a boos...

06/01/2020
Deep Generation of Face Images from Sketches
Recent deep image-to-image translation techniques allow fast generation ...

09/09/2018
Geometry-Aware Face Completion and Editing
Face completion is a challenging generation task because it requires gen...
