MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing

10/30/2020
by Zhentao Tan, et al.

Despite the recent success of face image generation with GANs, conditional hair editing remains challenging due to the under-explored complexity of hair geometry and appearance. In this paper, we present MichiGAN (Multi-Input-Conditioned Hair Image GAN), a novel conditional image generation method for interactive portrait hair manipulation. To give users control over every major visual factor of hair, we explicitly disentangle hair into four orthogonal attributes: shape, structure, appearance, and background. For each attribute, we design a corresponding condition module to represent, process, and convert user inputs, and to modulate the image generation pipeline in a way that respects the nature of that visual attribute. All condition modules are integrated with the backbone generator to form the final end-to-end network, which enables fully conditioned hair generation from multiple user inputs. On top of this network, we build an interactive portrait hair editing system that enables straightforward hair manipulation by projecting intuitive, high-level user inputs, such as painted masks, guiding strokes, or reference photos, onto well-defined condition representations. Through extensive experiments and evaluations, we demonstrate the superiority of our method in both result quality and user controllability. The code is available at https://github.com/tzt101/MichiGAN.
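To make the multi-input conditioning concrete, the sketch below illustrates one way such a pipeline could be wired together in PyTorch: a shape mask spatially modulates normalized features, an orientation map and the background enter as dense inputs, and a reference photo is encoded into a global appearance code that rescales the features. This is a minimal conceptual sketch with hypothetical module names, channel counts, and layer sizes, not the authors' released implementation (see the repository above for that).

```python
# Minimal conceptual sketch of multi-input-conditioned hair generation.
# Module names and shapes are hypothetical; this is not the MichiGAN code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskModulatedNorm(nn.Module):
    """Spatially modulates normalized features with the hair-shape mask."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.gamma = nn.Conv2d(1, channels, 3, padding=1)
        self.beta = nn.Conv2d(1, channels, 3, padding=1)

    def forward(self, feat, mask):
        mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
        return self.norm(feat) * (1 + self.gamma(mask)) + self.beta(mask)


class AppearanceEncoder(nn.Module):
    """Encodes a reference photo into a global appearance (style) code."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, ref):
        # Global average pooling yields one appearance value per channel.
        return self.conv(ref).mean(dim=(2, 3), keepdim=True)


class HairGenerator(nn.Module):
    """Backbone that fuses shape, structure, appearance, and background inputs."""
    def __init__(self, dim=64):
        super().__init__()
        # Structure (orientation map, 2ch) and background (3ch) enter as dense inputs.
        self.stem = nn.Conv2d(2 + 3, dim, 3, padding=1)
        self.mod = MaskModulatedNorm(dim)
        self.to_rgb = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, mask, orientation, appearance, background):
        feat = self.stem(torch.cat([orientation, background], dim=1))
        feat = F.relu(self.mod(feat, mask))        # shape-conditioned modulation
        feat = feat * (1 + appearance)             # appearance code rescales features
        rgb = torch.tanh(self.to_rgb(feat))
        # Composite generated hair into the untouched background outside the mask.
        return rgb * mask + background * (1 - mask)


if __name__ == "__main__":
    g, enc = HairGenerator(), AppearanceEncoder()
    mask = torch.rand(1, 1, 128, 128)            # painted hair mask
    orientation = torch.rand(1, 2, 128, 128)     # stroke-derived orientation field
    reference = torch.rand(1, 3, 128, 128)       # reference photo for appearance
    background = torch.rand(1, 3, 128, 128)      # non-hair region to preserve
    out = g(mask, orientation, enc(reference), background)
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

The final compositing step reflects the role of the background condition: generation is confined to the hair region defined by the mask, while the rest of the portrait is passed through unchanged.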


Related research

GuidedStyle: Attribute Knowledge Guided Style Manipulation for Semantic Face Editing (12/22/2020)
HairCLIP: Design Your Hair by Text and Reference Image (12/09/2021)
Semantic Photo Manipulation with a Generative Image Prior (05/15/2020)
Keep Drawing It: Iterative language-based image generation and editing (11/24/2018)
SESAME: Semantic Editing of Scenes by Adding, Manipulating or Erasing Objects (04/10/2020)
Flexible Portrait Image Editing with Fine-Grained Control (04/04/2022)
PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering (09/17/2021)
