BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer

12/12/2022
by   Qixin Yan, et al.

In this work, we propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC). In a unique departure from prior methods that leverage global attention, simply concatenate features, or implicitly manipulate features in latent space, we propose a component-specific correspondence to directly transfer the makeup style of a reference image to the corresponding components (e.g., skin, lips, eyes) of a source image, enabling elaborate and accurate local makeup transfer. As an auxiliary mechanism, the long-range visual dependencies of the Transformer are introduced for effective global makeup transfer. Instead of the commonly used cycle structure, which is complex and unstable, we employ a content consistency loss coupled with a content encoder to implement efficient single-path makeup transfer. The key insights of this study are modeling component-specific correspondence for local makeup transfer, capturing long-range dependencies for global makeup transfer, and enabling efficient makeup transfer via a single-path structure. We also contribute BeautyFace, a makeup transfer dataset that supplements existing datasets. This dataset contains 3,000 faces covering more diverse makeup styles, face poses, and races; each face has an annotated parsing map. Extensive experiments demonstrate the effectiveness of our method against state-of-the-art methods. Moreover, our method is appealing in that it has only 1M parameters, outperforming the state-of-the-art methods (BeautyGAN: 8.43M, PSGAN: 12.62M, SCGAN: 15.30M, CPM: 9.24M, SSAT: 10.48M).
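To make the component-specific idea concrete, here is a minimal toy sketch of region-wise transfer guided by parsing maps. BeautyREC learns correspondences with a network; this illustrative version merely shifts each source component (skin, lips, eyes) toward the mean intensity of the matching component in the reference, using flattened grayscale "images" and per-pixel component labels. All function names and labels below are hypothetical, not from the paper's code.

```python
# Toy component labels for the parsing map (hypothetical values).
SKIN, LIPS, EYES = 0, 1, 2

def component_mean(image, parsing, label):
    """Mean intensity over pixels belonging to one face component."""
    vals = [p for p, l in zip(image, parsing) if l == label]
    return sum(vals) / len(vals) if vals else 0.0

def transfer(source, src_parsing, reference, ref_parsing,
             labels=(SKIN, LIPS, EYES)):
    """Shift each source component toward the reference component's mean.

    A crude stand-in for learned component-specific correspondence:
    pixels outside the listed components are left untouched.
    """
    out = list(source)
    for label in labels:
        s_mean = component_mean(source, src_parsing, label)
        r_mean = component_mean(reference, ref_parsing, label)
        for i, l in enumerate(src_parsing):
            if l == label:
                out[i] = source[i] + (r_mean - s_mean)
    return out
```

For example, a source with skin pixels at 10 and lip pixels at 50, transferred against a reference whose skin and lips average 30 and 90, yields skin at 30 and lips at 90, while each component is adjusted independently of the others.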

Related research

StyTr^2: Unbiased Image Style Transfer with Transformers (05/30/2021)
The goal of image style transfer is to render an image with artistic fea...

Local Facial Makeup Transfer via Disentangled Representation (03/27/2020)
Facial makeup transfer aims to render a non-makeup face image in an arbi...

ORDNet: Capturing Omni-Range Dependencies for Scene Parsing (01/11/2021)
Learning to capture dependencies between spatial positions is essential ...

Few-Shot Font Generation via Transferring Similarity-Guided Global Style and Quantization Local Style (09/02/2023)
Automatic few-shot font generation (AFFG), aiming at generating new font...

LADN: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup (04/25/2019)
We propose a local adversarial disentangling network (LADN) for facial m...

Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN (09/13/2021)
We present an algorithm for re-rendering a person from a single image un...

SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal (12/07/2021)
Makeup transfer is not only to extract the makeup style of the reference...
