DCT-Net: Domain-Calibrated Translation for Portrait Stylization

07/06/2022
by Yifang Men et al.

This paper introduces DCT-Net, a novel image translation architecture for few-shot portrait stylization. Given limited style exemplars (∼100), the architecture produces high-quality style transfer results, with an advanced ability to synthesize high-fidelity content and strong generality in handling complicated scenes (e.g., occlusions and accessories). Moreover, it enables full-body image translation via a single elegant evaluation network trained on partial observations (i.e., stylized heads). Few-shot style transfer is challenging because the learned model easily overfits to the target domain, owing to the biased distribution formed by only a few training examples. This paper handles the challenge by adopting the key idea of "calibration first, translation later" and exploring an augmented global structure with locally-focused translation. Specifically, the proposed DCT-Net consists of three modules: a content adapter that borrows the powerful prior of source photos to calibrate the content distribution of target samples; a geometry expansion module that uses affine transformations to relax spatial semantic constraints; and a texture translation module that leverages samples produced from the calibrated distribution to learn a fine-grained conversion. Experimental results demonstrate the proposed method's superiority over the state of the art in head stylization and its effectiveness on full-image translation with adaptive deformations.
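To make the "calibration first, translation later" pipeline concrete, below is a minimal PyTorch-style sketch of the three modules the abstract describes. It is an illustrative assumption of how the pieces could fit together, not the authors' released implementation; all names and hyper-parameters (ContentAdapter, geometry_expansion, TextureTranslator, max_scale, max_shift) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentAdapter(nn.Module):
    """Calibrates the target content distribution by transferring a
    generator pretrained on source photos toward the ~100 exemplars."""

    def __init__(self, pretrained_generator: nn.Module):
        super().__init__()
        self.generator = pretrained_generator  # e.g. a StyleGAN-like prior

    @torch.no_grad()
    def sample_calibrated(self, z: torch.Tensor) -> torch.Tensor:
        # After light fine-tuning on the exemplars, sampled images keep
        # the source prior's content diversity but carry the target style.
        return self.generator(z)


def geometry_expansion(img: torch.Tensor,
                       max_scale: float = 0.2,
                       max_shift: float = 0.1) -> torch.Tensor:
    """Random affine augmentation (scale + translation) that relaxes
    spatial alignment assumptions between source and target samples."""
    n = img.size(0)
    scale = 1.0 + (torch.rand(n, device=img.device) * 2 - 1) * max_scale
    shift = (torch.rand(n, 2, device=img.device) * 2 - 1) * max_shift
    theta = torch.zeros(n, 2, 3, device=img.device)
    theta[:, 0, 0] = scale   # x scale
    theta[:, 1, 1] = scale   # y scale
    theta[:, :, 2] = shift   # x/y translation
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)


class TextureTranslator(nn.Module):
    """Fully convolutional photo-to-style network; being convolutional,
    it can run on full-body inputs even if trained on heads."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, photo: torch.Tensor) -> torch.Tensor:
        return self.net(photo)


if __name__ == "__main__":
    # Toy forward pass: random tensors stand in for calibrated samples.
    translator = TextureTranslator()
    photo = torch.rand(2, 3, 64, 64)
    stylized = translator(geometry_expansion(photo))
    print(stylized.shape)  # torch.Size([2, 3, 64, 64])
```

Because the translator here is fully convolutional and trained on geometry-expanded samples, the sketch mirrors the property the abstract highlights: a network trained on stylized heads that can still be evaluated on full-body images.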

Related research

09/30/2022 · Diffusion-based Image Translation using Disentangled Style and Content Representation
Diffusion-based image translation guided by semantic texts or a single t...

03/27/2023 · Training-free Style Transfer Emerges from h-space in Diffusion models
Diffusion models (DMs) synthesize high-quality images in various domains...

11/19/2021 · Global and Local Alignment Networks for Unpaired Image-to-Image Translation
The goal of unpaired image-to-image translation is to produce an output ...

11/14/2017 · XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
Style transfer usually refers to the task of applying color and texture ...

10/20/2021 · STALP: Style Transfer with Auxiliary Limited Pairing
We present an approach to example-based stylization of images that uses ...

01/28/2023 · Few-shot Face Image Translation via GAN Prior Distillation
Face image translation has made notable progress in recent years. Howeve...

01/16/2021 · Free Lunch for Few-shot Learning: Distribution Calibration
Learning from a limited number of samples is challenging since the learn...
