Pixel-Level Domain Transfer

03/24/2016
by Donggeun Yoo et al.

We present an image-conditional image generation model. The model transfers an input domain to a target domain at the semantic level and generates the target image at the pixel level. To generate realistic target images, we employ the real/fake discriminator as in Generative Adversarial Nets, but also introduce a novel domain discriminator to make the generated image relevant to the input image. We verify our model on the challenging task of generating a piece of clothing from an input image of a dressed person. We present a high-quality clothing dataset containing the two domains and demonstrate decent results.
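To make the training idea concrete, the following is a minimal sketch of the dual-discriminator setup described in the abstract, assuming PyTorch. The network sizes, the 64x64 resolution, the optimizer settings, and the names (Converter, d_real, d_domain, train_step, unmatched_target) are illustrative placeholders, and the losses are a simplified GAN formulation rather than the exact objective from the paper.

# Minimal sketch of pixel-level domain transfer with two discriminators, assuming PyTorch.
# Architectures and hyperparameters here are illustrative, not the authors' exact design.
import torch
import torch.nn as nn

def conv_classifier(in_ch):
    # Small CNN that maps an in_ch x 64 x 64 input to a scalar probability.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 1), nn.Sigmoid(),
    )

class Converter(nn.Module):
    # Encoder-decoder that maps a source-domain image to a target-domain image.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

converter = Converter()
d_real = conv_classifier(3)    # real/fake discriminator: is this target image real?
d_domain = conv_classifier(6)  # domain discriminator: does this (source, target) pair belong together?

bce = nn.BCELoss()
opt_c = torch.optim.Adam(converter.parameters(), lr=2e-4)
opt_r = torch.optim.Adam(d_real.parameters(), lr=2e-4)
opt_a = torch.optim.Adam(d_domain.parameters(), lr=2e-4)

def train_step(source, target, unmatched_target):
    """One update on a batch of (source image, associated target, unassociated target)."""
    ones = torch.ones(source.size(0), 1)
    zeros = torch.zeros(source.size(0), 1)
    pair = lambda a, b: torch.cat([a, b], dim=1)  # stack source and target along channels
    fake = converter(source)

    # Real/fake discriminator: real targets vs. generated targets.
    loss_r = bce(d_real(target), ones) + bce(d_real(fake.detach()), zeros)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # Domain discriminator: associated pairs vs. unassociated or generated pairs.
    loss_a = (bce(d_domain(pair(source, target)), ones)
              + bce(d_domain(pair(source, unmatched_target)), zeros)
              + bce(d_domain(pair(source, fake.detach())), zeros))
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Converter: fool both discriminators (only the converter is stepped here).
    loss_c = bce(d_real(fake), ones) + bce(d_domain(pair(source, fake)), ones)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

The key point the sketch tries to capture is that the domain discriminator sees the source and target together, so the converter is pushed not just toward realistic outputs but toward outputs that are consistent with the given input.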


Related research

06/27/2019  Adversarial Pixel-Level Generation of Semantic Images
Generative Adversarial Networks (GANs) have obtained extraordinary succe...

04/06/2022  Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training
Existing deep learning real denoising methods require a large amount of ...

05/12/2022  D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image Generation
As an important and challenging problem, few-shot image generation aims ...

03/17/2023  Unsupervised Domain Transfer with Conditional Invertible Neural Networks
Synthetic medical image generation has evolved as a key technique for ne...

06/06/2021  Alpha Matte Generation from Single Input for Portrait Matting
Portrait matting is an important research problem with a wide range of a...

12/12/2019  Zooming into Face Forensics: A Pixel-level Analysis
The stunning progress in face manipulation methods has made it possible ...

03/23/2021  Watermark Faker: Towards Forgery of Digital Image Watermarking
Digital watermarking has been widely used to protect the copyright and i...
