Unsupervised Visual Attribute Transfer with Reconfigurable Generative Adversarial Networks

07/31/2017
by Taeksoo Kim, et al.

Learning to transfer visual attributes typically requires a supervised dataset: corresponding images of the same identity with varying attribute values are needed to learn the transfer function. This largely limits applications, because capturing such image pairs is often difficult. To address this issue, we propose an unsupervised method for learning to transfer visual attributes. The proposed method can learn the transfer function without any corresponding images. Inspecting visualization results from various unsupervised attribute transfer tasks, we verify the effectiveness of the proposed method.
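To make the unsupervised setup concrete, below is a minimal, hypothetical PyTorch sketch of training a transfer function on unpaired images, using an adversarial loss plus a cycle-consistency reconstruction loss (a common recipe for unpaired translation). It is not the paper's reconfigurable GAN: the Generator, Discriminator, train_step, and lambda_cyc names, the network shapes, and the loss weights are all illustrative assumptions.

```python
# Hypothetical sketch of unpaired attribute transfer training.
# The paper's actual reconfigurable GAN architecture, losses, and
# hyperparameters are not reproduced here; everything below is a placeholder.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping images from one attribute domain to another."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy patch-style discriminator for one attribute domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_ab, G_ba = Generator(), Generator()   # A->B and B->A transfer functions
D_b = Discriminator()                   # judges realism in domain B
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D_b.parameters(), lr=2e-4)

def train_step(real_a, real_b, lambda_cyc=10.0):
    """One unsupervised update: real_a and real_b are sampled independently,
    with no identity correspondence between the two batches."""
    # Generator update: fool D_b and reconstruct the input image.
    fake_b = G_ab(real_a)
    rec_a = G_ba(fake_b)
    pred = D_b(fake_b)
    g_loss = adv_loss(pred, torch.ones_like(pred)) + lambda_cyc * cyc_loss(rec_a, real_a)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    # Discriminator update: real domain-B images vs. translated domain-A images.
    pred_real, pred_fake = D_b(real_b), D_b(fake_b.detach())
    d_loss = 0.5 * (adv_loss(pred_real, torch.ones_like(pred_real)) +
                    adv_loss(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return g_loss.item(), d_loss.item()

# Random tensors stand in for unpaired image batches from the two domains.
a_batch, b_batch = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
print(train_step(a_batch, b_batch))
```

The point of the sketch is only that the training signal comes from adversarial realism and reconstruction rather than from paired supervision; the actual method should be taken from the full text.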


Related research

11/26/2017  Improved Neural Text Attribute Transfer with Non-parallel Data
Text attribute transfer using non-parallel data requires methods that ca...

12/28/2019  MulGAN: Facial Attribute Editing by Exemplar
Recent studies on face attribute editing by exemplars have achieved prom...

05/30/2019  Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation
Unsupervised text attribute transfer automatically transforms a text to ...

05/02/2017  Visual Attribute Transfer through Deep Image Analogy
We propose a new technique for visual attribute transfer across images t...

05/04/2021  Effectively Leveraging Attributes for Visual Similarity
Measuring similarity between two images often requires performing comple...

11/24/2017  Cascade Attribute Learning Network
We propose the cascade attribute learning network (CALNet), which can le...

02/07/2020  Local Facial Attribute Transfer through Inpainting
The term attribute transfer refers to the tasks of altering images in su...
