Describe What to Change: A Text-guided Unsupervised Image-to-Image Translation Approach

08/10/2020
by   Yahui Liu, et al.

Manipulating the visual attributes of images through human-written text is a very challenging task. On the one hand, models must learn the manipulation without ground truth for the desired output. On the other hand, models must handle the inherent ambiguity of natural language. Previous research usually requires either that the user describe all the characteristics of the desired image, or richly annotated image-captioning datasets. In this work, we propose a novel unsupervised approach, based on image-to-image translation, that alters the attributes of a given image through a command-like sentence such as "change the hair color to black". Contrary to state-of-the-art approaches, our model requires neither a human-annotated dataset nor a textual description of all the attributes of the desired image, but only of those that have to be modified. Our model disentangles the image content from the visual attributes, learns to modify the latter using the textual description, and then generates a new image from the content and the modified attribute representation. Because text can be inherently ambiguous (blond hair may refer to different shades of blond, e.g. golden, icy, or sandy), our method generates multiple stochastic versions of the same translation. Experiments show that the proposed model achieves promising performance on two large-scale public datasets: CelebA and CUB. We believe our approach will pave the way to new avenues of research combining textual and speech commands with visual attributes.
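The pipeline the abstract describes (disentangle content from attributes, edit only the attributes from the text, then decode, with noise modeling textual ambiguity) can be sketched in a toy NumPy form. All function names, vector sizes, and the additive text-editing rule below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    # Hypothetical disentangling encoder: split the representation into
    # a content code (structure/identity) and an attribute code (style).
    flat = image.reshape(-1)
    content = flat[:8].copy()   # content code
    attrs = flat[8:12].copy()   # attribute code
    return content, attrs

def edit_attributes(attrs, text_embedding, noise_scale=0.1):
    # Text-guided editing touches only the attribute code, never the
    # content. The noise term models the ambiguity of the command
    # ("blond" may mean golden, icy, or sandy), so repeated calls yield
    # multiple stochastic versions of the same translation.
    noise = noise_scale * rng.standard_normal(attrs.shape)
    return attrs + text_embedding + noise

def decode(content, attrs):
    # Hypothetical decoder: recombine content with the edited attributes.
    return np.concatenate([content, attrs])

# Toy usage: one image, one command, three stochastic translations.
image = rng.standard_normal(12)
text = np.array([1.0, 0.0, 0.0, 0.0])  # e.g. "change the hair color to black"

content, attrs = encode(image)
outputs = [decode(content, edit_attributes(attrs, text)) for _ in range(3)]
```

The key property the sketch demonstrates is that the content code is identical across all stochastic outputs, while the attribute part of each output differs.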


Related research

- RelGAN: Multi-Domain Image-to-Image Translation via Relative Attributes (08/20/2019)
- MUSE: Illustrating Textual Attributes by Portrait Generation (11/09/2020)
- Image-to-Image Translation with Text Guidance (02/12/2020)
- TRIP: Refining Image-to-Image Translation via Rival Preferences (11/26/2021)
- Training on Art Composition Attributes to Influence CycleGAN Art Generation (12/19/2018)
- Network Fusion for Content Creation with Conditional INNs (05/27/2020)
- PrefGen: Preference Guided Image Generation with Relative Attributes (04/01/2023)
