LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation

12/28/2021
by Shoya Matsumori, et al.

Text-guided image manipulation tasks have recently gained attention in the vision-and-language community. While most prior studies have focused on single-turn manipulation, our goal in this paper is to address the more challenging multi-turn image manipulation (MTIM) task. Previous models for this task successfully generate images iteratively, given a sequence of instructions and a previously generated image. However, this approach suffers from under-generation and from the low quality of the generated objects described in the instructions, which consequently degrades overall performance. To overcome these problems, we present a novel architecture called Visually Guided Language Attention GAN (LatteGAN). Here, we address the limitations of previous approaches by introducing a Visually Guided Language Attention (Latte) module, which extracts fine-grained text representations for the generator, and a Text-Conditioned U-Net discriminator architecture, which discriminates both the global and local representations of fake or real images. Extensive experiments on two distinct MTIM datasets, CoDraw and i-CLEVR, demonstrate the state-of-the-art performance of the proposed model.
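The abstract does not spell out the internals of the Latte module, but its core idea, extracting fine-grained text representations guided by the visual context, resembles a standard cross-attention in which instruction tokens (queries) attend over spatial visual features (keys/values). The following is a minimal NumPy sketch of that generic mechanism; the function name, shapes, and random projection weights are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def visually_guided_text_attention(text_tokens, visual_feats, d_k=32, seed=0):
    """Sketch of cross-attention: text tokens (queries) attend over
    visual features (keys/values), producing visually grounded,
    fine-grained text representations. Projection matrices here are
    random placeholders standing in for learned parameters."""
    rng = np.random.default_rng(seed)
    d_t = text_tokens.shape[-1]   # text embedding dim
    d_v = visual_feats.shape[-1]  # visual feature dim
    W_q = rng.standard_normal((d_t, d_k)) / np.sqrt(d_t)
    W_k = rng.standard_normal((d_v, d_k)) / np.sqrt(d_v)
    W_v = rng.standard_normal((d_v, d_t)) / np.sqrt(d_v)
    Q = text_tokens @ W_q                     # (T, d_k)
    K = visual_feats @ W_k                    # (N, d_k)
    V = visual_feats @ W_v                    # (N, d_t)
    attn = softmax(Q @ K.T / np.sqrt(d_k))    # (T, N) attention over locations
    return text_tokens + attn @ V             # residual update of text tokens

# toy example: 5 instruction tokens, 16 spatial locations of the previous image
text = np.random.default_rng(1).standard_normal((5, 64))
vis = np.random.default_rng(2).standard_normal((16, 128))
out = visually_guided_text_attention(text, vis)
print(out.shape)  # (5, 64): same shape as the input text tokens
```

In a multi-turn setting, conditioning the text representation on the previously generated image in this way lets each instruction token focus on the image regions it refers to before the generator consumes it.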
