MOC-GAN: Mixing Objects and Captions to Generate Realistic Images

06/06/2021
by Tao Ma, et al.

Generating images from conditional descriptions has gained increasing interest in recent years. However, existing conditional inputs suffer from either an unstructured form (captions) or limited information and expensive labeling (scene graphs). For a targeted scene, the core items, the objects, are usually definite, while their interactions are flexible and hard to define precisely. We therefore introduce a more rational setting: generating a realistic image from objects and captions. Under this setting, the objects explicitly define the critical roles in the targeted image, while the captions implicitly describe their rich attributes and connections. Correspondingly, we propose MOC-GAN, which mixes inputs of the two modalities to generate realistic images. It first infers the implicit relations between object pairs from the captions to build a hidden-state scene graph. This yields a multi-layer representation containing objects, relations, and captions, where the scene graph provides the structure of the scene and the caption provides image-level guidance. A cascaded attentive generative network then generates phrase patches in a coarse-to-fine manner by attending to the most relevant words in the caption. In addition, a phrase-wise DAMSM is proposed to better supervise fine-grained phrase-patch consistency. On the COCO dataset, our method outperforms state-of-the-art methods on both Inception Score and FID while maintaining high visual quality. Extensive experiments demonstrate the unique features of the proposed method.
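The abstract does not give the exact formulations of the word attention or the phrase-wise DAMSM. As a rough illustration only, the two ideas can be sketched as scaled dot-product attention over caption words and a contrastive phrase-patch matching loss; the function names, shapes, and the `gamma` smoothing factor below are hypothetical simplifications, not the paper's actual method:

```python
import numpy as np

def word_attention(region_feats, word_feats):
    """Per-region attention over caption words (AttnGAN-style sketch).
    region_feats: (n_regions, d); word_feats: (n_words, d).
    Returns attention weights (n_regions, n_words) and
    word-context vectors (n_regions, d)."""
    scores = region_feats @ word_feats.T / np.sqrt(region_feats.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over words
    context = weights @ word_feats                 # weighted word mixture
    return weights, context

def phrase_patch_matching_loss(patch_emb, phrase_emb, gamma=10.0):
    """Contrastive matching loss in the spirit of a phrase-wise DAMSM:
    each phrase should score highest against its own patch within the
    batch. patch_emb, phrase_emb: (batch, d)."""
    p = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
    q = phrase_emb / np.linalg.norm(phrase_emb, axis=1, keepdims=True)
    sim = gamma * (q @ p.T)                        # cosine similarities
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    prob = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    return -np.log(np.diag(prob) + 1e-12).mean()   # NLL of correct pairs
```

The attention weights make each generated patch depend most on the caption words relevant to it, while the matching loss pushes matched phrase-patch pairs together and mismatched pairs apart across the batch.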

Related research:

- PasteGAN: A Semi-Parametric Method to Generate Image from Scene Graph (05/05/2019). Despite some exciting progress on high-quality image generation from str...
- ChatPainter: Improving Text to Image Generation using Dialogue (02/22/2018). Synthesizing realistic images from text descriptions on a dataset like M...
- Rethinking the Reference-based Distinctive Image Captioning (07/22/2022). Distinctive Image Captioning (DIC) – generating distinctive captions tha...
- Text2FaceGAN: Face Generation from Fine Grained Textual Descriptions (11/26/2019). Powerful generative adversarial networks (GAN) have been developed to au...
- Multi Instance Learning For Unbalanced Data (12/17/2018). In the context of Multi Instance Learning, we analyze the Single Instanc...
- DEVICE: DEpth and VIsual ConcEpts Aware Transformer for TextCaps (02/03/2023). Text-based image captioning is an important but under-explored task, aim...
- Linguistic Structures as Weak Supervision for Visual Scene Graph Generation (05/28/2021). Prior work in scene graph generation requires categorical supervision at...
