Multi-Tailed, Multi-Headed, Spatial Dynamic Memory refined Text-to-Image Synthesis

10/15/2021
by   Amrit Diggavi Seshadri, et al.

Synthesizing high-quality, realistic images from text descriptions is a challenging task, and current methods synthesize images from text in a multi-stage manner, typically by first generating a rough initial image and then refining its details at subsequent stages. However, existing methods that follow this paradigm suffer from three important limitations. First, they synthesize initial images without attempting to separate image attributes at the word level. As a result, the object attributes of initial images (which provide the basis for subsequent refinement) are inherently entangled and ambiguous. Second, by using a common text representation for all regions, current methods prevent text from being interpreted in fundamentally different ways at different parts of an image; different image regions can therefore only assimilate the same type of information from the text at each refinement stage. Finally, current methods generate refinement features only once per refinement stage, attempting to address all image aspects in a single shot. This single-shot refinement limits the precision with which each stage can learn to improve the prior image. Our proposed method introduces three novel components to address these shortcomings: (1) an initial generation stage that explicitly generates a separate set of image features for each word n-gram; (2) a spatial dynamic memory module for image refinement; (3) an iterative multi-headed mechanism that makes it easier to improve upon multiple image aspects. Experimental results demonstrate that our Multi-Headed Spatial Dynamic Memory image refinement, combined with our Multi-Tailed Word-level Initial Generation (MSMT-GAN), performs favourably against the previous state of the art on the CUB and COCO datasets.
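To make the two core ideas in the abstract concrete, here is a minimal, illustrative sketch of (a) building a separate feature set per word n-gram rather than one pooled text vector, and (b) iterative multi-headed refinement in which spatial image features repeatedly attend to that n-gram memory and accumulate residual updates. This is not the paper's implementation: the function names, the mean-pooled n-gram features, the softmax attention, and the 0.1 residual step size are all simplifying assumptions for exposition.

```python
import numpy as np

def ngram_tails(word_embs, n=2):
    """Multi-tailed idea (illustrative): produce one feature vector per word
    n-gram instead of a single pooled representation of the whole sentence.
    Here each n-gram feature is simply the mean of its word embeddings."""
    T, d = word_embs.shape
    return np.stack([word_embs[i:i + n].mean(axis=0) for i in range(T - n + 1)])

def multi_head_refine(image_feats, ngram_feats, num_heads=3, num_iters=2):
    """Iterative multi-headed refinement (illustrative): each head attends
    over the n-gram memory and writes a small additive residual to every
    spatial location; looping lets successive passes address different
    image aspects instead of refining everything in a single shot."""
    H, W, d = image_feats.shape
    queries = image_feats.reshape(-1, d)        # one query per spatial location
    for _ in range(num_iters):
        for _ in range(num_heads):
            scores = queries @ ngram_feats.T    # attention over n-gram memory
            attn = np.exp(scores - scores.max(axis=1, keepdims=True))
            attn /= attn.sum(axis=1, keepdims=True)
            queries = queries + 0.1 * (attn @ ngram_feats)  # residual update
    return queries.reshape(H, W, d)
```

For example, five word embeddings of dimension 8 with `n=2` yield four n-gram features, and refining a 4x4x8 feature map returns a map of the same shape, ready for the next refinement stage.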

