DAE-GAN: Dynamic Aspect-aware GAN for Text-to-Image Synthesis

08/27/2021
by   Shulan Ruan, et al.
Text-to-image synthesis refers to generating an image from a given text description, whose key goals are photo-realism and semantic consistency. Previous methods usually generate an initial image with a sentence embedding and then refine it with fine-grained word embeddings. Despite significant progress, the 'aspect' information contained in the text (e.g., red eyes) — a phrase of several words, rather than a single word, that depicts 'a particular part or feature of something' — is often ignored, even though it is highly helpful for synthesizing image details. How to better utilize aspect information in text-to-image synthesis remains an unresolved challenge. To address this problem, in this paper, we propose a Dynamic Aspect-awarE GAN (DAE-GAN) that represents text information comprehensively at multiple granularities: sentence-level, word-level, and aspect-level. Moreover, inspired by human learning behaviors, we develop a novel Aspect-aware Dynamic Re-drawer (ADR) for image refinement, in which an Attended Global Refinement (AGR) module and an Aspect-aware Local Refinement (ALR) module are alternately employed. AGR utilizes word-level embeddings to globally enhance the previously generated image, while ALR dynamically employs aspect-level embeddings to refine image details from a local perspective. Finally, a corresponding matching loss function is designed to ensure text-image semantic consistency at different levels. Extensive experiments on two well-studied and publicly available datasets (i.e., CUB-200 and COCO) demonstrate the superiority and rationality of our method.
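The alternating refinement described above can be sketched as a simple loop: an initial image is generated from the sentence embedding, then each refinement stage applies AGR (word-level, global) followed by ALR (aspect-level, local), with one aspect selected dynamically per stage. The following is a minimal illustrative sketch, not the paper's implementation; the function names mirror the paper's module names, but the toy image representation (a dict tracking the conditioning history) and the round-robin aspect schedule are assumptions for illustration.

```python
# Toy sketch of DAE-GAN's multi-granular refinement flow (illustrative only).

def initial_generation(sentence_emb):
    # Stage 0: generate a coarse initial image conditioned on the sentence embedding.
    return {"stage": 0, "conditioned_on": ["sentence"]}

def attended_global_refinement(image, word_embs):
    # AGR: attend over word-level embeddings to enhance the whole image globally.
    refined = dict(image)
    refined["stage"] += 1
    refined["conditioned_on"] = refined["conditioned_on"] + ["words"]
    return refined

def aspect_aware_local_refinement(image, aspect_emb):
    # ALR: use one aspect-level embedding (e.g., "red eyes") to refine local details.
    refined = dict(image)
    refined["conditioned_on"] = refined["conditioned_on"] + [aspect_emb]
    return refined

def dae_gan_forward(sentence_emb, word_embs, aspect_embs, num_stages=2):
    image = initial_generation(sentence_emb)
    for t in range(num_stages):
        image = attended_global_refinement(image, word_embs)
        # Dynamically pick one aspect per stage (round-robin here, an assumption).
        aspect = aspect_embs[t % len(aspect_embs)]
        image = aspect_aware_local_refinement(image, aspect)
    return image

result = dae_gan_forward("a bird with red eyes", ["bird", "red", "eyes"],
                         ["red eyes", "white belly"])
```

In the real model each function would be a learned generator stage, and the matching loss would score the output against the sentence, word, and aspect embeddings to enforce semantic consistency at every granularity.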


