FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization

12/02/2021
by Xingchao Liu, et al.

Generating images from natural language instructions is an intriguing yet highly challenging task. We approach text-to-image generation by combining the power of the pre-trained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score with the given input text. Compared to traditional methods that train text-to-image generative models from scratch, the CLIP+GAN approach is training-free, zero-shot, and can be easily customized with different generators. However, optimizing the CLIP score in the GAN space poses a highly challenging optimization problem, and off-the-shelf optimizers such as Adam fail to yield satisfying results. In this work, we propose the FuseDream pipeline, which improves the CLIP+GAN approach with three key techniques: 1) an AugCLIP score, which robustifies the CLIP objective by introducing random augmentations on the image; 2) a novel initialization and over-parameterization strategy, which allows us to efficiently navigate the non-convex landscape of the GAN space; 3) a composed generation technique which, by leveraging a novel bi-level optimization formulation, can compose multiple images to extend the GAN space and overcome data bias. When prompted with different input text, FuseDream can generate high-quality images with varying objects, backgrounds, artistic styles, and even novel counterfactual concepts that do not appear in the training data of the GAN we use. Quantitatively, the images generated by FuseDream achieve top-level Inception Score and FID on the MS COCO dataset, without additional architecture design or training. Our code is publicly available at <https://github.com/gnobitab/FuseDream>.
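For intuition, below is a minimal sketch of the core CLIP+GAN loop the abstract describes: BigGAN latents (a noise vector and a soft class vector) are optimized with Adam to maximize an AugCLIP-style score, i.e., the CLIP similarity averaged over random crops and flips of the generated image. This is an illustration of the general idea rather than the authors' exact pipeline; the augmentation recipe, `NUM_AUGS`, learning rate, step count, and the example prompt are all assumptions, and the initialization/over-parameterization and bi-level composition techniques are omitted for brevity.

```python
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample

device = "cuda" if torch.cuda.is_available() else "cpu"

# Off-the-shelf CLIP and BigGAN; neither is trained or fine-tuned.
clip_model, _ = clip.load("ViT-B/32", device=device, jit=False)
clip_model = clip_model.float().eval()
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

# CLIP's published input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073],
                         device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711],
                        device=device).view(1, 3, 1, 1)

tokens = clip.tokenize(["a blue dog playing in the snow"]).to(device)
with torch.no_grad():
    text_feat = F.normalize(clip_model.encode_text(tokens), dim=-1)

# Optimize the noise vector z and a soft class vector directly.
z = torch.tensor(truncated_noise_sample(truncation=1.0, batch_size=1),
                 device=device, requires_grad=True)
y_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
opt = torch.optim.Adam([z, y_logits], lr=0.01)

NUM_AUGS = 16  # augmentations averaged per step (an assumption)

def aug_clip_score(img):
    """AugCLIP-style score: mean CLIP similarity over random crops/flips."""
    views = []
    for _ in range(NUM_AUGS):
        # Random resized crop plus horizontal flip, all differentiable.
        size = int(img.shape[-1] * (0.7 + 0.3 * torch.rand(1).item()))
        top = torch.randint(0, img.shape[-2] - size + 1, (1,)).item()
        left = torch.randint(0, img.shape[-1] - size + 1, (1,)).item()
        v = img[..., top:top + size, left:left + size]
        if torch.rand(1).item() < 0.5:
            v = torch.flip(v, dims=[-1])
        views.append(F.interpolate(v, size=(224, 224), mode="bilinear",
                                   align_corners=False))
    batch = (torch.cat(views, dim=0) - CLIP_MEAN) / CLIP_STD
    img_feat = F.normalize(clip_model.encode_image(batch), dim=-1)
    return (img_feat @ text_feat.T).mean()

for step in range(200):
    opt.zero_grad()
    img = gan(z, torch.softmax(y_logits, dim=-1), truncation=1.0)
    img = (img.clamp(-1, 1) + 1) / 2  # BigGAN outputs lie in [-1, 1]
    loss = -aug_clip_score(img)       # maximize the AugCLIP score
    loss.backward()
    opt.step()
```

In the paper's full pipeline, the search is additionally warm-started from the best-scoring of many random latent candidates and over-parameterized as a combination of several basis latents, and the bi-level formulation composes two generated images to reach beyond a single GAN sample; the sketch above covers only the basic AugCLIP-guided optimization.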

Related research

08/03/2021 · Cycle-Consistent Inverse GAN for Text-to-Image Synthesis
This paper investigates an open research task of text-to-image synthesis...

04/04/2023 · Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification
In this work, we propose a few-shot colorectal tissue image generation m...

06/26/2023 · A Simple and Effective Baseline for Attentional Generative Adversarial Networks
Synthesising a text-to-image model of high-quality images by guiding the...

04/26/2023 · TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation
We propose TR0N, a highly general framework to turn pre-trained uncondit...

04/29/2023 · LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization
Deep learning methods are state-of-the-art for spectral image (SI) compu...

02/02/2021 · Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search
In this research work we present CLIP-GLaSS, a novel zero-shot framework...

03/02/2023 · X&Fuse: Fusing Visual Information in Text-to-Image Generation
We introduce X&Fuse, a general approach for conditioning on visual inf...
