
Simultaneous Multiple-Prompt Guided Generation Using Differentiable Optimal Transport

by Yingtao Tian, et al.

Recent advances in deep learning, such as powerful generative models and joint text-image embeddings, have provided the computational creativity community with new tools, opening new perspectives for artistic pursuits. Text-to-image synthesis approaches that operate by generating images from text cues provide a case in point. These images are generated from a latent vector that is progressively refined to agree with the text cues: patches are sampled from the generated image and compared with the text prompts in the common text-image embedding space; the latent vector is then updated, using gradient descent, to reduce the mean distance between these patches and the text cues. While this approach gives artists ample freedom to customize the overall appearance of images through their choice of generative model, its reliance on a simple criterion (the mean of distances) often causes mode collapse: the entire image is drawn toward the average of all text cues, losing their diversity. To address this issue, we propose using matching techniques from the optimal transport (OT) literature, resulting in images that faithfully reflect a wide diversity of prompts. We provide numerous illustrations showing that OT avoids some of the pitfalls of estimating updates from mean distances, and demonstrate both qualitatively and quantitatively that our proposed method performs better in experiments.
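The contrast the abstract draws can be sketched numerically. Below is a minimal illustration (not the authors' implementation) of the two losses over patch and prompt embeddings: the baseline averages all patch-to-prompt distances, while the OT alternative weights distances by an entropy-regularized transport plan computed with Sinkhorn iterations, so each prompt receives its fair share of patches. The embedding dimensions, `eps`, and iteration count are illustrative assumptions.

```python
import numpy as np

def mean_distance_loss(patch_emb, prompt_emb):
    # Baseline criterion: average of ALL patch-to-prompt distances.
    # Every patch is pulled toward the mean of all prompts, which is
    # the mode-collapse failure described in the abstract.
    d = np.linalg.norm(patch_emb[:, None, :] - prompt_emb[None, :, :], axis=-1)
    return d.mean()

def sinkhorn_ot_loss(patch_emb, prompt_emb, eps=0.1, n_iter=200):
    # OT alternative (sketch): compute an entropy-regularized transport
    # plan with uniform marginals via Sinkhorn iterations, then take the
    # transport cost <P, C>. Each prompt is matched to a subset of
    # patches instead of all patches chasing the prompt average.
    C = np.linalg.norm(patch_emb[:, None, :] - prompt_emb[None, :, :], axis=-1)
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-C / eps)                             # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iter):                          # Sinkhorn scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                  # transport plan
    return (P * C).sum()
```

Because the uniform coupling (the mean over all pairs) is itself a feasible transport plan with maximal entropy, the entropic OT cost never exceeds the mean-distance baseline; the gap is what lets distinct prompts keep their own regions of the image.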

