GLIGEN: Open-Set Grounded Text-to-Image Generation

01/17/2023
by Yuheng Li, et al.

Large-scale text-to-image diffusion models have made remarkable advances. However, the status quo is to rely on text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text-to-image generation with caption and bounding-box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.
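To make the gated-injection idea concrete, here is a minimal PyTorch sketch of a gated attention layer in the spirit the abstract describes: the frozen backbone's visual tokens attend jointly with grounding tokens (e.g. encoded bounding boxes and phrases), and a zero-initialized gate scales the new branch so the pre-trained model's behavior is exactly preserved at initialization. The class name, tensor shapes, and layer layout are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class GatedGroundingAttention(nn.Module):
    """Sketch of a GLIGEN-style gated layer (hypothetical names/shapes).

    Visual tokens from the frozen diffusion backbone attend over the
    concatenation of themselves and the grounding tokens. The scalar gate
    passes through tanh and starts at zero, so the new trainable branch
    contributes nothing until training moves the gate away from zero.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Zero-initialized gate: tanh(0) = 0, so the layer is a no-op at init.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, visual: torch.Tensor, grounding: torch.Tensor) -> torch.Tensor:
        # visual:    (B, N_v, dim) tokens from the frozen backbone
        # grounding: (B, N_g, dim) encoded grounding inputs (boxes + phrases)
        x = self.norm(torch.cat([visual, grounding], dim=1))
        out, _ = self.attn(x, x, x)
        # Keep only the visual-token positions; add back through the gate.
        return visual + torch.tanh(self.gamma) * out[:, : visual.shape[1]]
```

In a setup like this, only the new layer's parameters would be trained while every weight of the pre-trained backbone stays frozen, which is what lets the model keep its original concept knowledge while acquiring the grounding ability.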


Related research

11/24/2022
Shifted Diffusion for Text-to-image Generation
We present Corgi, a novel method for text-to-image generation. Corgi is ...

06/23/2023
Zero-shot spatial layout conditioning for text-to-image diffusion models
Large-scale text-to-image diffusion models have significantly improved t...

01/12/2023
Guiding Text-to-Image Diffusion Model Towards Grounded Generation
The goal of this paper is to augment a pre-trained text-to-image diffusi...

04/28/2023
SceneGenie: Scene Graph Guided Diffusion Models for Image Synthesis
Text-conditioned image generation has made significant progress in recen...

05/23/2023
Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach
Recent text-to-image generation models have demonstrated impressive capa...

09/29/2021
Visually Grounded Concept Composition
We investigate ways to compose complex concepts in texts from primitive ...

05/12/2021
Connecting What to Say With Where to Look by Modeling Human Attention Traces
We introduce a unified framework to jointly model images, text, and huma...
