CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers

04/28/2022
by Ming Ding, et al.

The development of transformer-based text-to-image models is impeded by their slow generation and the complexity of handling high-resolution images. In this work, we put forward a solution based on hierarchical transformers and local parallel autoregressive generation. We pretrain a 6B-parameter transformer with a simple and flexible self-supervised task, the Cross-Modal General Language Model (CogLM), and finetune it for fast super-resolution. The new text-to-image system, CogView2, shows generation quality very competitive with the concurrent state-of-the-art DALL-E 2, and naturally supports interactive text-guided editing of images.
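The abstract's "local parallel auto-regressive generation" refers to decoding local windows of the token grid in parallel so the number of sequential steps depends on the window size rather than the full grid size. Below is a toy sketch of such a decoding schedule; the function name, grid size, and window size are illustrative assumptions, not CogView2's actual implementation.

```python
def lopar_schedule(grid, window):
    """Build a local-parallel autoregressive decoding schedule (toy sketch).

    The grid x grid token map is split into window x window local blocks.
    At step k, the k-th position inside *every* block is generated in
    parallel, so only window*window sequential steps are needed instead
    of grid*grid. (Illustrative only; CogView2's schedule differs in detail.)
    """
    steps = []
    for k in range(window * window):
        dy, dx = divmod(k, window)  # offset of position k inside each block
        positions = [(by + dy, bx + dx)
                     for by in range(0, grid, window)
                     for bx in range(0, grid, window)]
        steps.append(positions)
    return steps

steps = lopar_schedule(grid=16, window=4)
# 4*4 = 16 sequential steps together cover all 16*16 = 256 token positions
print(len(steps), sum(len(s) for s in steps))
```

With a 16x16 token grid and 4x4 windows, decoding takes 16 sequential steps instead of 256, which is the source of the speedup the abstract claims for high-resolution generation.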


Related research

05/26/2021 — CogView: Mastering Text-to-Image Generation via Transformers
Text-to-Image generation in the general domain has long been an open pro...

05/10/2022 — Transformer-based Cross-Modal Recipe Embeddings with Large Batch Training
In this paper, we present a cross-modal recipe retrieval framework, Tran...

11/22/2021 — L-Verse: Bidirectional Generation Between Image and Text
Far beyond learning long-range interactions of natural language, transfo...

05/17/2020 — T-VSE: Transformer-Based Visual Semantic Embedding
Transformer models have recently achieved impressive performance on NLP ...

02/15/2018 — Image Transformer
Image generation has been successfully cast as an autoregressive sequenc...

04/27/2023 — IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers
Scalable Vector Graphics (SVG) is a popular vector image format that off...

05/26/2021 — Aggregating Nested Transformers
Although hierarchical structures are popular in recent vision transforme...
