StyleSwin: Transformer-based GAN for High-resolution Image Generation

12/20/2021
by Bowen Zhang, et al.

Despite their tantalizing success in a broad range of vision tasks, transformers have not yet demonstrated an ability on par with ConvNets in high-resolution image generative modeling. In this paper, we explore using pure transformers to build a generative adversarial network for high-resolution image synthesis. To this end, we believe that local attention is crucial to striking a balance between computational efficiency and modeling capacity. Hence, the proposed generator adopts the Swin transformer in a style-based architecture. To achieve a larger receptive field, we propose double attention, which simultaneously leverages the context of the local and the shifted windows, leading to improved generation quality. Moreover, we show that providing the knowledge of absolute positions, which is lost in window-based transformers, greatly benefits the generation quality. The proposed StyleSwin is scalable to high resolutions, with both the coarse geometry and the fine structures benefiting from the strong expressivity of transformers. However, blocking artifacts occur during high-resolution synthesis because performing local attention in a block-wise manner may break spatial coherency. To solve this, we empirically investigate various solutions, among which we find that employing a wavelet discriminator to examine the spectral discrepancy effectively suppresses the artifacts. Extensive experiments show superiority over prior transformer-based GANs, especially at high resolutions, e.g., 1024x1024. StyleSwin, without complex training strategies, excels over StyleGAN on CelebA-HQ 1024 and achieves on-par performance on FFHQ-1024, demonstrating the promise of using transformers for high-resolution image generation. The code and models will be available at https://github.com/microsoft/StyleSwin.
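
To make the double attention idea concrete, below is a minimal sketch, not the official implementation (which lives in the linked repository): half of the attention heads attend within regular local windows and the other half within windows cyclically shifted by half the window size, and the two outputs are concatenated before a final projection. All names here (DoubleWindowAttention, window_size, the helper functions) are illustrative assumptions, and the Swin-style attention mask that prevents attention across wrapped-around borders after the cyclic shift is omitted for brevity.

```python
# Illustrative sketch of double attention: one branch of heads uses regular
# windows, the other uses shifted windows; outputs are concatenated.
# NOTE: names are assumptions for this sketch; the shifted-window attention
# mask of Swin is omitted for brevity.

import torch
import torch.nn as nn


def window_partition(x, ws):
    """(B, H, W, C) -> (num_windows * B, ws * ws, C) non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.reshape(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)


def window_reverse(windows, ws, H, W):
    """Inverse of window_partition: back to (B, H, W, C)."""
    B = windows.shape[0] // ((H // ws) * (W // ws))
    x = windows.reshape(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)


class DoubleWindowAttention(nn.Module):
    """Splits channels/heads into a regular-window branch and a shifted-window
    branch so each token sees a larger effective receptive field."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        assert dim % 2 == 0 and num_heads % 2 == 0
        self.ws = window_size
        self.half = dim // 2                 # channels per branch
        self.heads = num_heads // 2          # heads per branch
        self.head_dim = self.half // self.heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def _branch(self, q, k, v, shift, H, W):
        # q, k, v: (B, H, W, half); optionally cyclically shift the feature map.
        if shift:
            q, k, v = (torch.roll(t, (-shift, -shift), dims=(1, 2)) for t in (q, k, v))
        q, k, v = (window_partition(t, self.ws) for t in (q, k, v))   # (nW*B, N, half)
        N = self.ws * self.ws
        q, k, v = (t.reshape(-1, N, self.heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))                                # (nW*B, h, N, hd)
        attn = (q * self.head_dim ** -0.5) @ k.transpose(-2, -1)
        out = attn.softmax(dim=-1) @ v                                # (nW*B, h, N, hd)
        out = out.transpose(1, 2).reshape(-1, N, self.half)
        out = window_reverse(out, self.ws, H, W)                      # (B, H, W, half)
        if shift:
            out = torch.roll(out, (shift, shift), dims=(1, 2))
        return out

    def forward(self, x):
        # x: (B, H, W, C) with H and W divisible by the window size.
        B, H, W, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        (q1, q2), (k1, k2), (v1, v2) = (t.split(self.half, dim=-1) for t in (q, k, v))
        out_regular = self._branch(q1, k1, v1, shift=0, H=H, W=W)
        out_shifted = self._branch(q2, k2, v2, shift=self.ws // 2, H=H, W=W)
        return self.proj(torch.cat([out_regular, out_shifted], dim=-1))


if __name__ == "__main__":
    attn = DoubleWindowAttention(dim=64, window_size=8, num_heads=4)
    feats = torch.randn(2, 32, 32, 64)      # (B, H, W, C)
    print(attn(feats).shape)                # torch.Size([2, 32, 32, 64])
```

Because the two branches share one token set but cover complementary window layouts, each layer aggregates context beyond a single local window at roughly the cost of standard windowed attention.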

Related research

02/14/2021
TransGAN: Two Transformers Can Make One Strong GAN
The recent explosive interest on transformers has suggested their potent...

11/10/2022
StyleNAT: Giving Each Head a New Perspective
Image generation has been a long sought-after but challenging task, and ...

08/11/2022
Deep is a Luxury We Don't Have
Medical images come in high resolutions. A high resolution is vital for ...

10/25/2021
STransGAN: An Empirical Study on Transformer in GANs
Transformer becomes prevalent in computer vision, especially for high-le...

03/01/2021
Generative Adversarial Transformers
We introduce the GANsformer, a novel and efficient type of transformer, ...

10/12/2022
FontTransformer: Few-shot High-resolution Chinese Glyph Image Synthesis via Stacked Transformers
Automatic generation of high-quality Chinese fonts from a few online tra...

07/13/2022
DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation
One key challenge of exemplar-guided image generation lies in establishi...
