StyleNAT: Giving Each Head a New Perspective

11/10/2022
by Steven Walton, et al.

Image generation has been a long sought-after but challenging task, and performing the generation task in an efficient manner is similarly difficult. Often researchers attempt to create a "one size fits all" generator, where there are few differences in the parameter space for drastically different datasets. Herein, we present a new transformer-based framework, dubbed StyleNAT, targeting high-quality image generation with superior efficiency and flexibility. At the core of our model is a carefully designed framework that partitions attention heads to capture local and global information, which is achieved through using Neighborhood Attention (NA). With different heads able to pay attention to varying receptive fields, the model is able to better combine this information, and adapt, in a highly flexible manner, to the data at hand. StyleNAT attains a new SOTA FID score on FFHQ-256 with 2.046, beating prior state-of-the-art models, both convolutional (such as StyleGAN-XL) and transformer-based (such as HiT and StyleSwin), and a new transformer SOTA on FFHQ-1024 with an FID score of 4.174. These results show a 6.4% improvement on FFHQ-256 scores compared to StyleGAN-XL, with a 28% reduction in the number of parameters and a 56% improvement in sampling throughput. Code and models are available at https://github.com/SHI-Labs/StyleNAT .
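To illustrate the head-partitioning idea described above, the sketch below splits attention heads into groups that each attend over a neighborhood with a different dilation, so some heads see fine local detail while others cover a wider receptive field. This is only a minimal, illustrative PyTorch sketch: the class name, parameters, and the unfold-based attention are hypothetical simplifications (zero-padded borders rather than shifted windows), and the actual StyleNAT implementation relies on the optimized Neighborhood Attention kernels in the NATTEN library.

```python
# Illustrative sketch only (not the official StyleNAT/NATTEN implementation).
import torch
import torch.nn.functional as F
from torch import nn


class PartitionedNeighborhoodAttention(nn.Module):
    """Splits heads into groups; each group attends over a k x k neighborhood
    with its own dilation, giving heads different receptive fields."""

    def __init__(self, dim, num_heads=4, kernel_size=3, dilations=(1, 2)):
        super().__init__()
        assert num_heads % len(dilations) == 0 and dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.kernel_size = kernel_size
        self.dilations = dilations
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                        # x: (B, C, H, W)
        q, k, v = self.qkv(x).chunk(3, dim=1)    # each (B, C, H, W)
        heads_per_group = self.num_heads // len(self.dilations)
        group_ch = heads_per_group * self.head_dim
        outs = []
        for g, dil in enumerate(self.dilations):
            sl = slice(g * group_ch, (g + 1) * group_ch)
            outs.append(self._attend(q[:, sl], k[:, sl], v[:, sl], dil, heads_per_group))
        return self.proj(torch.cat(outs, dim=1))

    def _attend(self, q, k, v, dilation, n_heads):
        B, C, H, W = q.shape
        ks = self.kernel_size
        pad = dilation * (ks // 2)               # keeps spatial size constant
        # Gather each pixel's ks*ks (dilated) neighborhood of keys/values.
        # Note: zero padding at borders is a simplification; true NA shifts
        # the window so border pixels still attend to real neighbors.
        unfold = lambda t: F.unfold(t, ks, dilation=dilation, padding=pad)
        k_n = unfold(k).view(B, n_heads, self.head_dim, ks * ks, H * W)
        v_n = unfold(v).view(B, n_heads, self.head_dim, ks * ks, H * W)
        q_ = q.view(B, n_heads, self.head_dim, 1, H * W)
        attn = (q_ * k_n).sum(2, keepdim=True) * self.head_dim ** -0.5
        attn = attn.softmax(dim=3)                # softmax over the neighborhood
        out = (attn * v_n).sum(3)                 # (B, n_heads, head_dim, H*W)
        return out.view(B, C, H, W)


# Example usage: half the heads use dilation 1 (local), half use dilation 4 (global-ish).
x = torch.randn(1, 64, 32, 32)
attn = PartitionedNeighborhoodAttention(dim=64, num_heads=4, kernel_size=3, dilations=(1, 4))
print(attn(x).shape)  # torch.Size([1, 64, 32, 32])
```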


Related research

StyleSwin: Transformer-based GAN for High-resolution Image Generation (12/20/2021)
Despite the tantalizing success in a broad of vision tasks, transformers...

Dilated Neighborhood Attention Transformer (09/29/2022)
Transformers are quickly becoming one of the most heavily applied deep l...

Local and Global GANs with Semantic-Aware Upsampling for Image Generation (02/28/2022)
In this paper, we address the task of semantic-guided image generation. ...

Deep Spatial Transformation for Pose-Guided Person Image Generation and Animation (08/27/2020)
Pose-guided person image generation and animation aim to transform a sou...

DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation (07/13/2022)
One key challenge of exemplar-guided image generation lies in establishi...

TcGAN: Semantic-Aware and Structure-Preserved GANs with Individual Vision Transformer for Fast Arbitrary One-Shot Image Generation (02/16/2023)
One-shot image generation (OSG) with generative adversarial networks tha...
