Dual Attention GANs for Semantic Image Synthesis

08/29/2020
by Hao Tang, et al.

In this paper, we focus on the semantic image synthesis task, which aims at translating semantic label maps into photo-realistic images. Existing methods lack effective semantic constraints to preserve the semantic information and ignore structural correlations in both the spatial and channel dimensions, leading to unsatisfactory, blurry, artifact-prone results. To address these limitations, we propose a novel Dual Attention GAN (DAGAN) that synthesizes photo-realistic, semantically consistent images with fine details from input layouts, without imposing extra training overhead or modifying the network architectures of existing methods. To this end, we propose two novel modules, a position-wise Spatial Attention Module (SAM) and a scale-wise Channel Attention Module (CAM), which capture semantic structure attention in the spatial and channel dimensions, respectively. Specifically, SAM selectively correlates the pixels at each position through a spatial attention map, so that pixels sharing the same semantic label are related to each other regardless of their spatial distance. Meanwhile, CAM selectively emphasizes the scale-wise features at each channel through a channel attention map, integrating associated features among all channel maps regardless of their scales. Finally, we sum the outputs of SAM and CAM to further improve the feature representation. Extensive experiments on four challenging datasets show that DAGAN achieves remarkably better results than state-of-the-art methods while using fewer model parameters. The source code and trained models are available at https://github.com/Ha0Tang/DAGAN.
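The two modules described above follow the familiar non-local attention pattern: SAM computes a position-by-position affinity over all spatial locations, CAM computes a channel-by-channel affinity, and their outputs are summed. The following is a minimal NumPy sketch of that pattern; it omits the learned query/key/value projections, scaling parameters, and residual connections a real implementation would use, and all function names here are illustrative, not taken from the DAGAN codebase.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    """Position-wise attention: relate every spatial location to every other.

    feat: (C, H, W) feature map.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)            # flatten spatial dimensions
    attn = softmax(x.T @ x, axis=-1)      # (H*W, H*W) position affinity map
    out = x @ attn.T                      # aggregate features across all positions
    return out.reshape(C, H, W)

def channel_attention(feat):
    """Scale-wise attention: relate every channel map to every other.

    feat: (C, H, W) feature map.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)
    attn = softmax(x @ x.T, axis=-1)      # (C, C) channel affinity map
    out = attn @ x                        # reweight channels by their affinities
    return out.reshape(C, H, W)

# Sum the two attention outputs, as the paper does for its final representation.
feat = np.random.randn(8, 4, 4)
fused = spatial_attention(feat) + channel_attention(feat)
```

Because the spatial affinity map relates every position to every other, pixels with similar features influence one another regardless of distance, which is the property the abstract attributes to SAM.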


Related research

- Layout-to-Image Translation with Double Pooling Generative Adversarial Networks (08/29/2021)
- Edge Guided GANs with Semantic Preserving for Semantic Image Synthesis (03/31/2020)
- Edge Guided GANs with Multi-Scale Contrastive Learning for Semantic Image Synthesis (07/22/2023)
- Example-Guided Image Synthesis across Arbitrary Scenes using Masked Spatial-Channel Attention and Self-Supervision (04/18/2020)
- Example-Guided Scene Image Synthesis using Masked Spatial-Channel Attention and Patch-Based Self-Supervision (11/27/2019)
- Efficient Scale-Invariant Generator with Column-Row Entangled Pixel Synthesis (03/24/2023)
- DiverGAN: An Efficient and Effective Single-Stage Framework for Diverse Text-to-Image Generation (11/17/2021)
