Dual Attention GANs for Semantic Image Synthesis
In this paper, we focus on the semantic image synthesis task, which aims to transfer semantic label maps to photo-realistic images. Existing methods lack effective semantic constraints to preserve semantic information and ignore structural correlations in both the spatial and channel dimensions, leading to blurry, artifact-prone results. To address these limitations, we propose a novel Dual Attention GAN (DAGAN) that synthesizes photo-realistic and semantically consistent images with fine details from input layouts, without imposing extra training overhead or modifying the network architectures of existing methods. Specifically, we propose two novel modules, a position-wise Spatial Attention Module (SAM) and a scale-wise Channel Attention Module (CAM), to capture semantic structure attention in the spatial and channel dimensions, respectively. SAM selectively correlates the pixels at each position via a spatial attention map, so that pixels with the same semantic label are related to each other regardless of their spatial distance. Meanwhile, CAM selectively emphasizes the scale-wise features at each channel via a channel attention map, integrating associated features among all channel maps regardless of their scales. Finally, we sum the outputs of SAM and CAM to further improve the feature representation. Extensive experiments on four challenging datasets show that DAGAN achieves remarkably better results than state-of-the-art methods while using fewer model parameters. The source code and trained models are available at https://github.com/Ha0Tang/DAGAN.
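To make the two modules concrete, below is a minimal PyTorch sketch of the dual-attention idea the abstract describes: a position-wise branch that relates all spatial locations through an (HW x HW) attention map, a channel-wise branch that relates all channel maps through a (C x C) attention map, and a final sum of the two branch outputs. The module names (SpatialAttention, ChannelAttention, DualAttention), the query/key reduction factor, and the learned residual weights are illustrative assumptions, not the authors' exact design; see the linked repository for the official implementation.

```python
# Minimal sketch of a dual-attention block (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Position-wise attention: every pixel attends to every other pixel
    via an (HW x HW) attention map, regardless of spatial distance."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                     # (B, C', HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # (B, HW, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Channel-wise attention: every channel map attends to every other
    channel map via a (C x C) attention map, regardless of scale."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                                # (B, C, HW)
        attn = torch.bmm(feat, feat.transpose(1, 2))       # (B, C, C)
        attn = F.softmax(attn, dim=-1)
        out = torch.bmm(attn, feat).view(b, c, h, w)
        return self.gamma * out + x


class DualAttention(nn.Module):
    """Sums the outputs of the two attention branches, as the abstract
    describes, to refine the feature representation."""

    def __init__(self, channels):
        super().__init__()
        self.sam = SpatialAttention(channels)
        self.cam = ChannelAttention()

    def forward(self, x):
        return self.sam(x) + self.cam(x)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)    # e.g. intermediate generator features
    refined = DualAttention(64)(feats)
    print(refined.shape)                  # torch.Size([2, 64, 32, 32])
```

Both branches start as identity mappings (gamma initialized to zero) and learn to mix in attention, a common stabilizing choice for non-local blocks; the summed output keeps the input's shape, so such a block can be dropped into a generator without architectural changes, consistent with the abstract's claim.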