Interpreting Galaxy Deblender GAN from the Discriminator's Perspective

by Heyi Li, et al.

Generative adversarial networks (GANs) are well known for their unsupervised learning capabilities. A recent success in the field of astronomy is deblending two overlapping galaxy images via a branched GAN model. However, it remains a significant challenge to comprehend how the network works, which is particularly difficult for non-expert users. This research focuses on the behavior of one of the network's major components, the Discriminator, which plays a vital role but is often overlooked. Specifically, we enhance the Layer-wise Relevance Propagation (LRP) scheme to generate a heatmap-based visualization. We call this technique Polarized-LRP; it consists of two parts: positive contribution heatmaps for ground truth images and negative contribution heatmaps for generated images. Using the Galaxy Zoo dataset, we demonstrate that our method clearly reveals the attention areas of the Discriminator when differentiating generated galaxy images from ground truth images. To connect the Discriminator's impact to the Generator, we visualize the gradual changes of the Generator across the training process. An interesting result we have achieved there is the detection of a problematic data augmentation procedure that would otherwise have remained hidden. We find that our proposed method serves as a useful visual analytical tool for a deeper understanding of GAN models.





1 Introduction

Astronomical researchers routinely assume the strict isolation of the targeted celestial body, so their objective simplifies to evaluating the properties of a single object. However, galactic overlapping is ubiquitous in current surveys due to projection effects and source interactions. This introduces bias into multiple physical traits, such as photometric redshifts and weak lensing, at levels beyond requirements. With the arrival of the next generation of ground-based galaxy surveys such as the Large Synoptic Survey Telescope (LSST) [9], which is expected to begin operation in 2023, this issue becomes more urgent. Specifically, the increase in both depth and sensitivity will cause the number of blended galaxy images to grow exponentially. Dawson et al. [5] predict that nearly of galaxies captured in LSST images encounter overlapping with a center-to-center distance. This leads to immense quantities of imaging data being rendered unusable. According to the estimation in [16], up to million galaxy images could be discarded each year if the blending issue is not effectively addressed throughout the ten-year period of the LSST survey. However, the task of galaxy deblending remains an open problem in the field of astronomy, and no gold-standard solution exists in the processing pipeline.

Since Goodfellow et al. [7] first proposed the generative adversarial network (GAN) model, it has achieved state-of-the-art performance in many computer vision applications, especially in face generation [4, 1, 10] and image manipulation [8, 19]. Many GAN variants [2, 11] have been proposed to improve training stability and boost convergence speed. Recently, a GAN model [16] has been applied to the galaxy deblending problem and has yielded promising results on the conspicuous (confirmed blends) class of blends. However, a conceptual understanding of GAN models is still largely lacking, which makes building and training GAN models extremely demanding for non-expert users in the astronomy community. This hinders the wide adoption of GAN models and potentially prevents them from reaching optimal performance. More importantly, the lack of interpretation directly results in a lack of trust in images generated by GAN models.

During our discussions with domain scientists, we noticed two facts: (1) a visual explanation can help them understand model behavior without machine learning expertise, and (2) the behavior of the Discriminator is the most perplexing to astronomers. Different visualization algorithms have been proposed to increase the interpretability of convolutional neural networks (CNNs). Class activation mapping (CAM) based methods [18, 17] directly use the activation of the last convolutional layer to infer the downsampled relevance of the input pixels, but such methods are only applicable to specific architectures that use an average pooling layer. The layer-wise relevance propagation (LRP) algorithm [15] was proposed to address this issue. For each image, LRP propagates the classification score backward through the model and calculates relevance intensities over all pixels. Although successful in interpreting discriminative classifiers [12], the LRP algorithm does not cover network structures like GAN models. Among the limited works explaining generative network models, Liu et al. [14] designed a GUI interface to display connections between neurons of neighboring layers in a GAN model. Unfortunately, their tool is intended only for machine learning experts and is hence not accessible to non-expert researchers. Most recently, Bau et al. [3] came up with a dissection framework that examines the causal relationship between network units and object concepts. Likewise, the relationship between face aging and model parameters is examined in [6], but the Discriminator is completely omitted in these works. Although not used to generate images during the inference stage, the Discriminator significantly affects the performance of the Generator and is therefore important to investigate.

Therefore, we propose to extend the LRP framework to bridge this gap. Our Polarized-LRP propagates the single probability value given by the Discriminator backwards, during which it calculates positive contributions for ground truth images and negative contributions for generated images separately. By comparing relevance maps for the same input at different iterations during the training stage, our method clearly reveals the gradual changes of the Generator in response to the direct feedback from the Discriminator. Moreover, we demonstrate the effectiveness of our method by uncovering a problematic step in data augmentation which was previously unknown to astronomers. To the best of our knowledge, our Polarized-LRP is the first method in the literature which can effectively visualize the behavior of the Discriminator and its impact on the Generator.

The remainder of this paper is organized as follows. A thorough explanation of our new visualization scheme is included in Section 2. Extensive experiments are conducted to prove the effectiveness of our method, which are described in Section 3. We present our conclusions and discussions of future work in Section 4.

2 Our Visualization Method

2.1 Polarized-LRP

As mentioned in Section 1, the LRP algorithm applies only to discriminative models. The root of this limitation lies in the structure of the relevance input. The classifier's output consists of the predicted probability for each class. All elements in this vector are set to zero except the highest one, which represents the target class. This altered vector then serves as the relevance input, so that only neurons connected to this element are activated during the propagation. The Discriminator, on the other hand, returns only one probability value. Applying the LRP algorithm directly essentially assumes that there is only one class, which renders all output heatmaps meaningless.
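The relevance-input construction described above can be sketched as follows (a hypothetical numpy helper, not the paper's code):

```python
import numpy as np

def classifier_relevance_input(scores):
    """Standard LRP input: keep only the top-class score, zero the rest."""
    r = np.zeros_like(scores)
    top = int(np.argmax(scores))
    r[top] = scores[top]
    return r

# For a multi-class classifier, only one element survives the masking.
# A discriminator emits a single probability, so the same recipe
# degenerates to a one-element vector with nothing left to mask out.
```

With a single-output Discriminator the score vector has length one, which is exactly why plain LRP collapses to a single "class" here.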

To address this issue, our Polarized-LRP computes two relevance maps from the same probability value, one positive and one negative. The positive relevance map displays only positive contributions from pixels in the input image to the probability value, while the negative relevance map shows only negative contributions. If an image is classified as fake by the Discriminator, the negative contributions from input pixels dominate and thus decrease the probability score; in this case, the negative relevance map represents and conveys the decision making of the Discriminator. The opposite holds if the image is classified as real. By polarizing the relevance into positive and negative, our algorithm creates two "virtual" classes from the Discriminator's output probability.
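The polarized split itself is simple. Assuming a routine that returns the signed per-pixel contributions to the Discriminator's probability, the two "virtual" classes are just its two half-ranges (a minimal sketch, not the paper's implementation):

```python
import numpy as np

def polarize(signed_relevance):
    """Split a signed relevance map into the two 'virtual' classes.

    positive: pixels that push the probability toward "real"
    negative: pixels that pull it toward "fake" (returned as magnitudes)
    """
    positive = np.maximum(signed_relevance, 0.0)
    negative = -np.minimum(signed_relevance, 0.0)
    return positive, negative
```

For an image the Discriminator calls real, the positive map carries the explanation; for one it calls fake, the negative map does.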

Our Polarized-LRP follows the conservation rule, which guarantees that the total relevance propagated through all neurons in each layer of the model remains constant:

$$\sum_p R_p^{(1)} = \cdots = \sum_i R_i^{(l)} = \sum_j R_j^{(l+1)} = \cdots = f(x) \quad (1)$$
The conservation rule also assures that the inflow of relevance to one neuron matches the outflow of relevance from that same neuron, which is captured in Equation 2. Neurons such as $i$ and $i'$ in layer $l$ are connected to neuron $j$, which in turn is connected to neurons such as $k$ and $k'$ in layer $l+2$:

$$R_j^{(l+1)} = \sum_i R_{i \leftarrow j}^{(l,\,l+1)} = \sum_k R_{j \leftarrow k}^{(l+1,\,l+2)} \quad (2)$$
Here $R_{i \leftarrow j}^{(l,\,l+1)}$ represents the contribution from neuron $i$ at layer $l$ to neuron $j$ at layer $l+1$. It is computed using the propagation procedure in Equation 3:

$$R_{i \leftarrow j}^{(l,\,l+1)} = \left( \alpha \cdot \frac{x_i w_{ij}^{+}}{\sum_{i'} x_{i'} w_{i'j}^{+} + b_j^{+}} - \beta \cdot \frac{x_i w_{ij}^{-}}{\sum_{i'} x_{i'} w_{i'j}^{-} + b_j^{-}} \right) R_j^{(l+1)} \quad (3)$$
The weights and biases are denoted by $w_{ij}$ and $b_j$ respectively; $(\cdot)^{+} = \max(0, \cdot)$ and $(\cdot)^{-} = \min(0, \cdot)$ represent value truncation at zero.
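This propagation step can be sketched for one fully connected layer as follows. We assume the standard LRP alpha-beta form; a positive-only map corresponds to alpha = 1, beta = 0:

```python
import numpy as np

def lrp_alpha_beta(x, W, b, R_out, alpha=1.0, beta=0.0, eps=1e-9):
    """One dense-layer step of the alpha-beta LRP propagation rule.

    x     : (n_in,)        input activations of the layer
    W, b  : (n_in, n_out), (n_out,) weights and biases
    R_out : (n_out,)       relevance arriving from the layer above
    Returns the relevance redistributed onto the layer inputs.
    """
    z = x[:, None] * W                               # z_ij = x_i * w_ij
    zp, zn = np.maximum(z, 0.0), np.minimum(z, 0.0)  # (.)^+ and (.)^- truncation
    denom_p = zp.sum(axis=0) + np.maximum(b, 0.0) + eps
    denom_n = zn.sum(axis=0) + np.minimum(b, 0.0) - eps
    return (alpha * zp / denom_p - beta * zn / denom_n) @ R_out
```

With zero biases and alpha = 1, beta = 0, the returned relevance sums to the incoming relevance (up to eps), which is the conservation property of Equation 1.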

2.2 Demonstration Examples

Here we present two cases to demonstrate our proposed Polarized-LRP. The first row in Figure 1 shows the positive relevance map for a ground truth image. From the map on the right, we can see that the Discriminator focuses on the interior of the galaxy ellipse. Pixels in the attention area make strong positive contributions to the probability score, which explains why the Discriminator classifies this image as real. The second row in Figure 1 exhibits the negative relevance map for a generated image. The map indicates that the Discriminator makes its decision based on pixels encircling the central area, which is the most noticeable area in the image. This is consistent with a domain expert's visual comparison of the real and generated images.

(a) ground truth image (b) positive relevance map (c) generated image (d) negative relevance map
Figure 1: The first row includes an example of the positive relevance map and the second row contains an example of the negative relevance map. Images are enlarged for better visualization.

3 Experiments and Discussions

To explain the role of the Discriminator to astronomers, we first select one successful training instance of the galaxy deblender GAN model and compare relevance maps of the same input at different training iterations. Next, during our analysis of failed training attempts, we discover an unusual pattern which leads to the successful diagnosis of an erroneous data augmentation procedure.

3.1 Galaxy Deblender GAN Model Replication

Since no pre-trained network is publicly available, we re-trained the galaxy deblender GAN from scratch, strictly following all steps mentioned in [16]. Training and testing datasets were re-generated using the raw galaxy images from the Kaggle Galaxy Zoo classification challenge [13]. The GAN model was trained using the same learning rate settings and updated for the same number of iterations. A single Tesla V100 graphics card was used for training. Table 1 shows the reported peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values along with ours. Although our values are slightly lower than the reported ones, they are within a reasonable range.

             PSNR    SSIM
Reported     34.61   0.92
Replicated   33.47   0.89
Table 1: Mean values of PSNR and SSIM
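PSNR is straightforward to reproduce; the sketch below shows its textbook definition in plain numpy (SSIM involves windowed statistics and is more easily taken from a library such as scikit-image):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    mse = np.mean((np.asarray(reference) - np.asarray(test)) ** 2)
    if mse == 0:
        return np.inf                      # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```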
(a) relevance map (b) early stage (c) ground truth (d) relevance map (e) middle stage (f) ground truth (g) relevance map (h) final stage (i) ground truth
Figure 2: Each row represents a different stage during training. Images are enlarged for better visualization.

3.2 Training Understanding

An example of a high-quality generation is shown in Figure 2. Three iterations are selected, corresponding to an early stage, an approximate mid-point, and near the end when the model converges. For the three generated images, the Discriminator gave scores of , , and , respectively. We plot their relevance maps to indicate why they are identified as fake. From the central image in the first row, we can see that the Generator only manages to replicate the inner bright spot. In the Discriminator's relevance map, this corresponds to the small black hole in the middle. Furthermore, our relevance map on the left clearly reveals that the low probability score from the Discriminator is mostly due to the unrealistic-looking pixels in the surrounding areas. This information is then passed on to the Generator as the adversarial loss penalty. As shown in the subsequent rows, the white ring in our relevance map grows thinner and darker. Along with the expansion of the black hole (meaning the confidence area in the center of the galaxy enlarges), the generated image slowly transforms towards the ground truth image. One interesting finding is that the Generator learns the glaring spot first and then incrementally captures the surroundings. This is understandable because the Generator is first trained with only the style loss from VGG19. During this so-called "burn-in" period, features such as salient spots are expected to be grasped by the Generator. The benefit of the style loss during the "burn-in" period is easily visualized in our relevance map, as is how the Generator changes during training under the guidance of the Discriminator.

(a) relevance map (b) ground truth (c) contrast increased
Figure 3: The "phantom boundary" becomes apparent when the contrast of the ground truth image is increased. Images are enlarged for better visualization.

3.3 Model Debugging

While analyzing the galaxy deblender GAN with our method, we noticed a strange phenomenon consistently appearing in the positive relevance maps. Consider Figure 3, where the boundary of a rectangular shape is shown in the positive relevance map of the ground truth image. This shape appears only in the positive relevance maps, which is an important revelation: it tells us that the Discriminator picks up features from the ground truth images that are hidden from our sight. Part of the decision was mistakenly based on the image background, without any footing in domain knowledge.

Further investigation revealed that this "phantom boundary" was introduced in the data preparation stage. One image out of each blended pair was randomly perturbed by flipping, rotation, displacement, and scaling. After these operations, all missing pixels in the newly created image were filled with zeros. However, this padded true-black background diverges from the near-black background of the galaxy, although the two seem quite alike upon visual inspection. Figure 4 shows the histograms of two different regions in the ground truth image, one from the galaxy background and the other from the manually padded background.

Figure 4: Two background regions randomly selected from the ground truth image. While they look alike visually, the two regions have very different histograms. The ground truth image was enlarged for better visualization.
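The diagnosis boils down to comparing simple statistics of two background patches; a toy illustration on synthetic data (not the actual survey images):

```python
import numpy as np

def region_stats(img, y, x, size=8):
    """Mean and std of a small patch -- enough to expose the difference
    between real sky noise and an exactly-zero padded region."""
    patch = img[y:y + size, x:x + size]
    return patch.mean(), patch.std()

# Toy image: near-black but noisy "sky" on the left, zero padding on the right.
rng = np.random.default_rng(0)
img = np.abs(rng.normal(0.02, 0.01, (32, 32)))
img[:, 24:] = 0.0                                 # simulated zero-padded strip
sky_mu, sky_sigma = region_stats(img, 4, 4)       # real background: nonzero variance
pad_mu, pad_sigma = region_stats(img, 4, 24)      # padded background: exactly zero
```

A discriminator can separate the two regions trivially, even though both look black to the eye.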

This problem had a large impact, as it prevented our galaxy deblender GAN model from reaching its optimum performance. Instead of capturing features of real celestial bodies, the Discriminator learned a much simpler strategy to manipulate the equilibrium system by exploiting the "phantom boundary". No matter how realistic the generated images were by both visual inspection and SSIM metrics, the Discriminator could easily differentiate them as long as the "phantom boundary" was absent. While zero-padding is a frequently used technique in image processing, many non-domain experts are unaware of this shortcoming. Fortunately, with the help of our proposed algorithm this problem could be detected. It was eventually resolved by replacing the zero-padding with random noise drawn from a distribution obtained from physics statistics.
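The fix, replacing the zero fill with samples drawn from an empirical background distribution, can be sketched as follows (a hypothetical helper; the actual noise model came from physics statistics):

```python
import numpy as np

def noise_fill(augmented, pad_mask, background_pixels, seed=None):
    """Fill padded pixels with draws from an empirical sky background.

    augmented         : image after flip/rotate/shift, padded with zeros
    pad_mask          : boolean mask marking the padded pixels
    background_pixels : 1-D sample of real sky-background values
    """
    rng = np.random.default_rng(seed)
    filled = augmented.copy()
    filled[pad_mask] = rng.choice(background_pixels, size=int(pad_mask.sum()))
    return filled
```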

4 Conclusion

Despite the many successes of GAN models, they are still difficult to understand. We propose a Polarized-LRP technique that allows interpretation of the galaxy deblender GAN from the perspective of the Discriminator. By visualizing positive and negative contributions separately, our algorithm successfully reveals how the Discriminator detects generated images and how it affects the Generator during training. Our method was immediately useful in uncovering a hidden mistake in the data preparation stage of our collaborators' galaxy images. Although designed for the galaxy deblending problem, Polarized-LRP is not restricted to this network. We plan to apply our method to interpret well-established GAN models such as StyleGAN. We also plan to extend LRP to the Generator to provide a complete understanding of both GAN components. Finally, we will design a visual analytics system to facilitate direct user interaction.


  • [1] G. Antipov, M. Baccouche, and J. Dugelay (2017) Face aging with conditional generative adversarial networks. In 2017 IEEE International Conference on Image Processing (ICIP), pp. 2089–2093. Cited by: §1.
  • [2] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. Cited by: §1.
  • [3] D. Bau, J. Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Freeman, and A. Torralba (2019) Gan dissection: visualizing and understanding generative adversarial networks. In International Conference on Learning Representations, Cited by: §1.
  • [4] Y. Chen, W. Chen, C. Wei, and Y. F. Wang (2017) Occlusion-aware face inpainting via generative adversarial networks. In 2017 IEEE International Conference on Image Processing (ICIP), pp. 1202–1206. Cited by: §1.
  • [5] W. A. Dawson, M. D. Schneider, J. A. Tyson, and M. J. Jee (2015) The ellipticity distribution of ambiguously blended objects. The Astrophysical Journal 816 (1), pp. 11. Cited by: §1.
  • [6] A. Genovese, V. Piuri, and F. Scotti (2019) Towards explainable face aging with generative adversarial networks. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 3806–3810. Cited by: §1.
  • [7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1.
  • [8] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §1.
  • [9] Ž. Ivezić, S. M. Kahn, J. A. Tyson, B. Abel, E. Acosta, R. Allsman, D. Alonso, Y. AlSayyad, S. F. Anderson, J. Andrew, et al. (2019) LSST: from science drivers to reference design and anticipated data products. The Astrophysical Journal 873 (2), pp. 111. Cited by: §1.
  • [10] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §1.
  • [11] C. Li, Z. Wang, and H. Qi (2018) Fast-converging conditional generative adversarial networks for image synthesis. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 2132–2136. Cited by: §1.
  • [12] H. Li, Y. Tian, K. Mueller, and X. Chen (2019) Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation. Image and Vision Computing 83, pp. 70–86. Cited by: §1.
  • [13] C. Lintott, K. Schawinski, S. Bamford, A. Slosar, K. Land, D. Thomas, E. Edmondson, K. Masters, R. C. Nichol, M. J. Raddick, et al. (2010) Galaxy zoo 1: data release of morphological classifications for nearly 900 000 galaxies. Monthly Notices of the Royal Astronomical Society 410 (1), pp. 166–178. Cited by: §3.1.
  • [14] M. Liu, J. Shi, K. Cao, J. Zhu, and S. Liu (2017) Analyzing the training processes of deep generative models. IEEE transactions on visualization and computer graphics 24 (1), pp. 77–87. Cited by: §1.
  • [15] G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K. Müller (2019) Layer-wise relevance propagation: an overview. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp. 193–209. Cited by: §1.
  • [16] D. M. Reiman and B. E. Göhre (2019) Deblending galaxy superpositions with branched generative adversarial networks. Monthly Notices of the Royal Astronomical Society 485 (2), pp. 2617–2627. Cited by: §1, §1, §3.1.
  • [17] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626. Cited by: §1.
  • [18] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921–2929. Cited by: §1.
  • [19] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §1.