Radio Galaxy Morphology Generation Using DNN Autoencoder and Gaussian Mixture Models

06/01/2018 ∙ by Zhixian Ma, et al. ∙ Shanghai Jiao Tong University

The morphology of a radio galaxy is highly affected by its central active galactic nucleus (AGN), which is studied to reveal the evolution of the supermassive black hole (SMBH). In this work, we propose a morphology generation framework for two typical classes of radio galaxies, namely Fanaroff-Riley type-I (FRI) and type-II (FRII), built on a deep neural network based autoencoder (DNNAE) and Gaussian mixture models (GMMs). The encoder and decoder subnets in the DNNAE are symmetric about a fully-connected layer, namely the code layer, which hosts the extracted feature vectors. By randomly generating feature vectors with three-component Gaussian mixture models, new FRI or FRII radio galaxy morphologies are then simulated. Experiments were demonstrated on real radio galaxy images, where we discussed the length of the feature vectors and the selection of loss functions, and compared batch normalization and dropout techniques for training the network. The results suggest a high efficiency and performance of our morphology generation framework. Code is available at: https://github.com/myinxd/dnnae-gmm.


I Introduction

The motivation of radio galaxy (RG) morphology generation is twofold. The first is to obtain an automatic generator of additional radio galaxy samples with known labels. The morphology of a radio galaxy is highly related to its central active galactic nucleus (AGN), which usually hosts a supermassive black hole [1, 2]. The evolution and mechanism of the AGNs differ among the morphologies by which radio galaxies are classified. The data release 7 (DR7) of the FIRST (Faint Images of the Radio Sky at Twenty centimeters) survey at 1.4 GHz [3, 4] contains more than 9.4×10^5 RGs, yet only several thousand are clearly classified and labeled manually [5, 6, 7]. The second is to benefit the foreground removal task on observations from the Square Kilometre Array (SKA), which aims to uncover what happened after the Big Bang from the redshifted, very weak 21 cm HI signal [8]. As a kind of very bright foreground signal, the RGs should be removed from the images so that the target 21 cm signal can be detected for further study [9]. Detection and removal of the RGs rely on morphology studies, and evidently the finer the understanding of the morphology, the more completely the radio galaxies can be eliminated.

Radio galaxies fall into two typical types, namely Fanaroff-Riley type-I (FRI) and type-II (FRII), which have different morphologies [10]. A typical FRI is composed of a bright core and one or two plume-like lobes extending from the core, with the luminosity decaying towards the edges of the lobes, while an FRII usually shows separated hotspots, brighter than the core, at the ends of the lobes. Wilman et al. have tried to simulate FR radio galaxy morphologies with a circular core and two extended elliptical lobes [11], which may assist theoretical studies of radio galaxies but is not applicable to the foreground removal task on real observations.

To obtain more realistic RGs, a generation model can be designed and trained on existing labeled RG samples. Recently, the generative adversarial network (GAN) was proposed by Goodfellow et al. [12]; it consists of two subnets (i.e., the generator and the discriminator) and can generate new samples with the generator [13, 14]. However, training a GAN suffers from non-convergence problems [15], and it cannot generate morphologies of a specific type from randomly generated Gaussian-distributed inputs alone. The autoencoder (AE) is another generative model, also composed of two subnets, namely the encoder and the decoder [16]. The encoder extracts the features of the samples, like encoding, while the decoder reconstructs the samples from the features, like decoding, and can simulate new samples of a specific type from randomly generated feature vectors that obey specific distributions (e.g., a Gaussian mixture model).

In this work, we propose a radio galaxy morphology generation framework. It takes advantage of the batch normalization (BN) layer for accelerating network convergence [17] to form a deep neural network based autoencoder (DNNAE, see Fig. 1). Two three-component Gaussian mixture models are then estimated from the features extracted by the encoder, with which new feature vectors are randomly generated and fed into the DNN decoder subnet to simulate new FRI/II radio galaxy morphologies.

This paper is organized as follows. In Sec. II we describe the proposed deep neural network based autoencoder and the training algorithm. In Sec. III the Gaussian mixture model for radio galaxy morphology generation is explained. Experiments are demonstrated and results are discussed in Sec. IV. We conclude in Sec. V with outlooks.

Fig. 1: The proposed DNN based autoencoder network for radio galaxy morphology generation. EC and DC represent the layers of the encoder and decoder subnets, respectively.

II Deep Neural Network Autoencoder

We illustrate the proposed deep neural network based autoencoder (DNNAE) in Fig. 1 and list the parameter settings in Table I. The DNNAE is composed of an encoder and a decoder with symmetric structures on either side of a single layer, namely the code layer. Both subnets are DNNs composed of fully-connected (FC) layers.

As introduced by Ioffe and Szegedy [17], batch normalization (BN) avoids the problem of changing distributions between connected layers and thereby accelerates network training. In addition, there is no need for dropout, since BN regularizes the network parameters. Therefore, batch normalization is applied to each FC layer in both the encoder and the decoder in this work. We discuss and compare the performance of the BN and dropout techniques for training the DNNAE networks with experiments in Sec. IV.

To train the parameters of the DNNAE network, a loss function (or cost function) must be defined. Since our target is to generate morphologies of radio galaxies belonging to specific types (i.e., FRI or FRII), we focus on the features that capture the singularities between the two types, as well as their similarities.

An autoencoder usually applies the mean squared error (MSE) between the input of the encoder and the output of the decoder as the objective to be minimized [18, 19]. By optimizing this objective, the network tends to extract features shared by the two RG types. Another cost function, the cross entropy (CE), is widely applied in classification tasks [18, 20]; minimizing it extracts the most distinguishable features between samples of different types. The MSE and CE loss functions are defined as

$$L_{\mathrm{MSE}} = \frac{1}{N M_r M_c} \sum_{n=1}^{N} \sum_{i=1}^{M_r} \sum_{j=1}^{M_c} \left( \hat{X}_{n,i,j} - X_{n,i,j} \right)^2, \qquad (1)$$

$$L_{\mathrm{CE}} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} y_{n,c} \log \hat{p}_{n,c}, \qquad (2)$$

where $L_{\mathrm{MSE}}$ is the mean squared error between the reconstructed images $\hat{X}$ and the original input images $X$, $N$ is the number of images in one batch, and $M_r$ and $M_c$ are the numbers of rows and columns of the images. $L_{\mathrm{CE}}$ is the cross-entropy loss, $C$ represents the number of types and is set to two, $y_{n,c}$ is the one-hot real label of the $n$th RG sample ($y_{n,c} = 1$ if the source belongs to type $c$, and $y_{n,c} = 0$ otherwise), and $\hat{p}_{n,c}$ is the normalized probability of this RG being classified as type $c$ in a certain batch.
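For a quick numerical reading of Eqs. (1) and (2), the two losses reduce to a few lines of NumPy. The array shapes and function names below are illustrative only and are not taken from the paper or its released code.

import numpy as np

def mse_loss(x_rec, x):
    # Eq. (1): averaged squared pixel error over a batch of images shaped (N, Mr, Mc).
    return np.mean((x_rec - x) ** 2)

def ce_loss(p_hat, y_onehot, eps=1e-12):
    # Eq. (2): mean cross entropy between one-hot labels and predicted class
    # probabilities, both shaped (N, C); eps guards against log(0).
    return -np.mean(np.sum(y_onehot * np.log(p_hat + eps), axis=1))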

To train our morphology generation network, a simple but efficient way is to combine the two as the objective to be optimized, i.e.,

$$L(\mathrm{E}, \mathrm{D}) = L_{\mathrm{MSE}}(\mathrm{E}, \mathrm{D}) + L_{\mathrm{CE}}(\mathrm{E}), \qquad (3)$$

where $L$ is the combined loss, and E and D represent the encoder and the decoder. In this work, the parameters of the DNNAE network are trained with the MSE and CE losses alternately: the CE loss is back-propagated to the parameters of the encoder subnet, while the MSE loss is back-propagated to the whole network (see Alg. 1 for details).

Subnet     Layer     Structure   AF        BN
---------------------------------------------
Encoder    Input     --          --        --
           EC1       2048        ReLU      Y
           EC2       1024        ReLU      Y
           EC3       1024        ReLU      Y
---------------------------------------------
           Code      256         ReLU      N
---------------------------------------------
Decoder    DC1       1024        ReLU      Y
           DC2       1024        ReLU      Y
           DC3       2048        ReLU      Y
           Output    --          Sigmoid   --
TABLE I: Parameter settings for the proposed DNN based autoencoder network. AF denotes the activation function and BN means batch normalization; Y and N flag whether a batch normalization layer is appended to the layer.
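As a reading aid, the layer layout of Table I can be sketched in a few lines of PyTorch. This is only an illustration of the structure, not the authors' implementation (their code is linked in the abstract); the 40×40 = 1600-pixel input follows the cropping in Sec. IV-A, while the BN placement after the activation and the small classification head used for the CE loss of Eq. (2) are our assumptions.

import torch
import torch.nn as nn

def fc_block(n_in, n_out, use_bn=True):
    # Fully-connected layer + ReLU, with BN appended when Table I flags it
    # (the exact BN placement is our choice).
    layers = [nn.Linear(n_in, n_out), nn.ReLU()]
    if use_bn:
        layers.append(nn.BatchNorm1d(n_out))
    return nn.Sequential(*layers)

class DNNAE(nn.Module):
    def __init__(self, n_in=1600, code_len=256, n_classes=2):
        super().__init__()
        # Encoder: EC1-EC3 plus the code layer (no BN on the code layer, per Table I).
        self.encoder = nn.Sequential(
            fc_block(n_in, 2048),
            fc_block(2048, 1024),
            fc_block(1024, 1024),
            nn.Linear(1024, code_len),
            nn.ReLU(),
        )
        # Decoder: DC1-DC3 plus a sigmoid output layer of the input size.
        self.decoder = nn.Sequential(
            fc_block(code_len, 1024),
            fc_block(1024, 1024),
            fc_block(1024, 2048),
            nn.Linear(2048, n_in),
            nn.Sigmoid(),
        )
        # Assumed classification head on the code for the CE loss.
        self.classifier = nn.Linear(code_len, n_classes)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), self.classifier(code), code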

Some popular techniques are also applied for training the proposed DNNAE network. The rectified linear unit (ReLU [21]) is applied as the activation function after each fully-connected layer, and we apply the adaptive moment optimization algorithm (ADAM [22]) to adjust the parameters with an exponentially decaying learning rate.

1:  Input: samples X and labels y
2:  Input: epochs and batchsize
3:  batches = length(labels) / batchsize
4:  for epoch = 1 to epochs do
5:     for batch = 1 to batches do
6:        X_batch ← the batch-th group of batchsize samples from X
7:        y_batch ← the corresponding labels from y
8:        Feed X_batch and y_batch forward to obtain the MSE and CE losses
9:        Backpropagate the CE loss to the parameters of the encoder subnet
10:       Backpropagate the MSE loss to the parameters of the whole net
11:    end for
12:  end for
Algorithm 1 DNNAE-MSE+CE training algorithm.
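Alg. 1 can be realized, for instance, with the hypothetical DNNAE module sketched after Table I; the loop below is a minimal PyTorch rendering of the alternating updates, where the Adam optimizer, 0.001 initial learning rate, 0.95 exponential decay, 200 epochs, and batch size of 100 follow Sec. IV-B, and the data loader is assumed.

import torch
import torch.nn as nn

model = DNNAE()  # hypothetical module from the sketch after Table I
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

for epoch in range(200):
    # train_loader: assumed DataLoader yielding flattened 40x40 images and class labels, batch size 100.
    for x_batch, y_batch in train_loader:
        # Line 9 of Alg. 1: the CE loss depends only on the encoder and the
        # classification head, so its gradients never reach the decoder.
        _, logits, _ = model(x_batch)
        optimizer.zero_grad()
        ce(logits, y_batch).backward()
        optimizer.step()

        # Line 10 of Alg. 1: the MSE reconstruction loss is back-propagated
        # through the whole network (encoder and decoder).
        recon, _, _ = model(x_batch)
        optimizer.zero_grad()
        mse(recon, x_batch).backward()
        optimizer.step()
    scheduler.step()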

III Generation Algorithm

By feeding randomly generated feature codes, which obey the distribution of the features extracted from real radio galaxy samples, into the decoder subnet of the DNNAE, we can output simulated new radio galaxy images. We deploy a three-component Gaussian mixture model (GMM) to fit the distribution of the extracted FRI or FRII radio galaxy features, in which one component accounts for the similarities between the FRI/II types and the other two for their singularities.

Denote $\mathbf{z}$ as the feature vector (i.e., the code) and $D$ as the length of the features. Then the three-component GMM is

$$p(\mathbf{z}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{z} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k), \qquad (4)$$

where $p(\mathbf{z})$ represents the probability that the feature vector is generated from this GMM, $\pi_k$ are the mixing coefficients, and $(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$ are the parameters of the corresponding Gaussian models. $K$ is the number of components, which is set to three in this work: two components account for the singularities of the FRI and FRII morphologies, and the remaining one for their similarities. $\mathcal{N}(\mathbf{z} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$ is the $k$th Gaussian component, defined as

$$\mathcal{N}(\mathbf{z} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) = \frac{1}{(2\pi)^{D/2} |\boldsymbol{\Sigma}_k|^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{z} - \boldsymbol{\mu}_k)^{\mathrm{T}} \boldsymbol{\Sigma}_k^{-1} (\mathbf{z} - \boldsymbol{\mu}_k) \right]. \qquad (5)$$

For each type of the RGs, a GMM is constructed and estimated to obtain the corresponding parameters $\{\pi_k, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\}$, which are later used to randomly generate new images of that specific RG type. The expectation maximization (EM) algorithm is used to estimate the GMM parameters [23, 24].
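As one concrete realization of this procedure (not the authors' code), scikit-learn's GaussianMixture runs the EM fit of Eqs. (4)-(5) and can draw new codes that are then passed through the decoder. Here `codes_fr1` and `model` refer to the hypothetical FRI feature array and DNNAE sketch from the previous section.

import numpy as np
import torch
from sklearn.mixture import GaussianMixture

# codes_fr1: (n_samples, 256) array of code-layer features extracted from FRI training/validation images.
gmm_fr1 = GaussianMixture(n_components=3, covariance_type="full")
gmm_fr1.fit(codes_fr1)  # EM estimation of {pi_k, mu_k, Sigma_k}

# Draw new feature vectors from the fitted mixture and decode them into simulated FRI images.
new_codes, _ = gmm_fr1.sample(n_samples=16)
with torch.no_grad():
    generated = model.decoder(torch.as_tensor(new_codes, dtype=torch.float32))
generated = generated.reshape(-1, 40, 40).numpy()
# An analogous GMM fitted to the FRII codes would generate FRII morphologies in the same way.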

IV Experiments and results

To evaluate the performance of our proposed DNNAE network as well as the GMM based generation algorithm, we present experiments and discuss the results in this section.

Fig. 2: Experimental results and comparisons. (a) Averaged test losses of networks with different code feature lengths, loss functions, and dropout (DO) or batch normalization (BN) techniques. (b) Within-class test losses on FRI/II RGs for networks with different loss functions at various code lengths. (c) Training and validation loss curves of networks with the fixed MSE+CE loss function and a code length of 256, using either dropout or batch normalization.
Fig. 3: Radio galaxy images reconstructed by the DNNAE-MSE-BN, DNNAE-MSE+CE-DO, and DNNAE-MSE+CE-BN networks, each with a code length of 256. The left and right five columns are FRIs and FRIIs, respectively.

IV-A Data preparation

Real radio galaxy images were selected from two catalogs, i.e., the FRICAT [5] and the FRIICAT [6], to form the samples (192 FRIs and 99 FRIIs) for training the network, and were retrieved from the FIRST data archive (FIRST image cutouts: https://third.ucllnl.org/cgi-bin/firstcutout).

Before being fed into the DNNAE network, the original images were preprocessed in three steps. First, the noise was suppressed using the sigma clipping algorithm [7] to improve the contrast of the radio galaxy morphologies. Second, the central region of 40×40 pixels was cropped from each 150×150 pixel image. The last step applied data augmentation to enlarge the sample numbers, avoiding overfitting and achieving a balanced training set. In this work, the cropped sample images were augmented by flipping (left-to-right, up-to-down, or diagonal) and rotating by uniformly distributed angles.

The 291 radio galaxy samples were randomly divided into training, validation, and test subsets with a ratio of 64% : 16% : 20% before augmentation. Each FRI sample was augmented 200 times (i.e., 24,600 for training and 6,200 for validation), and each FRII was augmented 400 times (i.e., 23,200 for training and 6,400 for validation). Note that the test samples (38 FRIs and 20 FRIIs) were not augmented.
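A rough sketch of the three preprocessing steps using NumPy, Astropy, and SciPy is given below; the 3-sigma clipping threshold and the full 0-360 degree rotation range are our assumptions, since the paper does not state them, and the function names are ours.

import numpy as np
from astropy.stats import sigma_clip
from scipy.ndimage import rotate

def preprocess(img):
    # Step 1: suppress background noise with sigma clipping (clipped pixels set to zero;
    # the 3-sigma threshold is assumed).
    clipped = sigma_clip(img, sigma=3)
    img = np.where(clipped.mask, 0.0, img)
    # Step 2: crop the central 40x40 region from the 150x150 cutout.
    r0, c0 = (img.shape[0] - 40) // 2, (img.shape[1] - 40) // 2
    return img[r0:r0 + 40, c0:c0 + 40]

def augment(img, rng):
    # Step 3: random flip (left-to-right, up-to-down, or diagonal) followed by a random rotation.
    flip = rng.choice(["lr", "ud", "diag", "none"])
    if flip == "lr":
        img = np.fliplr(img)
    elif flip == "ud":
        img = np.flipud(img)
    elif flip == "diag":
        img = img.T
    angle = rng.uniform(0.0, 360.0)  # assumed full-circle range
    return rotate(img, angle, reshape=False, mode="constant", cval=0.0)

Under these assumptions, each cropped image would be passed through augment repeatedly (200 times per FRI, 400 times per FRII) to build the balanced training set described above.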

IV-B Experiments and comparisons

The proposed DNNAE was constructed as illustrated in Fig. 1, with the length of the feature vector varied for discussion. Since the selection of the loss function affects the performance of the network, a group of networks trained with only the MSE loss was also formed as a comparison to our MSE+CE strategy. We did not train a DNNAE with only the cross-entropy loss, since this loss is back-propagated only to the encoder subnet rather than the whole net. In addition, two networks applying the dropout technique without batch normalization were constructed for comparison. To be more intuitive, we name the four networks DNNAE-MSE+CE-BN, DNNAE-MSE-BN, DNNAE-MSE+CE-DO, and DNNAE-MSE-DO, where BN and DO are the abbreviations of batch normalization and dropout.

All the networks were trained in batches on the training and validation subsets above for 200 epochs, with the batch size set to 100. The exponentially decaying learning rate for parameter optimization was initialized to 0.001 with a decay rate of 0.95. For the networks applying dropout, the keep probability was 0.5.

Experimental results and comparisons of the four networks are illustrated in Fig. 2. Fig. 2(a) shows the MSE losses on the test subset for the DNNAE-MSE+CE-BN, DNNAE-MSE-BN, DNNAE-MSE+CE-DO, and DNNAE-MSE-DO networks with varying feature vector lengths at the code layer, where the error bars are at a 95% confidence level. Fig. 2(b) illustrates the within-class test losses on FRI/II RGs for the DNNAE-MSE+CE-BN and DNNAE-MSE-BN networks. Fig. 2(c) compares the training and validation losses over 200 epochs between DNNAE-MSE+CE networks applying BN and dropout, where the code length is fixed at 256.

In addition, ten FRI and FRII test samples were randomly selected to evaluate the reconstruction performance of the DNNAE-MSE-BN, DNNAE-MSE+CE-DO, and DNNAE-MSE+CE-BN networks (see Fig. 3).

From the experimental results, we summarize and discuss the following:

  • In general, the proposed DNNAE network can reconstruct FRI/II radio galaxies with low reconstruction error and high efficiency.

  • For all the networks, the test loss tends to converge as the code feature length increases. In particular, from Fig. 2(b), the losses of FRI/II converge at a code length of 256, which is why we select 256 as the code length.

  • From Fig. 2(a) and (b), it can be found that the combination of the MSE and CE loss functions achieves better performance than applying the MSE loss alone.

  • From Fig. 2(c), it is obvious that batch normalization accelerates the convergence of the network parameters. In addition, BN avoids the saturation problem as the parameter space enlarges; see the curves of DNNAE-MSE+CE-DO and DNNAE-MSE-DO in Fig. 2(a).

  • The test losses of the FRIs are lower than those of the FRIIs. In our view, this is because the FRII morphologies are more complicated, and the shortage of FRII samples may also contribute.

  • Regarding the images reconstructed by the three networks, the DNNAE-MSE+CE-BN and DNNAE-MSE-BN achieve visually similar performance, but the MSE+CE network better recovers finer substructures in some RGs, e.g., the FRIs in columns one and four and the FRII in column nine. The DNNAE-MSE+CE network with dropout could not reconstruct some RGs, e.g., the FRI in column five and the FRII in column ten.

IV-C Sample generation

Using the DNNAE-MSE+CE-BN network, we simulated images of FRI/II radio galaxy morphologies with the three-component GMMs described in Sec. III. Two GMMs were estimated from the features that the network extracted from the training and validation samples of the two RG types, respectively. The generated images are displayed in Fig. 4; they show distinguishable FRI and FRII morphologies and were all correctly classified by visual inspection by the authors.

Fig. 4: FRI (top row) and FRII (bottom row) radio galaxy morphologies generated by the DNNAE-MSE+CE-BN network with a code length of 256 and the three-component GMMs.

V Conclusion

A deep neural network based autoencoder is proposed to generate radio galaxy morphologies; it combines the mean squared error (MSE) and cross entropy (CE) loss functions and applies the batch normalization training technique. To simulate specific FRIs and FRIIs, three-component Gaussian mixture models are estimated and used to randomly generate feature vectors, which are fed into the decoder subnet to output new radio galaxy morphology samples.

The experimental results suggest that the reconstruction loss of the network converges as the feature vector length increases. Compared with the network using only the MSE loss, our MSE+CE combination strategy achieved better performance. The batch normalization technique made the network parameters converge faster and achieved significantly lower reconstruction error. The features extracted by the DNNAE network could be well described by the Gaussian mixture models, capturing both the similarities and the singularities of the RGs with different morphologies.

In the future, we will add more types of radio galaxies with complicated morphologies to train a more general generator model.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (grant Nos. 61371147 and 11433002) and the National Key Research and Development Plan (grant No. 2017YFF0210903).

References

  • [1] A. C. Fabian, “Observational evidence of active galactic nuclei feedback,” Annual Review of Astronomy and Astrophysics, vol. 50, pp. 455–489, Sep. 2012.
  • [2] P. Padovani, D. M. Alexander, R. J. Assef, B. De Marco, P. Giommi, R. C. Hickox, and et al., “Active galactic nuclei: what’s in a name?” Astron. Astrophys. Rev., vol. 25, p. 2, Jul. 2017.
  • [3] R. H. Becker, R. L. White, and D. J. Helfand, “The FIRST survey: faint images of the radio sky at twenty centimeters,” Astrophys. J., vol. 450, p. 559, Sep. 1995.
  • [4] P. N. Best and T. M. Heckman, “On the fundamental dichotomy in the local radio-AGN population: accretion, evolution and host galaxy properties,” Mon. Not. R. Astron. Soc., vol. 421, pp. 1569–1582, Apr. 2012.
  • [5] A. Capetti, F. Massaro, and R. D. Baldi, “FRICAT: A FIRST catalog of FR I radio galaxies,” Astron. Astrophys., vol. 598, p. A49, Feb. 2017.
  • [6] A. Capetti, F. Massaro, and R. D. Baldi, “FRIICAT: A FIRST catalog of FR II radio galaxies,” Astron. Astrophys., vol. 601, p. A81, May 2017.
  • [7] A. K. Aniyan and K. Thorat, “Classifying radio galaxies with the convolutional neural network,” Astrophys. J. Suppl. Ser., vol. 230, p. 20, Jun. 2017.

  • [8] L. Koopmans, J. Pritchard, G. Mellema, J. Aguirre, K. Ahn, R. Barkana, and et al., “The cosmic dawn and epoch of reionisation with SKA,” Advancing Astrophysics with the Square Kilometre Array (AASKA14), p. 1, Apr. 2015.
  • [9] E. Chapman, S. Zaroubi, F. B. Abdalla, F. Dulwich, V. Jelić, and B. Mort, “The effect of foreground mitigation strategy on EoR window recovery,” Mon. Not. R. Astron. Soc., vol. 458, pp. 2928–2939, May 2016.
  • [10] B. L. Fanaroff and J. M. Riley, “The morphology of extragalactic radio sources of high and low luminosity,” Mon. Not. R. Astron. Soc., vol. 167, pp. 31P–36P, May 1974.
  • [11] R. J. Wilman, L. Miller, M. J. Jarvis, T. Mauch, F. Levrier, F. B. Abdalla, and et al., “A semi-empirical simulation of the extragalactic radio continuum sky for next generation radio telescopes,” Mon. Not. R. Astron. Soc., vol. 388, pp. 1335–1348, Aug. 2008.
  • [12] I. Goodfellow, J. Pougetabadie, M. Mirza, B. Xu, D. Wardefarley, S. Ozair, and et al., “Generative adversarial networks,” Advances in Neural Information Processing Systems, vol. 3, pp. 2672–2680, 2014.
  • [13] F. Tom and D. Sheet, “Simulating patho-realistic ultrasound images using deep generative networks with adversarial learning,” ArXiv e-prints, 2017.
  • [14] H. Kwon, Y. Kim, H. Yoon, and D. Choi, “Captcha image generation systems using generative adversarial networks,” IEICE Transactions on Infomation and Systems, vol. 101, no. 2, pp. 543–546, 2018.
  • [15] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, X. Chen, and et al., “Improved techniques for training GANs,” in Advances in Neural Information Processing Systems, 2016, pp. 2234–2242.
  • [16] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.

  • [17] S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proceedings of the 32nd International Conference on Machine Learning, vol. 37, 2015, pp. 448–456.

  • [18] C. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., 2006.

  • [19] J. Zhang, S. Shan, M. Kan, and X. Chen, “Coarse-to-fine auto-encoder networks (CFAN) for real-time face alignment,” in European Conference on Computer Vision. Zurich, Switzerland: Springer, Sep. 2014, pp. 1–16.

  • [20] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, “Greedy layer-wise training of deep networks,” in Advances in neural information processing systems, Vancouver, Canada, Dec. 2007, pp. 153–160.
  • [21] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” Journal of Machine Learning Research, vol. 15, no. 106, p. 275, 2011.
  • [22] D. P. Kingma and J. Ba, “ADAM: a method for stochastic optimization,” ArXiv e-prints, Dec. 2014.
  • [23] Z. Ma, Y. Liang, and J. Zhu, “An optic-fiber fence intrusion recognition system using mixture Gaussian hidden Markov models,” IEICE Electronics Express, vol. 14, no. 5, p. 20170023, 2017.

  • [24] J. Bilmes, “A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models,” International Computer Science Institute, vol. 4, no. 510, p. 126, 1998.