Modular Deep Learning Analysis of Galaxy-Scale Strong Lensing Images

11/10/2019 ∙ by Sandeep Madireddy, et al. ∙ The University of Nottingham ∙ Argonne National Laboratory

Strong gravitational lensing of astrophysical sources by foreground galaxies is a powerful cosmological tool. While such lens systems are relatively rare in the Universe, the number of detectable galaxy-scale strong lenses is expected to grow dramatically with next-generation optical surveys, numbering in the hundreds of thousands, out of tens of billions of candidate images. Automated and efficient approaches will be necessary in order to find and analyze these strong lens systems. To this end, we implement a novel, modular, end-to-end deep learning pipeline for denoising, deblending, searching, and modeling galaxy-galaxy strong lenses (GGSLs). To train and quantify the performance of our pipeline, we create a dataset of 1 million synthetic strong lensing images using state-of-the-art simulations for next-generation sky surveys. When these pretrained modules were used as a pipeline for inference, we found that the classification (GGSL searching) accuracy improved significantly, from 82% for the baseline to 90%.


1 Introduction

Gravitational lensing is the deflection of light rays as they traverse the curved spacetime caused by the presence of mass. In the present era of precision cosmology, gravitational lensing has become a powerful probe in many areas of astrophysics and cosmology, from stellar to cosmological scales. Galaxy-galaxy strong lensing (GGSL) is a particular case of gravitational lensing in which the background source and foreground lens are both galaxies and the lensing is strong enough to distort images of the source into arcs or even rings, depending on the relative angular positions of the two objects. Since the discovery of the first GGSL system in 1988 Hewitt et al. (1988), many valuable scientific applications have been realized, such as studying galaxy mass density profiles Sonnenfeld et al. (2015); Shu et al. (2016); Küng et al. (2018), detecting and inferring galaxy substructure Vegetti et al. (2014); Hezaveh et al. (2016); Bayer et al. (2018); Brehmer et al. (2019), measuring cosmological parameters Collett and Auger (2014); Rana et al. (2017); Suyu et al. (2017), investigating the nature of high-redshift galaxies Bayliss et al. (2017); Dye et al. (2018); Sharda et al. (2018), and constraining the properties of self-interacting dark matter candidates Shu et al. (2016); Gilman et al. (2017); Kummer et al. (2018).

With the capabilities of next-generation telescopes such as the Large Synoptic Survey Telescope (LSST, https://www.lsst.org/) and Euclid (https://www.euclid-ec.org/), the number of known GGSLs is predicted to increase by several orders of magnitude Collett (2015). The strong gravitational lens finding challenge Metcalf et al. (2019) demonstrated the success of applying machine learning approaches to detect GGSL systems in an automated manner. Lanusse et al. (2018), Morningstar et al. (2018), Hezaveh et al. (2017), Levasseur et al. (2017), and Pearson et al. (2019) have shown the feasibility and reliability of deep learning for modeling strong lenses as a vastly more efficient alternative to traditional parametric methods. Fast forward modeling for strong lensing image reconstruction Morningstar et al. (2019) may also be combined with inference pipelines such as Markov chain Monte Carlo for lensing parameter estimation. However, the preprocessing of the original images—for example, deblending and denoising with machine learning—is still in its infancy.

In this paper, we address this growing need for automated GGSL analysis in two ways. First, we create a dataset of 1 million simulated images (500K GGSLs and 500K non-GGSLs) by feeding a catalog of GGSLs and a state-of-the-art semi-analytic galaxy catalog, cosmoDC2, into a strong lensing simulation program named PICS. To demonstrate the feasibility of the pipeline for analyzing GGSLs, we use only 120K of the simulated images (60K GGSLs and 60K non-GGSLs); the full 1 million images will be used to quantify the performance of the pipeline in further studies. Second, we develop an end-to-end machine learning pipeline for automated lens finding and characterization of GGSLs, which consists of four modules—denoising, deblending, lens identification, and lens characterization. We adopt fully convolutional neural network architectures based on deep residual networks (ResNets) for denoising the original pixelized images and for removing the lens light in the deblending module. The lens identification and characterization modules perform classification and regression, respectively, and are both built on the ResNet-50 architecture. We demonstrate considerable improvement over lens finding and characterization without the pipeline, and we discuss potential avenues for future improvement.

2 Data Preparation – Simulations

We created a realistic simulated dataset comprising 500K GGSLs and 500K non-GGSLs by adopting a catalog of strong lenses Collett (2015) (hereafter, Collett15) and a state-of-the-art extragalactic catalog Korytov et al. (2019) (cosmoDC2). Collett15 provides the mass models and simple light models of both lens and source galaxies, whereas cosmoDC2 provides more realistic light profiles of galaxies containing bulges and disks. To create the inputs for our strong lensing simulation program, PICS Li et al. (2016), we connect the mass profiles from Collett15 with the light profiles from cosmoDC2 by cross-matching the apparent magnitudes, axis ratios, position angles, and redshifts of the galaxies in the two catalogs.

The mass model of an individual lens galaxy is a singular isothermal ellipsoid (SIE), as adopted in Collett15, which not only is analytically tractable but also has been found to be consistent with models of individual lenses and lens statistics on the length scales relevant for strong lensing Koopmans et al. (2006); Gavazzi et al. (2007); Dye et al. (2008). Accordingly, the deflection maps are given by the position, velocity dispersion, axis ratio, position angle, and redshift of the lens as well as the redshift of the source galaxy, i.e., $\{x_l, y_l, \sigma_v, q, \phi, z_l, z_s\}$. Since $(x_l, y_l)$ can be fixed to $(0, 0)$ by centering the cutouts on the lens galaxies, and since the lensing strength (i.e., the Einstein radius) is given by $\theta_E = 4\pi\,(\sigma_v^2/c^2)\,(D_{ls}/D_s)$, the parameter array can be simplified to $\{\theta_E, q, \phi\}$, where $c$ is the speed of light and $D_{ls}$ and $D_s$ are the angular diameter distances from the deflector to the source and from the observer to the source, respectively.
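
For concreteness, the sketch below evaluates this Einstein-radius relation with astropy; the Planck15 cosmology and the function name are illustrative choices, not taken from the paper.

```python
# Sketch of the SIE Einstein-radius relation used to reduce the lens
# parameter array. Planck15 is an illustrative cosmology; the paper's
# exact choice may differ.
import numpy as np
from astropy.cosmology import Planck15 as cosmo

C_KMS = 299792.458  # speed of light in km/s

def einstein_radius_arcsec(sigma_v_kms, z_lens, z_source):
    """theta_E = 4*pi * (sigma_v/c)^2 * D_ls / D_s for an SIE lens."""
    d_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
    d_s = cosmo.angular_diameter_distance(z_source)
    theta_rad = 4.0 * np.pi * (sigma_v_kms / C_KMS) ** 2 * (d_ls / d_s).value
    return np.degrees(theta_rad) * 3600.0  # radians -> arcsec

# e.g., a 250 km/s lens at z=0.5 with a source at z=2.0 gives ~1.1 arcsec:
print(f"{einstein_radius_arcsec(250.0, 0.5, 2.0):.2f} arcsec")
```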

We added noise and a point spread function (PSF) to make the images realistic, using models of a ground-based-like telescope from Collett (2015); Connolly et al. (2010). The noise model is a mix of read noise, which is Gaussian, and shot noise, which is Poissonian and can be calculated from the flux in the pixelized images. The PSF model is a Gaussian with a band-dependent full width at half maximum (FWHM). Examples are shown in Appendix A, Fig. 2. The nonlensing systems are generated in the same way but with the strong lensing effects removed by setting the deflection angles to zero.
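
As an illustration of these observing effects, here is a minimal sketch that applies a Gaussian PSF and then adds Poisson shot noise and Gaussian read noise; the gain and read-noise values are placeholders rather than the Collett (2015)/Connolly et al. (2010) settings.

```python
# Hedged sketch of the ground-based observing effects described above:
# Gaussian PSF (band-dependent FWHM), Poisson shot noise from the pixel
# flux, and Gaussian read noise. gain/read_noise are placeholder values.
import numpy as np
from scipy.ndimage import gaussian_filter

def observe(image, fwhm_pix=3.0, gain=2.0, read_noise=5.0, rng=None):
    rng = rng or np.random.default_rng()
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    blurred = gaussian_filter(image, sigma)                # apply the PSF
    counts = np.clip(blurred * gain, 0, None)              # flux -> counts
    shot = rng.poisson(counts) / gain                      # Poisson shot noise
    return shot + rng.normal(0.0, read_noise, image.shape) # Gaussian read noise
```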

3 Methodology – Pipeline Training and Inference

Our proposed machine learning pipeline consists of four modules—denoising, source separation (deblending), lens searching (classification), and lens modeling (regression)—as shown in Fig. 1.

Figure 1: Machine learning pipeline for the analysis of galaxy-scale strong lensing systems.
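
The following is a minimal sketch of how such a four-stage pipeline can be chained at inference time, assuming PyTorch-style modules; the function and module names are hypothetical, not the paper's released interface.

```python
# Minimal sketch of the four-module inference pipeline of Fig. 1,
# assuming PyTorch modules with hypothetical names.
import torch

@torch.no_grad()
def run_pipeline(noisy, denoiser, deblender, classifier, regressor):
    clean = denoiser(noisy)                     # Noisy-Sim -> Noiseless-ML
    source = deblender(clean)                   # remove lens light -> Deblended-ML-ML
    is_lens = classifier(source).argmax(dim=1)  # lens vs. non-lens label
    params = regressor(source)                  # Einstein radius, axis ratio, angle
    return is_lens, params
```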

Denoising is an image restoration task whose goal is to recover a clean image $x$ from a noisy observation $y$. Traditionally, image denoising has been posed as an inverse problem, where optimization approaches and special-purpose regularizers (known as image priors) have been used Anwar et al. (2019). Recently, deep-learning-based approaches have been increasingly adopted and are currently the state of the art Lim et al. (2017); Zhang et al. (2018) for image denoising. We adopt the enhanced deep super-resolution network (EDSR) architecture Lim et al. (2017), which was proposed for a specific type of image restoration known as super-resolution. The residual network (ResNet He et al. (2016)) incorporates skip connections between residual blocks (which consist of convolution, batch normalization, and nonlinear activation layers) in a deep network and has been shown to work well for a variety of tasks Szegedy et al. (2017). Residual networks overcome the vanishing gradient problem by learning the mapping for the residual (with respect to the inputs). EDSR removes the batch normalization layers, which are deemed unnecessary for image-to-image tasks. Since the inputs and outputs for denoising have the same resolution, we also removed the up-sampling layer from the EDSR architecture, which is composed of residual blocks each containing two convolutional layers and a ReLU nonlinear activation function; the convolutional layers use fixed-size kernels and a fixed number of feature channels. The source separation (deblending) module decouples the lensed source light from the foreground lens light in the observations. This module uses the same modified EDSR architecture as the denoising module, since source separation is also an image-to-image task: it takes images with coupled source and foreground galaxies as input and outputs the corresponding lensed or nonlensed source galaxy, separated from the foreground lens.
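
A minimal sketch of such a modified EDSR denoiser/deblender is shown below, assuming PyTorch; the block count, kernel size, and channel width are illustrative values, since the paper's exact settings did not survive extraction.

```python
# Sketch of the modified EDSR used for denoising/deblending: residual
# blocks with two conv layers and a ReLU, no batch normalization, and
# no up-sampling head (same input/output resolution). ch/k/n_blocks
# are illustrative, not the paper's values.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch=64, k=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, k, padding=k // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, k, padding=k // 2),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection around the two conv layers

def make_edsr_denoiser(in_ch=1, ch=64, n_blocks=16):
    layers = [nn.Conv2d(in_ch, ch, 3, padding=1)]
    layers += [ResBlock(ch) for _ in range(n_blocks)]
    layers += [nn.Conv2d(ch, in_ch, 3, padding=1)]  # same-resolution output
    return nn.Sequential(*layers)
```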

The classification module detects lensing systems in the source-separated images: each observed image is classified as a lensed or a nonlensed system. We use the ResNet-50 architecture for this classification. In this architecture, each residual block is three layers deep and consists of convolution layers, with the channel size increasing from 64 to 2048 through the network and filter sizes of either 1×1 or 3×3. The parameter estimation (regression) module takes the source-separated galaxy and predicts its characteristics: Einstein radius, axis ratio, and position angle. It uses the same ResNet-50 architecture adopted for classification, but the last layer is replaced with a fully connected layer that predicts the three continuous quantities. We also considered using a single model for denoising and deblending together, but we found the pipeline discussed here (Figure 1) to perform best and hence, in the interest of space, do not discuss the alternative in detail.
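
A sketch of the two ResNet-50 heads follows, assuming the torchvision implementation; the two-logit and three-output replacements mirror the description above, while input-channel handling for multi-band survey images is omitted for brevity.

```python
# Sketch of the classification and regression heads: the same
# torchvision ResNet-50 backbone with the final fully connected layer
# swapped. Adapting the first conv layer to non-RGB survey images is
# left out; torchvision's default expects 3 input channels.
import torch.nn as nn
from torchvision.models import resnet50

def make_classifier():
    net = resnet50()
    net.fc = nn.Linear(net.fc.in_features, 2)  # lens vs. non-lens logits
    return net

def make_regressor():
    net = resnet50()
    net.fc = nn.Linear(net.fc.in_features, 3)  # Einstein radius, axis ratio, angle
    return net
```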

4 Results and Discussion

Each of the four modules—denoising, deblending, classification, and regression—was trained individually using the corresponding parts of the simulation data. Once a module was trained, its weights were fixed, and the modules were deployed as an inference pipeline in which the predictions from each module were fed into the subsequent module, with the end goal of characterizing the lensed galaxies. The results of training the modules are discussed first, followed by inference.

4.1 Training

The denoising EDSR model was trained on noisy, blended galaxy images (Noisy-Sim), with the corresponding noiseless blended images (Noiseless-Sim) from the simulation as ground truth, and evaluated on a held-out test set. We used the peak signal-to-noise ratio (PSNR) to evaluate the denoising accuracy.
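
For reference, PSNR is $10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; the sketch below treats the ground-truth dynamic range as MAX, which is an assumption since the paper does not state its normalization.

```python
# PSNR as used to score denoising/deblending: 10*log10(MAX^2 / MSE).
# Using the ground-truth dynamic range for MAX is an assumption.
import numpy as np

def psnr(pred, truth, data_range=None):
    mse = np.mean((pred.astype(np.float64) - truth.astype(np.float64)) ** 2)
    if data_range is None:
        data_range = truth.max() - truth.min()  # assumed data range
    return 10.0 * np.log10(data_range ** 2 / mse)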

The accuracy metrics on the test data for the trained denoising model are shown in Table 1(a) in Appendix A. First, the difference between Noisy-Sim and Noiseless-Sim is shown to demonstrate the effect of the noise: the mean PSNR over all the test data is 22.56, indicating that the noise has a significant effect on the images. Then, the ability of the denoising model to predict denoised images from the noisy inputs is measured by comparing the model prediction (Noiseless-ML) with the corresponding ground truth (Noiseless-Sim); the PSNR of 47.91 indicates very good noise removal by the trained model. To measure the accuracy of the deblending module, we first compared Noiseless-Sim with the noiseless, deblended simulation data (Deblended-Sim) to characterize the difference between the input and the ground truth that the prediction seeks to match (Table 1(a)); the mean PSNR over all the test data is 14.47, indicating a significant difference between these image pairs. Then, the output of the deblending model (Deblended-ML) was compared with the ground truth (Deblended-Sim) on the test data; the PSNR of 35.03 indicates good recovery of the source galaxy by deblending.

The classification module was trained on 108,000 images with a batch size of 256, using the Adam optimizer and a learning rate that decays by half every two epochs. The mean classification accuracy (over the two classes) was used to measure the accuracy of the classification model (Table 1(b)). As a baseline, we trained a classification model to predict the label directly from the noisy, blended simulation images (Noisy-Sim) and evaluated the metrics on the corresponding test images. The baseline mean accuracy was 0.82, while the classification model trained on Deblended-Sim gave a mean accuracy of 0.99, a significant improvement over the baseline.
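
The stated recipe (Adam, batch size 256, learning rate halved every two epochs) maps onto a standard step scheduler, sketched below with a placeholder initial rate, since the paper's starting value was lost in extraction.

```python
# Sketch of the stated optimization recipe: Adam with the learning rate
# halved every two epochs. The initial rate (1e-3) is a placeholder.
import torch
from torch.optim.lr_scheduler import StepLR

model = make_classifier()                    # reusing the sketch above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = StepLR(opt, step_size=2, gamma=0.5)  # halve the LR every 2 epochs

# inside the training loop:
# for epoch in range(n_epochs):
#     train_one_epoch(model, opt, loader)    # hypothetical helper
#     sched.step()
```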

For the parameter estimation (regression) module, we used the same ResNet-50 architecture but with the last layer being a fully connected layer that predicts the three continuous parameters. Only the lensed images in the deblended simulation data (Deblended-Sim-Len) were used to train the regression model, split into training and test sets. The same batch size and learning rate schedule used for classification were employed for regression, while the number of epochs was increased. The regression accuracy was measured by the mean absolute error (MAE) in normalized coordinates (scaled to [0,1] with respect to the maximum and minimum of the training data), as shown in Table 1(b); plots comparing the observed and predicted parameters are shown in Fig. 3. The regression MAE on the training data is 0.03, indicating very good agreement with the ground truth; on the test data the corresponding MAE is 0.05, while the baseline MAE is 0.08.
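
A sketch of the normalized-coordinate MAE follows, assuming per-parameter min-max scaling with the training-set extrema as described above.

```python
# Sketch of the normalized-coordinate MAE: targets scaled to [0, 1]
# using the training-set minima/maxima before averaging absolute errors.
import numpy as np

def normalized_mae(pred, truth, train_min, train_max):
    scale = train_max - train_min           # per-parameter range from training data
    p = (pred - train_min) / scale
    t = (truth - train_min) / scale
    return np.mean(np.abs(p - t))           # averaged over samples and parameters
```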

4.2 Inference

With the inference pipeline, we considered an application scenario in which all four modules were used in unison to predict the galaxy parameters from the noisy observations. The input to the denoising module was the full Noisy-Sim data, and we evaluated the denoising performance on the inference data using a procedure similar to that used for the test data, where the similarity between the Noisy-Sim data and the Noiseless-Sim data was calculated with the PSNR metric. We found the results to be similar to those obtained on the test data, giving us confidence that there was no significant change in the noise distribution. Next, we compared the predictions of the denoising model (Noiseless-ML) with the ground truth (Noiseless-Sim) and again found the metrics to be close to those obtained on the test data, thus validating the predictive capability and generalizability of this denoising model beyond the data it was trained on. For the deblending step, the denoising model predictions (Noiseless-ML) were taken as input, and the corresponding deblended outputs (Deblended-ML-ML) were obtained. The accuracy was evaluated with respect to the ground truth deblended images (Deblended-Sim) for both Noiseless-ML and Deblended-ML-ML over all the images (Table 1(c)). We found the PSNR for the latter to be 26.87, which is lower than that for the test data (in the training phase) but significantly better than the baseline of 13.59.

For the classification inference, we calculated the mean accuracy for the deblending scenario and found it lower than on the test data in the training phase, with a mean accuracy of 0.90 compared with 0.99. However, this accuracy is much higher than the baseline accuracy of 0.82.

For the regression inference, we calculated the MAE for the deblending scenario and found the regression accuracy (Table 1(d)) to be slightly lower (MAE of 0.06) than that obtained on the test data, but an improvement over the baseline MAE of 0.10.

Limitations of the Denoising/Deblending Modules: Although we obtained good training and test accuracy for all the modules and a significant improvement in lens finding (classification) over the baseline for the inference pipeline, the improvement in lens characterization accuracy at inference is only marginal. We attribute this to two factors: (1) the sensitivity of the deblending module to the denoised input, where we found that even though the PSNR is close to ideal, minor differences from the ground truth cause additional features in the deblended image for some cases (Fig. 2 in Appendix A); and (2) the processes of denoising and deblending, which work well for extracting bright lensed arcs but can erase the faint counterimages of the primary lensed images because of the high contrast between the images of the lenses and the counterimages. These factors may bias the ellipticity and inner density slopes of the lens galaxies. We will address these issues with larger training sets and more complex models in follow-up work.

5 Conclusions

Combining high-fidelity simulation data with a systematic machine learning pipeline is crucial for developing fast and accurate GGSL analysis techniques for future cosmological surveys. To this end, we produced a dataset of 1 million synthetic images (500K GGSLs and 500K non-GGSLs), the largest simulation for GGSL made to date, and we developed an end-to-end machine learning pipeline with separate modules for denoising, deblending, lens searching, and lens modeling that is trained on these data. We demonstrate good denoising and deblending performance during both training and inference (compared with the ground truth) and, consequently, a significant improvement in classification and regression over the baseline (working directly with the noisy, blended data). We also identify limitations in the simulation data, which underestimates contamination from substructure in both the mass and light profiles of galaxies, and in the denoising/deblending models, which can either miss counterimages or introduce additional artifacts. We plan to address these issues by scaling up to the full million-image dataset, training the denoising and deblending models together, and employing a hyperparameter search to improve the classification and regression accuracies. In addition, we will explore uncertainty quantification for the lens modeling output through probabilistic regression. Eventually, the pipeline is intended to be used for real-time lens finding and characterization with data from next-generation large-scale sky surveys such as Euclid, LSST, and WFIRST.

Acknowledgments

This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. We gratefully acknowledge the computing resources provided and operated by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory. The work is also supported by the UK Science and Technology Facilities Council (STFC).

References

  • [1] S. Anwar, S. Khan, and N. Barnes (2019) A deep journey into super-resolution: a survey. arXiv preprint arXiv:1904.07523. Cited by: §3.
  • [2] D. Bayer, S. Chatterjee, L. V. E. Koopmans, S. Vegetti, J. P. McKean, T. Treu, and C. D. Fassnacht (2018-03) Observational constraints on the sub-galactic matter-power spectrum from galaxy-galaxy strong gravitational lensing. ArXiv e-prints. External Links: 1803.05952 Cited by: §1.
  • [3] M. B. Bayliss, K. Sharon, A. Acharyya, M. D. Gladders, J. R. Rigby, F. Bian, R. Bordoloi, J. Runnoe, H. Dahle, L. Kewley, M. Florian, T. Johnson, and R. Paterno-Mahler (2017-08) Spatially Resolved Patchy Lyα Emission within the Central Kiloparsec of a Strongly Lensed Quasar Host Galaxy at z = 2.8. The Astrophysical Journal Letters 845, pp. L14. External Links: 1708.00453, Document Cited by: §1.
  • [4] J. Brehmer, S. Mishra-Sharma, J. Hermans, G. Louppe, and K. Cranmer (2019-Sept.) Mining for dark matter substructure: inferring subhalo population properties from strong lenses with machine learning. arXiv e-prints, pp. arXiv:1909.02005. External Links: 1909.02005 Cited by: §1.
  • [5] T. E. Collett and M. W. Auger (2014-sept.) Cosmological constraints from the double source plane lens SDSSJ0946+1006. Monthly Notices of the Royal Astronomical Society 443, pp. 969–976. External Links: 1403.5278, Document Cited by: §1.
  • [6] T. E. Collett (2015-09) The Population of Galaxy-Galaxy Strong Lenses in Forthcoming Optical Imaging Surveys. The Astrophysical Journal 811, pp. 20. External Links: 1507.02657, Document Cited by: §1, §2, §2.
  • [7] A. J. Connolly, J. Peterson, J. G. Jernigan, R. Abel, J. Bankert, C. Chang, C. F. Claver, R. Gibson, D. K. Gilmore, E. Grace, R. L. Jones, Z. Ivezic, J. Jee, M. Juric, S. M. Kahn, V. L. Krabbendam, S. Krughoff, S. Lorenz, J. Pizagno, A. Rasmussen, N. Todd, J. A. Tyson, and M. Young (2010) Simulating the lsst system. Modeling, Systems Engineering, and Project Management for Astronomy IV 7738. External Links: Document, Link Cited by: §2.
  • [8] S. Dye, N. W. Evans, V. Belokurov, S. J. Warren, and P. Hewett (2008-07) Models of the Cosmic Horseshoe gravitational lens J1004+4112. Monthly Notices of the Royal Astronomical Society 388 (1), pp. 384–392. External Links: Document, 0804.4002 Cited by: §2.
  • [9] S. Dye, C. Furlanetto, L. Dunne, S. A. Eales, M. Negrello, H. Nayyeri, P. P. van der Werf, S. Serjeant, D. Farrah, M. J. Michałowski, M. Baes, L. Marchetti, A. Cooray, D. A. Riechers, and A. Amvrosiadis (2018-06) Modelling high-resolution ALMA observations of strongly lensed highly star-forming galaxies detected by Herschel. Monthly Notices of the Royal Astronomical Society 476, pp. 4383–4394. External Links: Document Cited by: §1.
  • [10] R. Gavazzi, T. Treu, J. D. Rhodes, L. V. E. Koopmans, A. S. Bolton, S. Burles, R. J. Massey, and L. A. Moustakas (2007-Sept.) The Sloan Lens ACS Survey, IV: the mass density profile of early-type galaxies out to 100 effective radii. The Astrophysical Journal 667 (1), pp. 176–190. External Links: Document, astro-ph/0701589 Cited by: §2.
  • [11] D. Gilman, S. Birrer, T. Treu, and C. R. Keeton (2017-dec.) Probing the nature of dark matter by forward modeling flux ratios in strong gravitational lenses. ArXiv e-prints. External Links: 1712.04945 Cited by: §1.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §3.
  • [13] J. N. Hewitt, E. L. Turner, D. P. Schneider, B. F. Burke, and G. I. Langston (1988-06) Unusual radio source MG1131+0456 - A possible Einstein ring. Nature 333, pp. 537–540. External Links: Document Cited by: §1.
  • [14] Y. D. Hezaveh, N. Dalal, D. P. Marrone, Y.-Y. Mao, W. Morningstar, D. Wen, R. D. Blandford, J. E. Carlstrom, C. D. Fassnacht, G. P. Holder, A. Kemball, P. J. Marshall, N. Murray, L. Perreault Levasseur, J. D. Vieira, and R. H. Wechsler (2016-05) Detection of lensing substructure using ALMA observations of the Dusty Galaxy SDP.81. The Astrophysical Journal 823, pp. 37. External Links: 1601.01388, Document Cited by: §1.
  • [15] Y. D. Hezaveh, L. P. Levasseur, and P. J. Marshall (2017-aug.) Fast automated analysis of strong gravitational lenses with convolutional neural networks. Nature 548, pp. 555–557. External Links: 1708.08842, Document Cited by: §1.
  • [16] L. V. E. Koopmans, T. Treu, A. S. Bolton, S. Burles, and L. A. Moustakas (2006-Oct.) The Sloan Lens ACS Survey, III: The structure and formation of early-type galaxies and their evolution since z ~1. The Astrophysical Journal 649 (2), pp. 599–615. External Links: Document, astro-ph/0601628 Cited by: §2.
  • [17] D. Korytov, A. Hearin, E. Kovacs, P. Larsen, E. Rangel, J. Hollowed, A. J. Benson, K. Heitmann, Y. Mao, A. Bahmanyar, C. Chang, D. Campbell, J. Derose, H. Finkel, N. Frontiere, E. Gawiser, S. Habib, B. Joachimi, F. Lanusse, N. Li, R. Mandelbaum, C. Morrison, J. A. Newman, A. Pope, E. Rykoff, M. Simet, C. To, V. Vikraman, R. H. Wechsler, and M. White (2019-07) CosmoDC2: A Synthetic Sky Catalog for Dark Energy Science with LSST. arXiv e-prints, pp. arXiv:1907.06530. External Links: 1907.06530 Cited by: §2.
  • [18] J. Kummer, F. Kahlhoefer, and K. Schmidt-Hoberg (2018-feb.) Effective description of dark matter self-interactions in small dark matter haloes. Monthly Notices of the Royal Astronomical Society 474, pp. 388–399. External Links: 1706.04794, Document Cited by: §1.
  • [19] R. Küng, P. Saha, I. Ferreras, E. Baeten, J. Coles, C. Cornen, C. Macmillan, P. Marshall, A. More, L. Oswald, A. Verma, and J. K. Wilcox (2018-03) Models of gravitational lens candidates from Space Warps CFHTLS. Monthly Notices of the Royal Astronomical Society 474, pp. 3700–3713. External Links: 1711.07297, Document Cited by: §1.
  • [20] F. Lanusse, Q. Ma, N. Li, T. E. Collett, C.-L. Li, S. Ravanbakhsh, R. Mandelbaum, and B. Póczos (2018-jan.) CMU DeepLens: deep learning for automatic image-based galaxy-galaxy strong lens finding. Monthly Notices of the Royal Astronomical Society 473, pp. 3895–3906. External Links: 1703.02642, Document Cited by: §1.
  • [21] L. P. Levasseur, Y. D. Hezaveh, and R. H. Wechsler (2017) Uncertainties in parameters estimated with neural networks: application to strong gravitational lensing. arXiv preprint arXiv:1708.08843. Cited by: §1.
  • [22] N. Li, M. D. Gladders, E. M. Rangel, M. K. Florian, L. E. Bleem, K. Heitmann, S. Habib, and P. Fasel (2016-Sept.) PICS: simulations of strong gravitational lensing in galaxy clusters. The Astrophysical Journal 828 (1), pp. 54. External Links: Document, 1511.03673 Cited by: §2.
  • [23] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee (2017) Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136–144. Cited by: §3.
  • [24] R. B. Metcalf, M. Meneghetti, C. Avestruz, F. Bellagamba, C. R. Bom, E. Bertin, R. Cabanac, F. Courbin, A. Davies, E. Decencière, R. Flamary, R. Gavazzi, M. Geiger, P. Hartley, M. Huertas-Company, N. Jackson, C. Jacobs, E. Jullo, J.-P. Kneib, L. V. E. Koopmans, F. Lanusse, C.-L. Li, Q. Ma, M. Makler, N. Li, M. Lightman, C. E. Petrillo, S. Serjeant, C. Schäfer, A. Sonnenfeld, A. Tagore, C. Tortora, D. Tuccillo, M. B. Valentín, S. Velasco-Forero, G. A. Verdoes Kleijn, and G. Vernardos (2019-05) The strong gravitational lens finding challenge. Astronomy & Astrophysics 625, pp. A119. External Links: 1802.03609, Document Cited by: §1.
  • [25] W. R. Morningstar, Y. D. Hezaveh, L. P. Levasseur, R. D. Blandford, P. J. Marshall, P. Putzky, and R. H. Wechsler (2018) Analyzing interferometric observations of strong gravitational lenses with recurrent and convolutional neural networks. arXiv preprint arXiv:1808.00011. Cited by: §1.
  • [26] W. R. Morningstar, L. P. Levasseur, Y. D. Hezaveh, R. Blandford, P. Marshall, P. Putzky, T. D. Rueter, R. Wechsler, and M. Welling (2019) Data-driven reconstruction of gravitationally lensed galaxies using recurrent inference machines. arXiv preprint arXiv:1901.01359. Cited by: §1.
  • [27] J. Pearson, N. Li, and S. Dye (2019-sept.) The use of convolutional neural networks for modelling large optically-selected strong galaxy-lens samples. Monthly Notices of the Royal Astronomical Society 488, pp. 991–1004. External Links: 1904.06199, Document Cited by: §1.
  • [28] A. Rana, D. Jain, S. Mahajan, A. Mukherjee, and R. F. L. Holanda (2017-07) Probing the cosmic distance duality relation using time delay lenses. Journal of Cosmology and Astroparticle Physics 7, pp. 010. External Links: 1705.04549, Document Cited by: §1.
  • [29] P. Sharda, C. Federrath, E. da Cunha, A. M. Swinbank, and S. Dye (2018-07) Testing star formation laws in a starburst galaxy at redshift 3 resolved with ALMA. Monthly Notices of the Royal Astronomical Society 477, pp. 4380–4390. External Links: 1712.03661, Document Cited by: §1.
  • [30] Y. Shu, A. S. Bolton, S. Mao, C. S. Kochanek, I. Pérez-Fournon, M. Oguri, A. D. Montero-Dorta, M. A. Cornachione, R. Marques-Chaves, Z. Zheng, J. R. Brownstein, and B. Ménard (2016-dec.) The BOSS Emission-line Lens Survey. IV. Smooth Lens Models for the BELLS GALLERY Sample. The Astrophysical Journal 833, pp. 264. External Links: 1608.08707, Document Cited by: §1.
  • [31] Y. Shu, A. S. Bolton, L. A. Moustakas, D. Stern, A. Dey, J. R. Brownstein, S. Burles, and H. Spinrad (2016-03) Kiloparsec Mass/Light Offsets in the Galaxy Pair-Lyα Emitter Lens System SDSS J1011+0143. The Astrophysical Journal 820, pp. 43. External Links: 1602.02927, Document Cited by: §1.
  • [32] A. Sonnenfeld, T. Treu, P. J. Marshall, S. H. Suyu, R. Gavazzi, M. W. Auger, and C. Nipoti (2015-feb.) The SL2S Galaxy-scale Lens Sample. V. Dark Matter Halos and Stellar IMF of Massive Early-type Galaxies Out to Redshift 0.8. The Astrophysical Journal 800, pp. 94. External Links: 1410.1881, Document Cited by: §1.
  • [33] S. H. Suyu, V. Bonvin, F. Courbin, C. D. Fassnacht, C. E. Rusu, D. Sluse, T. Treu, K. C. Wong, M. W. Auger, X. Ding, S. Hilbert, P. J. Marshall, N. Rumbaugh, A. Sonnenfeld, M. Tewes, O. Tihhonova, A. Agnello, R. D. Blandford, G. C.-F. Chen, T. Collett, L. V. E. Koopmans, K. Liao, G. Meylan, and C. Spiniello (2017-07) H0LiCOW – I. H0 Lenses in COSMOGRAIL's Wellspring: program overview. Monthly Notices of the Royal Astronomical Society 468, pp. 2590–2604. External Links: 1607.00017, Document Cited by: §1.
  • [34] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi (2017) Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, Vol. 4, pp. 12. Cited by: §3.
  • [35] S. Vegetti, L. V. E. Koopmans, M. W. Auger, T. Treu, and A. S. Bolton (2014-aug.) Inference of the cold dark matter substructure mass function at z = 0.2 using strong gravitational lenses. Monthly Notices of the Royal Astronomical Society 442, pp. 2017–2035. External Links: 1405.3666, Document Cited by: §1.
  • [36] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu (2018) Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472–2481. Cited by: §3.

Appendix A Supplemental Tables and Figures

Figure 2: First row: Noisy-Sim data from the simulation. Second row: Noiseless-ML output from the denoising module at inference. Third row: Deblended-ML-ML output from the deblending model with the denoised model input at inference. Fourth row: Deblended-Sim-ML output from the deblending model with the denoised simulation input (Noiseless-Sim) at inference.

Figure 3: Comparison of the observed (Noiseless-Sim) and predicted (Noiseless-ML) data for the (a) training and (b) testing sets, for lens characterization (regression) during the training phase.

Denoising:  Noisy-Sim 22.56 | Noiseless-ML 47.91
Deblending: Noiseless-Sim 14.47 | Deblended-ML 35.03

(a) Training – PSNR metrics for denoising and deblending on the test images

Classification (mean acc): Noisy-blended-Sim (baseline) 0.82 | Noiseless-Sim 0.99
Regression (MAE): Noisy-blended-Sim (baseline) 0.08 | Deblended-Sim 0.03 (train), 0.05 (test)

(b) Training – classification and regression metrics on the test images

Denoising:  Noisy-Sim 22.71 | Noiseless-ML 47.15
Deblending: Noiseless-ML 13.59 | Deblended-ML-ML 26.87

(c) Inference – PSNR metrics for denoising and deblending on the inference images

Classification (mean acc): 0.90
Regression (MAE): pipeline 0.06 | baseline 0.10

(d) Inference – classification and regression metrics on the inference images
Table 1: Accuracy metrics for training and inference with the machine learning pipeline.

Appendix B Limitations of the Simulation Model

We adopt the SIE as the mass model of the lenses, which is insufficient for studying how the subtle structures of lens galaxies affect the performance of our strong-lens analysis pipeline. To make the simulation more realistic, we plan to adopt particle data from cosmological N-body simulations and stellar mass distributions from semi-analytical models to represent the mass distribution of the lens galaxies. Furthermore, the image simulation includes only the images of the sources and lenses, whereas in real observations galaxies along the line of sight also contribute; the cosmoDC2 light cone will be helpful for including these effects. This study focuses on ground-based-like telescopes such as LSST, for which light profiles with bulges and disks are sufficient because of the coarse pixelization and large PSF. For space-based-like telescopes such as Euclid and WFIRST, however, these light profiles lack detailed galaxy structures such as spiral arms and clumps; we are attempting to attach such substructures to the galaxies by using GANs in a parallel project. Another issue is the overkill problem in the denoising and deblending processes: faint counterimages of the primary lensed images can be partially erased because of the high contrast between the images of the lenses and the counterimages. We will try to avoid potential biases from this problem by using more advanced algorithms in our follow-up work.

The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. http://energy.gov/downloads/doe-public-access-plan