1 Introduction
Over the past decade, the study of extrasolar planets has evolved rapidly from plain detection and identification to comprehensive categorization and characterization of exoplanet systems and their atmospheres. Atmospheric retrieval, the inverse modeling technique used to determine an exoplanetary atmosphere's temperature structure and composition from an observed spectrum, is both time-consuming and compute-intensive, requiring complex algorithms that compare thousands to millions of atmospheric models to the observational data to find the most probable values and associated uncertainties for each model parameter (madhusudhan2018). For rocky, terrestrial planets, the retrieved atmospheric composition can give insight into the surface fluxes of gaseous species necessary to maintain the stability of that atmosphere, which may in turn provide insight into the geological and/or biological processes active on the planet (SchwietermanEtal2017asbioBiosignaturesReview). These atmospheres contain many molecules, some of them biosignatures: spectral fingerprints indicative of biological activity, which will become observable with the next generation of telescopes (FujiiEtal2018asbioBiosignatures). Runtimes of traditional retrieval models scale with the number of model parameters, so as more molecular species are considered, runtimes can become prohibitively long. Recent advances in machine learning (ML) and computer vision (krizhevsky2012; he2016deep) offer new ways to reduce the time to perform a retrieval by orders of magnitude (marquez2018supervised; zingales2018), given a sufficient data set to train with. Here we present an ML-based retrieval framework called Intelligent exoplaNet Atmospheric RetrievAl (INARA) that consists of a Bayesian deep learning model for retrieval and a data set of 3,000,000 synthetic rocky exoplanet spectra generated using the NASA Planetary Spectrum Generator (PSG) (villanueva2018).
Our work represents the first ML retrieval model for rocky, terrestrial exoplanets and the first synthetic data set of terrestrial spectra generated at this scale.
2 Background
Traditionally, exoplanetary atmospheres have been studied by fitting forward models to observational data, namely the relative decrease in stellar flux when the exoplanet passes in front of or behind its host star (Crossfield2015paspAtmospheres; DemingSeager2017jgreExoplanetAtmospheres). This fitting is usually performed with a Monte Carlo sampling method in a Bayesian framework, which proposes atmospheric models, simulates the corresponding spectra, and compares them to the observed data (e.g., skilling2004; terBraak2008). Degeneracies among atmospheric parameters complicate this process, necessitating the evaluation of hundreds of thousands to millions of atmospheric models to fully explore the parameter space. The result is a posterior distribution that characterizes these degeneracies and informs the relative probability of the ranges of values considered for each model parameter. While these sampling methods are executed in parallel, the task still requires a significant amount of computational time (madhusudhan2018).
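To make the sampling loop concrete, here is a minimal Metropolis–Hastings sketch on a toy one-parameter forward model. The Gaussian absorption feature and all numbers are invented for illustration; real retrievals use radiative transfer codes and many coupled parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "forward model": a transit spectrum with one Gaussian absorption
# feature whose amplitude is the single retrieved parameter (hypothetical).
wl = np.linspace(1.0, 2.0, 100)               # wavelength grid (microns)

def forward_model(amplitude):
    return 1.0 - amplitude * np.exp(-(wl - 1.5) ** 2 / 0.01)

true_amp, noise = 0.02, 0.002
observed = forward_model(true_amp) + rng.normal(0, noise, wl.size)

def log_likelihood(amplitude):
    resid = observed - forward_model(amplitude)
    return -0.5 * np.sum((resid / noise) ** 2)

# Metropolis-Hastings: propose, accept/reject, accumulate posterior samples.
samples, amp = [], 0.05
ll = log_likelihood(amp)
for _ in range(20000):
    prop = amp + rng.normal(0, 0.002)
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # accept with prob ratio
        amp, ll = prop, ll_prop
    samples.append(amp)

posterior = np.array(samples[5000:])           # discard burn-in
print(posterior.mean(), posterior.std())
```

The posterior mean and spread summarize the retrieved parameter; evaluating the forward model tens of thousands of times is what makes the traditional approach expensive when the model is a full radiative transfer simulation.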
Recently, the exoplanet community has begun to apply supervised ML methods to the problem of atmospheric retrieval. Waldmann (Waldmann2016apjDreamingAtmospheres) used a deep belief network to identify molecular species in an observed spectrum, paving the way for more advanced ML applications. Building upon this, two ML retrieval algorithms have been developed to date: ExoGAN (zingales2018) and HELA (marquez2018supervised). These produce results in seconds to minutes, compared to on the order of 100 CPU hours for the aforementioned traditional Monte Carlo sampling-based methods. ExoGAN utilizes a generative adversarial network (GAN) (goodfellow2014generative) to approximate the data distribution of realistic spectra, then uses the trained GAN with inpainting to infer planetary conditions from observed spectra. HELA uses random forests (ho1995random) to similarly predict planetary parameters from observed spectra. Both models produce results that are generally consistent with conventional retrieval methods. Note that these models specialize in hot Jupiters, a class of gas giant exoplanets on very short-period orbits, and consider fewer than a handful of molecules.
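As an illustration of the random-forest approach (not HELA's actual code), the following sketch trains a scikit-learn forest to map synthetic "spectra" back to the hidden parameters that generated them; the data-generating process here is an invented linear toy model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 "spectra" of 100 channels whose shapes
# depend linearly on 3 hidden atmospheric parameters (hypothetical).
n, channels, n_params = 500, 100, 3
params = rng.uniform(0, 1, (n, n_params))
basis = rng.normal(0, 1, (n_params, channels))
spectra = params @ basis + rng.normal(0, 0.05, (n, channels))

# RandomForestRegressor handles multi-output regression natively.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(spectra[:400], params[:400])
pred = forest.predict(spectra[400:])
rmse = np.sqrt(np.mean((pred - params[400:]) ** 2))
print(rmse)
```

Once trained, inference is a single pass through the forest, which is why such methods answer in seconds rather than CPU-hours.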
3 Methods
We train a deep neural network in a supervised setting to predict exoplanet atmospheric parameters θ given an observed spectrum x, using a training set that we generate by running the NASA PSG (https://psg.gsfc.nasa.gov/) (villanueva2018) simulator to obtain x = f(θ), where the parameters are sampled from a physically motivated prior model p(θ). Spectra are vectors (of length 4379) describing radiation intensity as a scalar function of wavelength, and we therefore explore a series of 1D convolutional neural network (CNN) configurations. To train our CNN models, we generate a data set of spectra based on a given planetary system model, where we consider F-, G-, K-, and M-type main sequence stars. Observations are simulated using an instrument model of the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR), a design concept for a multi-wavelength space observatory, but with a much higher resolution. The prior model
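PSG consumes plain-text configurations of `<KEY>value` lines describing the planet, star, and instrument. The helper below sketches how such a payload might be assembled; the specific keys and values are illustrative examples, not a complete or authoritative configuration.

```python
# Sketch of assembling a PSG request payload. PSG accepts a plain-text
# configuration of "<KEY>value" lines; the keys below are illustrative
# examples only, not a complete or authoritative config.

def to_psg_config(settings: dict) -> str:
    """Serialize a dict into PSG's <KEY>value line format."""
    return "\n".join(f"<{key}>{value}" for key, value in settings.items())

config = to_psg_config({
    "OBJECT": "Exoplanet",
    "OBJECT-DIAMETER": 12742,    # km (Earth-sized, for illustration)
    "OBJECT-GRAVITY": 9.8,       # m/s^2
    "GENERATOR-RANGE1": 0.2,     # spectral range start (um), assumed key
    "GENERATOR-RANGE2": 2.0,     # spectral range end (um), assumed key
})
print(config.splitlines()[0])
```

The resulting text would then be POSTed to a PSG server over HTTP, with the simulated spectrum returned in the response; verify endpoint and key names against the PSG documentation before use.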
comprises planetary parameters (radius, mass, surface pressure, semi-major axis, pressure–temperature profile) and atmospheric compositions. Planetary parameters are randomly selected from ranges and distributions consistent with our solar system and observations of other systems (boyajian2013; boyajian2012; robinson2014; rogers2015; sotin2007; Zahnle2017). The ranges for these parameters are chosen such that a planet in an Earth-like orbit can vary in temperature by a few hundred Kelvin. We consider 12 molecules based on the composition of atmospheres in our solar system as well as the observability of species (FujiiEtal2018asbioBiosignatures): H2O, CO2, O2, N2, CH4, N2O, CO, O3, SO2, NH3, C2H6, and NO2. Concentrations are randomly selected within a range based on the observed composition of atmospheres in our solar system. While cloud mixing ratios are calculated, clouds are ignored in our simulations due to the computational burden, as even crude cloud modeling increases computational time by a factor of 50.

We use the Monte Carlo dropout approximation to produce predictive distributions over parameters θ. Dropout is a common regularization technique in neural networks to prevent overfitting and allow for a more generalizable model (hinton2012). It has recently been shown that applying dropout at both training and test time is equivalent to making a variational approximation to the posterior distribution over the network weights (gal2016). Each dropout mask removes a certain proportion of weights by setting them to zero during a forward pass. Therefore, multiple forward passes with different dropout masks for the same input give a set of predictive samples that build a predictive distribution. By implementing dropout at both training and test time, we are effectively sampling from the posterior over the weights of the network.
This distribution over the weights enables us to approximate a predictive distribution over the parameters of an exoplanet given an observed spectrum.
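A minimal PyTorch sketch of this procedure follows; the layer sizes are illustrative and not INARA's architecture. Calling `model.train()` keeps dropout stochastic at test time, and repeated forward passes yield a predictive mean and spread.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Minimal regression head with dropout (illustrative sizes only).
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Dropout(p=0.2),             # kept active at prediction time
    nn.Linear(128, 12),            # 12 outputs, one per molecule
)

def mc_dropout_predict(model, x, n_samples=100):
    """Multiple stochastic forward passes -> predictive distribution."""
    model.train()                  # train() mode keeps dropout active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 64)             # stand-in for an input spectrum
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)       # torch.Size([1, 12]) for both
```

Each forward pass samples a different dropout mask, so the spread of the outputs approximates the predictive uncertainty implied by the variational posterior over weights.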
INARA is implemented in Python using PyTorch, and the source code and a Docker image are publicly available at https://gitlab.com/frontierdevelopmentlab/astrobiology/inara. To interact with the NASA Goddard PSG, the simulator at the core of our spectrum generation setup, we implemented a Python package called pypsg (https://gitlab.com/frontierdevelopmentlab/astrobiology/pypsg) that handles data generation in PSG format and HTTP-based two-way communication with PSG servers. The INARA codebase covers the running of server instances for data generation, ML model training, and inference in a distributed fashion utilizing the Google Cloud infrastructure. For the generation of the data set of 3M parameter and spectrum pairs, we employed approximately 2,000 high-end VMs (groups of 16 INARA instances connected to one PSG node).

4 Preliminary Results, Discussion, and New Horizons
Table 1: Runtime and scope comparison of atmospheric retrieval methods.

Method                        | CPU time for inference | Number of molecules retrieved
Traditional                   | Hundreds of hours      | User-specified
ExoGAN (zingales2018)         | Minutes                | 4
HELA (marquez2018supervised)  | Seconds                | 3
INARA                         | Seconds                | 12
We performed a grid search over model architectures and training hyperparameters, exploring over 70 combinations of different architectures (linear regression, feedforward neural networks, and CNNs), learning rates, activation functions in {tanh, ReLU, ELU}, and optimization algorithms in {Adam, SGD, Adadelta, RMSProp}. No dropout was used in this phase. Due to time constraints, model training was limited to 64 epochs in all cases, using a training set of 110,000 parameter–spectrum pairs. We used a mean squared error (MSE) loss and employed early stopping with a validation set of size 10,000 to avoid overfitting. 1D CNNs produced the best results, and we settled on a model with the configuration Conv1d(64)–tanh–MaxPool–Conv1d(64)–ReLU–MaxPool–Conv1d(128)–ReLU–MaxPool–Conv1d(256)–ReLU–FC(256)–ReLU–FC(12), which has approximately 18M trainable parameters.
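The configuration above can be sketched in PyTorch as follows. Kernel sizes, strides, and pooling widths are not reported in the text, so the values here are assumptions, and the resulting parameter count will differ from the reported 18M accordingly.

```python
import torch
import torch.nn as nn

# Sketch of the reported Conv1d(64)-tanh-MaxPool-...-FC(12) configuration.
# Kernel size 5, padding 2, and pool width 2 are assumptions made for
# illustration; only the channel widths come from the text.
model = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=5, padding=2), nn.Tanh(),
    nn.MaxPool1d(2),
    nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(256), nn.ReLU(),   # lazily infers the flattened size
    nn.Linear(256, 12),              # one output per molecular abundance
)

spectrum = torch.randn(1, 1, 4379)   # one spectrum, 4379 wavelength bins
out = model(spectrum)
print(out.shape)                      # torch.Size([1, 12])
```

Training such a model against the MSE loss described above is a standard supervised regression setup, with the 12 outputs regressed jointly.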
Here we report results of the best 1D CNN model trained using 110,000 parameter–spectrum pairs out of the 3M data set, leaving results with the full training set to future work. Predictions for 1,000 spectra are illustrated in Figure 1. H2O, CO2, O2, N2, and CH4 are shown in the five plots in the top row of Figure 1, where each dot represents the average of 600 runs of our model with dropout for each planet. Predictive joint distributions for a random planet among those simulated are shown in the two bottom plots of Figure 1. The true value, indicated by the red star and the red line, falls within the predictive distribution for both parameters. Figure 2 in the appendix presents predictions for the full set of 12 molecules. INARA outperforms traditional Monte Carlo-based approaches by several orders of magnitude in speed while retrieving a larger set of parameters and atmospheric molecules (Table 1).

Thanks to the computational resources we had access to, our present data set is the largest collection of rocky planet spectra to date. For the first time in ML atmospheric retrieval, we adopted Monte Carlo dropout (gal2016), providing a predictive distribution comparable to the posterior distributions yielded by traditional Bayesian approaches. Further investigation is necessary to determine how this predictive distribution compares to the posterior distributions of traditional methods. While we obtained good results, our search for the best model is incomplete, and a thorough exploration of different neural network architectures is desirable. In addition, a more detailed data set (i.e., in terms of wavelength coverage, self-consistency, and the presence of clouds/hazes) could be used with INARA to generate more reliable and scientifically informative models.
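One way to quantify whether true values fall within the predictive distribution across many planets is an empirical coverage check: the fraction of planets whose true parameter lands inside the central 95% predictive interval. The sketch below uses entirely synthetic numbers, not the paper's results, to show the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration check over many planets. "draws" stands in for
# the per-planet dropout samples; all numbers here are synthetic.
n_planets, n_draws = 1000, 600
truth = rng.normal(0.0, 1.0, n_planets)
center = truth + rng.normal(0.0, 0.5, n_planets)   # model's point error
draws = center[:, None] + rng.normal(0.0, 0.5, (n_planets, n_draws))

# Central 95% predictive interval per planet, then empirical coverage.
lo, hi = np.percentile(draws, [2.5, 97.5], axis=1)
coverage = np.mean((truth >= lo) & (truth <= hi))
print(round(float(coverage), 3))
```

When the predictive spread matches the model's actual error, coverage lands near the nominal 0.95; systematic over- or under-confidence shows up as coverage far from that value.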
Acknowledgements
This work originated at the NASA Frontier Development Lab (FDL), an accelerated research program focused on finding solutions for space-related scientific challenges using ML, with support by NASA, the SETI Institute, and industry partners Google Cloud, IBM, Intel, Nvidia, XPRIZE, KBRwyle, KX, Luxembourg Space Resources, and Lockheed Martin. We thank the NASA Astrobiology Institute for their support of the astrobiology challenges at FDL 2018; our amazing mentors for their fantastic support and guidance; Geronimo Villanueva at NASA Goddard Space Flight Center for offering extensive support in setting up PSG on Google Cloud; Sara Jennings, Shyla Spicer, and James Parr for organizing NASA FDL; the SETI Institute and NASA Ames Research Center for hosting us; and all industry partners involved in the program.
References
 [1] T. S. Boyajian, K. von Braun, G. van Belle, C. Farrington, G. Schaefer, J. Jones, R. White, H. A. McAlister, A. Theo, S. Ridgway, et al. Stellar diameters and temperatures. III. Mainsequence A, F, G, and K stars: additional highprecision measurements and empirical relations. The Astrophysical Journal, 771(1):40, 2013.
 [2] T. S. Boyajian, K. von Braun, G. van Belle, H. A. McAlister, A. Theo, S. R. Kane, P. S. Muirhead, J. Jones, R. White, G. Schaefer, et al. Stellar diameters and temperatures. II. Main-sequence K- and M-stars. The Astrophysical Journal, 757(2):112, 2012.
 [3] I. J. M. Crossfield. Observations of Exoplanet Atmospheres. Publications of the Astronomical Society of the Pacific, 127:941, Oct. 2015.
 [4] L. D. Deming and S. Seager. Illusion and reality in the atmospheres of exoplanets. Journal of Geophysical Research (Planets), 122:53–75, Jan. 2017.
 [5] Y. Fujii, D. Angerhausen, R. Deitrick, S. Domagal-Goldman, J. L. Grenfell, Y. Hori, S. R. Kane, E. Pallé, H. Rauer, N. Siegler, K. Stapelfeldt, and K. B. Stevenson. Exoplanet Biosignatures: Observational Prospects. Astrobiology, 18:739–778, June 2018.
 [6] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059, 2016.
 [7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.

 [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
 [9] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
 [10] T. K. Ho. Random decision forests. In Third International Conference on Document Analysis and Recognition, volume 1, pages 278–282. IEEE, 1995.
 [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
 [12] N. Madhusudhan. Atmospheric retrieval of exoplanets. Handbook of Exoplanets, pages 1–30, 2018.
 [13] P. MárquezNeila, C. Fisher, R. Sznitman, and K. Heng. Supervised machine learning for analysing spectra of exoplanetary atmospheres. Nature Astronomy, 2(9):719, 2018.
 [14] T. D. Robinson and D. C. Catling. Common 0.1 bar tropopause in thick atmospheres set by pressure-dependent infrared transparency. Nature Geoscience, 7(1):12, 2014.
 [15] L. A. Rogers. Most 1.6 Earth-radius planets are not rocky. The Astrophysical Journal, 801(1):41, 2015.
 [16] E. W. Schwieterman, N. Y. Kiang, M. N. Parenteau, C. E. Harman, S. DasSarma, T. M. Fisher, G. N. Arney, H. E. Hartnett, C. T. Reinhard, S. L. Olson, V. S. Meadows, C. S. Cockell, S. I. Walker, J. L. Grenfell, S. Hegde, S. Rugheimer, R. Hu, and T. W. Lyons. Exoplanet Biosignatures: A Review of Remotely Detectable Signs of Life. Astrobiology, 18:663–708, June 2018.
 [17] J. Skilling. Nested sampling. In AIP Conference Proceedings, volume 735, pages 395–405. AIP, 2004.
 [18] C. Sotin, O. Grasset, and A. Mocquet. Mass–radius curve for extrasolar Earth-like planets and ocean planets. Icarus, 191(1):337–351, 2007.

 [19] C. J. ter Braak and J. A. Vrugt. Differential evolution Markov chain with snooker updater and fewer chains. Statistics and Computing, 18(4):435–446, 2008.
 [20] G. L. Villanueva, M. D. Smith, S. Protopapa, S. Faggi, and A. M. Mandell. Planetary Spectrum Generator: an accurate online radiative transfer suite for atmospheres, comets, small bodies and exoplanets. Journal of Quantitative Spectroscopy and Radiative Transfer, 2018.
 [21] I. P. Waldmann. Dreaming of atmospheres. The Astrophysical Journal, 820(2):107, 2016.
 [22] K. J. Zahnle and D. C. Catling. The cosmic shoreline: The evidence that escape determines which planets have atmospheres, and what this may mean for proxima centauri b. The Astrophysical Journal, 843(2):122, 2017.
 [23] T. Zingales and I. P. Waldmann. ExoGAN: Retrieving exoplanetary atmospheres using deep convolutional generative adversarial networks. arXiv preprint arXiv:1806.02906, 2018.