ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets

11/19/2021
by Hamed Valizadegan, et al.

The Kepler and TESS missions have generated over 100,000 potential transit signals that must be processed in order to create a catalog of planet candidates. During the last few years, there has been growing interest in using machine learning to analyze these data in search of new exoplanets. Unlike existing machine learning works, ExoMiner, the deep learning classifier proposed in this work, mimics how domain experts examine diagnostic tests to vet a transit signal. ExoMiner is a highly accurate, explainable, and robust classifier that 1) allows us to validate 301 new exoplanets from the MAST Kepler Archive and 2) is general enough to be applied across missions such as the ongoing TESS mission. We perform an extensive experimental study to verify that ExoMiner is more reliable and accurate than existing transit signal classifiers in terms of different classification and ranking metrics. For example, at a fixed precision of 99%, ExoMiner retrieves 93.6% of all exoplanets in the test set (i.e., recall = 0.936), while this rate is 76.3% for the best existing classifier. Furthermore, the modular design of ExoMiner lends itself to explainability. We introduce a simple explainability framework that provides experts with feedback on why ExoMiner classifies a transit signal into a specific class label (e.g., planet candidate or not planet candidate).
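The recall-at-fixed-precision figure quoted above can be made concrete with a short sketch. The snippet below is illustrative only and is not taken from the ExoMiner codebase; the function name recall_at_precision, the variables y_true and y_score, and the toy values are all hypothetical, standing in for test-set labels and classifier output scores.

import numpy as np
from sklearn.metrics import precision_recall_curve

def recall_at_precision(y_true, y_score, target_precision=0.99):
    # Sweep the precision-recall curve and return the highest recall
    # achievable at or above the requested precision.
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    feasible = precision >= target_precision
    return recall[feasible].max() if feasible.any() else 0.0

# Toy example (invented data, not the paper's results): labels and scores
# for ten transit signals, where 1 = exoplanet and 0 = not an exoplanet.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.95, 0.10, 0.88, 0.67, 0.20, 0.91, 0.05, 0.44, 0.72, 0.30])
print(recall_at_precision(y_true, y_score, target_precision=0.8))

Reporting recall at a fixed, high precision reflects how such a classifier is used for validation: the operating threshold is chosen so that very few false positives slip through, and the metric then measures how many true exoplanets are still recovered.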


Related research

Identifying the Most Explainable Classifier (10/18/2019)
We introduce the notion of pointwise coverage to measure the explainabil...

Machine Learning Explainability for External Stakeholders (07/10/2020)
As machine learning is increasingly deployed in high-stakes contexts aff...

Personalised novel and explainable matrix factorisation (07/25/2019)
Recommendation systems personalise suggestions to individuals to help th...

Balancing Explainability-Accuracy of Complex Models (05/23/2023)
Explainability of AI models is an important topic that can have a signif...

HAMLET: Interpretable Human And Machine co-LEarning Technique (03/26/2018)
Efficient label acquisition processes are key to obtaining robust classi...

FEAMOE: Fair, Explainable and Adaptive Mixture of Experts (10/10/2022)
Three key properties that are desired of trustworthy machine learning mo...

Identifying Exoplanets with Deep Learning. V. Improved Light Curve Classification for TESS Full Frame Image Observations (01/03/2023)
The TESS mission produces a large amount of time series data, only a sma...
