Learning Physics from the Machine: An Interpretable Boosted Decision Tree Analysis for the Majorana Demonstrator

07/21/2022, by I. J. Arnquist, et al.
The Majorana Demonstrator is a leading experiment searching for neutrinoless double-beta decay with high-purity germanium (HPGe) detectors. Machine learning provides a new way to maximize the information these detectors provide, but its data-driven nature makes the analysis less interpretable than traditional methods. An interpretability study reveals the machine's decision-making logic, allowing us to learn from the machine and feed its insights back into the traditional analysis. In this work, we present the first machine learning analysis of data from the Majorana Demonstrator; it is also the first interpretable machine learning analysis of any germanium detector experiment. Two gradient boosted decision tree models are trained to learn from the data, and a game-theory-based model interpretability study is conducted to understand the origin of their classification power. By learning from data, this analysis recognizes correlations among reconstruction parameters to further enhance the background rejection performance. By learning from the machine, this analysis reveals the importance of new background categories, reciprocally benefiting the standard Majorana analysis. The model is highly compatible with next-generation germanium detector experiments such as LEGEND, since it can be trained simultaneously on a large number of detectors.
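As a rough illustration of the pipeline the abstract describes, the minimal sketch below trains a gradient boosted decision tree classifier on labeled events and then runs a game-theory-based (Shapley value) interpretability study. It assumes the XGBoost and SHAP Python libraries; the synthetic data and the feature names standing in for pulse-shape reconstruction parameters are purely illustrative, not the Demonstrator's actual inputs.

    import numpy as np
    import shap
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    # Hypothetical stand-in data: rows are events, columns are pulse-shape
    # reconstruction parameters (names are illustrative placeholders).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 6))
    y = rng.integers(0, 2, size=10_000)  # 1 = signal-like, 0 = background-like
    feature_names = ["drift_time", "current_amp", "delayed_charge",
                     "late_charge", "rise_time", "energy"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Gradient boosted decision tree classifier.
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          eval_metric="logloss")
    model.fit(X_train, y_train)

    # Game-theory-based interpretability: TreeExplainer computes Shapley
    # values quantifying each feature's contribution to each prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test, feature_names=feature_names)

The summary plot ranks parameters by their mean absolute Shapley value, which is how such a study can surface the reconstruction parameters and correlations driving background rejection.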
