A Detailed Study of Interpretability of Deep Neural Network based Top Taggers

10/09/2022
by Ayush Khot, et al.

Recent developments in explainable AI (xAI) allow us to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and about how data connect with machine learning models. In this paper we explore the interpretability of DNN models designed to identify jets coming from top quark decays in high-energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying top jets. We also investigate how and why feature importance varies across different xAI metrics, how feature correlations impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing xAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretations of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across layers, and how this understanding can help make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. While the primary focus of this work remains a detailed study of the interpretability of DNN-based top tagger models, it also features state-of-the-art performance obtained from modified implementations of existing networks.
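The abstract does not spell out which quantitative feature-importance methods are used, so as a hedged illustration of one common approach, here is a minimal sketch of permutation feature importance for a binary top tagger. The `model`, the feature matrix `X`, and the labels `y` are hypothetical placeholders (the model is assumed to expose a `predict` method returning signal scores); the paper's actual networks, inputs, and xAI metrics may differ.

```python
# Sketch: permutation feature importance for a binary top tagger.
# Importance of feature j = average drop in ROC AUC when column j is shuffled,
# which breaks that feature's relationship to the true label.
import numpy as np
from sklearn.metrics import roc_auc_score

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Return per-feature AUC drops, averaged over n_repeats shuffles."""
    rng = np.random.default_rng(seed)
    baseline = roc_auc_score(y, model.predict(X))   # assumes predict() gives scores
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])               # shuffle one feature column in place
            drops.append(baseline - roc_auc_score(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances
```

A feature whose shuffling barely moves the AUC carries little unique information, though, as the paper notes, strong correlations between input features can make such single-feature scores misleading.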
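To make the NAP idea concrete, the following is a minimal sketch, assuming a PyTorch MLP: forward hooks record which ReLU units fire for each input, and averaging the binarized activations over a batch gives the per-layer activation frequencies that a NAP diagram visualizes. The architecture, layer names, and thresholding convention here are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: recording Neural Activation Patterns (NAPs) with forward hooks.
import torch
import torch.nn as nn

# Stand-in tagger: a small MLP (the paper's models are more elaborate).
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = (output > 0).float()   # 1 if the neuron fired
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

x = torch.randn(512, 10)                           # stand-in batch of jet features
with torch.no_grad():
    model(x)

# Fraction of inputs activating each neuron, one vector per hidden layer;
# neurons that are (almost) never active are candidates for pruning.
nap = {name: act.mean(dim=0) for name, act in activations.items()}
for name, freq in nap.items():
    print(name, freq[:5])
```

Layers dominated by dead or always-on units contribute little distinguishing information, which is how such diagrams can guide the model simplification and hyperparameter tuning mentioned above.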
