Explainable AI for High Energy Physics

06/14/2022
by Mark S. Neubauer, et al.

Neural networks are ubiquitous in high energy physics research. However, these highly nonlinear parameterized functions are typically treated as black boxes: how their inner workings convey information and build the desired input-output relationship is often intractable. Explainable AI (xAI) methods can make a neural model interpretable by establishing a quantitative and tractable relationship between its inputs and its outputs. In this letter of interest, we explore the potential of xAI methods for problems in high energy physics.
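The abstract's notion of a quantitative, tractable input-output relationship can be made concrete with a gradient-based saliency map, one of the simplest xAI techniques. The sketch below is illustrative only and is not taken from the letter: the PyTorch model, the jet-level features (pT, eta, phi, mass), and their values are all hypothetical stand-ins.

    import torch
    import torch.nn as nn

    # Hypothetical jet-tagging classifier: 4 kinematic features -> signal probability.
    # (Architecture and features are assumptions for illustration, not from the letter.)
    model = nn.Sequential(
        nn.Linear(4, 32),
        nn.ReLU(),
        nn.Linear(32, 1),
        nn.Sigmoid(),
    )

    # One example jet with illustrative (pT, eta, phi, mass) values.
    x = torch.tensor([[120.0, 0.5, 1.2, 15.0]], requires_grad=True)

    # Saliency: the gradient of the output score with respect to each input
    # quantifies how strongly that feature locally influences the prediction.
    score = model(x)
    score.backward()
    saliency = x.grad.abs().squeeze()

    for name, value in zip(["pT", "eta", "phi", "mass"], saliency.tolist()):
        print(f"{name}: {value:.4f}")

Richer attribution methods (e.g., integrated gradients or SHAP values) follow the same pattern of tracing a model's output back to its inputs, which is the sense in which xAI makes the input-output relationship quantitative and tractable.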

Related research

Interpretability of an Interaction Network for identifying H → bb̅ jets (11/23/2022)
Multivariate techniques and machine learning models have found numerous ...

On the Relationship Between KR Approaches for Explainable Planning (11/17/2020)
In this paper, we build upon notions from knowledge representation and r...

Is Task-Agnostic Explainable AI a Myth? (07/13/2023)
Our work serves as a framework for unifying the challenges of contempora...

Magnetohydrodynamics with Physics Informed Neural Operators (02/13/2023)
We present the first application of physics informed neural operators, w...

Can Turing machine be curious about its Turing test results? Three informal lectures on physics of intelligence (06/27/2016)
What is the nature of curiosity? Is there any scientific way to understa...

Interpretability in deep learning for finance: a case study for the Heston model (04/19/2021)
Deep learning is a powerful tool whose applications in quantitative fina...

Parsimonious neural networks learn classical mechanics, its underlying symmetries, and an accurate time integrator (05/08/2020)
Machine learning is playing an increasing role in the physical sciences ...
