
Explainable AI for High Energy Physics

by Mark S. Neubauer, et al.

Neural networks are ubiquitous in high energy physics research. However, these highly nonlinear parameterized functions are typically treated as black boxes: the inner workings by which they convey information and build the desired input-output relationship are often intractable. Explainable AI (xAI) methods can help make a neural model interpretable by establishing a quantitative and tractable relationship between the inputs and the model's output. In this letter of interest, we explore the potential of xAI methods for problems in high energy physics.
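As a minimal sketch of the kind of quantitative input-output relationship the abstract refers to, the following illustrates gradient-based saliency, one common xAI technique, on a toy network. The weights, feature names, and input values are all made up for illustration and are not from the letter itself:

```python
import numpy as np

# Hypothetical two-layer network; weights are random, not trained.
rng = np.random.default_rng(0)
feature_names = ["pT", "eta", "mass", "n_tracks"]  # assumed jet features

W1 = rng.normal(size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def forward(x):
    """Tiny MLP: tanh hidden layer, sigmoid output (a toy jet tagger)."""
    h = np.tanh(x @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, y

def saliency(x):
    """Gradient of the sigmoid output w.r.t. each input feature,
    computed by hand via the chain rule."""
    h, y = forward(x)
    dy_dz = y * (1.0 - y)        # sigmoid derivative
    dz_dh = W2[:, 0]             # output-layer weights
    dh_da = 1.0 - h ** 2         # tanh derivative
    return (dy_dz * dz_dh * dh_da) @ W1.T

x = np.array([120.0, 0.5, 25.0, 12.0])  # one synthetic jet
grads = saliency(x)
for name, g in zip(feature_names, grads):
    print(f"{name}: {g:+.4f}")
```

The sign and magnitude of each gradient indicate how sensitively the toy tagger's output responds to that feature near this input, which is the kind of tractable attribution xAI methods aim to provide at scale.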


Interpretability of an Interaction Network for identifying H → bb̅ jets

Multivariate techniques and machine learning models have found numerous ...

On the Relationship Between KR Approaches for Explainable Planning

In this paper, we build upon notions from knowledge representation and r...

Is Task-Agnostic Explainable AI a Myth?

Our work serves as a framework for unifying the challenges of contempora...

Magnetohydrodynamics with Physics Informed Neural Operators

We present the first application of physics informed neural operators, w...

Can Turing machine be curious about its Turing test results? Three informal lectures on physics of intelligence

What is the nature of curiosity? Is there any scientific way to understa...

Interpretability in deep learning for finance: a case study for the Heston model

Deep learning is a powerful tool whose applications in quantitative fina...

Parsimonious neural networks learn classical mechanics, its underlying symmetries, and an accurate time integrator

Machine learning is playing an increasing role in the physical sciences ...