Interpretable Uncertainty Quantification in AI for HEP

08/05/2022
by Thomas Y. Chen, et al.

Estimating uncertainty is at the core of performing scientific measurements in high-energy physics (HEP): a measurement is not useful without an estimate of its uncertainty. The goal of uncertainty quantification (UQ) is inextricably linked to the question, "How do we physically and statistically interpret these uncertainties?" The answer depends not only on the computational task we aim to undertake, but also on the methods we use for that task. For artificial intelligence (AI) applications in HEP, there are several areas where interpretable methods for UQ are essential, including inference, simulation, and control/decision-making. Methods exist for each of these areas, but they have not yet been demonstrated to be as trustworthy as the more traditional approaches currently employed in physics (e.g., non-AI frequentist and Bayesian methods). Shedding light on this question requires a deeper understanding of the interplay between AI systems and uncertainty quantification. We briefly discuss the existing methods in each area and relate them to tasks across HEP. We then recommend avenues to pursue in developing the techniques needed for reliable, widespread use of AI with UQ over the next decade.
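As a concrete illustration of the traditional non-AI methods the abstract mentions, the sketch below (an assumed example, not taken from the paper) contrasts a frequentist confidence interval with a Bayesian credible interval for a Poisson counting measurement, a standard HEP setting. The function names and the choice of a flat prior are illustrative.

```python
# Two interpretations of uncertainty for a Poisson counting measurement
# (illustrative sketch; not from the paper under discussion).
from scipy.stats import chi2, gamma


def poisson_confidence_interval(n_obs, cl=0.95):
    """Frequentist (Garwood) central confidence interval for a Poisson mean."""
    alpha = 1.0 - cl
    lower = 0.0 if n_obs == 0 else chi2.ppf(alpha / 2, 2 * n_obs) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (n_obs + 1)) / 2
    return lower, upper


def poisson_credible_interval(n_obs, cl=0.95):
    """Bayesian central credible interval under a flat prior on the mean;
    the posterior is then Gamma(n_obs + 1, scale=1)."""
    alpha = 1.0 - cl
    posterior = gamma(n_obs + 1)
    return posterior.ppf(alpha / 2), posterior.ppf(1 - alpha / 2)


if __name__ == "__main__":
    n = 10  # observed event count
    print("frequentist 95% CI:       ", poisson_confidence_interval(n))
    print("Bayesian 95% credible int:", poisson_credible_interval(n))
```

The two intervals answer different questions (coverage over repeated experiments vs. posterior belief given a prior), which is exactly the kind of interpretive distinction the paper argues must be preserved when AI-based UQ methods replace these classical ones.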


