A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

Black-box machine learning methods are now routinely used in high-risk settings, like medical diagnostics, which demand uncertainty quantification to avoid consequential model failures. Distribution-free uncertainty quantification (distribution-free UQ) is a user-friendly paradigm for creating statistically rigorous confidence intervals/sets for such predictions. Critically, the intervals/sets are valid without distributional or model assumptions, with explicit guarantees even with finitely many datapoints. Moreover, they adapt to the difficulty of the input: when the input example is difficult, the uncertainty intervals/sets are large, signaling that the model might be wrong. Without much work, one can use distribution-free methods on any underlying algorithm, such as a neural network, to produce confidence sets guaranteed to contain the ground truth with a user-specified probability, such as 90%. The methods are general, applying to many modern prediction problems arising in the fields of computer vision, natural language processing, deep reinforcement learning, and so on. This hands-on introduction is aimed at a reader interested in the practical implementation of distribution-free UQ, including conformal prediction and related methods, who is not necessarily a statistician. We will include many explanatory illustrations, examples, and code samples in Python, with PyTorch syntax. The goal is to provide the reader a working understanding of distribution-free UQ, allowing them to put confidence intervals on their algorithms, in one self-contained document.
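To make the idea concrete, here is a minimal sketch of split conformal prediction for classification, written with NumPy rather than PyTorch for brevity. It assumes a held-out calibration set of softmax scores and labels; the function name and the particular score (one minus the softmax of the true class) are illustrative choices, not the only ones the paradigm allows.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_probs:  (n, K) softmax scores on a held-out calibration set
    cal_labels: (n,)   true class indices for the calibration set
    test_probs: (m, K) softmax scores on test inputs
    Returns a boolean (m, K) mask of prediction sets with
    marginal coverage at least 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    # Conformal score: one minus the softmax score of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, q_level, method="higher")
    # Include every class whose score falls below the threshold.
    return (1.0 - test_probs) <= qhat
```

Larger prediction sets on ambiguous inputs fall out automatically: a flat softmax yields high scores for every class, so more classes clear the threshold.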




