Decision Theoretic Bootstrapping

03/18/2021
by Peyman Tavallali et al.

The design and testing of supervised machine learning models combines two fundamental distributions: (1) the training data distribution and (2) the testing data distribution. Although these two distributions are identical and identifiable when the data set is infinite, they are imperfectly known (and possibly distinct) when the data is finite (and possibly corrupted), and this uncertainty must be taken into account for robust Uncertainty Quantification (UQ). We present a general decision-theoretic bootstrapping solution to this problem: (1) partition the available data into a training subset and a UQ subset; (2) take m subsampled subsets of the training set and train m models; (3) partition the UQ set into n sorted subsets and take a random fraction of them to define n corresponding empirical distributions μ_j; (4) consider the adversarial game in which Player I selects a model i ∈ {1, …, m}, Player II selects the UQ distribution μ_j, and Player I receives a loss defined by evaluating model i against data points sampled from μ_j; (5) identify optimal mixed strategies (probability distributions over models and UQ distributions) for both players. These randomized optimal mixed strategies provide optimal model mixtures and UQ estimates given the adversarial uncertainty of the training and testing distributions represented by the game. The proposed approach provides (1) some degree of robustness to distributional shift in both the distribution of the training data and that of the testing data, and (2) conditional probability distributions on the output space forming aleatory representations of the uncertainty on the output as a function of the input variable.
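Steps (4) and (5) above reduce to solving a finite two-player zero-sum game whose payoff matrix holds the loss of each model against each UQ distribution. The sketch below illustrates this final solve under stated assumptions: the loss matrix `L` is a random stand-in (in practice its entries would come from evaluating the m bootstrapped models on samples from the n empirical distributions μ_j), and fictitious play is used here as a simple, self-contained approximation of the equilibrium computation, not as the paper's own solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loss matrix: entry (i, j) stands in for the average loss of
# model i evaluated on data points sampled from UQ distribution mu_j.
m, n = 5, 8                      # illustrative counts of models / UQ distributions
L = rng.uniform(size=(m, n))

def fictitious_play(L, iters=20000):
    """Approximate optimal mixed strategies of the zero-sum game with
    payoff matrix L, where Player I (rows) minimizes the loss and
    Player II (columns) maximizes it."""
    m, n = L.shape
    counts_i = np.zeros(m)       # how often Player I has played each model
    counts_j = np.zeros(n)       # how often Player II has played each distribution
    i = j = 0
    for _ in range(iters):
        counts_i[i] += 1
        counts_j[j] += 1
        # Each player best-responds to the opponent's empirical mixture so far.
        i = int(np.argmin(L @ (counts_j / counts_j.sum())))
        j = int(np.argmax((counts_i / counts_i.sum()) @ L))
    return counts_i / iters, counts_j / iters

p, q = fictitious_play(L)        # mixed strategies over models / UQ distributions
value = p @ L @ q                # approximate game value (worst-case expected loss)
```

The strategy `p` is the model mixture of step (5): predictions are formed by weighting the m models with `p`, while `q` identifies the adversarial UQ distributions that attain the game value.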


