Large Datasets, Bias and Model Oriented Optimal Design of Experiments

11/30/2018
by Elena Pesce, et al.

We review recent literature that proposes to adapt ideas from classical model-based optimal design of experiments to problems of data selection in large datasets. Special attention is given to bias reduction and to protection against confounders. Some new results are presented, and theoretical and computational comparisons are made.
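To make the data-selection idea concrete, here is a minimal sketch (not taken from the paper) of how a classical design criterion can drive subsampling: a greedy algorithm that picks rows of a large design matrix to approximately maximize the D-optimality criterion det(XᵀX). The function name, the ridge term, and the greedy exchange strategy are illustrative assumptions, not the authors' method.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedily select k rows of X that approximately maximize
    det(X_S^T X_S), the D-optimality criterion.

    Uses the rank-one update identity
    det(M + x x^T) = det(M) * (1 + x^T M^{-1} x),
    so at each step we add the row with the largest leverage-like gain.
    Names and the ridge regularizer are illustrative choices.
    """
    n, p = X.shape
    selected = []
    M = ridge * np.eye(p)  # regularized information matrix
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # gain for each candidate row x_i: x_i^T M^{-1} x_i
        gains = np.einsum("ij,jk,ik->i", X, Minv, X)
        gains[selected] = -np.inf  # do not pick the same row twice
        best = int(np.argmax(gains))
        selected.append(best)
        M = M + np.outer(X[best], X[best])
    return selected
```

On a large dataset, such a rule concentrates the subsample on informative, well-spread points rather than drawing rows uniformly at random, which is the connection to bias reduction that the reviewed literature develops.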


Related research

- Theoretical Models of Learning to Learn (02/27/2020)
- Evaluating and Crafting Datasets Effective for Deep Learning With Data Maps (08/22/2022)
- Bayesian leave-one-out cross-validation for large data (04/24/2019)
- Discussion on Competition for Spatial Statistics for Large Datasets (06/19/2021)
- Fast and Accurate Importance Weighting for Correcting Sample Bias (09/09/2022)
- R-Tuning: Regularized Prompt Tuning in Open-Set Scenarios (03/09/2023)
- Hierarchical Data Reduction and Learning (06/27/2019)
