A Multi-resolution Theory for Approximating Infinite-p-Zero-n: Transitional Inference, Individualized Predictions, and a World Without Bias-Variance Trade-off

10/17/2020
by Xinran Li et al.

Transitional inference is an empiricist concept, rooted in clinical medicine and practiced since ancient Greece: knowledge and experience gained from treating one entity are applied to a related but distinctively different one. This notion of "transition to the similar" gives individualized treatment an operational meaning, yet its theoretical foundation defies the familiar inductive-inference framework. The uniqueness of each entity is the result of a potentially infinite number of attributes (hence p = ∞), which entails a direct training sample of size zero (i.e., n = 0), because genuine guinea pigs do not exist.

However, the literature on wavelets and on sieve methods suggests a principled approximation theory for transitional inference via a multi-resolution (MR) perspective, in which the resolution level indexes the degree of approximation to ultimate individuality. MR inference seeks a primary resolution that indexes an indirect training sample: one matching enough attributes to make the results relevant to the target individual, while still accumulating a sufficient indirect sample size for robust estimation.

Theoretically, MR inference relies on an infinite-term ANOVA-type decomposition, which offers an alternative way to model sparsity via the decay rate of the resolution bias as a function of the primary resolution level. Unexpectedly, this decomposition reveals a world without variance when the outcome is a deterministic function of potentially infinitely many predictors. In this deterministic world, the optimal resolution prefers over-fitting in the traditional sense whenever the resolution bias decays sufficiently rapidly. Furthermore, the prediction error curve can exhibit many "descents" when the contributions of the predictors are inhomogeneous and the order of their importance does not align with the order of their inclusion in the prediction.
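The core trade-off can be illustrated with a toy simulation; everything below is a hypothetical sketch, not from the paper. The outcome is a deterministic function of P binary attributes with geometrically decaying contributions (so the resolution bias decays rapidly), and the "primary resolution" r means predicting for a target individual by averaging over training units that match the target on the first r attributes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): a deterministic outcome driven by P binary
# attributes whose contributions decay geometrically. No residual variance.
P, N = 8, 5000
beta = 0.5 ** np.arange(P)            # attribute j contributes 0.5**j
X = rng.integers(0, 2, size=(N, P))   # the indirect training sample
y = X @ beta                          # deterministic outcome

target = rng.integers(0, 2, size=P)   # the individual we wish to predict for
truth = float(target @ beta)

results = []
for r in range(P + 1):
    # Primary resolution r: keep training units matching the target on the
    # first r attributes, then average their outcomes as the prediction.
    match = (X[:, :r] == target[:r]).all(axis=1)
    pred = y[match].mean() if match.any() else y.mean()
    results.append((r, int(match.sum()), (pred - truth) ** 2))

for r, n, err in results:
    print(f"resolution {r}: indirect n = {n:4d}, squared error = {err:.6f}")
```

Raising r shrinks the matched indirect sample while removing resolution bias; because the outcome is deterministic, exhausting all attributes (over-fitting in the traditional sense) is optimal here whenever exact matches exist, which mirrors the abstract's point about rapidly decaying resolution bias.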


Related research

research, 12/28/2018: Reconciling modern machine learning and the bias-variance trade-off
The question of generalization in machine learning---how algorithms are ...

research, 12/23/2022: The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes
For small training set sizes P, the generalization error of wide neural ...

research, 03/10/2022: Bias-variance decomposition of overparameterized regression with random linear features
In classical statistics, the bias-variance trade-off describes how varyi...

research, 01/12/2023: Unbiased estimation and asymptotically valid inference in multivariable Mendelian randomization with many weak instrumental variables
Mendelian randomization (MR) is a popular epidemiological approach that ...

research, 02/17/2023: The Unbearable Weight of Massive Privilege: Revisiting Bias-Variance Trade-Offs in the Context of Fair Prediction
In this paper we revisit the bias-variance decomposition of model error ...

research, 02/18/2022: On Variance Estimation of Random Forests
Ensemble methods, such as random forests, are popular in applications du...

research, 03/02/2021: Theory of Low Frequency Contamination from Nonstationarity and Misspecification: Consequences for HAR Inference
We establish theoretical and analytical results about the low frequency ...
