Statistical Agnostic Mapping: a Framework in Neuroimaging based on Concentration Inequalities

12/27/2019
by J. M. Gorriz, et al.

In the 1970s a novel branch of statistics emerged, focused on selecting, in the pattern recognition problem, a function that fulfils a definite relationship between the quality of the approximation and its complexity. These data-driven approaches are mainly devoted to problems of estimating dependencies from limited sample sizes and comprise all the empirical out-of-sample generalization approaches, e.g. cross-validation (CV). Although the latter are not designed for testing competing hypotheses or comparing different models in neuroimaging, a number of theoretical developments within this theory could be employed to derive a Statistical Agnostic (non-parametric) Mapping (SAM) at the voxel or multi-voxel level. Moreover, SAMs could relieve (i) the problem of instability when estimating the actual risk via CV with limited sample sizes, e.g. large error bars, and provide (ii) an alternative to family-wise error (FWE)-corrected p-value maps in inferential statistics for hypothesis testing. In this sense, we propose a novel framework in neuroimaging based on concentration inequalities, which results in (i) a rigorous development for model validation with a small sample/dimension ratio, and (ii) a procedure less conservative than FWE p-value correction, to determine brain significance maps from inferences made using small upper bounds on the actual risk.
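To illustrate the kind of concentration inequality the abstract refers to, the sketch below computes a Hoeffding-type upper bound on the actual risk of a fixed classifier from its empirical error on held-out data. This is a minimal, hypothetical example of the general technique, not the paper's specific bound; the function name and the choice of the two-sided Hoeffding form are assumptions for illustration.

```python
import math

def hoeffding_upper_bound(empirical_risk, n, delta=0.05):
    """Hoeffding-type upper bound on the actual risk of a fixed
    classifier evaluated on n held-out samples, with 0-1 loss.
    With probability at least 1 - delta, the actual risk does not
    exceed the returned value."""
    return empirical_risk + math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Example: 30% empirical error measured on 100 held-out samples.
bound = hoeffding_upper_bound(0.30, n=100)
```

Note how the bound's deviation term shrinks as O(1/sqrt(n)): with small samples the gap between empirical and actual risk can be large, which is the instability (large error bars) that motivates using such bounds instead of raw CV estimates.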


