
Statistical Inference with Ensemble of Clustered Desparsified Lasso

Medical imaging involves high-dimensional data, yet their acquisition is possible for only a limited number of samples. Multivariate predictive models have become popular in the last decades to fit external variables from imaging data, and standard algorithms yield point estimates of the model parameters. It is however challenging to attribute confidence to these parameter estimates, which makes the solutions hardly trustworthy. In this paper we present a new algorithm that assesses the statistical significance of parameters and that scales even when the number of predictors p > 10^5 is much higher than the number of samples n < 10^3, by leveraging structure among features. Our algorithm combines three main ingredients: a powerful inference procedure for linear models, the so-called Desparsified Lasso, feature clustering, and an ensembling step. We first establish that the Desparsified Lasso alone cannot handle n ≪ p regimes; then we demonstrate that the combination of clustering and ensembling provides an accurate solution, whose specificity is controlled. We also demonstrate stability improvements on two neuroimaging datasets.



1 Introduction

Prediction problems in medical imaging are typically high-dimensional, small-sample problems. Training such models can be seen as an inference procedure. As in all research fields, this inference has to come with probabilistic guarantees in order to assess its reliability and to clarify further interpretation. In such settings, linear models have raised strong interest. In particular the Lasso, introduced in [Tibshirani94], has been thoroughly investigated in [Hastie_Tibshirani_Wainwright15]. Specifically, for settings in which the number of features is greater than the number of samples, though commensurate, numerous inference solutions have been proposed: see among others [buhlmann2013, Dezeure2015, Meinshausen2008, wasserman2009, Zhang_Zhang14]. However, when p is much larger than n, these inference solutions are not scalable. In practice they fail to be informative, as we will show in our first simulation (Section 3.1). Indeed, in these regimes, due to the curse of dimensionality, localizing statistical effects becomes much harder, and informative inference seems hopeless without dimensionality reduction. However, in high dimension, datasets often exhibit a particular structure and strong inter-predictor correlation. This makes dimension reduction possible by means of clustering algorithms that respect the data structure. The issue with clustering-based solutions is that the clustering is almost surely suboptimal and unstable; it carries some arbitrariness related to initialization or estimator heuristics. A solution to mitigate this confounding factor is to embed the clustering in a bagging strategy.

Our contribution is an algorithm for statistical inference in high-dimensional scenarios, combining the Desparsified Lasso procedure, first introduced in [Zhang_Zhang14], with clustering and bagging steps: the Ensemble of Clustered Desparsified Lasso (ECDL). We describe it in detail and provide experiments on simulated and real data to assess its potential on multivariate linear models for medical imaging.

2 Theoretical and Algorithmic Framework


For clarity, scalars are denoted with normal font, vectors with bold lowercase letters, and matrices with bold uppercase letters.


Inference on Linear Models.

Our aim is to give confidence bounds on the coefficients of the parameter vector, denoted β*, in the following linear model:

y = X β* + σ ε,     (1)

where X ∈ ℝ^{n×p}, y ∈ ℝ^n, and ε ∼ N(0, I_n). The matrix X is the design matrix, its columns are called the predictors, y is called the response vector, ε is the noise (or the error vector) of the model, and σ > 0 is the (unknown) noise standard deviation. The signal-to-noise ratio (SNR), which compares the magnitude of the signal Xβ* with that of the noise σε, is a measure that describes the noise regime in any given experiment. The true support is defined as S = {j : β*_j ≠ 0} and its size is s = |S|. It is noteworthy that our problem is not a prediction problem, in which we would aim at finding β̂ minimizing a prediction error such as ‖y − Xβ̂‖₂, but an estimation problem, in which we want to control the estimation error on β* statistically.
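The generative model above can be sketched numerically; the sizes n, p, s and the coefficient values below are illustrative placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s, sigma = 50, 200, 5, 1.0          # hypothetical sizes

X = rng.standard_normal((n, p))           # design matrix (n samples, p predictors)
beta_star = np.zeros(p)
beta_star[:s] = 1.0                       # true support: first s coefficients
eps = rng.standard_normal(n)              # noise vector, eps ~ N(0, I_n)
y = X @ beta_star + sigma * eps           # the linear model y = X beta* + sigma eps

support = np.flatnonzero(beta_star)       # S = {j : beta*_j != 0}
# SNR here compares signal energy with noise energy (one common convention)
snr = np.linalg.norm(X @ beta_star) ** 2 / np.linalg.norm(sigma * eps) ** 2
```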

Desparsified Lasso for High-Dimensional Inference.

The Desparsified Lasso (DL) estimator, denoted β̂^{DL} and introduced in [Zhang_Zhang14], can be seen as a generalization of the Ordinary Least Squares (OLS) estimator for inference in n < p settings. Under some assumptions (notably the sparsity of β*) that are made explicit in [Dezeure2015], β̂^{DL} has the following property:

β̂^{DL}_j − β*_j ≈ σ √(Ω_{jj} / n) Z_j,  with Z_j ∼ N(0, 1),  for j = 1, …, p,     (2)

where the diagonal of Ω (the estimated precision matrix, i.e., the inverse of the covariance Σ̂ = XᵀX / n) is computed concurrently with β̂^{DL}, as described in [Zhang_Zhang14]. From Equation 2 we can compute confidence intervals and p-values for the coefficients of the weight vector.
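As an illustration of this last step, here is a minimal sketch of turning Gaussian-approximate coefficient estimates into confidence intervals and p-values; the variance scaling σ²Ω_jj/n is our reading of the standard presentation and should be checked against the estimator actually used:

```python
import numpy as np
from math import erf, sqrt

def dl_confidence_and_pvalues(beta_hat, omega_diag, sigma, n):
    """95% confidence intervals and two-sided p-values, assuming the j-th
    coordinate of beta_hat is approximately Gaussian with mean beta*_j and
    variance sigma**2 * omega_diag[j] / n (a sketch of the inference step)."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    se = sigma * np.sqrt(np.asarray(omega_diag, dtype=float) / n)  # standard errors
    z975 = 1.959963984540054                                       # Phi^{-1}(0.975)
    ci = np.stack([beta_hat - z975 * se, beta_hat + z975 * se], axis=1)
    zscores = beta_hat / se
    # two-sided p-values under the standard normal null
    pvals = np.array([2.0 * (1.0 - 0.5 * (1.0 + erf(abs(t) / sqrt(2.0))))
                      for t in zscores])
    return ci, pvals
```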

In [Dezeure2015], several high-dimensional inference solutions are discussed and compared. DL displays an interesting trade-off between good control of the family-wise error rate (FWER) and strong power. The FWER is defined as FWER = ℙ(FP ≥ 1), where FP is the number of false positives.

Clustering to Handle Structured High-Dimensional Data.

In high-dimensional inference, variables are often highly correlated. Specifically, a medical image has a 3D representation and a given voxel is highly correlated with its neighbors; the design matrix X obviously carries the same structure. In addition, large p and small n make statistical inference challenging without assumptions on the data structure, as shown in [Dezeure2015]. To leverage the data structure, we introduce a clustering step that reduces data dimensionality before applying our inference procedure. Here, we consider the spatially-constrained hierarchical clustering algorithm described in [Varoquaux2012], which uses the Ward criterion while respecting the geometrical structure of the image. The combination of this clustering algorithm and the DL inference procedure will be referred to as the Clustered Desparsified Lasso (CDL) algorithm.
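A minimal sketch of such spatially-constrained Ward clustering, using scikit-learn's FeatureAgglomeration with a connectivity graph (a 1D chain here for simplicity; for a 3D volume, grid_to_graph(nx, ny, nz) applies); the sizes are illustrative:

```python
import numpy as np
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.cluster import FeatureAgglomeration

rng = np.random.default_rng(0)
n, p, n_clusters = 40, 120, 10            # hypothetical sizes
X = rng.standard_normal((n, p))           # toy data: n samples, p "voxels" on a line

# Connectivity graph encoding the spatial structure (1D chain of p voxels).
connectivity = grid_to_graph(p, 1, 1)

# Ward-linkage clustering of the features, constrained by the connectivity graph.
ward = FeatureAgglomeration(n_clusters=n_clusters, linkage="ward",
                            connectivity=connectivity)
X_reduced = ward.fit_transform(X)         # p features -> n_clusters features
labels = ward.labels_                     # cluster label of each feature
```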

Bagging to Alleviate Dependency on Clustering.

It is preferable not to rely on a particular clustering, as small perturbations of it can have a dramatic impact on the final solution. We followed the approach presented in [Varoquaux2012], which argues in favor of randomizing a spatially-constrained clustering method: to build C clusterings of the predictors, the same clustering method is applied to C different random subsamples drawn from the full sample.
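This randomization strategy can be sketched as follows; the function name and the 1D connectivity are our own illustrative choices:

```python
import numpy as np
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.cluster import FeatureAgglomeration

def randomized_clusterings(X, n_clusters, n_rep, subsample, seed=0):
    """Build n_rep clusterings of the columns of X, each fitted on a random
    subsample of the rows, so that the partition itself is randomized
    (a sketch of the strategy described above)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    connectivity = grid_to_graph(p, 1, 1)   # 1D structure, for illustration
    partitions = []
    for _ in range(n_rep):
        rows = rng.choice(n, size=subsample, replace=False)
        ward = FeatureAgglomeration(n_clusters=n_clusters,
                                    connectivity=connectivity)
        ward.fit(X[rows])                   # cluster on the subsample only
        partitions.append(ward.labels_.copy())
    return partitions
```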

Inference with Ensemble of Clustered Desparsified Lasso.

Figure 1: To leverage the inference procedure on medical images, our algorithm relies on feature clustering and an ensembling step over randomized solutions.

We now have all the elements to present our Ensemble of Clustered Desparsified Lasso (ECDL) algorithm, which is summarized in Figure 1. ECDL consists of C repetitions of the CDL algorithm (using random subsamples for the clustering and the full sample for the DL procedure) and an ensembling step analogous to the bagging method introduced by [Breiman1996]. Once the CDL algorithm has been run C times, we have C partitions into clusters, each cluster being associated with a p-value. Since we attribute the same p-value to all the predictors in a given cluster, the p-value of the j-th coefficient in a given repetition is the p-value of the cluster that contains j in that repetition's partition. This yields C p-values for each coefficient. Finally, to ensemble the p-values, one has to use specific techniques which ensure that the resulting p-value is meaningful as a frequentist hypothesis test. Thus, to derive the p-value of the j-th coefficient, we have considered the ensembling solution presented in [Meinshausen2008], which has the required properties and consists in taking the median of the C p-values multiplied by two.
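The ensembling rule, twice the median of the per-repetition p-values (the γ = 0.5 case of the quantile aggregation of [Meinshausen2008]), is simple to implement:

```python
import numpy as np

def ensemble_pvalues(pval_matrix):
    """Aggregate C p-values per coefficient (rows: repetitions, columns:
    coefficients) by taking twice the median, capped at 1 so the result
    remains a valid p-value."""
    agg = 2.0 * np.median(np.asarray(pval_matrix), axis=0)
    return np.minimum(agg, 1.0)
```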

3 Simulation and Experimental Results

3.1 First Simulation: the Importance of Dimension Reduction


This simulation has a 1D structure, with a number of predictors p much larger than the number of samples n. We construct the design matrix X such that the predictors are normally distributed and two consecutive predictors have a fixed correlation ρ. The weight vector β* is nonzero on its support S and zero elsewhere. We also set σ so that the SNR matches the one estimated in the Oasis experiment (Section 3.3).
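A possible construction for such a design matrix, using an autoregressive recursion so that consecutive columns have correlation ρ (a sketch; the paper's exact construction may differ):

```python
import numpy as np

def correlated_design(n, p, rho, seed=0):
    """Design matrix with standard normal columns in which two consecutive
    predictors have correlation rho (AR(1)-style construction)."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, p))
    X[:, 0] = rng.standard_normal(n)
    for j in range(1, p):
        # Mix the previous column with fresh noise; the weights keep the
        # marginal variance at 1 and the lag-1 correlation at rho.
        X[:, j] = rho * X[:, j - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    return X
```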

Figure 2: (a) The 95% coefficient intervals given by the raw Desparsified Lasso (DL) fail to retrieve the true support. (b) The 95% coefficient intervals given by the Clustered Desparsified Lasso (CDL) are much narrower and accurately recover the support.


We compare the DL procedure applied to the uncompressed data, displayed in Figure 2-(a), with the CDL algorithm in Figure 2-(b). The number of clusters, whose impact is discussed in Section 4, has been set so as to substantially reduce the dimension before performing the inference. This reduction tames the estimator variance and yields useful confidence intervals that could not be obtained with DL alone.

3.2 Second Simulation: Improvement by Bootstrap and Aggregation


Here, we consider a simulation with a 3D structure that aims at approximating the statistics of the Oasis experiment (Section 3.3). The volume considered is a 3D cube, with n samples and p predictors (voxels). To construct β*, we define a 3D weight vector with five regions of interest (ROIs), represented in Figure 3-(a), and then apply a bijective transformation flattening it into a vector of size p.

Figure 3: Comparing the original 3D weight vector (a) with the CDL (b) and ECDL (c) solutions in this simulation, we observe that the ECDL solution is much more accurate.

Each ROI is a cube of width w, so the support has size s = 5w³. Four ROIs are situated in corners of the cubic map and the last ROI is situated in the center of the cube. To construct X, we first define a 3D design matrix from random normal vectors smoothed with a 3D Gaussian filter (smoothing is performed across all predictors for each sample), then we apply the same flattening transformation as before to derive the design matrix. The smoothing bandwidth is chosen to achieve correlations similar to those of the Oasis experiment. We also set σ to match the SNR of the Oasis experiment (Section 3.3).
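The smoothed 3D design can be sketched as follows; the function name and sizes are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_3d_design(n, edge, sigma_smooth, seed=0):
    """Each row is a 3D Gaussian white-noise volume of shape
    (edge, edge, edge), smoothed with a 3D Gaussian filter and then
    flattened into a vector of p = edge**3 predictors (a sketch of the
    construction described above)."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, edge**3))
    for i in range(n):
        vol = rng.standard_normal((edge, edge, edge))
        X[i] = gaussian_filter(vol, sigma=sigma_smooth).ravel()
    return X
```

Smoothing induces the local correlations between neighboring voxels that motivate the clustering step.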

Figure 4: (a) The precision-recall curve for the recovery of the support is much better when an ensembling step is added over CDL. (b) The FWER (nominal rate 5%) is well controlled by the ECDL algorithm, while for high levels of SNR it is not controlled by the CDL algorithm.


To derive the ECDL solutions, we aggregated C different CDL solutions during the ensembling step. To obtain the results presented in Figure 4, we ran several simulations. In Figure 4-(a), we display the precision-recall curve (Scikit-learn precision_recall_curve function) of the solutions obtained by each algorithm. ECDL very strongly outperforms CDL: at any given precision level, the ECDL recall is substantially higher than the CDL recall.
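Computing the precision-recall curve with the scikit-learn function mentioned above might look like this, with synthetic scores standing in for the actual detection statistics:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Hypothetical detection scores per predictor (e.g. 1 - p-value), compared
# against the ground-truth support; the values here are synthetic.
rng = np.random.default_rng(0)
truth = np.zeros(100, dtype=int)
truth[:10] = 1                                   # 10 truly active predictors
scores = (truth * rng.uniform(0.7, 1.0, 100)     # active: high scores
          + (1 - truth) * rng.uniform(0.0, 0.6, 100))  # inactive: low scores
precision, recall, thresholds = precision_recall_curve(truth, scores)
```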
In order to check the FWER control, we define a neutral region that separates the ROIs from the non-active region. Indeed, since the predictors are highly correlated, the detection of a null predictor in the vicinity of an active one is not a mistake. Thus, the neutral regions enfold the ROIs with a margin of 5 voxels. We compare different values of σ, yielding a range of SNR values. In Figure 4-(b), one can observe that the FWER is always well controlled by ECDL; the latter is even conservative, since the empirical FWER stays well below the nominal level. By contrast, the FWER is not well controlled by CDL: its empirical value goes far above the nominal rate for high SNR. This is due to the shape of the discovered regions, which do not always correspond to the exact shape and location of the ROIs. This effect is also observable in the thresholded Z-score maps yielded by CDL and ECDL in Figure 3. By increasing the number of clusters, we would obtain discovered regions more similar to the true ROIs, yet their statistical significance would drop and the power would collapse.

3.3 Experiments on MRI Datasets

Haxby Dataset.

Haxby is a functional MRI dataset that maps the brain responses of subjects watching images of different objects (see [Haxby2001]). In our study we only consider the responses related to images of faces and houses for the first subject, to identify brain regions that discriminate between these two stimuli, assuming that this problem can be modeled as a regression problem. Here n ≪ p (the predictors being the voxels), and the SNR is estimated from the data.

Oasis Dataset.

The Oasis MRI dataset (see [Marcus2007]) provides anatomical brain images of several subjects together with their age. The SPM voxel-based morphometry pipeline was used to obtain individual gray matter density maps. We aim at identifying which regions are informative for predicting the age of a given subject. Here again n ≪ p, with the SNR estimated from the data; we also took the number of clusters and the number of repetitions C as in Section 3.2.

Figure 5: Results of the CDL and ECDL algorithms on the Haxby (top) and Oasis (bottom) experiments. The CDL outcomes are highly dependent on the clustering, which creates a jitter in the solutions. By drawing a consensus among many CDL results, ECDL removes the arbitrariness related to the clustering scheme.


The results of these experiments are displayed in Figure 5 as Z-transforms of the p-values. For clarity, we thresholded the Z-score maps, keeping only the regions that have a high probability of being discriminative. The solutions given by the CDL algorithm with three different choices of clustering look noisy and unstable, while the ECDL solution provides a synthesis of the CDL results and, in the case of Haxby, exhibits a nice symmetry. Thus, these results clearly illustrate that the ensembling step removes the arbitrariness due to the clustering.

Stability of Bagging Estimator.

Figure 6: Correlation (left) and Jaccard index (right) are much higher with the ECDL algorithm than with CDL across 25 replications of the analysis of the imaging datasets.

This last experiment quantifies the gain in stability brought by the ensembling step. From the two previous experiments, we derive 25 ECDL solution maps and 25 CDL solution maps and measure the variability of the results. The correlation between the full maps and the Jaccard index of the detected areas (here, voxels whose absolute Z-score exceeds the detection threshold) show that ECDL is substantially more stable than CDL.
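These two stability metrics can be computed as follows (a sketch; the threshold value is whatever Z-score cut-off is used for detection):

```python
import numpy as np

def stability_metrics(map_a, map_b, z_thresh):
    """Pearson correlation of two full Z-score maps and the Jaccard index
    of their detected areas (voxels with |Z| above z_thresh)."""
    map_a, map_b = np.asarray(map_a, float), np.asarray(map_b, float)
    corr = np.corrcoef(map_a, map_b)[0, 1]
    det_a = np.abs(map_a) > z_thresh
    det_b = np.abs(map_b) > z_thresh
    inter = np.logical_and(det_a, det_b).sum()
    union = np.logical_or(det_a, det_b).sum()
    jaccard = inter / union if union else 1.0   # identical empty detections
    return corr, jaccard
```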

4 Discussion


We have introduced ECDL, an algorithm for high-dimensional inference on structured data which scales even when the number of predictors p is much higher than the number of samples n. It can be summarized as follows: i) perform C repetitions of the CDL algorithm, which runs Desparsified Lasso (DL) inference on a model compressed by clustering, yielding several p-values for each predictor; ii) use an ensemble method aggregating all p-values to derive a single p-value per predictor. In Section 3.1, we have shown that the clustering step, justified by specific data structures and locally high inter-predictor correlation, is necessary to yield an informative inference solution when n ≪ p. Then, we have demonstrated, in Section 3.2, that randomizing and bagging the CDL solution improve the control of the FWER and the precision-recall curve. While the ensembling step obviously removes the arbitrariness of the clustering choice, in Section 3.3 we showed that it also increases stability.

ECDL Parameter Setting.

The number of clusters is the main free parameter, and its optimal value depends on characteristics of the data (inter-predictor correlation, SNR). In our simulations, we observe a bias/variance trade-off: a small number of clusters reduces the variance and enhances statistical power, while a greater number yields more refined solutions. The ensembling step helps improve shape accuracy without loss of sensitivity, as the combination of multiple CDL solutions recovers finer spatial information.

Computational Cost of ECDL.

The most expensive step is the DL inference, which includes the resolution of several Lasso problems on the n samples of the compressed data. This step is repeated C times in ECDL, but since the repetitions are independent the algorithm is embarrassingly parallel; we could run ECDL on standard desktop stations in less than 10 minutes.

Additional Work Related to ECDL.

In the Haxby experiment (Section 3.3) we approximated the problem as a regression one. Thus, an interesting extension would be to adapt ECDL to classification settings. Another question is the comparison with bootstrap and permutation-based approaches [Gaonkar2012], which we leave for future work. Note however that, in [Dezeure2015], a study of bootstrap approaches points out some unwanted properties, and these approaches do not outperform DL.

Usefulness for Medical Imaging.

For structured high-dimensional data, our algorithm is relevant to assess the statistical significance of a set of predictors when fitting a target variable. Our experimental results show that ECDL is very promising for inference problems in medical imaging.


This research was supported by Labex DigiCosme (project ANR-11-LABEX-0045-DIGICOSME) operated by ANR as part of the program "Investissement d'Avenir" Idex Paris Saclay (ANR-11-IDEX-0003-02). This work is also funded by the FAST-BIG project (ANR-17-CE23-0011).