Exact Combinatorial Inference for Brain Images

07/08/2018
by Moo K. Chung, et al.
University of Wisconsin-Madison

The permutation test is known as the exact test procedure in statistics. However, it is often not exact in practice and is only approximate, since only a small fraction of all possible permutations is generated. Even for a small sample size, it often requires generating tens of thousands of permutations, which can be a serious computational bottleneck. In this paper, we propose a novel combinatorial inference procedure that enumerates all possible permutations combinatorially, without any resampling. The proposed method is validated against the standard permutation test in simulation studies with the ground truth. The method is further applied to a twin DTI study in determining the genetic contribution of the minimum spanning tree of the structural brain connectivity.

1 Introduction

The permutation test is perhaps the most widely used nonparametric test procedure in the sciences [1, 7, 10]. It is known as the exact test in statistics since the distribution of the test statistic under the null hypothesis can be computed exactly if we can calculate all possible values of the test statistic under every possible permutation. Unfortunately, generating every possible permutation for whole images is extremely time consuming even for a modest sample size.

When the total number of permutations is too large, various resampling techniques have been proposed to speed up the computation. In resampling methods, only a small fraction of the possible permutations is generated and the statistical significance is computed approximately. This approximate permutation test is the most widely used version of the permutation test. In most brain imaging studies, 5,000 to 1,000,000 permutations are used, which usually amounts to less than 1% of all possible permutations. In [7], for instance, 1 million permutations, about 0.07% of all possible permutations, were generated using a supercomputer.
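For concreteness, the resampling procedure described above can be sketched as follows; this is a minimal illustration with a generic mean-difference statistic, and the function name, the choice of statistic and the add-one smoothing are our own choices rather than the setup of any cited study.

    import numpy as np

    def approx_permutation_pvalue(x, y, n_perm=10000, seed=None):
        """Monte Carlo (approximate) two-sample permutation test.

        Only n_perm random relabelings are generated, so the returned
        p-value is an estimate of, not the exact, permutation p-value."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        n_x = len(x)
        observed = abs(np.mean(x) - np.mean(y))
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)  # one random relabeling of the pooled data
            count += abs(perm[:n_x].mean() - perm[n_x:].mean()) >= observed
        return (count + 1) / (n_perm + 1)   # add-one smoothing avoids p = 0

    # Example: two small simulated groups
    # x = np.random.normal(0.0, 1.0, 10); y = np.random.normal(0.5, 1.0, 10)
    # print(approx_permutation_pvalue(x, y))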

In this paper, we propose a novel combinatorial inference procedure that enumerates all possible permutations combinatorially and avoids resampling, which is slow and approximate. Unlike the permutation test, which can take from a few hours to a few days on a desktop, our exact procedure takes a few seconds. Recently, combinatorial approaches to statistical inference have emerged as a powerful alternative to existing statistical methods [5, 1]. Neykov et al. proposed a combinatorial technique for graphical models; however, their approach still relies on bootstrapping, which is another resampling technique and therefore still approximate [5]. Chung et al. proposed another combinatorial approach for brain networks, but their method is limited to integer-valued graph features [1]. The main contributions of this paper are as follows.

Figure 1: The monotone sequence $\phi_1 < \cdots < \phi_q$ is mapped to the integers between 1 and $q$, without any gap, via the step function $\tau$.

1) A new combinatorial approach to the permutation test that does not require resampling. While the permutation test requires exponential run time and is approximate [1], our combinatorial approach requires quadratic run time and is exact. 2) We show that the proposed method is a more sensitive and powerful alternative to the existing permutation test through extensive simulation studies. 3) A new formulation for testing brain network differences using minimum spanning tree differences. The proposed framework is applied to a twin DTI study in determining the heritability of the structural brain network.

2 Exact combinatorial inference

The method in this paper extends our previous exact topological inference [1], which is limited to integer-valued monotone functions defined on graphs. Through Theorem 2.1, we extend the method to arbitrary monotone functions.

Definition 1

For any sets $A$ and $B$ satisfying $A \subsetneq B$, a function $f$ is strictly monotone if it satisfies $f(A) < f(B)$; here $\subsetneq$ denotes the strict subset relation.

Theorem 2.1

Let $f$ be a monotone function on the nested sets $\mathcal{X}_1 \subsetneq \mathcal{X}_2 \subsetneq \cdots \subsetneq \mathcal{X}_q$. Then there exists a nondecreasing function $\tau$ such that $\tau \circ f(\mathcal{X}_j) = j$ for all $j = 1, \dots, q$.

Proof. We prove the statement by explicitly constructing such a function. Let $\phi_j = f(\mathcal{X}_j)$. Then obviously $\phi_1 < \phi_2 < \cdots < \phi_q$. Define an increasing step function $\tau$ such that

$\tau(t) = j$ for $\phi_j \leq t < \phi_{j+1}$, with $\tau(t) = 0$ for $t < \phi_1$ and $\tau(t) = q$ for $t \geq \phi_q$.

The step function $\tau$ is illustrated in Figure 1. It is then straightforward to see that $\tau \circ f(\mathcal{X}_j) = \tau(\phi_j) = j$ for all $j$. Further, $\tau$ is nondecreasing.
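In code, the construction in the proof amounts to replacing each value of the strictly increasing sequence by its rank. A minimal sketch, assuming the values are already distinct and sorted (the function name is ours):

    import numpy as np

    def step_function(phi):
        """Return tau for strictly increasing values phi_1 < ... < phi_q:
        tau(t) counts how many phi_j satisfy phi_j <= t, so tau(phi_j) = j
        and tau is nondecreasing, as in Theorem 2.1."""
        phi = np.asarray(phi, dtype=float)
        if np.any(np.diff(phi) <= 0):
            raise ValueError("phi must be strictly increasing")
        return lambda t: int(np.searchsorted(phi, t, side="right"))

    # tau = step_function([0.2, 0.5, 1.3])
    # tau(0.2) -> 1, tau(0.7) -> 2, tau(2.0) -> 3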

Consider two collections of nested sets $\mathcal{X}_1 \subsetneq \cdots \subsetneq \mathcal{X}_q$ and $\mathcal{Y}_1 \subsetneq \cdots \subsetneq \mathcal{Y}_q$. We are interested in testing the null hypothesis of the equivalence of two monotone functions $f_1$ and $f_2$ defined on the nested sets:

$H_0: f_1(\mathcal{X}_j) = f_2(\mathcal{Y}_j)$ for all $j$  vs.  $H_1: f_1(\mathcal{X}_j) \neq f_2(\mathcal{Y}_j)$ for some $j$.

By Theorem 2.1, we have nondecreasing step functions $\tau_1$ and $\tau_2$ corresponding to $f_1$ and $f_2$, respectively. We use the pseudo-metric

$D = \sup_t |\tau_1(t) - \tau_2(t)|$

as a test statistic that measures the discrepancy between the two monotone functions. The use of the maximum removes the problem of multiple comparisons. The distribution of $D$ can be determined combinatorially.

Theorem 2.2

$P(D \geq d) = 1 - \dfrac{A_{q,q}}{\binom{2q}{q}}$, where $A_{u,v}$ satisfies $A_{u,v} = A_{u-1,v} + A_{u,v-1}$ with the boundary condition $A_{u,v} = 0$ outside the band $|u - v| < d$ and the initial condition $A_{0,0} = 1$.

Proof. Let $\phi_j = f_1(\mathcal{X}_j)$ and $\psi_j = f_2(\mathcal{Y}_j)$. From Theorem 2.1, the sequences $\tau_1(\phi_1), \dots, \tau_1(\phi_q)$ and $\tau_2(\psi_1), \dots, \tau_2(\psi_q)$ are monotone and integer-valued between 1 and $q$ without any gap. Perform the permutation test on the combined sorted sequence of the $\phi_j$ and $\psi_j$. If we identify each $\phi_j$ as moving one grid step to the right and each $\psi_j$ as moving one grid step up, each permutation is mapped to a walk from $(0,0)$ to $(q,q)$. There are $\binom{2q}{q}$ such paths in total, and each permutation is mapped to a walk uniquely.

Figure 2: In this example, $A_{u,v}$ is computed within the boundary (dotted red line) from $(0,0)$ to $(3,3)$.

Note that $D < d$ if and only if the corresponding walk stays strictly within the band $|u - v| < d$. Let $A_{u,v}$ be the total number of paths from $(0,0)$ to $(u,v)$ within the band (see Fig. 2 for an illustration). Then it follows that $A_{u,v}$ is given iteratively as $A_{u,v} = A_{u-1,v} + A_{u,v-1}$ with $A_{0,0} = 1$ within the band. Thus $P(D \geq d) = 1 - A_{q,q} / \binom{2q}{q}$.

For example, $A_{u,v}$ is computed sequentially as follows (Fig. 2). We start at the bottom-left corner $(0,0)$ and move right or up toward the upper-right corner $(q,q)$, filling each entry inside the band with the sum of the entry to its left and the entry below it. The probability $P(D \geq d)$ is then obtained as $1 - A_{q,q}/\binom{2q}{q}$. The computational complexity of the combinatorial inference is $\mathcal{O}(q^2)$ for computing $A_{q,q}$ over the grid, while the permutation test is exponential.
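A minimal sketch of this dynamic program, as reconstructed here from Theorem 2.2; the band convention $|u - v| < d$ and the function name are our reading of the result, so treat it as illustrative rather than as the authors' reference implementation:

    from math import comb

    def exact_pvalue(d, q):
        """P(D >= d): count the monotone lattice paths from (0,0) to (q,q)
        that stay inside the band |u - v| < d and compare with the total
        number of paths, comb(2q, q)."""
        A = [[0] * (q + 1) for _ in range(q + 1)]  # exact integer path counts
        A[0][0] = 1
        for u in range(q + 1):
            for v in range(q + 1):
                if (u, v) == (0, 0) or abs(u - v) >= d:
                    continue                       # outside the band: zero paths
                A[u][v] = (A[u - 1][v] if u > 0 else 0) + (A[u][v - 1] if v > 0 else 0)
        return 1 - A[q][q] / comb(2 * q, q)

    # Toy example on a 3 x 3 grid: P(D >= 2) = 1 - 8/20 = 0.6
    # print(exact_pvalue(2, 3))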

3 Inference on minimum spanning trees

As a specific example of how to apply the method, we show how to test for shape differences in the minimum spanning trees (MST) of graphs. MSTs are often used to speed up computation and to simplify complex graphs into simpler trees [9]. Wu et al. used the MST in edge-based segmentation of lesions in brain MRI [9]. Stam et al. used the MST as an unbiased skeleton representation of complex brain networks [6]. Existing statistical inference methods on MSTs rely on graph-theoretic features such as the average path length. Since the probability distribution of such features is often not well known, the permutation test is frequently used, which is not necessarily exact or effective. Here, we apply the proposed combinatorial inference to testing the shape differences of MSTs.

For a graph with $p$ nodes, the MST is often constructed using Kruskal's algorithm, a greedy algorithm with $\mathcal{O}(|E| \log |E|)$ run time for $|E|$ edges. The algorithm starts with the edge with the smallest weight and then adds the edge with the next smallest weight. This sequential process continues, while avoiding loops, and generates a spanning tree with the smallest total edge weight. Thus, the edge weights in the MST correspond to the order in which the edges are added during the construction of the MST. Let $M_1$ and $M_2$ be the MSTs corresponding to connectivity matrices $C_1$ and $C_2$. We are interested in testing the hypotheses

$H_0: M_1 = M_2$ vs. $H_1: M_1 \neq M_2$.

The statistic for testing $H_0$ vs. $H_1$ is as follows. Since there are $p$ nodes, there are $p - 1$ edge weights in each MST. Let $w^1_{(1)} \leq \cdots \leq w^1_{(p-1)}$ and $w^2_{(1)} \leq \cdots \leq w^2_{(p-1)}$ be the sorted edge weights of $M_1$ and $M_2$, respectively. These correspond to the order in which the MSTs are constructed in Kruskal's algorithm: $w^1_{(j)}$ and $w^2_{(j)}$ are the edge weights obtained in the $j$-th iteration. Let $\tau_1$ and $\tau_2$ be the monotone step functions that map the edge weight obtained in the $j$-th iteration to the integer $j$; $\tau_1$ and $\tau_2$ can be interpreted as the number of nodes added into the MST in the $j$-th iteration. The test statistic $D = \sup_t |\tau_1(t) - \tau_2(t)|$ and the exact $p$-value of Theorem 2.2 are then applied with $q = p - 1$.
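A sketch of how the two MSTs and the statistic could be assembled from connectivity matrices; the use of scipy.sparse.csgraph.minimum_spanning_tree and the helper names are our choices, not the paper's own code:

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def sorted_mst_weights(C):
        """Sorted edge weights of the MST of a dense weight matrix C
        (p - 1 weights for p nodes); this is the order in which Kruskal's
        algorithm adds the edges. Zero entries are treated as absent edges."""
        T = minimum_spanning_tree(C).toarray()
        return np.sort(T[T > 0])

    def mst_statistic(C1, C2):
        """Maximum gap between the two step functions that count, at each
        edge weight t, how many MST edges have weight <= t: the
        pseudo-metric D applied to the two MSTs."""
        w1, w2 = sorted_mst_weights(C1), sorted_mst_weights(C2)
        grid = np.sort(np.concatenate([w1, w2]))      # all jump locations
        f1 = np.searchsorted(w1, grid, side="right")  # step function of MST 1
        f2 = np.searchsorted(w2, grid, side="right")  # step function of MST 2
        return int(np.max(np.abs(f1 - f2)))

The observed value d = mst_statistic(C1, C2), together with q = p - 1, can then be passed to the lattice-path computation sketched in Section 2 to obtain the exact p-value.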

Figure 3: Randomly simulated correlation matrices with 0, 4 and 5 modules. The plot shows the number of nodes as a function of the largest edge weight added during MST construction with Kruskal's algorithm, for networks with 4, 5, 8, 10 and 0 modules.

4 Validation and comparisons

For validation and comparison, we simulated random graphs with the ground truth. We used $p = 40$ nodes and a small number of images per group, so that all possible permutations can be enumerated exactly, making the permutation test manageable. Each entry of the data matrix $X$ is simulated as standard normal, i.e., $X_{ij} \sim N(0, 1)$, or equivalently each column of $X$ is multivariate normal with zero mean and the identity matrix as the covariance.

Table 1: Simulation results given in terms of $p$-values for the combinatorial method and for the permutation tests sampling 0.1%, 0.5% and 1% of all possible permutations, for the comparisons 0 vs. 0, 4 vs. 4, 4 vs. 5, 4 vs. 8 and 5 vs. 10 modules. In the case of no network differences (0 vs. 0 and 4 vs. 4), higher $p$-values are better. In the case of network differences (4 vs. 5, 4 vs. 8 and 5 vs. 10), smaller $p$-values are better.

Let $x_k$ be the measurement vector at the $k$-th node, i.e., the $k$-th column of $X$. So far, there is no statistical dependency between the nodes of $X$. We add the following block modular structure to $X$. We assume there are $m$ modules and each module consists of $p/m$ nodes. Then, for the $k$-th node in the $c$-th module, we simulate

$x_k = y_c + \epsilon$,   (1)

where $y_c$ is a signal shared by every node in the $c$-th module and $\epsilon$ is independent noise. Subsequently, the connectivity matrix $C$ is given by the sample correlation matrix of the columns of $X$. This introduces the block modular structure in the correlation network (Fig. 3). For 40 modules, each module consists of just one node, which is essentially a network with 0 modules.
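A hypothetical version of this simulation is sketched below; p = 40 comes from the text, while the number of images, the noise level and the Gaussian noise model are placeholder choices made for illustration:

    import numpy as np

    def simulate_modular_correlation(p=40, n_images=5, n_modules=4, sigma=0.1, seed=None):
        """Simulate an (n_images x p) data matrix whose nodes fall into
        n_modules blocks: nodes in the same module share a common signal
        plus independent noise, in the spirit of Eq. (1), which induces a
        block structure in the p x p correlation (connectivity) matrix."""
        rng = np.random.default_rng(seed)
        module = np.arange(p) % n_modules                    # module label of each node
        signal = rng.standard_normal((n_images, n_modules))  # one shared signal per module
        X = signal[:, module] + sigma * rng.standard_normal((n_images, p))
        return np.corrcoef(X, rowvar=False)                  # p x p correlation matrix

    # C4 = simulate_modular_correlation(n_modules=4, seed=0)
    # C5 = simulate_modular_correlation(n_modules=5, seed=1)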

Using (1), we simulated random networks with 4, 5, 8, 10 and 0 modules. For each pair of networks, we obtained the MSTs and computed the distance $D$ between them. We computed the $p$-value using the combinatorial method. For comparison, we performed the permutation tests by permuting the group labels and generating 0.1%, 0.5% and 1% of every possible permutation. The procedures were repeated 100 times and the average results are reported in Table 1.

In the case of no network differences (0 vs. 0 and 4 vs. 4), higher $p$-values are better. The combinatorial method and the permutation tests all performed well when there was no network difference. In the case of network differences (4 vs. 5, 4 vs. 8 and 5 vs. 10), smaller $p$-values are better. Here the combinatorial method performed far better than the permutation tests: none of the permutation tests detected the modular structure differences. The proposed combinatorial approach on the MST appears to be far more sensitive in detecting modular structure. The performance of the permutation test does not improve even when we sample 10% of all possible permutations; the permutation test does not converge rapidly as more permutations are sampled. The code for performing exact combinatorial inference, as well as the simulations, can be obtained from http://www.stat.wisc.edu/~mchung/twins.

Figure 4: Top: Correlation networks of MZ- and DZ-twins and the heritability index (HI). Bottom: Minimum spanning trees (MST) constructed by Kruskal's algorithm on 1 - correlation. Plot: The number of added nodes is plotted against the largest edge weight of the MST for MZ- (solid red) and DZ-twins (dotted black) during the MST construction. The pseudo-metric $D$ is 46 at edge weight 0.75 (corresponding to correlation 0.25).

5 Application to twin DTI study

Subjects. The method is applied to 111 twin pairs of diffusion weighted images (DWI). Participants were part of the Wisconsin Twin Project [2]. 58 monozygotic (MZ) and 53 same-sex dizygotic (DZ) twin pairs were used in the analysis. We are interested in the extent of the genetic influence on the structural brain networks of these participants and in determining its statistical significance by comparing MZ- and DZ-twins. Twins were scanned in a 3.0 Tesla GE Discovery MR750 scanner with a 32-channel receive-only head coil. Diffusion tensor imaging (DTI) was performed using a three-shell diffusion-weighted, spin-echo, echo-planar imaging sequence. A total of 6 non-DWI (b = 0 s/mm²) and 63 DWI volumes with non-collinear diffusion encoding directions were collected at b = 500, 800 and 2000 s/mm² (9, 18 and 36 directions, respectively). Other parameters were TR/TE = 8575/76.6 ms, parallel imaging, flip angle = 90°, and isotropic 2 mm resolution (128 × 128 matrix with a 256 mm field of view).

Image processing.

FSL was used to correct for eddy current related distortions, head motion and field inhomogeneity. Estimation of the diffusion tensors at each voxel was performed using non-linear tensor estimation in CAMINO. DTI-TK was used for constructing the study-specific template [3]. Spatial normalization was performed for tensor-based white matter alignment using a non-parametric diffeomorphic registration method [11]. Each subject's tractography was constructed in the study template space using the streamline method, and tracts were terminated at an FA value less than 0.2 and a deflection angle greater than 60 degrees [4]. Anatomical Automatic Labeling (AAL) parcellations were used to construct the structural connectivity matrix, which counts the number of white matter fiber tracts between parcellations [8].

Exact combinatorial inference. From the individual structural connectivity matrices, we computed pairwise twin correlations at each edge in each group using Spearman's correlation. The resulting twin correlation matrices $\rho_{MZ}$ and $\rho_{DZ}$ (edge weights in Fig. 4) were used to compute the heritability index (HI) through Falconer's formula, which determines the amount of variation due to genetic influence in a population: $HI = 2(\rho_{MZ} - \rho_{DZ})$ [1]. Although HI provides a quantitative measure of heritability, it is not clear whether it is statistically significant. We tested the significance of HI by testing the equality of $\rho_{MZ}$ and $\rho_{DZ}$. We used $1 - \rho_{MZ}$ and $1 - \rho_{DZ}$ as edge weights in finding the MSTs using Kruskal's algorithm; this is equivalent to using $\rho_{MZ}$ and $\rho_{DZ}$ as edge weights in finding the maximum spanning trees. The plot in Fig. 4 shows how the number of nodes increases as edges are added during the MST construction. At the same edge weight, MZ-twins are more connected than DZ-twins in the MST. This implies that MZ-twins are connected less at lower correlations and more at higher correlations.
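Falconer's formula and the subsequent MST comparison can be sketched as follows; the variable names are ours, and mst_statistic and exact_pvalue refer to the illustrative helpers given earlier:

    import numpy as np

    def heritability_index(rho_mz, rho_dz):
        """Falconer's formula HI = 2 * (rho_MZ - rho_DZ), applied edgewise
        to the MZ- and DZ-twin correlation matrices."""
        return 2.0 * (np.asarray(rho_mz) - np.asarray(rho_dz))

    # HI = heritability_index(rho_mz, rho_dz)
    # MSTs on 1 - correlation (equivalently, maximum spanning trees on the
    # correlations themselves), followed by the exact combinatorial p-value:
    # d = mst_statistic(1.0 - rho_mz, 1.0 - rho_dz)
    # p_value = exact_pvalue(d, q=rho_mz.shape[0] - 1)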

Results. At edge weight 0.75, which is where the maximum gap occurs and which corresponds to correlation 0.25, the observed distance was $D = 46$. The corresponding $p$-value, computed combinatorially using Theorem 2.2, indicates a statistically significant difference between the MZ- and DZ-twin networks. The brain regions that contribute the most genetically can also be localized by identifying the nodes of the connections around edge weight 0.75. The following AAL regions are identified as regions of statistically significant MST differences: Frontal-Mid-L, Frontal-Mid-R, Frontal-Inf-Oper-R, Rolandic-Oper-R, Olfactory-L, Frontal-Sup-Medial-L, Frontal-Sup-Medial-R, Occipital-Inf-L, SupraMarginal-R, Precuneus-R, Caudate-L, Putamen-L, Temporal-Pole-Sup-L, Temporal-Pole-Sup-R, Temporal-Pole-Mid-R, Cerebelum-Crus2-R, Cerebelum-8-R, Vermis-8 (Fig. 5). The identified frontal and temporal regions overlap with those reported in a previous MRI-based twin study [7].

Figure 5: Regions corresponding to the maximum difference between the MSTs of MZ- and DZ-twins.

6 Conclusion and discussion

We presented a novel exact combinatorial inference method that outperforms the traditional permutation test. The main innovation is that the method works for any monotone feature: given any two sets of measurements, all that is required is to sort them and apply the method. Thus, the method can be used in a wide variety of applications. In this study, we have shown how to apply the method to testing shape differences in the MSTs of structural brain networks. The method was further utilized in localizing the brain regions influencing such differences.

In graphs, there are many monotone functions, including the number of connected components, the total node degree and the sorted eigenvalues of the graph Laplacian. These monotone functions can all be used in the proposed combinatorial inference. The proposed method is equally applicable to a wide variety of monotonically decreasing graph features, such as the size of the largest connected component [1]. If $f$ is monotonically decreasing, $-f$ is monotonically increasing; thus the same method applies to decreasing functions. The application of other features is left for future studies.
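As a concrete example of such a feature, the sketch below tracks the number of connected components over a sequence of edge-weight thresholds; the thresholding convention and the use of scipy are our illustrative choices:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def n_components_over_thresholds(W, thresholds):
        """For each threshold t, keep only the edges of the weighted adjacency
        matrix W with weight >= t and count the connected components. As t
        increases the graph loses edges, so the count is monotonically
        nondecreasing and can be fed directly to the proposed method."""
        counts = []
        for t in thresholds:
            A = csr_matrix(np.where(W >= t, W, 0.0))          # thresholded graph
            n_comp, _ = connected_components(A, directed=False)
            counts.append(n_comp)
        return np.array(counts)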

Acknowledgements

This work was supported by NIH grants R01 EB022856, R01 MH101504, P30 HD003352, U54 HD09025 and UL1 TR002373. We thank Nagesh Adluru, Yuan Wang, Andrey Gritsenko (University of Wisconsin-Madison), Hyekyoung Lee (Seoul National University), Zachery Morrissey (University of Illinois-Chicago), and Sourabh Palande and Bei Wang (University of Utah) for their valuable support.

References

  • [1] M.K. Chung, V. Vilalta-Gil, H. Lee, P.J. Rathouz, B.B. Lahey, and D.H. Zald. Exact topological inference for paired brain networks via persistent homology. In Information Processing in Medical Imaging (IPMI), Lecture Notes in Computer Science, volume 10265, pages 299–310, 2017.
  • [2] H.H. Goldsmith, K. Lemery-Chalfant, N.L. Schmidt, C.L. Arneson, and C.K. Schmidt. Longitudinal analyses of affect, temperament, and childhood psychopathology. Twin Research and Human Genetics, 10:118–126, 2007.
  • [3] J.L. Hanson, N. Adluru, M.K. Chung, A.L. Alexander, R.J. Davidson, and S.D. Pollak. Early neglect is associated with alterations in white matter integrity and cognitive functioning. Child Development, 84:1566–1578, 2013.
  • [4] M. Lazar, D.M. Weinstein, J.S. Tsuruda, K.M. Hasan, K. Arfanakis, M.E. Meyerand, B. Badie, H. Rowley, V. Haughton, A. Field, B. Witwer, and A.L. Alexander. White matter tractography using tensor deflection. Human Brain Mapping, 18:306–321, 2003.
  • [5] M. Neykov, J. Lu, and H. Liu. Combinatorial inference for graphical models. arXiv preprint arXiv:1608.03045, 2016.
  • [6] C.J. Stam, P. Tewarie, E. van Dellen, E.C.W. van Straaten, A. Hillebrand, and P. Van Mieghem. The trees and the forest: Characterization of complex brain networks with minimum spanning trees. International Journal of Psychophysiology, 92:129–138, 2014.
  • [7] P.M. Thompson, T.D. Cannon, K.L. Narr, T. van Erp, V.P. Poutanen, M. Huttunen, J. Lonnqvist, C.G. Standertskjold-Nordenstam, J. Kaprio, and M. Khaledy. Genetic influences on brain structure. Nature Neuroscience, 4:1253–1258, 2001.
  • [8] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15:273–289, 2002.
  • [9] Z. Wu and R. Leahy. An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15:1101–1113, 1993.
  • [10] A. Zalesky, A. Fornito, I.H. Harding, L. Cocchi, M. Yücel, C. Pantelis, and E.T. Bullmore. Whole-brain anatomical networks: Does the choice of nodes matter? NeuroImage, 50:970–983, 2010.
  • [11] H. Zhang, P.A. Yushkevich, D.C. Alexander, and J.C. Gee. Deformable registration of diffusion tensor MR images with explicit orientation optimization. Medical Image Analysis, 10:764–785, 2006.