1 Motivation and Introduction
It is of substantial interest to have valid statistical methods for inference on covariance matrices available, for at least two major reasons. The first one is that a treatment effect may indeed best be described by a particular configuration of scale or covariance parameters – not by a mean difference. The second reason corresponds to a more indirect purpose, namely that the main interest of the investigation may be described by a location change under alternative, but some of the available inference methods for location effects rely on assumptions regarding variances or covariances that need to be assessed reliably. In either situation, a statistical test about hypotheses that are formulated in terms of covariance matrices is necessary. From a methodological point of view, such a test shall not make too many restrictive assumptions itself, for example regarding underlying distributions. Furthermore, it shall perform well for moderate sample sizes, where clearly the term moderate will have to be seen in connection with the number of parameters effectively being tested.
Considering the central importance and the widespread need for hypothesis tests on covariance matrices, it may come as a surprise that a general and unifying approach to this task has not been developed thus far. There are several tests for specialized situations, such as testing equality of variances or even of covariance matrices. Many of these approaches will be mentioned below. However, they typically address only one particular question, and they often rely on restrictive distributional assumptions, such as normality (e.g. in Box (1953) and Anderson (1984)), elliptical distributions (e.g. in Muirhead (1982) and Fang and Zhang (1990)), or conditions on the characteristic functions (e.g. in Gupta and Xu (2006)).
One exception is the test of Zhang and Boos (1993), which theoretically allows for testing a multitude of hypotheses without restrictive distributional conditions. Unfortunately, the small and medium sample performance of this procedure is comparatively poor, in particular regarding power. Their technique to improve the performance requires a more restrictive null hypothesis that additionally postulates equality of certain moments. This makes the approach somewhat difficult to use in practice, as a rejection does not imply that the covariance matrices are unequal.
The goal of the present article is to introduce a very general approach to statistical hypothesis testing where the hypotheses are formulated in terms of covariance matrices. This includes as special cases, for example, hypotheses formulated using their traces, hypotheses of equality of variances or of covariance matrices, and hypotheses in which a covariance matrix is assumed to have particular entries. The test procedures are based on a resampling approach whose asymptotic validity is shown theoretically, while the actual finite sample performance has been investigated by means of extensive simulation studies. Analysis of a real data example illustrates the application of the proposed methods.
In the following section, the statistical model and (examples of) different null hypotheses that can be investigated using the proposed approach are introduced. Thereafter, the asymptotic distributions of the proposed test statistics are derived (Section 3) and proven to be regained by two different resampling strategies (Section 4). The simulation results regarding type-I error control and power are discussed in Section 5, while an illustrative analysis of EEG data is conducted in Section 6. All proofs are deferred to a technical Appendix.
2 Statistical Model and Hypotheses
We consider a general semiparametric model given by independent -dimensional random vectors
Here, the index refers to the treatment group and to the individual, on which -dimensional observations are measured. In this setting, denotes the -th group mean while the residuals are assumed to be centered and i.i.d. within each group, with finite fourth moment . Beyond this, no other distributional assumptions are presumed. In particular, the covariance matrices may be arbitrary and do not even have to be positive definite. For convenience, we aggregate the individual vectors into as well as . Stacking the covariance matrices into the -dimensional vector
containing the upper triangular entries of we formulate hypotheses in terms of the pooled covariance vector as
Here, denotes a suitable hypothesis matrix of interest, and is a fixed vector. It should be noted that we do not assume that is a contrast matrix, let alone a projection matrix. This differs from the frequently used formulation of hypotheses about mean vectors in MANOVA designs (Konietschke et al. (2015), Friedrich et al. (2017), Bathke et al. (2018)), where one can usually work with a unique projection matrix.
However, working with simpler contrast matrices (as we do) can help to save considerable computation time, see Remark 2.1 below.
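Before turning to concrete hypotheses, the vectorization underlying (2) can be illustrated with a short sketch. The following NumPy snippet (our own illustration; the paper's simulations use R, and the function name `vech_upper` is ours) stacks the upper-triangular entries of a symmetric matrix into a vector:

```python
import numpy as np

def vech_upper(S):
    # Stack the upper-triangular entries (including the diagonal) of a
    # symmetric d x d matrix row by row into a d(d+1)/2-vector.
    iu = np.triu_indices(S.shape[0])
    return S[iu]

S = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
v = vech_upper(S)   # 6 entries: 4.0, 1.0, 0.5, 3.0, 0.2, 2.0
```

A 3 x 3 covariance matrix thus yields a 6-dimensional parameter vector, which is the object the hypothesis matrix acts on.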
In order to discuss some particular hypotheses included within the general setup (2), we fix the following notation:
Let be the -dimensional unit matrix, the -dimensional column vector of 1’s and the -dimensional matrix of 1’s. Furthermore, denotes the -dimensional centering matrix, while and denote direct sum and Kronecker product, respectively. Then the following null hypotheses of interest are covered:
(a) Testing equality of variances: For a univariate outcome with , testing the null hypothesis
of equal variances is included within (2) by setting
and . Hypotheses of this type have been studied by Bartlett and Rajalakshman (1953) as well as Boos and Brownie (2004), Gupta and Xu (2006), and Pauly (2011), among others. In the special case of a two-armed design with , this is also the null hypothesis inferred by the popular F-ratio test which, however, is known to be sensitive to deviations from normality (Box, 1953).
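For example (a), the centering matrix can serve as the hypothesis matrix: applied to the vector of group variances, it vanishes exactly under equal variances. A small sketch with illustrative values (our own notation):

```python
import numpy as np

a = 3                                   # number of groups
I_a = np.eye(a)                         # a-dimensional unit matrix
J_a = np.ones((a, a))                   # a x a matrix of 1's
P_a = I_a - J_a / a                     # centering matrix: symmetric, idempotent

variances_null = np.array([2.0, 2.0, 2.0])   # equal variances (H0 holds)
variances_alt = np.array([2.0, 2.0, 5.0])    # one group deviates

r0 = P_a @ variances_null               # zero vector under H0
r1 = P_a @ variances_alt                # nonzero under the alternative
```

The idempotence and symmetry of the centering matrix are exactly the properties referenced in the remark on hypothesis matrices below example (f).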
(b) Testing for a given covariance matrix: Let be a given covariance matrix. It may represent, for example, an autoregressive or compound symmetry covariance structure. For , our general formulation also covers testing the null hypothesis
by setting and .
Hypotheses of this kind have been studied by Gupta and Xu (2006).
(c) Testing homogeneity of covariance matrices: More general than in (a), let and for arbitrary . Then (2) describes the null hypothesis
Beyond the above choices, (2) even covers hypotheses about linear functions of covariance matrices.
To this end, set and consider the following examples:
(d) Traces as effect measures: Suppose we are interested in the total variance of all components as a univariate effect measure for each group. This may be an advantageous approach in terms of power, as illustrated in the data example analysis below. Then, their equality
can be tested by choosing and .
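To see how a trace hypothesis fits formulation (2), one can pick out the diagonal positions of the vectorized covariance matrix with a 0/1 row vector and contrast the result across groups. A sketch under our own (row-wise upper-triangular) ordering of the covariance vector:

```python
import numpy as np

d = 3
iu = np.triu_indices(d)
t = (iu[0] == iu[1]).astype(float)      # 0/1 row picking the d variances in vech(Sigma)

Sigma1 = np.diag([1.0, 2.0, 3.0])       # trace 6
Sigma2 = np.diag([2.0, 2.0, 2.0])       # trace 6 as well

v = np.concatenate([Sigma1[iu], Sigma2[iu]])   # pooled covariance vector, 2 groups
H = np.concatenate([t, -t])                    # one-row contrast: tr(Sigma1) - tr(Sigma2)
effect = H @ v                                 # 0 exactly under equal traces
```

Note that the two covariance matrices here differ, yet the trace hypothesis is satisfied, which illustrates that the trace null is strictly larger than equality of covariance matrices.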
(e) Testing for a given trace: Consider the situation of Example (d) with just one group . We then may be interested in testing for a given value of the trace, i.e.
To this end, we choose and .
(f) Higher Way Layouts:
Moreover, we can even infer hypotheses about variances, covariance matrices, or traces in arbitrarily crossed multivariate layouts by splitting up indices. For example, consider a two-way cross-classified design with fixed factors and , whose levels are and , respectively. Assume that the interest lies in measuring, for example, their effect on the total variance, that is, the trace (a similar approach works for variances and covariances). Setting , we observe subjects for each factor level combination . To formulate hypotheses of no main trace effects for each factor, as well as hypotheses of no interaction trace effects, we write with the usual side conditions . Here, for example, can be interpreted as the part of the total variance explained by factor A under factor level . Then, the choice and leads to a test for no main effect of factor (measured in the above trace effects),
while and result in the hypothesis of no interaction (again measured in trace effects) between the factors and ,
Although in each of the scenarios above it is possible to find an idempotent symmetric hypothesis matrix , the option allows for matrices which are neither symmetric nor idempotent. From a theoretical point of view, this does not matter. From a practical point of view, however, the choice of the hypothesis matrix may have a great effect on computation time. To this end, we allow with . For example, could also be formulated by . It is clear that with this approach all our results remain applicable. In our simulations, it was thereby possible to save of the computing time, depending on the hypothesis and the setting. For dimension , the time savings would amount to , which shows the value of this approach.
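The computational point can be illustrated with equality of a components: the a x a centering matrix and the (a-1) x a matrix of successive differences describe exactly the same null hypothesis, but the latter has one row fewer, which shrinks every matrix product involved. A sketch (our own example matrices):

```python
import numpy as np

a = 4
P_a = np.eye(a) - np.ones((a, a)) / a           # a x a projection, rank a - 1
D = np.eye(a - 1, a) - np.eye(a - 1, a, k=1)    # (a-1) x a successive differences

v_null = np.full(a, 3.0)                        # all components equal
v_alt = np.array([1.0, 2.0, 3.0, 4.0])          # not all equal

# Both matrices vanish on exactly the same null hypothesis:
same_null = np.allclose(P_a @ v_null, 0) and np.allclose(D @ v_null, 0)
both_reject = (not np.allclose(P_a @ v_alt, 0)) and (not np.allclose(D @ v_alt, 0))
```

For hypotheses about full covariance vectors, the row dimension being contrasted grows with d(d+1)/2, so trimming redundant rows pays off quickly.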
In the subsequent sections we develop testing procedures for in (2) and thus for all given examples (a)–(f) above. The basic idea is to use a quadratic form in the vector
of estimated and centered effects. For ease of presentation, and because of their widespread use in our setting (with ), we focus on the empirical covariance matrices
as estimators for , where . Other choices, as, for example, surveyed in Duembgen et al. (2013), may be part of future research.
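The group-wise estimation step can be sketched as follows (NumPy, with the usual n-1 denominator; the paper's exact normalization is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))                # n observations of one group

Xc = X - X.mean(axis=0)                    # centered observations
Sigma_hat = Xc.T @ Xc / (n - 1)            # empirical covariance matrix
iu = np.triu_indices(d)
v_hat = Sigma_hat[iu]                      # estimated covariance vector, length d(d+1)/2
```

Pooling the group-wise vectors v_hat then yields the estimator of the pooled covariance vector that the test statistics are built on.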
Thereby, inverting the resulting test procedures will lead to confidence regions
about the effect measures of interest. For example, in case (e), we may obtain confidence intervals for the unknown trace.
3 The Test Statistics and their Asymptotics
In order to obtain the mentioned inference procedures which are formulated using quadratic forms, we first have to study the asymptotic behaviour of the normalized dimensional vector , where is the pooled empirical covariance estimator of . For convenience, we thereby assume throughout that the usual asymptotic sample size condition holds, namely, as :
As holds for all , we have if and only if . Under this framework, we obtain the first preliminary result towards the construction of proper test procedures.
Suppose Assumption (A1) holds. Then, as , we have convergence in distribution
where and for .
Together with a consistent estimator for (all or certain parts of) , this result allows us to develop asymptotic tests for the null hypothesis (2). Thereby, potential test statistics may lean on well-known quadratic forms used for mean-based MANOVA analyses in heteroscedastic designs (Konietschke et al. (2015), Bathke et al. (2018)). In particular, two natural candidates are given by test statistics of ANOVA- or Wald-type:
Here, denotes the Moore-Penrose-inverse of a matrix , and with
is the empirical estimator for . Moreover, denotes the centered version of observation in group .
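In code, the two quadratic forms might look as follows. This is a sketch only: scaling constants vary across the literature, and the paper's exact normalization is not reproduced here.

```python
import numpy as np

def wald_type(v_hat, H, zeta, Sigma_hat, N):
    # Wald-type statistic: quadratic form weighted with the
    # Moore-Penrose inverse of H Sigma_hat H'.
    r = H @ v_hat - zeta
    return N * r @ np.linalg.pinv(H @ Sigma_hat @ H.T) @ r

def anova_type(v_hat, H, zeta, N):
    # ANOVA-type statistic: unweighted quadratic form in H v_hat - zeta.
    r = H @ v_hat - zeta
    return N * r @ r

# Toy check: both statistics vanish when the null holds exactly.
H = np.array([[1.0, -1.0]])
v_hat = np.array([2.0, 2.0])
zeta = np.zeros(1)
Sigma_hat = np.eye(2)
w = wald_type(v_hat, H, zeta, Sigma_hat, 100)    # 0 under H0
a = anova_type(v_hat, H, zeta, 100)              # 0 under H0

v2 = np.array([3.0, 1.0])
w2 = wald_type(v2, H, zeta, Sigma_hat, 100)
a2 = anova_type(v2, H, zeta, 100)
```

The Wald-type form standardizes by the estimated covariance of the contrast, which is what makes it pivotal but also rank-sensitive; the ANOVA-type form avoids the (pseudo-)inversion at the cost of a non-pivotal limit.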
Another possibility would be to transfer the modified ANOVA-type statistic from Friedrich and Pauly (2017), which has also been recommended for heteroscedastic MANOVA, to the present setup. However, in our setting, the modified ANOVA-type statistic did not show better finite sample properties in simulations; see the supplement for details.
Now, in order to define asymptotically correct tests for the null hypothesis based on the proposed test statistics (3) and (4), we have to analyze their asymptotic properties. To this end, we first prove consistency of the involved matrix .
The estimator converges almost surely to as , as well as to .
Under Assumption (A1) and the null hypothesis , the following results hold.
(a) The statistic defined in (3) has asymptotically a “weighted -distribution”, that is, as we have
This result allows the definition of a natural test procedure in the WTS given by . However, the additional condition , ensuring asymptotic correctness of , may not always be satisfied in practice.
Since this condition is not needed in the first part of the theorem, we merely focus on the ANOVA-type statistic in what follows. As its limit distribution depends on unknown quantities, we cannot calculate critical values from (5) directly. To this end, we employ resampling techniques for calculating proper critical values. We thereby focus on two resampling procedures: a parametric and a wild bootstrap as both methods have shown favorable finite sample properties in multivariate mean-based MANOVA (Konietschke et al. (2015), Friedrich et al. (2016), Friedrich and Pauly (2017), and Zimmermann et al. (2019)). That these procedures also lead to valid testing procedures in the current setting is proven in the subsequent section.
4 Resampling Procedures
To derive critical values for the non-pivotal , we consider two common kinds of bootstrap techniques: a parametric and a wild bootstrap, as applied for heteroscedastic MANOVA. Since we deal with covariances instead of expectations, some adjustments have to be made in order to prove their asymptotic correctness.
4.1 Parametric Bootstrap
To motivate our first resampling strategy, note that
follows from the proof of Theorem 3.1.
Thus, to mimic its limit and afterwards the structure of the test statistic, we generate bootstrap vectors for given realizations with estimators . We then calculate and , the parametric bootstrap versions of the estimators and , respectively. The next theorem ensures the asymptotic correctness of this approach.
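One bootstrap iteration draws normal vectors with the estimated covariance and recomputes the statistic. The sketch below does this for a single group with the identity as hypothesis matrix (illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 2
X = rng.multivariate_normal(np.zeros(d), [[2.0, 0.5], [0.5, 1.0]], size=n)
Sigma_hat = np.cov(X, rowvar=False)          # estimated covariance of the data
iu = np.triu_indices(d)

B = 500
boot_stats = np.empty(B)
for b in range(B):
    # Parametric bootstrap sample: centered normal with estimated covariance.
    Xb = rng.multivariate_normal(np.zeros(d), Sigma_hat, size=n)
    diff = (np.cov(Xb, rowvar=False) - Sigma_hat)[iu]
    boot_stats[b] = n * diff @ diff          # bootstrap ATS with H = identity

crit = np.quantile(boot_stats, 0.95)         # bootstrap critical value, level 5%
```

The observed statistic is then compared against `crit`; equivalently, the bootstrap p-value is the fraction of bootstrap statistics exceeding the observed one.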
Under Assumption (A1), the following results hold:
(a) For , the conditional distribution of , given the data, converges weakly to in probability. Moreover, we have in probability.
(b) The conditional distribution of , given the data, converges weakly to in probability. Moreover, we have in probability.
As a consequence, it is reasonable to calculate the bootstrap version of the ATS as
In fact, its conditional distribution weakly approximates the null distribution of the in probability, as stated below.
For each parameter and with , we have under Assumption (A1) that
where denotes the (un)conditional distribution of the test statistic when is the true underlying vector.
Denoting with the -quantile of the conditional distribution of given the data, we obtain as an asymptotic level test.
Beyond being necessary to carry out an asymptotic level test in the , resampling can also be used to enhance the finite sample properties of the . In fact, utilizing Theorem 4.1 shows that a parametric bootstrap version of the , say , is also asymptotically -distributed, under the assumption given in Theorem 3.1. Thus, it leads to a valid parametric bootstrap -test as long as for all .
4.2 Wild Bootstrap
As a second resampling approach, we consider the wild bootstrap. In contrast to its application in mean-based analyses, where the realizations are multiplied with convenient wild bootstrap multipliers, we have to multiply them with -dimensional random vectors of the kind , to ensure asymptotic correctness due to (6).
Specifically, generate i.i.d. random weights , independent of the data, with and . Common choices are for example standard distributed random variables or random signs.
Afterwards the wild bootstrap sample is defined as , where again centering is needed to capture the correct limit structure.
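The multiplier idea can be sketched as follows: the centered per-observation contributions to the covariance vector are multiplied by i.i.d. Rademacher weights. This is one generic variant of a wild bootstrap for covariance structures; the paper's exact construction involves details not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 40, 2
X = rng.normal(size=(n, d))
iu = np.triu_indices(d)

Xc = X - X.mean(axis=0)
contrib = np.array([np.outer(x, x)[iu] for x in Xc])   # n x d(d+1)/2 contributions
contrib_c = contrib - contrib.mean(axis=0)             # centered, as in the text

B = 500
boot_stats = np.empty(B)
for b in range(B):
    W = rng.choice([-1.0, 1.0], size=n)                # random signs: E(W)=0, Var(W)=1
    v_star = (W[:, None] * contrib_c).sum(axis=0) / n  # wild bootstrap covariance vector
    boot_stats[b] = n * v_star @ v_star                # wild bootstrap ATS with H = identity
crit = np.quantile(boot_stats, 0.95)
```

Unlike the parametric bootstrap, no distribution is imposed on the resampled data: only the signs of the centered contributions are randomized, so fourth-moment features of the sample are retained.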
Setting and , we obtain the following theorem.
Under Assumption (A1), the following results hold:
(a) For , the conditional distribution of , given the data converges weakly to in probability. Moreover it holds that in probability.
(b) The conditional distribution of , given the data converges weakly to
in probability. Moreover we have in probability.
The result gives rise to define a wild bootstrap counterpart given by
Similar to the parametric bootstrap, the next theorem guarantees the approximation of the original test statistic by its bootstrap version.
Under the assumptions of Theorem 4.2, we have convergence
Therefore, analogous to , we define as asymptotic level test, with denoting the quantile of the conditional distribution of given the data.
Similar wild bootstrap versions of the or comparable statistics can again be defined and used to calculate critical values if is fulfilled, see Section 5 below for the WTS and the supplement for another, less favorable, possibility.
5 Simulations
The above results are valid for large sample sizes. For an evaluation of the finite sample behavior of all methods introduced above, we have conducted extensive simulations regarding
their ability to keep the nominal significance level and
their power to detect certain alternatives in various scenarios.
In particular, we studied three different kinds of hypotheses:
Equal Covariance Matrices: with groups.
Equal Diagonal Elements: in the one sample case.
Trace Test: for a given and .
Each of these hypotheses can be formulated with a proper projection matrix , although in the last case. While and follow directly from Section 2, is an adaptation of .
For every hypothesis, we have simulated the two bootstrap methods based on the ANOVA-type statistic and , as well as the Wald-type statistic , . The latter ones are based on the parametric bootstrap version of the WTS, given by
and the wild bootstrap version given by
Moreover, the asymptotic version based upon the -approximation serves as another competitor.
In the special case of scenario we have also considered the tests from Zhang and Boos (1992, 1993) based on Bartlett’s test statistic, along with a so-called separate bootstrap as well as a pooled bootstrap to calculate critical values. We denote these tests by and . While the first is asymptotically valid under the same conditions as our tests, the pooled bootstrap procedure additionally requires equality .
To examine the impact of deviations from this condition, we have also considered a setting where the individual groups have different distributions and this assumption, therefore, is violated. Additionally, we have simulated Box's M-test, as it is the most popular test for scenario , although it requires normally distributed data. There are two common ways to determine critical values for this test (Box, 1949): utilizing a -approximation with degrees of freedom or an -approximation with estimated degrees of freedom. For completeness, we decided to simulate both.
On an abstract level, the hypotheses considered thus far also fall into the framework presented by Zhang and Boos (1993). However, they do not provide concrete test statistics that we could use for comparison purposes. Other existing tests, such as the one by Gupta and Xu (2006), rely on rather different model assumptions, which also makes a comparative evaluation difficult. All simulations were conducted by means of the R computing environment version 3.6.1 (R Core Team, 2019) with runs, 1000 bootstrap runs, and .
We considered -dimensional observations generated independently according to the model with and . Here, the marginals of were simulated independently from
a centered exponential distribution with parameter, i.e. ,
a standardized student -distribution with degrees of freedom, i.e.
a standard normal distribution, i.e.
For the covariance matrix, a compound symmetry structure was chosen. Results from further simulations with different covariance matrices can be found in the supplement.
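The data-generating scheme can be sketched as follows, here with centered exponential marginals and a compound symmetry matrix with unit variances and correlation 0.5 (illustrative parameter values; the paper's exact settings are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, rho = 5, 50, 0.5

# Compound symmetry: unit variances, common correlation rho.
Sigma = (1 - rho) * np.eye(d) + rho * np.ones((d, d))
L = np.linalg.cholesky(Sigma)

# Standardized marginals: Exp(1) shifted to mean 0 (its variance is already 1).
eps = rng.exponential(scale=1.0, size=(n, d)) - 1.0
X = eps @ L.T       # linear transform: covariance of X is (approximately) Sigma
```

Because the residuals are standardized before the linear transform, the covariance structure is controlled by Sigma while the skewness of the exponential marginals is retained, which is exactly what makes this setting hard for normality-based tests.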
Note that the chosen dimension of leads to an effective dimension of of the unknown parameter (i.e. covariance matrix) in each group. Hence in scenario , the vector defining the null hypothesis (2) actually consists of unknown parameters. To address this quite large dimension, we considered three different small to moderate total sample sizes of . Moreover, in scenario these were divided into two groups by setting and . Thus, we had between 20 and 120 independent observations to estimate the unknown covariance matrix in each group.
5.1 Type-I error
The following tables display the simulated type-I error rates for all these settings.
Box's M: .3705, .4310, .4699, .0866, .0863, .0969, .0523, .0517, .0534
In almost all simulation settings, the parametric bootstrap led to more conservative results, whereas the wild bootstrap was always more liberal.
Apart from that, the two bootstrap approaches performed rather comparably, with a slight advantage for the parametric bootstrap.
Overall, the results of the ATS were preferable to the WTS.
This matches the conventional wisdom that the WTS generally exhibits rather liberal behavior and requires large sample sizes to perform well. Moreover, the WTS requires the rank condition on , which is difficult to check in practice because of the special structure of .
In contrast, the ATS is capable of handling all these scenarios, including random variables originating from different distribution families, which is typically one of the most challenging settings when testing homogeneity.
Therefore, it remains to compare these tests with the procedures based on Bartlett's statistic.
First, we consider the situation with equal distributions, displayed in Table 1, a setting in which the conditions of all procedures are fulfilled.
For bigger sample sizes, the type-I error rate of the separate bootstrap seemed comparable to that of the ATS with the parametric bootstrap, while the pooled bootstrap was a bit more conservative. However, as already discussed in Zhang and Boos (1993), for small sample sizes the level of and was generally far too low. Here, seemed to perform slightly better, but at the expense of a smaller null hypothesis; moreover, for larger sample sizes this benefit in performance disappeared. Except for the exponential distribution, the ATS with the parametric bootstrap achieved an approximation comparable to the pooled bootstrap at considerably smaller sample sizes and, moreover, does not require the additional conditions on moments. The popular Box's M-test worked quite well under normality but showed poor results (type-I error rates of more than 20%) when this condition was violated. This sensitivity to the moment assumption (or its violation) may have the consequence in practice that small p-values are untrustworthy, independent of whether the or the F approximation was used. Despite these well-known difficulties, Box's M-test is still often used by practitioners.
Table 2 shows the performance in the more challenging case where the underlying distributions in the groups were from different families, and the assumption was therefore violated. For all of our test statistics, the use of mixed distributions had nearly no systematic influence: some values got better and others got worse. This is not surprising, because mixed distributions naturally make the situation more complicated, even for tests that allow for such circumstances. Similarly, there were variations in the type-I error rates of , but no clear tendencies.
While for the first two mixed distributions also seemed quite robust against the violation of the additional condition, for the case of a normal distribution in one group and an exponential distribution in the other the quality clearly decreased. Here, the error rate deviated more and more from the nominal level as the sample size increased. Despite the unexpectedly good performance for the first two mixed distributions, the behavior in the last setting indicates that this procedure has an essential disadvantage.
The last row shows results for Box's M-test, which again had error rates that render it almost useless in this case.
The resampling procedure used in Zhang and Boos (1993) occasionally encountered covariance matrices without full rank, especially for smaller sample sizes. This creates issues in the algorithm, because the determinant of these matrices is zero and the logarithm is then not defined. Regrettably, this situation was not discussed in the original paper, so we simply excluded these values. Admittedly, this constitutes a drastic user intervention in applying the bootstrap that also influences the conditional distribution. Nevertheless, it was necessary to use this adaptation in all our simulations containing these tests.
This effect can also occur in Box's M-test, but comparatively rarely, because no bootstrap is involved.
All in all, in scenario , and exhibited the best performance when considering all distributions and in particular small sample sizes. Given the more liberal results of and the more conservative results of , it would be natural to combine them to obtain a test that holds the level even better. This, however, would require the computation time of both tests together.
This combination would also be an option for scenario , because again the ATS with the parametric bootstrap was a bit too conservative and the ATS with the wild bootstrap slightly too liberal. In this case, however, the wild bootstrap had the best results, even for small sample sizes. Again, the type-I error rates of the WTS were too high, and the wild bootstrap did not yield sufficient improvement for smaller sample sizes; only the parametric bootstrap version showed considerable improvements for the WTS.
For scenario , all tests considered showed the same liberal behavior. This is because and take exactly the same values, as do and . For better comparability, both approaches, WTS and ATS, used the same observations as well as identical bootstrap samples in the simulations.
The presumable reason for the larger required sample size is the small rank of the hypothesis matrix. When calculating the trace of the covariance matrix, only a small part of the matrix is used, and therefore only a low proportion of the information contained in the data can be utilized.
The effect of other types of covariance matrices, considered in the appendix, was neither substantial nor systematic. Therein, we also investigated testing for a given covariance matrix. Here, only the type-I error rate of the ANOVA-type statistic with critical values obtained from the parametric bootstrap showed satisfying results.
To sum up, we only recommend the use of and , as both led to good results for comparably small sample sizes and are (asymptotically) valid without additional requirements on . The additional simulations in the appendix also confirm this. The fact that for some hypotheses, such as testing for homogeneity, was a bit conservative while was a bit liberal allows some freedom to choose a test according to personal preferences.
5.2 Power
For a power simulation, it is unfortunately not possible to merely shift the observations by a proper vector to control the distance from the null hypothesis. Instead, we multiplied the observation vectors by a proper diagonal matrix, given by for . This corresponds to a one-point alternative, which is known from testing expectation vectors to be challenging: a deviation in just one component is usually difficult to detect. In this way, , where was used to investigate small sample behavior and for moderate sample sizes, while the dimension was again , leading to . For computational reasons, and because of the performance under the null hypotheses described in the last section, we have only investigated the power of and , as well as and from Zhang and Boos (1993).
The wild bootstrap clearly showed higher power than the parametric one. However, this was due to the more liberal behavior, as seen in the previous section. Moreover, while for small sample size the pooled bootstrap had higher power than the separate bootstrap, the opposite was true for moderate sample sizes.
Overall, the ATS had substantially higher power for detecting a fixed alternative over the whole range of , independent of the chosen bootstrap technique, especially for small sample sizes. Other distributions, together with another hypothesis, are investigated in an additional simulation study in the appendix. In nearly every case these tests showed similar behavior for small and moderate sample sizes, with a clear benefit of the ATS, in particular with the wild bootstrap, which had quite high power. For the exponential distribution and moderate sample sizes, in some cases, due to its conservative behavior, needed values larger than in order to achieve the same power level as the two Bartlett test statistics. Moreover, for an exponential distribution together with a trend alternative, had slightly lower power than Bartlett's test for larger -values. Again, over all distributions and alternatives, the ATS with both bootstrap techniques had clear power advantages, especially with the wild bootstrap. Both bootstrap approaches showed convincing small sample behavior.
6 Illustrative Data Analysis
To demonstrate the use of the proposed methods, we have re-analyzed neurological data on cognitive impairments. In Bathke et al. (2018), the question was examined whether EEG or SPECT features are preferable to differentiate between three diagnoses of impairment: subjective cognitive complaints (SCC), mild cognitive impairment (MCI), and Alzheimer's disease (AD). The corresponding trial was conducted at the University Clinic of Salzburg, Department of Neurology, where 160 patients were diagnosed with either AD, MCI, or SCC, based on neuropsychological diagnostics as well as a neurological examination. This data set is included in the R package MANOVA.RM by Sarah Friedrich (2019). The following table contains the number of patients divided by sex and diagnosis.
For each patient, different kinds of EEG variables were investigated, which leads to variance and covariance parameters. As the male AD and SCC groups only contain and observations, respectively, an application of the WTS would not be possible.
In Bathke et al. (2018), the authors descriptively checked the empirical covariance matrices and judged that the assumption of equal covariance matrices between the different groups is rather unlikely to hold. However, this presumption had not been assessed statistically. To close this gap, we first test the null hypothesis of equal covariance matrices between the six groups using the newly proposed methods. Applying the ATS with parametric resp. wild bootstrap led to p-values of and .
In comparison, the Bartlett-S test of Zhang and Boos (1993) led to a -value of , potentially reflecting its low power observed in Section 5 and also by the authors themselves. Moreover, their Bartlett-P test for the smaller null hypothesis (additionally postulating equality of vectorized moments) showed a small -value of .
As a next step, we take the underlying factorial structure of the data into account and test, for illustrative purposes, the following hypotheses:
Homogeneity of covariance matrices between different diagnoses,
Homogeneity of covariance matrices between different sexes,
Equality of total variance between different diagnoses,
Equality of total variance between different sexes.
For the first two hypotheses, we calculated the ATS with wild and parametric bootstrap as well as Bartlett's test statistic with separate and pooled bootstrap. For the trace hypotheses, only the first two tests are applicable. In all cases, one-sided tests based on 10,000 bootstrap runs were used. The results are presented in Tables 6 and 7.
| Group | Comparison | ATS (param. bootstrap) | ATS (wild bootstrap) | Bartlett (separate) | Bartlett (pooled) |
|---|---|---|---|---|---|
| male | AD vs. MCI | 0.1000 | 0.0282 | 0.1744 | 0.0184 |
| male | AD vs. SCC | <0.0001 | <0.0001 | 0.0545 | 0.0634 |
| male | MCI vs. SCC | 0.8767 | 0.9801 | 0.1388 | 0.0078 |
| female | AD vs. MCI | 0.0613 | 0.0559 | 0.1050 | 0.1480 |
| female | AD vs. SCC | 0.0128 | 0.0095 | 0.0138 | 0.0183 |
| female | MCI vs. SCC | 0.5656 | 0.6004 | 0.8964 | 0.8988 |
| AD | male vs. female | 0.1008 | 0.0274 | 0.2482 | 0.0542 |
| MCI | male vs. female | 0.2455 | 0.2417 | 0.3695 | 0.4003 |
| SCC | male vs. female | 0.2066 | 0.1914 | 0.2656 | 0.1648 |
| Group | Comparison | ATS (param. bootstrap) | ATS (wild bootstrap) |
|---|---|---|---|
| male | AD vs. MCI | 0.0733 | 0.0635 |
| male | AD vs. SCC | <0.0001 | <0.0001 |
| male | MCI vs. SCC | 0.6146 | 0.6297 |
| female | AD vs. MCI | 0.0074 | 0.0091 |
| female | AD vs. SCC | 0.0006 | 0.0012 |
| female | MCI vs. SCC | 0.3687 | 0.3811 |
| AD | male vs. female | 0.0881 | 0.0834 |
| MCI | male vs. female | 0.1582 | 0.1592 |
| SCC | male vs. female | 0.3423 | 0.3744 |
It is noticeable that both ATS-based tests clearly reject the null hypothesis of equal covariance matrices for AD vs. SCC for both sexes at level , while the p-values of both Bartlett tests are not significant.
An explanation for this difference at smaller sample sizes may be the good small-sample performance of the ATS observed in Section 5 and the rather low power of Bartlett’s test statistic, which was already mentioned in Zhang and Boos (1993). Moreover, the only cases where both of Bartlett’s test statistics have smaller p-values are the combinations with the largest sample sizes. Unfortunately, the separate bootstrap again has very low power, while it is questionable whether the additional condition for the pooled bootstrap is fulfilled. For the user, this condition is almost as hard to check as equality of covariances itself. This could lead to the almost circular situation where another test would be necessary to justify the pooled bootstrap approach for testing homogeneity of covariances.
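To make the kind of resampling involved concrete, the following is a minimal sketch of a parametric bootstrap test for equality of two covariance matrices. The statistic used here (a plain squared Euclidean norm of the difference of half-vectorized sample covariances) and all function names are illustrative simplifications, not the exact ATS procedure of the paper:

```python
import numpy as np

def vech(m):
    """Half-vectorization: stack the lower-triangular entries of m."""
    i, j = np.tril_indices(m.shape[0])
    return m[i, j]

def cov_equality_pvalue(x1, x2, n_boot=2000, rng=None):
    """Illustrative parametric bootstrap test of H0: Sigma_1 = Sigma_2.

    Statistic: ||vech(S1) - vech(S2)||^2 (a simplified stand-in for
    the ATS).  Bootstrap: both groups are resampled from a normal
    distribution with the pooled empirical covariance.
    """
    rng = np.random.default_rng(rng)
    n1, n2 = len(x1), len(x2)
    s1 = np.cov(x1, rowvar=False)
    s2 = np.cov(x2, rowvar=False)
    stat = np.sum((vech(s1) - vech(s2)) ** 2)
    pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    d = pooled.shape[0]
    count = 0
    for _ in range(n_boot):
        b1 = rng.multivariate_normal(np.zeros(d), pooled, size=n1)
        b2 = rng.multivariate_normal(np.zeros(d), pooled, size=n2)
        bstat = np.sum((vech(np.cov(b1, rowvar=False))
                        - vech(np.cov(b2, rowvar=False))) ** 2)
        count += bstat >= stat
    return (count + 1) / (n_boot + 1)
```

Note that resampling from a normal distribution is exactly the step that makes the parametric bootstrap sensitive to the data-generating distribution; the wild bootstrap discussed in the paper avoids this by perturbing the observed (centered) quantities instead.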
The null hypothesis of equal total variance, i.e. equal traces, could be rejected (at level 5%) by both bootstrap tests in three cases. Perhaps surprising at first is that the null hypothesis of equal covariance matrices between the female AD and MCI groups could not be rejected, whereas the joint univariate null hypothesis of equal traces could be rejected at level .
Although the hypothesis of equal covariance matrices could not be rejected in every case, the analysis indicates that sex and diagnosis are likely to have an effect on the covariance matrix. This illustrative analysis underpins that the approach of Bathke et al. (2018), which can deal with covariance heterogeneity, was very reasonable.
7 Conclusion & Outlook
In the present paper, we have introduced and evaluated a unified approach to testing a variety of rather general null hypotheses formulated in terms of covariance matrices.
The proposed method is valid under a comparatively small number of requirements which are verifiable in practice.
Previously existing procedures for the situation addressed here suffered from low power to detect alternatives, were limited to only a few specific null hypotheses, or imposed various requirements, in particular regarding the data-generating distribution.
Under weak conditions, we have proved the asymptotic normality of the difference between the vectorized covariance matrices and its corresponding vectorized empirical version. We considered two test statistics which are based upon the vectorized empirical covariance matrix and an estimator of its covariance: a Wald-type statistic (WTS) as well as an ANOVA-type statistic (ATS). These exhibit the usual advantages and disadvantages that are already well known from the literature on mean-based inference. In order to take care of some of these difficulties, namely the critical value for the ATS being unknown and the WTS requiring a rather large sample size, two kinds of bootstrap were used. On this occasion, specific adaptations were needed to take account of the special situation where inference is not on the expectation vectors, but on the covariance matrices.
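The two statistics can be sketched in code as follows; this is a schematic version assuming a hypothesis of the form H v = 0 with a projection-based ATS and pseudoinverse-based WTS, and the exact scaling and symbols may differ from the paper's definitions:

```python
import numpy as np

def ats_and_wts(v_hat, sigma_hat, hyp, n):
    """Schematic ANOVA-type (ATS) and Wald-type (WTS) statistics for
    H0: H v = 0, where v is the vectorized covariance matrix.

    v_hat     : estimated vectorized covariance (length p)
    sigma_hat : estimated covariance matrix of sqrt(n) * v_hat (p x p)
    hyp       : hypothesis matrix H (r x p)
    """
    # Projection matrix T = H'(H H')^+ H onto the row space of H
    t = hyp.T @ np.linalg.pinv(hyp @ hyp.T) @ hyp
    q = t @ v_hat
    # ATS: quadratic form standardized by a trace instead of an inverse
    ats = n * (q @ q) / np.trace(t @ sigma_hat @ t)
    # WTS: quadratic form with the Moore-Penrose inverse of T Sigma T
    wts = n * q @ np.linalg.pinv(t @ sigma_hat @ t) @ q
    return ats, wts
```

The trace standardization is what spares the ATS the inversion of a possibly poorly conditioned estimated covariance matrix, which is the main reason the WTS needs larger samples.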
To investigate the properties of the newly constructed tests, an extensive simulation study was conducted. For this purpose, several different hypotheses were considered, and the type I error control, as well as the power to detect deviations from the null hypothesis, were compared to existing test procedures. The ATS showed quite accurate error control for each of the hypotheses, in particular in comparison with competing procedures. Note that for most hypotheses, no appropriate competing test is available. The simulated power of the proposed tests was good, even for moderately small sample sizes. This is a major advantage when comparing with existing procedures for testing homogeneity of covariances, even considering that they usually require further assumptions.
In future research, we would like to investigate in more detail the large number of possible null hypotheses which are included in our model as special cases. For example, tests for a given covariance structure (such as compound symmetry or autoregressive) with unknown parameters are of great interest. Moreover, our results allow for a variety of new tests for hypotheses that can be derived from our model, for example testing the equality of determinants of covariance matrices.
Paavo Sattler and Markus Pauly would like to thank the German Research Foundation for the support received within project PA 2409/4-1. Moreover, Arne Bathke was supported by the Austrian Science Fund (FWF), project I 2697-N31.
The asymptotic distribution discussed in Theorem 3.1 is well known, but given its importance for the techniques presented in this paper, we will prove it briefly. Moreover, this conveys the idea underlying our bootstrap approaches later on.
First we consider the difference between the vector and its estimated version , multiplied with
Due to Slutsky's theorem and the multivariate central limit theorem, the second and third terms tend to zero in probability. Thus, it is sufficient to consider the first term. This converges in distribution to , again by the multivariate central limit theorem, which yields the result via the continuous mapping theorem. ∎
This convergence would also follow from Zhang and Boos (1993), but since the bootstrap approach is based on this proof, it is helpful to outline it again. To use this result, a consistent estimator for the covariance matrix is needed.
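A plug-in version of such an estimator can be sketched in code before stating it formally; the function names are illustrative, and the estimator here is the empirical covariance of the half-vectorized centered outer products, assuming i.i.d. observations as rows:

```python
import numpy as np

def vech(m):
    """Half-vectorization: stack the lower-triangular entries of m."""
    i, j = np.tril_indices(m.shape[0])
    return m[i, j]

def vectorized_cov_estimator(x):
    """Plug-in sketch of the estimator: returns (v_hat, sigma_hat),
    where v_hat is the half-vectorized (1/n-scaled) sample covariance
    and sigma_hat estimates the covariance matrix of
    vech((X - mu)(X - mu)'), assuming i.i.d. rows of x.
    """
    xc = x - x.mean(axis=0)
    # vech of each centered outer product (X_k - X_bar)(X_k - X_bar)'
    w = np.array([vech(np.outer(row, row)) for row in xc])
    v_hat = w.mean(axis=0)
    sigma_hat = np.cov(w, rowvar=False)  # empirical covariance of the w_k
    return v_hat, sigma_hat
```

Consistency follows from the strong law of large numbers applied to the i.i.d. vectors w_k, which is exactly the argument made below.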
We know that
is a consistent estimator for , since are i.i.d. vectors. Thus, it is sufficient to prove that converges almost surely to 0. This leads to