    Joint Normality Test Via Two-Dimensional Projection

Extensive literature exists on how to test for normality, especially for independently and identically distributed (i.i.d.) processes. The case of dependent samples has also been addressed, but only for scalar random processes. For this reason, we have proposed a joint normality test for multivariate time-series, extending Mardia's kurtosis test. Continuing this work, we provide here an original performance study of the latter test applied to two-dimensional projections. By leveraging copulas, we conduct a comparative study between the bivariate tests and their scalar counterparts. This simulation study reveals that one-dimensional random projections lead to notably less powerful tests than two-dimensional ones.

Authors

09/17/2021


1 Introduction

The popularity of normality as an underlying assumption of many forecasting and inference models has led to the development of numerous procedures for testing the hypothesis of Gaussianity, especially in the case of independent (univariate or multivariate) samples; see the available surveys.

Despite the practical importance of statistically dependent variables, the majority of tests in an extensive literature are derived under the assumption that the samples are identically distributed and independent. There have been considerable efforts to test the goodness-of-fit of stationary colored processes: the Epps test, based on the characteristic function; the Lobato-Velasco (LV) modification of the classical skewness-kurtosis test statistic; and a test statistic that uses 1-D random projections to upgrade the Epps and LV procedures.

The lack of testing procedures for dependent samples is exacerbated in the multivariate setting. Available tests are scarce, and a powerful test like the bi-spectrum test suffers from severe drawbacks in practice. For this reason, we have recently proposed a computationally efficient test for multivariate time-series, specifically in the bivariate case. Our work stands at the crossroads of all these works: those on multivariate procedures, i.e., testing the joint normality, and those derived for colored processes. The questions addressed in this communication are:

• In the same spirit as [11, 8], which use one-dimensional projections, what can we say about the power of our normality test when applied to two-dimensional projections?

• What is the impact of taking into account the statistical dependence among time samples?

Our main contributions may be summarized as follows:

• The use of a joint normality test applied to two-dimensional projections of p-variate colored processes.

• Copula-based computer experiments confirm that testing two-dimensional random projections is far better than applying their scalar counterparts to one-dimensional projections. This observation is all the more noticeable for colored processes.

Organisation of the paper. We first formulate the normality test as a binary hypothesis test in Section 2. The test statistic is defined in Sections 3 and 4, where its asymptotic mean and variance are stated for different scenarios: (multivariate) i.i.d. samples and scalar or bivariate colored processes. Sections 5 and 6 are dedicated to computer experiments.

2 Problem formulation

Let X be a p-variate stochastic process. In this paper, the processes considered are zero-mean and stationary with finite moments up to a sufficient order. Let S(τ) be the covariance function, whose entries are denoted Sij(τ); we also write S = S(0). It is further assumed that X is strong-mixing, so that the series ∑_τ S(τ) converges absolutely. The problem is formulated as:

Problem P1: Given a sample {x(1), …, x(N)} of size N of X, test

 H0: X is Gaussian   versus   ¯H0   (1)

where the variables x(n) are identically distributed, but not statistically independent.

This normality test belongs to the class of tests without alternative. In this framework, a single parameter α defines the nominal level of the test:

 α = P(choose ¯H0 | H0 is true)   (2)

For p = 1, a standard measure of the gap from normality is the estimated kurtosis:

 ^b2 = (1/N) ∑_{n=1}^{N} x(n)^4 / S^2   (3)

Following Mardia’s definition, we consider the extension of this measure to multivariate processes.

2.1 Mardia’s Kurtosis

The multivariate counterpart of the empirical kurtosis takes the form:

 Bp(N) = (1/N) ∑_{n=1}^{N} (x(n)^T S^{-1} x(n))^2   (4)

with S the covariance matrix. Usually, this quantity is unknown and must be estimated from the observations.
Our final test statistic takes the form:

 ^Bp(N) = (1/N) ∑_{n=1}^{N} (x(n)^T ^S^{-1} x(n))^2   (5)

with

 ^S = (1/N) ∑_{k=1}^{N} x(k) x(k)^T.
Theorem 2.1

Let {x(n)} be i.i.d. of dimension p. Then, under the null hypothesis H0, ^Bp(N) is asymptotically normal, with mean p(p+2) and variance 8p(p+2)/N.

Thus, we can test normality by measuring the normalized gap under H0:

 z = (^Bp(N) − p(p+2)) / √(8p(p+2)/N) ∼ N(0,1)   (6)

where ∼ means "distributed as" and N(0,1) denotes the univariate standard normal distribution. We reject the null hypothesis H0 at a significance level α if:

 2(1 − Φ(|z|)) < α

where Φ denotes the cumulative distribution function (cdf) of the standard normal distribution. A similar theorem that does not assume independence among samples has been devised for bivariate statistically dependent processes. For the rest of the paper, Mardia's test statistic will be denoted so as to distinguish it from the tests assuming statistical dependence.
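As an illustrative sketch (not the authors' code), the i.i.d. decision rule of Eqs. (4)-(6) can be implemented as follows; the function name and defaults are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def mardia_kurtosis_test(X, alpha=0.05):
    """Mardia kurtosis z-test for i.i.d. p-variate samples (Theorem 2.1).

    X: (N, p) array of observations.
    Returns the normalized gap z, the p-value, and the rejection decision.
    """
    N, p = X.shape
    Xc = X - X.mean(axis=0)                 # enforce the zero-mean assumption
    S_hat = (Xc.T @ Xc) / N                 # empirical covariance ^S
    S_inv = np.linalg.inv(S_hat)
    # ^B_p(N) = (1/N) sum_n (x(n)^T ^S^{-1} x(n))^2, Eq. (5)
    q = np.einsum('ni,ij,nj->n', Xc, S_inv, Xc)
    B_p = np.mean(q ** 2)
    # Normalized gap under H0, Eq. (6)
    z = (B_p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / N)
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value, p_value < alpha

# Gaussian data: the null hypothesis should typically not be rejected
rng = np.random.default_rng(0)
z, pv, reject = mardia_kurtosis_test(rng.standard_normal((5000, 2)))
```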

3 Mean and variance of ^B1 for a scalar colored process

In the case of scalar colored samples, the expressions of the mean and variance of the kurtosis are:

 E{^B1} = 3 − 6/N − (12/N^2) ∑_{τ=1}^{N−1} (N−τ) S(τ)^2/S^2 + o(1/N)   (7)

 Var{^B1} = (24/N) [1 + (2/N) ∑_{τ=1}^{N−1} (N−τ) S(τ)^4/S^4] + o(1/N)   (8)

The dependence between time samples is taken into account through the terms S(τ). Interestingly, if S(τ) = 0 for all τ ≥ 1, the equations above reduce to the i.i.d. case.
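A sketch of Eqs. (7)-(8), assuming the autocovariances S(0), …, S(N−1) have already been estimated; the helper name is hypothetical:

```python
import numpy as np

def b1_colored_moments(s):
    """Approximate mean and variance of ^B_1 for a scalar colored process,
    following Eqs. (7)-(8) (the o(1/N) terms are dropped).

    s: autocovariance sequence S(0), ..., S(N-1), with S(0) the variance.
    """
    N = len(s)
    tau = np.arange(1, N)
    w = N - tau                    # weights (N - tau)
    r = s[1:] / s[0]               # normalized autocovariances S(tau)/S
    mean = 3 - 6 / N - (12 / N**2) * np.sum(w * r**2)
    var = (24 / N) * (1 + (2 / N) * np.sum(w * r**4))
    return mean, var

# Sanity check: if S(tau) = 0 for tau >= 1, the i.i.d. values are recovered
s_iid = np.zeros(100)
s_iid[0] = 1.0
m, v = b1_colored_moments(s_iid)   # expect 3 - 6/100 and 24/100
```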

4 Mean and variance of ^B2 in the bivariate case

In the bivariate case, the expressions rapidly become much more complicated, but they can still be written explicitly, as reported below:

 E{^B2} = 8 − 16/N − (4/N^2) ∑_{τ=1}^{N−1} (N−τ) Q1(τ)/(S11S22 − S12^2)^2 + o(1/N)   (9)

 Var{^B2} = 64/N + (16/N^2) ∑_{τ=1}^{N−1} (N−τ) Q2(τ)/(S11S22 − S12^2)^4 + o(1/N)   (10)

In the above equations, two kinds of dependence appear: the so-called spatial cross-variate dependence, through S12, and the dependence between time samples, through S(τ). Due to their length, the expressions of Q1(τ) and Q2(τ) are not made explicit here; they can be found in our companion paper.

5 Computer Experiments

Illustration on copulas.

Our goal is to generate colored multivariate non-Gaussian time-series whose marginals are Gaussian, to make the problem more difficult. With this goal, we chose Archimedean copulas for their ease of sampling in dimension p.

Definition 5.1

A p-dimensional copula C is called Archimedean if it allows the representation:

 Cρ(u) = ψ(ψ^{−1}(u1) + ⋯ + ψ^{−1}(up)),  u ∈ [0,1]^p   (11)

for some Archimedean generator ψ with inverse ψ^{−1}.

The parameter ρ of the copula is related to Kendall's rank correlation coefficient; it thus controls the spatial dependence between variables. In order to introduce time dependence between samples, an AR filter is applied to each marginal before constructing the copula (this preserves normality). This leads to the following algorithm:

Sampling an Archimedean copula.

1. Sample i.i.d. Gaussian noise sequences ηi(n), i = 1, …, p.

2. Correlate the ηi's using a first-order auto-regressive filter:

 yi(n) = 0.8 yi(n−1) + ηi(n)

Note that the first samples are dropped to alleviate start-up effects.

3. Transform ui(n) = Φ(yi(n)) for i = 1, …, p, where Φ denotes the cdf of the Gaussian distribution of the yi's. Note that the ui(n)'s are uniform on [0,1].

4. Sample V ∼ F, where F denotes the inverse Laplace-Stieltjes transform of ψ.

5. Return (u′1, …, u′p), where u′i = ψ(−log(ui)/V).

6. Transform to obtain standard Gaussian marginals as follows:

 xi(n) = Φ^{−1}(u′i(n))   (12)

The above algorithm is a slight modification of the one due to Marshall and Olkin (1988). In the remainder of this paper, we use the Gumbel and Clayton copulas.
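Assuming a Clayton generator ψ(t) = (1 + t)^(−1/θ), whose frailty variable V follows a Gamma(1/θ, 1) law in the Marshall-Olkin construction, the six steps above can be sketched as follows (all names and parameter values are illustrative):

```python
import numpy as np
from scipy.stats import norm, gamma

def clayton_colored_gaussian(N, p=2, theta=2.0, a=0.8, burn=500, seed=0):
    """Sketch of the sampling algorithm of Section 5 for a Clayton copula.

    Clayton generator: psi(t) = (1 + t)^(-1/theta); its frailty variable
    V is Gamma(1/theta, 1) in the Marshall-Olkin construction.
    """
    rng = np.random.default_rng(seed)
    # Steps 1-2: AR(1)-filter i.i.d. Gaussian noise, drop the burn-in samples
    eta = rng.standard_normal((p, N + burn))
    y = np.zeros_like(eta)
    for n in range(1, N + burn):
        y[:, n] = a * y[:, n - 1] + eta[:, n]
    y = y[:, burn:]
    # Step 3: map to uniforms via the cdf of the stationary AR(1) law
    u = norm.cdf(y * np.sqrt(1 - a**2))
    # Step 4: one frailty variable V per time index
    V = gamma.rvs(1.0 / theta, size=N, random_state=rng)
    # Step 5: couple the marginals, u' = psi(-log(u) / V)
    u_prime = (1.0 - np.log(u) / V) ** (-1.0 / theta)
    # Step 6: back to standard Gaussian marginals, Eq. (12)
    return norm.ppf(u_prime)

x = clayton_colored_gaussian(2000)   # colored non-Gaussian bivariate series
```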

Low-dimensional projection.

We study the performance of the proposed test statistic on a low-dimensional (one- or two-dimensional) projection of the initial p-variate data. For a given copula, we carried out the following simulations:

• Given one set of bivariate observations of total length N, they are projected many times onto an arbitrary unit vector whose direction is sampled from a uniform distribution. Fig. 2 shows an illustrative example with two copulas.

• Given one set of trivariate observations of total length N, the points are projected arbitrarily many times onto a plane defined by two angles: one between a reference axis and the new plane, the other measured between an axis and a vector inside the plane. Fig. 2 gives two illustrative examples of this procedure.

Figure 1: Projection of bivariate points onto the direction shown in red.

For each projection, we measure the p-value returned by the test statistic. Each p-value is compared with the level α of the test; if it is smaller, the test rejects the null hypothesis of normality. The empirical rejection rates summarized in the following tables are computed as the fraction of rejections over all projections.
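A minimal sketch of this Monte-Carlo procedure for the scalar case, using the i.i.d. moments 3 − 6/N and 24/N of Section 3, might read as follows (names and the projection count are illustrative):

```python
import numpy as np
from scipy.stats import norm

def rejection_rate_1d(X, n_proj=500, alpha=0.05, seed=0):
    """Empirical rejection rate of a scalar kurtosis z-test applied to
    random 1-D projections of bivariate data X of shape (N, 2).

    The i.i.d. moments 3 - 6/N and 24/N (Section 3 with S(tau) = 0)
    are used for the normalization.
    """
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    rejections = 0
    for _ in range(n_proj):
        angle = rng.uniform(0.0, 2.0 * np.pi)        # random direction
        v = np.array([np.cos(angle), np.sin(angle)])
        y = X @ v                                    # 1-D projection
        y = y - y.mean()
        b2 = np.mean(y**4) / np.mean(y**2) ** 2      # empirical kurtosis
        z = (b2 - (3 - 6 / N)) / np.sqrt(24 / N)
        p_value = 2 * (1 - norm.cdf(abs(z)))
        rejections += p_value < alpha
    return rejections / n_proj

# For Gaussian data the rate should stay close to the nominal level alpha
rng = np.random.default_rng(1)
rate = rejection_rate_1d(rng.standard_normal((2000, 2)))
```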

6 Main results

We report the empirical rejection rate of each test statistic in Tables 1, 2, 3 and 4, and comment on the results for each scenario mentioned in the top-left cell of each table. The results are reported for two significance levels.

Scalar projection

• The scalar statistics perform very poorly when used on arbitrary one-dimensional projections of the Gumbel copula; the test power remains very low.

• For the Clayton copula, whose tails are asymmetric, the test has better power than for the Gumbel copula. Although this observation is less demonstrative, we keep those results to further compare them with the bivariate test statistic.

• Since we only use first-order auto-regressive filters, there is no substantial difference in performance between the scalar statistics; this comparison is not of interest to us, because the bias induced by using tests that assume independence on colored processes has already been observed and studied in the literature.
However, it is interesting to compare Tables 1 and 2: we see that the overall performance of the test statistics tends to decrease when the marginals are time-correlated.

• Table 3 shows the performance obtained when the bivariate statistic is applied to colored processes. Contrary to the scalar tests, performance does not decrease with time-correlation. Furthermore, the power of the 2-D test is not affected by a rotation in the plane (implemented by two scalar projections onto two orthogonal axes). This is illustrated by Table 3, which reports the results averaged over 5000 random rotations.

Bivariate projection

• One would expect the same misdetection problem to occur when projecting trivariate observations sampled from the Gumbel copula. Yet, Table 4 shows that the joint normality test, even on a low-dimensional representation of the data, is able to detect the non-Gaussianity of the process.

7 Concluding remarks

This study demonstrates, on one hand, that testing the joint normality of a two-dimensional projection yields a noticeable increase in the power of the test to detect departure from joint normality, even in the most pathological scenario of the Gumbel copula. On the other hand, when data are additionally time-correlated, the overall power of scalar tests tends to decrease. By assuming both spatial and temporal dependence, our bivariate test stands out from other existing multivariate tests that assume independence.

Future studies will be carried out to validate the performance of this statistic on real higher dimensional data.

References

•  K. V. Mardia, “Tests of univariate and multivariate normality,” in Handbook of Statistics, Vol.1, P. R. Krishnaiah, Ed., pp. 279–320. North-Holland, 1980.
•  N. Henze, “Invariant tests for multivariate normality: a critical review,” Statistical Papers, vol. 43, pp. 467–506, 2002.
•  S. S. Shapiro, M. B. Wilk, and H. J. Chen, “A comparative study of various tests for normality,” American Statistical Association Journal, vol. 63, pp. 1343–1372, Dec. 1968.
•  K. O. Bowman and L. R. Shenton, “Omnibus contours for departures from normality based on b1 and b2,” Biometrika, vol. 62, pp. 243–250, 1975.
•  K. V. Mardia, “Measures of multivariate skewness and kurtosis with applications,” Biometrika, vol. 57, pp. 519–530, 1970.
•  T. W. Epps, “Testing that a stationary time series is Gaussian,” The Annals of Statistics, vol. 15, no. 4, pp. 1683–1698, 1987.
•  I. N. Lobato and C. Velasco, “A simple test of normality for time series,” Econometric Theory, vol. 20, no. 4, pp. 671–689, 2004.
•  A. Nieto-Reyes, J. A. Cuesta-Albertos, and F. Gamboa, “A random-projection based test of gaussianity for stationary processes,” Computational Statistics & Data Analysis, vol. 75, pp. 124–141, 2014.
•  M. Hinich, “Testing for Gaussianity and linearity of a stationary time series,” Journal of Time Series Analysis, vol. 3, no. 3, pp. 169–176, 1982.
•  Sara ElBouch, Olivier Michel, and Pierre Comon, A Normality Test for Multivariate Dependent Samples, Sept. 2021, hal-03344745.
•  James Francis Malkovich and A. Afifi, “On tests for multivariate normality,” Journal of the American Statistical Association, vol. 68, no. 341, pp. 176–179, 1973.
•  Marius Hofert, “Sampling Archimedean copulas,” Computational Statistics & Data Analysis, vol. 52, no. 12, pp. 5163–5174, 2008.
•  T. Gasser, “Goodness-of-fit tests for correlated data,” Biometrika, vol. 62, no. 3, pp. 563–570, 1975.
•  D. S. Moore, “The effect of dependence on chi squared tests of fit,” The Annals of Statistics, vol. 10, no. 4, pp. 1163–1171, 1982.