Smoothing-based tests with directional random variables

Testing procedures for assessing specific parametric model forms, or for checking the plausibility of simplifying assumptions, play a central role in the mathematical treatment of the uncertain. No certain answers are obtained by testing methods, but at least the uncertainty of these answers is properly quantified. This is the case for tests designed for the two most general data generating mechanisms in practice: distribution/density and regression models. Testing proposals are usually formulated on Euclidean spaces, but important challenges arise in non-Euclidean settings, such as when directional variables (i.e., random vectors on the hypersphere) are involved. This work reviews some of the smoothing-based testing procedures for density and regression models that involve directional variables. The asymptotic distributions of the reviewed proposals are presented, jointly with some numerical illustrations justifying the need for employing resampling mechanisms for effective test calibration.








1 On goodness-of-fit tests and smoothing

In the early years of the 20th century, K. Pearson and colleagues initiated the development of testing methods for assessing the goodness-of-fit of a certain parametric model. Pearson (1900) presented his celebrated test as a criterion to check whether a given system of deviations from a theoretical distribution could be supposed to come from random sampling, but it was not until a couple of years later that Elderton (1902) coined the term goodness-of-fit of theory to observation. Also at the beginning of the last century, Pearson (1916) introduced the first ideas for goodness-of-fit tests in regression models. With no theoretical support from probability theory (which was developed almost at the same time and whose impact on statistics was therefore noticed only some years later), these works set the basis for the construction of testing procedures aimed at assessing a certain parametric null hypothesis for density/distribution models (see Bickel and Rosenblatt (1973) and Durbin (1973), two influential papers) and regression models (see González-Manteiga and Crujeiras (2013) for a complete review of goodness-of-fit tests in this setting).

This work focuses on a certain class of tests that make use of nonparametric (smooth) estimators of the target function, that is, the density or the regression function. First, consider the problem of testing a certain parametric density model


with a parametric density family. From a smoothing-based perspective, a pilot estimator constructed from a sample of the random variable (rv) is confronted with a parametric estimator by means of a certain discrepancy measure. Bickel and Rosenblatt (1973) considered the classical Kernel Density Estimator (KDE), built with a given kernel and bandwidth, which is compared with a parametric estimator under the null through an L2-distance. In general, test statistics for (1) can be built by evaluating a discrepancy measure between both estimators.
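As an illustration of this smoothing-based construction, the following sketch (not the implementation used in any of the referenced papers) contrasts a Gaussian-kernel KDE with a normal density fitted under the null through a Riemann-sum approximation of the L2 discrepancy; the bandwidth, grid, and null model are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)                      # sample; the normal null holds here

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

grid = np.linspace(-4.0, 4.0, 801)
h = 0.4                                       # bandwidth (assumed, not data-driven)

# pilot estimator: classical KDE with Gaussian kernel
f_hat = gauss((grid[:, None] - x[None, :]) / h).mean(axis=1) / h

# parametric estimator under the null (normal with fitted mean and sd)
f_par = gauss((grid - x.mean()) / x.std()) / x.std()

# L2-type discrepancy: integrated squared difference (Riemann sum)
T_n = np.sum((f_hat - f_par)**2) * (grid[1] - grid[0])
mass = np.sum(f_hat) * (grid[1] - grid[0])    # KDE mass on the grid (close to 1)
```

Since the data are generated under the null, the statistic is small; large values of such a discrepancy would be evidence against the parametric model.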

The ideas of goodness-of-fit tests for density curves were naturally extended to regression models during the 1990s. Consider, as a reference, a regression model where the goal is to test

in an omnibus way from a sample. Here the target is the regression function of the response over the covariate, and the random error has zero conditional mean. A pilot estimator can be constructed using nonparametric weights, such as the Nadaraya-Watson weights. Other possible weights, such as those from local linear estimation, k-nearest neighbours, or splines, can also be considered. Using these kinds of pilot estimators, test statistics can be built (similarly to the density case) from a discrepancy between the nonparametric pilot and the parametric fit. In the presence of directional random variables, and considering the previous smoothing ideas, similar tests can be developed.

2 Goodness-of-fit tests with directional data

The statistical analysis of directional data, that is, of elements on the hypersphere, is notably different from the analysis of linear (Euclidean) data. In particular, no canonical ordering exists on the hypersphere, which makes rank-based inference ill-defined. We refer to the book by Mardia and Jupp (2000) for a comprehensive treatment of statistical inference with directional data and for a collection of applications. Some smooth estimators for density and regression in this context are briefly reviewed below. These estimators are used as pilots for the testing proposals introduced in the subsequent sections.

2.1 Smooth estimation of density and regression

Let a sample from a directional rv with a given density be available. Hall et al. (1987) and Bai et al. (1988) introduced a KDE for directional data (expression (1.3) in Hall et al. (1987) is equivalent to (1.3) in Bai et al. (1988), although the latter employs a notation with a more direct connection with the usual KDE), which is defined as follows:


where the kernel, the bandwidth parameter, and a normalizing constant involving the area (Lebesgue measure) of the hypersphere come into play. For the consistency of (3), the bandwidth is required to vanish as the sample size diverges, but slowly enough that the effective number of observations in each smoothing neighbourhood still grows.
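A minimal sketch of this type of estimator on the circle (the one-dimensional sphere) can be written with a von Mises kernel, whose concentration parameter plays the role of an inverse squared bandwidth; the concentration value below is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.vonmises(0.0, 2.0, size=300)      # angles in (-pi, pi]

def vm_kde(t, data, kappa):
    # directional KDE on the circle: average of von Mises densities centred
    # at the observations; kappa is the concentration (large kappa ~ small h)
    w = np.exp(kappa * np.cos(t[:, None] - data[None, :]))
    return w.mean(axis=1) / (2.0 * np.pi * np.i0(kappa))

grid = np.linspace(-np.pi, np.pi, 400, endpoint=False)
f_hat = vm_kde(grid, theta, kappa=10.0)
mass = f_hat.mean() * 2.0 * np.pi             # Riemann sum over the circle
mode = grid[np.argmax(f_hat)]                 # estimated mode, near 0 here
```

Because each von Mises kernel integrates to one over the circle, the estimate is a genuine density, in the same way as the usual Euclidean KDE.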

A directional rv usually appears related to another linear or directional rv, with cylindrical and toroidal data being the most common situations in practice. In these scenarios, the modelling approach can focus on the estimation of the joint density or of the regression function. From the first perspective, in order to estimate the density of a directional-linear rv, García-Portugués et al. (2013) proposed a KDE adapted to this setting:


where a directional-linear product kernel is used, together with two bandwidth sequences that, for the consistency of (4), must vanish as the sample size diverges.

In a toroidal scenario, a directional–directional KDE for the density of a directional–directional rv can be derived by adapting (4):


with a product of directional kernels and two bandwidth sequences, both required to vanish for consistency.

Considering now a regression setting with scalar response and directional covariate, let a sample from the regression model be given, where the regression function is the target and the random error has zero conditional mean. A nonparametric estimator for the regression function, following local linear ideas (see Fan and Gijbels (1996)), can be constructed as follows. Consider a Taylor expansion in a vicinity of a given point:


where the expansion involves the gradient of the regression function and an identity matrix of the appropriate dimension. From the extension of the regression function to the ambient Euclidean space by radial projection, the central expression in (6) follows. This motivates the weighted least squares problem


where the Kronecker delta is used to control both the local constant and the local linear fits. The estimate solving (7) provides a local linear estimator for the regression function:


where the estimator is obtained from the weighted least squares solution through the first unit canonical vector, with a design matrix whose rows are built from the local covariates (reducing to a column of ones in the local constant case). For the consistency of (8), the bandwidth must vanish while the effective local sample size diverges.
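The local constant fit with directional covariate can be sketched on the circle as follows; this is a simplified illustration with von Mises weights and an assumed toy regression function, not the estimator's general hyperspherical form:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
theta = rng.uniform(-np.pi, np.pi, size=n)         # circular covariate
y = np.cos(theta) + rng.normal(scale=0.2, size=n)  # toy model (assumed)

def nw_circular(t, th, yy, kappa):
    # local constant (Nadaraya-Watson) fit with von Mises weights: each
    # observation is weighted by its angular proximity to the evaluation point
    w = np.exp(kappa * np.cos(t[:, None] - th[None, :]))
    return (w * yy[None, :]).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(-np.pi, np.pi, 100, endpoint=False)
m_hat = nw_circular(grid, theta, y, kappa=20.0)
max_err = np.max(np.abs(m_hat - np.cos(grid)))
```

With a moderately concentrated kernel, the fit tracks the true curve closely; choosing the concentration (the inverse-bandwidth analogue) trades bias against variance exactly as in the Euclidean case.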

2.2 Density-based tests

Testing (1) allows one to check whether there is significant evidence against assuming that the density has a given parametric form, with the parameter either specified (simple hypothesis) or unspecified (composite hypothesis). In the spirit of Fan (1994)'s test, Boente et al. (2014) proposed the following test statistic for addressing (1):

where the centering term is the expectation of (3) under the null. This term is included in order to match the asymptotic biases of the nonparametric and parametric estimators.

The asymptotic distribution of the statistic rests on Zhao and Wu (2001)'s central limit theorem for the integrated squared error of (3). The result is given under three different rates for the bandwidth sequence; the relevant one for the test corresponds to the regime in which the integrated variance dominates the integrated bias (the latter not being dominant under the null), and is given next:

with the functional involved denoting the integration of the squared argument over its domain of definition, and

Under certain regularity conditions on the density and the kernel (A1–A3 in Boente et al. (2014)), if the bandwidth vanishes at the appropriate rate under the null, then

Hence, asymptotically, the test rejects at a given significance level whenever the standardized statistic exceeds the corresponding upper normal quantile. Under local Pitman alternatives approaching the null at a suitable rate, a non-centrality term appears in the limit: the larger the L2-norm of the local deviation, the larger the power.

For a directional-linear density, testing (1) can be done using

where the centering term is the expected value of the nonparametric part under the null. Under regularity assumptions on the density and the kernels (A1, A2 and A5 in García-Portugués et al. (2015)), and under suitable joint rates for the bandwidth sequences, the limit law of the statistic under the null is


where the variance components associated with the smoothing appear; for the Gaussian and von Mises kernels, their expressions are remarkably simple.

Estimator (4) also allows one to check the independence between the directional and linear rv's in an omnibus way, for arbitrary dimensions. This degree of generality contrasts with the available tests for assessing the independence between directional and linear variables, which are mostly focused on the circular case and on the examination of association coefficients (e.g. Mardia (1976), Liddell and Ord (1978), and Fisher and Lee (1981)). Independence can be tested à la Rosenblatt (1975) by considering the problem


where the joint directional-linear density is compared with the product of its marginals. To that aim, García-Portugués et al. (2014) proposed the statistic

Under the same conditions on the density and the kernels required for (9), and with an additional link between the two bandwidth sequences, the asymptotic distribution of the statistic under independence is


Note that (11) is similar to (9), except for two extra bias terms given by the marginal KDEs.
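The permutation calibration used in practice for this kind of independence statistic can be sketched in a simplified circular-linear version evaluated at the sample points; kernels are left unnormalized (the normalizing constants are common to all permutations, so the permutation p-value is unaffected), and the bandwidth values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 150
theta = rng.uniform(-np.pi, np.pi, size=n)    # circular variable
z = rng.normal(size=n)                        # linear variable (independent here)

def indep_stat(th, zz, kappa=4.0, h=0.5):
    # compares the joint circular-linear KDE with the product of the marginal
    # KDEs at the sample points (squared difference, averaged)
    cw = np.exp(kappa * np.cos(th[:, None] - th[None, :]))
    lw = np.exp(-0.5 * ((zz[:, None] - zz[None, :]) / h) ** 2)
    joint = (cw * lw).mean(axis=1)
    prod = cw.mean(axis=1) * lw.mean(axis=1)
    return np.mean((joint - prod) ** 2)

T_obs = indep_stat(theta, z)
# permutation calibration: shuffling z breaks any dependence on theta while
# keeping both marginal distributions fixed
T_perm = np.array([indep_stat(theta, rng.permutation(z)) for _ in range(200)])
p_value = (1.0 + np.sum(T_perm >= T_obs)) / (1.0 + T_perm.size)
```

Since the data are simulated under independence, the permutation p-value behaves like a draw from (approximately) a uniform distribution; under dependence, the observed statistic would fall in the upper tail of the permutation distribution.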

Both statistics can be modified to work with a directional-directional rv by using the KDE in (5). The statistics for testing (1) and (10) become:

respectively. Under the directional-directional analogues of the assumptions required for (9) and (11), an analogous asymptotic rejection rule holds under the null and, under independence,

with the corresponding bias and variance terms.

2.3 Regression-based tests

The testing of (2) (i.e., the assessment of whether the regression function has a parametric structure, with the parameter either specified or unspecified) is rooted in the nonparametric estimator introduced in (8). In a similar way to Härdle and Mammen (1993) in the linear setting, problem (2) may be approached with the test statistic

where a smoothed version of the parametric fit is used to reduce the asymptotic bias (Härdle and Mammen, 1993), and an optional weight function is included. The weight function has the benefits of avoiding the presence of the covariate density in the asymptotic bias and variance, and of mitigating the effects of the squared difference in sparse areas of the covariate support.

Under the null, suitable bandwidth rates, and certain regularity conditions (A1–A3 and A5 in García-Portugués et al. (2016)), the limit distribution of the statistic is

where the bias and variance terms are those induced by the smoothing under the null.

3 Convergence towards the asymptotic distribution

Unfortunately, the asymptotic distributions of the previous test statistics are almost useless in practice. In addition to the unknown quantities present in the asymptotic distributions, the convergence toward the limits is slow and depends on the bandwidth sequences. This forces the consideration of resampling mechanisms for calibrating the distributions of the statistics under the null: parametric bootstraps for the density-based statistics (Boente et al., 2014; García-Portugués et al., 2015); a wild bootstrap for the regression-based statistic (García-Portugués et al., 2016); and a permutation approach for the independence statistics (García-Portugués et al., 2014). The purpose of this section is to illustrate, as an example, the convergence to the asymptotic distribution of two of these statistics via insightful numerical experiments.
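As a concrete example of such a calibration, the following sketch implements a parametric bootstrap for a smoothing-based goodness-of-fit statistic in a simple Euclidean setting with a normal null; all modelling choices (kernel, bandwidth, grid, number of bootstrap replicates) are illustrative and do not correspond to the exact statistics above:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def gof_stat(x, h, grid):
    # smoothing-based statistic: integrated squared difference between the
    # KDE and the normal density fitted under the null (Riemann sum)
    f_hat = gauss((grid[:, None] - x[None, :]) / h).mean(axis=1) / h
    f_par = gauss((grid - x.mean()) / x.std()) / x.std()
    return np.sum((f_hat - f_par)**2) * (grid[1] - grid[0])

grid = np.linspace(-5.0, 5.0, 501)
h, B = 0.5, 200
x = rng.normal(size=100)                      # data; the null holds here

T_obs = gof_stat(x, h, grid)
# parametric bootstrap: draw from the fitted null model, refit and recompute
# the statistic, so the null distribution is approximated at the working
# bandwidth rather than through the slow asymptotic limit
mu, sd = x.mean(), x.std()
T_boot = np.array([gof_stat(rng.normal(mu, sd, size=x.size), h, grid)
                   for _ in range(B)])
p_value = (1.0 + np.sum(T_boot >= T_obs)) / (1.0 + B)
```

The bootstrap distribution automatically incorporates the finite-sample and bandwidth-dependent features of the statistic, which is precisely why resampling calibrates these tests better than the normal limit.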

Figure 1: Asymptotic and empirical distributions of the standardized statistic, for two sample sizes (left and right).

First, for the directional-linear independence statistic, we considered a circular-linear framework, with a von Mises density for the circular variable and a given parametric density for the linear variable. We also took von Mises and normal kernels, for which the bias and variance constants have remarkably simple expressions involving the modified Bessel function of the first kind. We simulated samples under independence for a sequence of increasing sample sizes and computed the corresponding standardized statistics, with the bandwidth sequence chosen as a compromise between fast convergence and the avoidance of numerical instabilities. Figure 1 shows several density estimates for the sample of standardized statistics, jointly with the p-values of the Kolmogorov–Smirnov (K–S) test and of the Shapiro–Wilk (S–W) test for normality. Both tests are significant up to a very large sample size, which is apparent from the visual disagreement between the finite sample and asymptotic distributions.

Second, for the regression statistic, a regression model with covariate uniformly distributed on the circle is considered. The composite null hypothesis of a parametric regression form, with unknown parameter, is checked using the local constant estimator with von Mises kernel. Figure 2 shows the QQ-plots computed from the samples of standardized statistics for two bandwidth sequences, which were chosen in order to illustrate their impact on the convergence to the asymptotic distribution. Specifically, it can be seen that undersmoothing boosts the convergence, since the bias is mitigated. Again, up to large sample sizes, the degree of disagreement between the finite sample and the asymptotic distributions is quite evident.

Figure 2: QQ-plot comparing the sample quantiles of the standardized statistic with those of the asymptotic distribution, for two bandwidth sequences (left and right).


The authors acknowledge the support of project MTM2016-76969-P from the Spanish State Research Agency (AEI), Spanish Ministry of Economy, Industry and Competitiveness, and European Regional Development Fund (ERDF). We also thank Eduardo Gil, Juan J. Gil, and María Angeles Gil for inviting us to contribute to this volume, in memory of Pedro.


  • Bai et al. (1988) Bai, Z. D., Rao, C. R., and Zhao, L. C. (1988). Kernel estimators of density function of directional data. J. Multivariate Anal., 27(1):24–39.
  • Bickel and Rosenblatt (1973) Bickel, P. J. and Rosenblatt, M. (1973). On some global measures of the deviations of density function estimates. Ann. Statist., 1(6):1071–1095.
  • Boente et al. (2014) Boente, G., Rodríguez, D., and González-Manteiga, W. (2014). Goodness-of-fit test for directional data. Scand. J. Stat., 41(1):259–275.
  • Durbin (1973) Durbin, J. (1973). Weak convergence of the sample distribution function when parameters are estimated. Ann. Statist., 1:279–290.
  • Elderton (1902) Elderton, W. P. (1902). Tables for testing the goodness of fit of theory to observation. Biometrika, 1(2):155–163.
  • Fan and Gijbels (1996) Fan, J. and Gijbels, I. (1996). Local polynomial modelling and its applications, volume 66 of Monographs on Statistics and Applied Probability. Chapman & Hall, London.
  • Fan (1994) Fan, Y. (1994). Testing the goodness of fit of a parametric density function by kernel method. Economet. Theor., 10(2):316–356.
  • Fisher and Lee (1981) Fisher, N. I. and Lee, A. J. (1981). Nonparametric measures of angular-linear association. Biometrika, 68(3):629–636.
  • García-Portugués et al. (2014) García-Portugués, E., Barros, A. M. G., Crujeiras, R. M., González-Manteiga, W., and Pereira, J. (2014). A test for directional-linear independence, with applications to wildfire orientation and size. Stoch. Environ. Res. Risk Assess., 28(5):1261–1275.
  • García-Portugués et al. (2013) García-Portugués, E., Crujeiras, R. M., and González-Manteiga, W. (2013). Kernel density estimation for directional-linear data. J. Multivariate Anal., 121:152–175.
  • García-Portugués et al. (2015) García-Portugués, E., Crujeiras, R. M., and González-Manteiga, W. (2015). Central limit theorems for directional and linear data with applications. Statist. Sinica, 25:1207–1229.
  • García-Portugués et al. (2016) García-Portugués, E., Van Keilegom, I., Crujeiras, R. M., and González-Manteiga, W. (2016). Testing parametric models in linear-directional regression. Scand. J. Stat., 43(4):1178–1191.
  • González-Manteiga and Crujeiras (2013) González-Manteiga, W. and Crujeiras, R. M. (2013). An updated review of goodness-of-fit tests for regression models. Test, 22(3):361–411.
  • Hall et al. (1987) Hall, P., Watson, G. S., and Cabrera, J. (1987). Kernel density estimation with spherical data. Biometrika, 74(4):751–762.
  • Härdle and Mammen (1993) Härdle, W. and Mammen, E. (1993). Comparing nonparametric versus parametric regression fits. Ann. Statist., 21(4):1926–1947.
  • Liddell and Ord (1978) Liddell, I. G. and Ord, J. K. (1978). Linear-circular correlation coefficients: some further results. Biometrika, 65(2):448–450.
  • Mardia (1976) Mardia, K. V. (1976). Linear-circular correlation coefficients and rhythmometry. Biometrika, 63(2):403–405.
  • Mardia and Jupp (2000) Mardia, K. V. and Jupp, P. E. (2000). Directional statistics. Wiley Series in Probability and Statistics. John Wiley & Sons, Chichester, second edition.
  • Pearson (1900) Pearson, K. (1900). On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philos. Mag. Series 5, 50(302):157–175.
  • Pearson (1916) Pearson, K. (1916). On the application of “goodness of fit” tables to test regression curves and theoretical curves used to describe observational or experimental data. Biometrika, 11(3):239–261.
  • Rosenblatt (1975) Rosenblatt, M. (1975). A quadratic measure of deviation of two-dimensional density estimates and a test of independence. Ann. Statist., 3(1):1–14.
  • Zhao and Wu (2001) Zhao, L. and Wu, C. (2001). Central limit theorem for integrated square error of kernel estimators of spherical density. Sci. China Ser. A, 44(4):474–483.