Regression problems with a predictor of spherical nature arise in various fields such as geology, crystallography, astronomy (cosmic microwave background radiation data), the calibration of electromagnetic motion-tracking systems or the representation of spherical viruses [see Chapman et al. (1995), Zheng et al. (1995), Chang et al. (2000), Schaeben and van den Boogaart (2003), Genovese et al. (2004) and Shin et al. (2007) among many others], and their parametric and nonparametric estimation has found considerable attention in the literature.
Several methods for estimating a spherical regression function nonparametrically have been proposed in the literature. Di Marzio et al. (2009, 2014) investigate kernel type methods, while spherical splines have been considered by Wahba (1981) and Alfeld et al. (1996). A frequently used technique is that of series estimators based on spherical harmonics [see Abrial et al. (2008) for example], which, roughly speaking, generalise estimators of a regression function on the line based on Fourier series to data on the sphere. Alternative series estimators have been proposed by Narcowich et al. (2006), Baldi et al. (2009) and Monnier (2011), who suggest the use of spherical wavelets (needlets) in situations where better localisation properties are required. Most authors consider the two-dimensional sphere in three-dimensional space, as they are interested in the development of statistical methodology for concrete applications such as earth and planetary sciences.
On the other hand, regression models with spherical predictors of dimension larger than three have also found considerable attention in the literature, mainly in physics, chemistry and material sciences. Here predictors on the unit sphere in higher dimensions and series expansions in terms of the so-called hyperspherical harmonics are considered. These functions form an orthonormal system with respect to the uniform distribution on the sphere
and have been, for example, widely used to solve the Schrödinger equation by reducing the problem to a system of coupled ordinary differential equations in a single variable [see, for example, Avery and Wen (1982) or Krivec (1998) among many others]. Further applications in this field can be found in Meremianin (2009), who proposed the use of hyperspherical harmonics for the representation of the wave function of the hydrogen atom in momentum space. Similarly, Lombardi et al. (2016) suggested representing the potential energy surfaces (PES) of atom-molecule or molecular dimer interactions in terms of a series of four-dimensional hyperspherical harmonics. Their method consists of fitting a certain number of previously determined points of the PES, selected on the basis of geometrical and physical characteristics of the system. The resulting potential energy function is suitable to serve as a PES for molecular dynamics simulations. Hosseinbor et al. (2013) applied four-dimensional hyperspherical harmonics in medical imaging and estimated the coefficients in the corresponding series expansion via least squares methods to analyse brain subcortical structures. A further important application of series expansions appears in material sciences, where crystallographic textures are represented as quaternion distributions by means of series expansions based on (symmetrized) hyperspherical harmonics [see Bunge (1993), Zheng et al. (1995), Mason and Schuh (2008) and Mason (2009) among many others].
It is well known that a carefully designed experiment can improve the statistical inference in regression analysis substantially, and numerous authors have considered the problem of constructing optimal designs for various regression models [see, for example, the monographs of Fedorov (1972), Silvey (1980) and Pukelsheim (2006)]. On the other hand, despite its importance, the problem of constructing optimal or efficient designs for least squares (or alternative) estimation of the coefficients in series expansions based on hyperspherical harmonics has not found much interest in the statistical literature, in particular if the dimension is large. The one-dimensional case of the circle, corresponding to Fourier regression models, has been discussed intensively [see Karlin and Studden (1966), page 347, Lau and Studden (1985), Kitsos et al. (1988) and Dette and Melas (2003) among many others]. Furthermore, optimal designs for series estimators in terms of spherical harmonics (that is, for the two-dimensional sphere) have been determined by Dette et al. (2005) and Dette and Wiens (2009); however, to the best of our knowledge no results are available for hyperspherical harmonics if the dimension of the predictor is larger than three.
In the present paper we consider optimal design problems for regression models with a spherical predictor of arbitrary dimension and explicitly determine optimal designs for series estimators in hyperspherical harmonic expansions.
In Section 2 we introduce some basic facts about optimal design theory and hyperspherical harmonics, which will be required for the results presented in this paper. Analytic solutions of the optimal design problem are given in Section 3.1, where we determine
optimal designs with respect to all of Kiefer's Φ_p-criteria [see Kiefer (1974)]
as well as with respect to a class of optimality criteria recently introduced by Harman (2004). As it turns out, the approximate optimal designs are absolutely continuous distributions on the sphere and thus cannot be directly implemented in practice. Therefore, in Section 3.2 we provide discrete designs with the same information matrices as the continuous optimal designs. To achieve this we construct new Gaussian quadrature formulas for integration on the sphere, which are of independent interest. In Section 4 we investigate the performance of the optimal designs determined in Section 3.2 when they are used in
typical applications in material sciences. Here energy functions are represented in terms
of series of symmetrized hyperspherical harmonics, which are defined as linear combinations of the hyperspherical harmonics such that the symmetry of a crystallographic point group is reflected in the energy function.
It is demonstrated that the derived designs have very good efficiencies (for the first crystallographic point group the derived design is in fact optimal). Finally, the proof of a technical result can be found in Appendix A.
The results obtained in this paper provide a first step towards the solution of optimal design problems for regression models with spherical predictors of dimension larger than three and offer a deeper understanding of the general mathematical structure of hyperspherical harmonics, which so far have only been considered in the two- and three-dimensional cases.
2 Optimal designs and hyperspherical harmonics
2.1 Optimal design theory
We consider the linear regression model

Y = f(x)^T θ + ε,    (2.1)

where f(x) = (f_1(x), …, f_k(x))^T is a vector of linearly independent regression functions, θ ∈ R^k is the vector of unknown parameters, x denotes a covariate which varies in a compact design space, say X (which will be the unit sphere in later sections), and different observations are assumed to be independent with the same variance, say σ² > 0. Following Kiefer (1974)
we define an approximate design as a probability measure ξ on the design space X (more precisely, on its Borel field). If the design ξ has finite support with masses w_1, w_2, … at finitely many points x_1, x_2, … and n observations can be made by the experimenter, this means that the quantities n w_i are rounded to integers, say n_i, summing to n, and the experimenter takes n_i observations at each location x_i. The information matrix of the least squares estimator is defined by

M(ξ) = ∫ f(x) f(x)^T dξ(x)
[see Pukelsheim (2006)] and measures the quality of the design, as the matrix M^{-1}(ξ) can be considered as an approximation of the covariance matrix of the least squares estimator in the corresponding linear model (2.1). Similarly, if the main interest is the estimation of the linear combinations K^T θ, where K is a given k × s matrix of rank s, the covariance matrix of the least squares estimator for these linear combinations is proportional to K^T M^−(ξ) K, where M^− denotes a generalized inverse of the matrix M(ξ) and it is assumed that Range(K) ⊆ Range(M(ξ)). The corresponding analogue of its inverse for an approximate design ξ satisfying this range inclusion is given by (up to the constant σ²/n)

C_K(ξ) = (K^T M^−(ξ) K)^{−1}.
It follows from Pukelsheim (2006), Section 8.3, that for each design there always exists a design with at most k(k + 1)/2 + 1 support points such that the information matrices of the two designs coincide. An optimal design maximises an appropriate functional of the matrix M(ξ), and numerous criteria have been proposed in the literature to discriminate between competing designs [see Pukelsheim (2006)]. Throughout this paper we consider Kiefer's Φ_p-criteria, which are defined for p ∈ [−∞, 1] as

Φ_p(ξ) = ((1/s) tr(C_K^p(ξ)))^{1/p}  for p ≠ 0, −∞,

with the limiting cases Φ_0(ξ) = (det C_K(ξ))^{1/s} and Φ_{−∞}(ξ) = λ_min(C_K(ξ)).
Following Kiefer (1974), a design is called Φ_p-optimal for estimating the linear combinations K^T θ if it maximises the expression Φ_p(ξ) among all approximate designs ξ for which K^T θ is estimable, that is, Range(K) ⊆ Range(M(ξ)). This family of optimality criteria includes the well-known criteria of D-, A- and E-optimality corresponding to the cases p = 0, p = −1 and p = −∞, respectively.
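The Φ_p-criteria are straightforward to evaluate from the eigenvalues of the information matrix. The following sketch (with K taken as the identity, so that the criterion acts on M(ξ) itself; the function name is ours) illustrates the three classical special cases.

```python
import numpy as np

def phi_p(M, p):
    """Kiefer's Phi_p criterion of a positive definite information matrix M
    (K = identity): p = 0 gives the D-criterion, p = -1 the A-criterion,
    and p -> -infinity the E-criterion."""
    lam = np.linalg.eigvalsh(M)                 # eigenvalues of M
    if p == 0:
        return np.exp(np.mean(np.log(lam)))     # geometric mean of eigenvalues
    if np.isneginf(p):
        return lam.min()                        # smallest eigenvalue
    return np.mean(lam ** p) ** (1.0 / p)       # ((1/k) tr M^p)^(1/p)

M = np.diag([1.0, 2.0, 4.0])
print(phi_p(M, 0))         # D-criterion: (1*2*4)^(1/3) = 2.0
print(phi_p(M, -1))        # A-criterion: harmonic mean = 3/(1 + 1/2 + 1/4)
print(phi_p(M, -np.inf))   # E-criterion: 1.0
```

Maximising any of these functionals over the set of approximate designs is a concave optimisation problem, which is what makes the equivalence theorems used below applicable.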
Moreover, we consider a generalised version of the criterion of E-optimality introduced by Harman (2004) [see also Filová et al. (2011)]. For the information matrix M(ξ) let

λ(ξ) = (λ_1(ξ), …, λ_k(ξ))^T

be the vector of the eigenvalues of M(ξ) in nondecreasing order. Then, for r ∈ {1, …, k}, we define E_r(ξ) as the sum of the r smallest eigenvalues of M(ξ), that is,

E_r(ξ) = λ_1(ξ) + ⋯ + λ_r(ξ).
For a fixed r we call a design E_r-optimal if it maximises the term E_r(ξ) among all approximate designs ξ.
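Harman's criterion is equally simple to evaluate numerically: it is a partial sum of the ordered eigenvalue sequence. A minimal sketch (the function name is ours):

```python
import numpy as np

def e_r(M, r):
    """Sum of the r smallest eigenvalues of the information matrix M
    (the generalised E-criterion in the spirit of Harman (2004))."""
    lam = np.linalg.eigvalsh(M)   # eigenvalues in ascending order
    return float(lam[:r].sum())

M = np.diag([1.0, 2.0, 4.0])
print(e_r(M, 1))   # 1.0 -> classical E-criterion (smallest eigenvalue)
print(e_r(M, 2))   # 3.0 -> sum of the two smallest eigenvalues
```

For r = 1 the criterion reduces to the classical E-criterion, and for r = k it reduces to the trace of M(ξ).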
In general, the determination of Φ_p-optimal designs and of E_r-optimal designs in an explicit form is a very difficult task, and the corresponding optimal design problems have only been solved in rare circumstances [see for example Cheng (1987), Dette and Studden (1993), Pukelsheim (2006), p. 241, and Harman (2004)]. In the following discussion we will explicitly determine Φ_p-optimal designs for regression models which arise from a series expansion of a function on the sphere in terms of hyperspherical harmonics. It turns out that the Φ_p-optimal designs are also E_r-optimal for an appropriate choice of r.
We introduce the hyperspherical harmonics next.
2.2 Hyperspherical harmonics
Assume that the design space is given by the (d − 1)-dimensional sphere S^{d−1} = {x ∈ R^d : ||x|| = 1}. The hyperspherical harmonics are functions of d − 1 dimensionless variables, namely the hyperangles θ_1, …, θ_{d−2}, φ, which describe the points on the hypersphere by the equations

x_1 = cos θ_1,
x_2 = sin θ_1 cos θ_2,
⋮
x_{d−1} = sin θ_1 ⋯ sin θ_{d−2} cos φ,
x_d = sin θ_1 ⋯ sin θ_{d−2} sin φ,    (2.6)

where θ_i ∈ [0, π] for i = 1, …, d − 2 and φ ∈ [0, 2π) [see, for example, Andrews et al. (1991) or Meremianin (2009)]. As noted by Dokmanić and Petrinović (2010), this choice of coordinates is not unique but rather a matter of convenience, since it is a natural generalisation of the spherical polar coordinates in three dimensions.
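The hyperangle parametrisation can be implemented directly; the sketch below uses the common convention in which the sines accumulate from the first coordinate to the last (the function name is ours) and illustrates that every choice of hyperangles yields a point of unit Euclidean norm.

```python
import numpy as np

def hyperspherical_to_cartesian(angles):
    """Map hyperangles (theta_1, ..., theta_{d-2}, phi) to a point on the
    unit sphere S^{d-1} in R^d, with theta_i in [0, pi] and phi in [0, 2*pi)."""
    angles = np.asarray(angles, dtype=float)
    d = angles.size + 1
    x = np.empty(d)
    s = 1.0                       # running product of sines
    for i, a in enumerate(angles):
        x[i] = s * np.cos(a)
        s *= np.sin(a)
    x[d - 1] = s                  # last coordinate carries only sines
    return x

p = hyperspherical_to_cartesian([np.pi / 2, np.pi / 2, 0.0])
print(np.linalg.norm(p))   # the image always lies on the unit sphere
```

The unit norm follows from the telescoping identity cos²θ_1 + sin²θ_1 cos²θ_2 + ⋯ = 1.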
In the literature, hyperspherical harmonics are given explicitly in a complex form (see, for example, Vilenkin (1968) and Avery and Wen (1982)). Following the notation in Avery and Wen (1982), they are defined as
where , for , and
is a normalising constant, are a set of integers and the functions
are the Gegenbauer polynomials (of degree n with parameter α), which are orthogonal with respect to the measure

(1 − x²)^{α − 1/2} I_{[−1,1]}(x) dx
(here I_A denotes the indicator function of the set A). The complex hyperspherical functions are orthogonal to their corresponding complex conjugates and form an orthonormal basis of the space of square integrable functions with respect to the uniform distribution on the sphere.
In fact the constants are chosen based on this property [see, for example, Avery and Wen (1982) for more details].
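The orthogonality of the Gegenbauer polynomials underlying these harmonics is easy to verify numerically; the sketch below uses SciPy (the parameter value and the degrees are chosen only for illustration).

```python
import numpy as np
from scipy.special import gegenbauer
from scipy.integrate import quad

alpha = 1.0                                   # illustrative Gegenbauer parameter
C2, C3 = gegenbauer(2, alpha), gegenbauer(3, alpha)

# weight function of the Gegenbauer polynomials: (1 - x^2)^(alpha - 1/2) on [-1, 1]
w = lambda x: (1.0 - x * x) ** (alpha - 0.5)

# inner product of polynomials of different degree vanishes
inner, _ = quad(lambda x: C2(x) * C3(x) * w(x), -1.0, 1.0)
print(abs(inner) < 1e-10)   # True
```

The same check with equal degrees returns the (nonzero) squared norm, which is exactly what the normalising constants of the hyperspherical harmonics compensate for.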
However, as mentioned in Mason and Schuh (2008), expansions of real-valued functions on the sphere are easier to handle in terms of real hyperspherical harmonics, which are obtained from the complex hyperspherical harmonics via the linear transformations
and is the associated Legendre polynomial which can be expressed in terms of a Gegenbauer polynomial via
It is easy to check that in the case of the two-dimensional sphere, the expressions in (2.7), (2.8) and (2.9) give the well-known spherical harmonics involving only the associated Legendre polynomial [see Chapter 9 in Andrews et al. (1991) for more details].
The real hyperspherical harmonics defined in (2.7), (2.8) and (2.9) preserve the orthogonality properties of complex hyperspherical harmonics proven in Avery and Wen (1982). In other words, the real hyperspherical harmonics form an orthonormal basis of the Hilbert space
where dΩ denotes the element of solid angle on the sphere.
We now consider the linear regression model (2.1), where the vector of regression functions is obtained by a truncated expansion of a function of a given order, say m, in terms of hyperspherical harmonics, that is,
Consequently, we obtain from (2.1) (using the coordinates introduced in (2.6))
is the vector of hyperspherical harmonics up to order m and the vector of parameters is given by
Note that the common dimension of the vector of regression functions and the vector of parameters is
where the expression for the sums over the indices is obtained from Avery and Wen (1982).
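The number of hyperspherical harmonics of a fixed degree can also be evaluated with the classical counting formula for spaces of homogeneous harmonic polynomials restricted to the sphere; the sketch below (the function name is ours, and the formula is the standard one rather than the display above) recovers the familiar counts 2n + 1 on the two-dimensional sphere and (n + 1)² on the three-dimensional sphere.

```python
from math import comb

def num_harmonics(d, n):
    """Number of linearly independent hyperspherical harmonics of degree n
    on the sphere S^{d-1} (standard dimension formula, d >= 3)."""
    if n == 0:
        return 1
    return comb(n + d - 1, d - 1) - comb(n + d - 3, d - 1)

print([num_harmonics(3, n) for n in range(4)])  # 2n + 1 on the 2-sphere
print([num_harmonics(4, n) for n in range(4)])  # (n + 1)^2 on the 3-sphere
```

Summing these counts over all degrees up to the truncation order gives the dimension of the regression vector above.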
3 Φ_p- and E_r-optimal designs for hyperspherical harmonics
3.1 Optimal designs with a Lebesgue density
In this section we determine Φ_p-optimal designs for estimating the parameters in a series expansion of a function defined on the unit sphere. The corresponding regression model is defined by (2.11) and, as mentioned in Section 2.1, a Φ_p-optimal (approximate) design maximises the criterion (2.4) in the class of all probability measures on the sphere satisfying the range inclusion Range(K) ⊆ Range(M(ξ)), where the information matrix is given by
We are interested in finding a design that is efficient for the estimation of the Fourier coefficients corresponding to the hyperspherical harmonics
and m denotes a given level of resolution. To relate this to the definition of the Φ_p-optimality criteria, define a block matrix K, built from an identity block and constant blocks, such that K^T θ extracts from the parameter vector defined in (2.12) the Fourier coefficients of interest. The following theorem shows that the uniform distribution on the hypersphere is Φ_p- and E_r-optimal for estimating these parameters (for any p).
where is a normalising constant given by
and the double factorial is defined by
We note that the explicit expression for the normalising constant in (3.7) is given in equation (30) in Wen and Avery (1985). Let denote the design corresponding to the density defined by (3.1) and (3.7). Then due to the orthonormality property of the real hyperspherical harmonics, given in equation (2.10), it follows that
where is defined in equation (3.7). This proves part (i) of the Theorem.
For a proof of (ii) let p be finite. According to the general equivalence theorem in Pukelsheim (2006), Section 7.20, the measure is Φ_p-optimal if and only if the inequality
holds for all and .
Now the right-hand side can be simplified by observing the sum rule (addition theorem) for real hyperspherical harmonics, that is
where the constant is given by
where the last equality follows from the definition of in (3.4).
Consequently, the right-hand side and the left-hand side of (3.10) coincide, which proves that the design corresponding to the density defined by (3.1) and (3.7) is Φ_p-optimal for any p and any matrix of the form (3.2) and (3.3).
The remaining case p = −∞ follows from Lemma 8.15 in Pukelsheim (2006), which completes the proof of part (ii).
For a proof of part (iii) let denote a diagonal matrix with entries and let
denote the subgradient of . Then it follows from Theorem 4 of Harman (2004) that the design is E_r-optimal if and only if there exists a matrix such that the inequality
that is, the matrix is contained in the subgradient . Using this matrix in (3.14) the left-hand side of the inequality reduces to
and part (i) yields for the right-hand side of the inequality
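The key fact behind part (i), namely that the uniform distribution on the sphere has an information matrix proportional to the identity, can be checked by simulation. The sketch below does this for the two-dimensional sphere with the real spherical harmonics up to degree one, written in their simple Cartesian form (the normalising constants used are the standard ones).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# sample uniformly on the sphere S^2 by normalising Gaussian vectors
v = rng.normal(size=(N, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
x, y, z = v.T

# real spherical harmonics up to degree 1 in Cartesian form
c = np.sqrt(3.0 / (4.0 * np.pi))
F = np.column_stack([np.full(N, np.sqrt(1.0 / (4.0 * np.pi))),  # Y_{0,0}
                     c * y, c * z, c * x])                      # degree-1 harmonics

M_unif = F.T @ F / N     # Monte Carlo information matrix of the uniform design
print(np.allclose(M_unif, np.eye(4) / (4.0 * np.pi), atol=2e-3))   # True
```

Up to Monte Carlo error the matrix is a multiple of the identity, so every orthogonally invariant criterion, and in particular every Φ_p-criterion, is maximised simultaneously.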
3.2 Discrete Φ_p- and E_r-optimal designs
While the result of the previous section provides a very elegant solution to the Φ_p-optimal design problem from a mathematical point of view, the derived designs cannot be directly implemented because the optimal probability measure is absolutely continuous. In practice, if n observations are available to estimate the parameters in the linear regression model (2.11), one has to specify a number of different points, which define by (2.6) the locations on the sphere where observations should be taken, and relative frequencies, which define the proportion of observations taken at each point. The maximisation of the function (2.4) in the class of all measures of this type yields a non-linear and non-convex discrete optimisation problem, which is usually intractable.
Therefore, for the construction of optimal or (at least) efficient designs we proceed as follows. Due to Caratheodory's theorem [see, for example, Silvey (1980)] there always exists a probability measure on the sphere with finitely many support points such that the information matrices of the continuous optimal design and the discrete design coincide, that is,
We now identify such a design by assigning weights to a set of support points such that the identity (3.15) is satisfied, where we simultaneously try to keep the number of support points small. The numbers of repetitions at the different experimental conditions in the concrete experiment are finally obtained by rounding the products of the weights and the sample size to integers [see, for example, Pukelsheim and Rieder (1992)]. We begin with an auxiliary result about Gaussian quadrature which is of independent interest and is proven in the appendix.
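The rounding step can be carried out, for example, with an apportionment procedure in the spirit of Pukelsheim and Rieder (1992); the following sketch implements one simple variant (not their exact algorithm) that rounds approximate weights to integer replication numbers summing to the sample size.

```python
import math

def efficient_rounding(weights, n):
    """Round approximate design weights to integer numbers of observations
    summing to n (a simple apportionment sketch, assuming positive weights)."""
    s = len(weights)
    counts = [math.ceil((n - s / 2) * w) for w in weights]
    while sum(counts) < n:                  # add where the relative count is lowest
        j = min(range(s), key=lambda i: counts[i] / weights[i])
        counts[j] += 1
    while sum(counts) > n:                  # remove where it hurts efficiency least
        j = max(range(s), key=lambda i: (counts[i] - 1) / weights[i])
        counts[j] -= 1
    return counts

print(efficient_rounding([0.5, 0.3, 0.2], 10))   # [5, 3, 2]
```

The initial ceiling step guarantees that every support point of the approximate design receives at least one observation whenever the sample size is large enough.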
Let be a positive and integrable weight function on the interval with and let denote points with corresponding positive weights (). Then the points and weights generate a quadrature formula of degree , that is
if and only if the following two conditions are satisfied:
The polynomial is orthogonal with respect to the weight function to all polynomials of degree , that is,
The weights are given by
where the i-th Lagrange interpolation polynomial with the quadrature points as nodes appears.
In the following, we use Lemma 3.1 for and the weight function
Note that the Gegenbauer polynomials are orthogonal with respect to this weight function on the interval [−1, 1] [see Andrews et al. (1991), p. 302]. Hence the roots of the corresponding Gegenbauer polynomial have multiplicity one, are real and located in the open interval (−1, 1). As condition (3.17) is satisfied, these roots together with the corresponding (positive) weights in (3.18) define a Gaussian quadrature formula. Therefore, for any number of nodes there exists at least one quadrature formula such that (3.16) holds. We consider quadrature formulas of this type and define the designs
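Gauss-Gegenbauer rules of exactly this type are available numerically; for instance, SciPy provides nodes and weights for the weight function (1 − x²)^{α − 1/2}. The sketch below (the choice α = 1, i.e. the weight √(1 − x²), is only illustrative) checks the exactness of a five-point rule on a low-degree polynomial.

```python
import numpy as np
from scipy.special import roots_gegenbauer

# Gauss-Gegenbauer rule: nodes and weights for the weight (1 - x^2)^(alpha - 1/2)
alpha = 1.0
nodes, weights = roots_gegenbauer(5, alpha)

# a 5-point Gaussian rule integrates polynomials up to degree 9 exactly
approx = np.sum(weights * nodes**2)
exact = np.pi / 8        # integral of x^2 * sqrt(1 - x^2) over [-1, 1]
print(np.isclose(approx, exact))   # True
```

In the construction above, such one-dimensional rules in the polar variable are combined with suitable rules in the remaining hyperangles to reproduce the information matrix of the continuous optimal design.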