# Bivariate Densities in Bayes Spaces: Orthogonal Decomposition and Spline Representation

A new orthogonal decomposition for bivariate probability densities embedded in Bayes Hilbert spaces is derived. It allows one to decompose a density into an independent and an interactive part, the former being built as the product of revised definitions of marginal densities and the latter capturing the dependence between the two random variables under study. The developed framework opens new perspectives for dependence modelling (which is commonly performed through copulas), and allows for the analysis of datasets of bivariate densities from a Functional Data Analysis perspective. A spline representation for bivariate densities is also proposed, providing a computational cornerstone for the developed theory.


## 1 Introduction

The analysis of distributional data is gaining increasing interest in the applied sciences. Distributional data, such as probability density functions (PDFs) or cumulative distribution functions, are routinely collected in the social sciences (e.g., population pyramids [4, 13]) and geosciences (e.g., particle-size distributions [1, 22]). Analyses of distributional data based on methods designed for functional data in $L^2$ often lead to inappropriate results, such as negative predictions [23, 37].

It is now widely recognized that an appropriate statistical analysis of PDF data should be based precisely on their characterizing properties (e.g., [25, 28, 13, 1]). In the literature, several approaches have been proposed to serve the purpose of analysing datasets of PDFs. Most works propose to analyse PDF data through a prior data transformation. For instance, [4] considers a transformation approach to the principal component analysis of a dataset of PDFs. [35] use a square root transformation of densities to deal with a time-warping function in registration. [28] propose a set of transformations that map the PDF data to a Hilbert space, where further statistical analyses are possible; this setting allows for, e.g., principal component analysis, classification and regression. A relatively large body of recent literature proposes the use of the Wasserstein metric to define a notion of distance for density data. Such a metric has appealing interpretations, being related to the problem of optimal transport. However, it defines a non-linear space (a Riemannian manifold), thus requiring the development of ad hoc methods for this setting (e.g., [2, 26, 29]), based on Fréchet statistics. A different approach relies on the theory of Bayes linear spaces, which represent a generalization to the infinite-dimensional setting of the Compositional Data Analysis (CoDa, [27]) approach. In this setting, PDFs are considered as infinite-dimensional objects that provide relative information [39, 40]. Bayes Hilbert spaces were built so as to embody the so-called principles of CoDa (i.e., scale invariance, relative scale, sub-compositional coherence, see [27]) through a Hilbert geometry for PDFs. The Hilbert structure of the space allows one to develop most methods of functional data analysis, while accounting for the peculiar nature of PDFs. These include principal component analysis [13], functional regression [37], spatial prediction [1], profile monitoring [23] and time-series analysis [14, 33]. Even though the statistical literature is nowadays well-developed for distributional data, little attention has been paid so far to the setting of multivariate densities, whose study is of paramount importance in the applications.
A first contribution in the direction of bivariate densities was recently provided in the preprint [12], which uses the theory of Bayes Hilbert spaces over bivariate domains to study the temporal dynamics of coupled time series, modelled through copulas. As a key element of innovation with respect to the previous literature, the present work proposes a novel statistical framework for bivariate PDFs, which allows one to study the dependence between the target random variables, grounded in the geometry of the Bayes space. We provide new meaningful notions of compositional marginals (so-called geometric marginals), which play the role of the marginal distributions consistently with the Bayes geometry. We further derive an orthogonal decomposition of bivariate PDFs into independence and interaction parts, generalizing the well-known results developed in the discrete case (i.e., for compositional tables, [6, 7]). To allow for explicit computation of the marginals and of the latter representation, we develop a novel B-spline representation for bivariate PDFs, compatible with the compositional nature of the data.

The methodological results of our work shed light on the structure of multivariate Bayes spaces, suggesting a direction to create connections between the theory of Bayes spaces and the theory of copulas [24], which are widely used to build multivariate PDFs from marginals. Note that the theory of copulas is well-established and allows one to describe the joint distribution function of two random objects under very general assumptions. Our work is mostly focused on density functions (PDFs) instead, entailing a difference in the approaches in terms of (i) the assumptions made on the distribution at hand and (ii) the theoretical properties of the object being studied. In this sense, this work presents the initial steps in the direction of a new framework for dependence modelling, for which the extension to more general distributions (e.g., not absolutely continuous ones) is foreseen. On the other hand, this work is primarily aimed at building an analytical framework for datasets made of multivariate distributional objects, within the context of Functional Data Analysis (FDA, [31]). In this view, building the dependence modelling on PDFs might be preferable, and this would be consistent with the usual practice in FDA, where regularity assumptions (continuity, boundedness, square-integrability) are typically made on data. In this context, the appealing properties of the Bayes space approach discussed in this work (resulting, e.g., from the orthogonal decomposition of the bivariate density into independent and interactive parts) are seen as key factors potentially fostering the development and interpretation of new FDA methods for multivariate distributional observations. In this sense, the methodology presented in the paper offers an alternative viewpoint to the standard copula theory by providing an orthogonal decomposition of bivariate PDFs, while opening a novel frontier for analysing samples of bivariate densities using methods of FDA.

The remaining part of this work is organized as follows. In Section 2, the Bayes space methodology is recalled from [40], with particular reference to bivariate densities. This enables us to develop an orthogonal decomposition of bivariate densities into independent and interactive parts, thoroughly discussed in Section 3 and demonstrated with simulated truncated Gaussian densities in Section 4. In Section 5, a spline representation for bivariate densities mapped into the $L^2$ space is introduced; such a representation is relevant for processing raw data and for developing efficient computational methods. In Section 6, the theoretical framework is applied to a time series of bivariate densities coming from an anthropometric cross-sectional study. The final Section 7 concludes with some overview comments and further perspectives.

## 2 PDFs as elements of a Bayes space

Bayes spaces are designed to provide a geometrical representation for density functions characterized by the property of scale invariance [40]. The latter property assumes that, given a domain $\Omega$ and a positive real multiple $c > 0$, two proportional positive functions $f$ and $g$ (i.e., such that $g = c \cdot f$ on $\Omega$) carry essentially the same, relative information [40]. This also follows the common strategy used in Bayesian statistics, where multiplying factors are typically dropped from computations, as these are not essential to the definition of the distributions at hand. Note that the scale invariance of a density $f$ is a direct consequence of the same property of the associated measure $\mu$, i.e., of the $\sigma$-finite measure such that $f = \mathrm{d}\mu/\mathrm{d}P$ for a reference measure $P$. In this context, we refer to the so-called $B$-equivalence of measures (and densities): two measures $\mu$ and $\nu$ are $B$-equivalent if they are proportional, i.e., there exists a positive real multiple $c$ such that $\mu(A) = c\,\nu(A)$ for any $A \in \mathcal{A}$, $\mathcal{A}$ being a sigma-algebra on $\Omega$.
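As a minimal numerical illustration of $B$-equivalence (our own sketch, using a density discretized on a grid): two proportional positive functions have identical clr transforms, since the multiplying constant cancels when the mean of the log is subtracted.

```python
import numpy as np

# Scale invariance in a discrete sketch: a density and a positive multiple of
# it carry the same relative information, so their clr transforms coincide.
x = np.linspace(-1.0, 1.0, 100)
f = np.exp(-x**2)                # unnormalized density sampled on a grid
g = 7.3 * f                      # proportional, hence B-equivalent, density

clr = lambda h: np.log(h) - np.log(h).mean()
print(np.max(np.abs(clr(f) - clr(g))))   # ≈ 0: same element of the Bayes space
```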

Given a $\sigma$-finite measure $P$, the Bayes space $B^2(P)$ is a space of $B$-equivalence classes of $\sigma$-finite positive measures with square-integrable log-density w.r.t. $P$, i.e.,

$$B^2(P) = \left\{\mu : \ln\frac{\mathrm{d}\mu}{\mathrm{d}P} \in L^2(P)\right\}.$$

From the practical point of view, an important role is played by the reference measure $P$, as thoroughly investigated in [36]. The choice of the reference measure determines a weighting of the domain $\Omega$ of the PDF, which can be used to give more relevance to certain regions of $\Omega$ when conducting FDA, according to the purpose of the analysis [40, 8]. Given that the weighting of the domain is not of primary interest here and one would intuitively resort simply to the Lebesgue reference measure, the discussion on $P$ might seem somehow lateral to the main focus of this work. Nevertheless, as we will see already in Theorem 3.2, the scale of $P$ indeed matters for a meaningful decomposition of a bivariate density into independent and interactive parts. For this reason, we here limit ourselves to mentioning two key points which shall be useful in the following. First, in general, an analysis based on a reference measure $P$ does not provide the same results as an analysis based on a rescaled reference $c\,P$, $c \neq 1$: using $P$ or $c\,P$ typically leads to a difference in the scale of the result. Second, to change the reference measure from $P$ to a measure $Q$ with strictly positive $P$-density, the well-known chain rule can be used. For a generic measure $\mu$ one has

$$\frac{\mathrm{d}\mu}{\mathrm{d}Q} = \frac{\mathrm{d}\mu}{\mathrm{d}P}\cdot\frac{\mathrm{d}P}{\mathrm{d}Q}.$$

The Bayes space, as described above, can also be defined for the case when the domain is a Cartesian product of two domains $\Omega_X$ and $\Omega_Y$, i.e., $\Omega = \Omega_X \times \Omega_Y$. In this case, the reference measure can be decomposed as a product measure $P = P_X \times P_Y$ and the Hilbert space structure of the Bayes space [40, 8] can be built accordingly. In this case, the operations of perturbation and powering can be defined, for any two bivariate densities $f, g$ with respect to $P$, i.e., $f, g \in B^2(P)$, and a real constant $\alpha$, as

$$(f \oplus g)(x,y) =_{B^2(P)} f(x,y)\cdot g(x,y) \qquad\text{and}\qquad (\alpha \odot f)(x,y) =_{B^2(P)} f(x,y)^{\alpha},$$

respectively. The lower index in means that the right hand side of the equations can be arbitrarily rescaled without altering the relative information that the resulting density in contains. The Hilbert space structure is completed by defining the inner product,

$$\langle f,g\rangle_{B^2(P)} = \frac{1}{2P(\Omega)}\iint_{\Omega}\iint_{\Omega} \ln\frac{f(x,y)}{f(s,t)}\,\ln\frac{g(x,y)}{g(s,t)}\,\mathrm{d}P(x,y)\,\mathrm{d}P(s,t) = \frac{1}{2P(\Omega)}\int_{\Omega_X}\int_{\Omega_Y}\int_{\Omega_X}\int_{\Omega_Y} \ln\frac{f(x,y)}{f(s,t)}\,\ln\frac{g(x,y)}{g(s,t)}\,\mathrm{d}P_X(x)\,\mathrm{d}P_Y(y)\,\mathrm{d}P_X(s)\,\mathrm{d}P_Y(t), \quad (1)$$

which implies in the usual way also the norm and the distance,

$$\|f\|_{B^2(P)} = \sqrt{\langle f,f\rangle_{B^2(P)}}, \qquad d_{B^2(P)}(f,g) = \|f \ominus g\|_{B^2(P)}, \quad (2)$$

where $\ominus$ stands for the perturbation-subtraction of densities, $f \ominus g = f \oplus ((-1) \odot g)$. Here, the definition of the inner product (1) is presented according to [8]. While the scale of the reference measure does not have any impact on the operations of perturbation and powering, it does influence the inner product, because rescaling the reference corresponds to a shrinkage (or expansion) of the Bayes space (for details, see [36]).

The usual strategy when dealing with Bayes spaces [40, 1, 13] is not to process densities directly in the original space, but to map them into the standard $L^2$ space, where most of the widely-used methods of functional data analysis (FDA, [31]) can be employed. The clr transformation of a bivariate density $f \in B^2(P)$ is a real function $f_c \in L^2(P)$, defined – using Fubini's theorem – as

$$f_c(x,y) = \ln f(x,y) - \frac{1}{P(\Omega)}\iint_{\Omega}\ln f(x,y)\,\mathrm{d}P = \ln f(x,y) - \frac{1}{P(\Omega)}\int_{\Omega_X}\int_{\Omega_Y}\ln f(x,y)\,\mathrm{d}P_X\,\mathrm{d}P_Y. \quad (3)$$

Similarly as for perturbation and powering, the scale of $P$ does not play any role in the clr transformation above either. On the other hand, one should note that the resulting function $f_c$ is expressed with respect to the reference $P$. As a consequence, using any measure other than the Lebesgue one leads to clr transformations defined over a weighted $L^2$ space [8]. Moreover, one should also take into account the zero-integral constraint of clr transformed densities, i.e.,

$$\int_{\Omega_X}\int_{\Omega_Y} f_c(x,y)\,\mathrm{d}P_X\,\mathrm{d}P_Y = 0. \quad (4)$$

In the following, we shall indicate by $L^2_0(P)$ the subspace of $L^2(P)$ of (equivalence classes of) functions having zero integral; in particular, one clearly has $f_c \in L^2_0(P)$. Nevertheless, previous works focused on the univariate case demonstrate that this constraint usually does not represent any serious obstacle to the application of FDA methods, especially if a proper spline representation of the densities is used [13, 16, 37]. Since a reliable and flexible spline representation forms a cornerstone of a large number of computational methods for FDA [31], we shall pay special attention to developing a bivariate B-spline basis suited to represent clr transformations of bivariate densities in Section 5.
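The clr transformation and the constraint (4) are straightforward to approximate on a grid. Below is a minimal numeric sketch (our own helper names, not the paper's code): since the clr subtracts the mean of the log-density, the discrete analogue of the zero-integral constraint holds by construction.

```python
import numpy as np

# Discrete sketch of the bivariate clr transformation: subtract the mean of
# the log-density over the grid, the discrete analogue of (3).
def clr2d(f):
    """clr transform of a positive bivariate density sampled on a grid."""
    logf = np.log(f)
    return logf - logf.mean()

# unnormalized truncated Gaussian on a grid (the scale is irrelevant in B^2(P))
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x, indexing="ij")
rho, s1, s2 = 0.5, 1.0, 0.8
f = np.exp(-((X / s1)**2 - 2 * rho * X * Y / (s1 * s2) + (Y / s2)**2)
           / (2 * (1 - rho**2)))

fc = clr2d(f)
print(abs(fc.mean()))   # ≈ 0: discrete version of the constraint (4)
```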

## 3 Decomposition of bivariate densities

One of the key goals in probability theory is to study the dependence structure between two random variables. A systematic approach to the analysis of dependence structure is represented by the theory of copulas [24], first introduced by Sklar [34]. The well-known Sklar's theorem provides a decomposition of any PDF into its interactive and independent parts, the latter being built as the product of the respective marginal PDFs. Relying on the Bayes space methodology allows one to provide a similar decomposition which is now, however, orthogonal. This important property enables an elegant geometrical representation of the decomposition, and a powerful probabilistic interpretation if a normalized reference measure is used, with direct consequences from the statistical viewpoint. For example, the proposed decomposition allows one to derive a measure of dependence, called simplicial deviance, defined as the squared norm of the density expressing (solely) the relationship between the two variables (factors).

The orthogonal decomposition of bivariate densities is grounded in a novel definition of marginals, named geometric marginals, which are built by marginalizing the bivariate clr transformation as follows. Given $f \in B^2(P)$ and its clr transformation $f_c$, we define the clr marginals as

$$f_{cX}(x) = \int_{\Omega_Y} f_c(x,y)\,\mathrm{d}P_Y = \int_{\Omega_Y}\ln f(x,y)\,\mathrm{d}P_Y - \frac{P_Y(\Omega_Y)}{P(\Omega)}\int_{\Omega_X}\int_{\Omega_Y}\ln f(x,y)\,\mathrm{d}P_X\,\mathrm{d}P_Y \quad (5)$$

and

$$f_{cY}(y) = \int_{\Omega_X} f_c(x,y)\,\mathrm{d}P_X = \int_{\Omega_X}\ln f(x,y)\,\mathrm{d}P_X - \frac{P_X(\Omega_X)}{P(\Omega)}\int_{\Omega_X}\int_{\Omega_Y}\ln f(x,y)\,\mathrm{d}P_X\,\mathrm{d}P_Y, \quad (6)$$

respectively. It is easily seen that $f_{cX} \in L^2_0(P_X)$ and $f_{cY} \in L^2_0(P_Y)$, where $L^2_0$ stands for the subspace of $L^2$ whose elements integrate to zero. We define the geometric marginals $f_X$ and $f_Y$ as the elements of $B^2(P_X)$ and $B^2(P_Y)$ associated with the clr-marginals $f_{cX}$, $f_{cY}$, respectively, i.e.,

$$f_X(x) =_{B^2(P_X)} \exp\{f_{cX}(x)\} =_{B^2(P_X)} \exp\left\{\int_{\Omega_Y}\ln f(x,y)\,\mathrm{d}P_Y\right\}, \qquad f_Y(y) =_{B^2(P_Y)} \exp\{f_{cY}(y)\} =_{B^2(P_Y)} \exp\left\{\int_{\Omega_X}\ln f(x,y)\,\mathrm{d}P_X\right\}. \quad (7)$$

In the following, the terms marginal, $X$-marginal and $Y$-marginal will always refer to the geometric notion of marginals given in (7).

In probability theory, independence of random variables corresponds to the possibility of expressing a joint density as the product of its marginals. In a setting where the latter are defined as the geometric marginals (7), the independent and interactive parts of $f \in B^2(P)$ can be defined, respectively, as

$$f_{\mathrm{ind}}(x,y) =_{B^2(P)} f_X(x)\,f_Y(y), \qquad (x,y) \in \Omega, \quad (8)$$

and

$$f_{\mathrm{int}}(x,y) = (f \ominus f_{\mathrm{ind}})(x,y) =_{B^2(P)} \frac{f(x,y)}{f_X(x)\,f_Y(y)}, \qquad (x,y) \in \Omega, \quad (9)$$

where $f_X$ and $f_Y$ are the geometric marginals defined above. The first and foremost important property of the proposed decomposition

$$f(x,y) = f_{\mathrm{ind}}(x,y) \oplus f_{\mathrm{int}}(x,y) \quad (10)$$

for a bivariate density $f$ is the orthogonality of its parts. In the following, the geometric marginals will be formally taken as bivariate functions, i.e., $f_X(x,y) = f_X(x)$ and $f_Y(x,y) = f_Y(y)$, and considered as elements of $B^2(P)$; similarly for their clr counterparts. This enables one, among others, to express the independence density as the sum (perturbation) of the geometric marginals, i.e.,

$$f_{\mathrm{ind}}(x,y) = f_X(x,y) \oplus f_Y(x,y). \quad (11)$$
###### Theorem 3.1.

For the independent and interactive parts of a bivariate density $f \in B^2(P)$, it holds that
(i) $\langle f_{\mathrm{ind}}, f_{\mathrm{int}}\rangle_{B^2(P)} = 0$, or, equivalently, that
(ii) $\langle f_{c,\mathrm{ind}}, f_{c,\mathrm{int}}\rangle_{L^2(P)} = 0$.

The proof of Theorem 3.1 – as well as those of the following theorems – is reported in the Supplementary Material. Note that, from the orthogonality of the decomposition (10), the Pythagorean theorem follows directly, i.e., $\|f\|^2_{B^2(P)} = \|f_{\mathrm{ind}}\|^2_{B^2(P)} + \|f_{\mathrm{int}}\|^2_{B^2(P)}$.
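For intuition, the decomposition and Theorem 3.1 lend themselves to a direct numerical check. The sketch below is our own (a uniform reference on a regular grid, so that integrals reduce to means): it builds the clr marginals, the independent and interaction parts, and verifies orthogonality and the Pythagorean identity.

```python
import numpy as np

# Discrete check of the orthogonal decomposition (uniform reference on a grid).
x = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, x, indexing="ij")
rho, s1, s2 = 0.5, 1.0, 0.8
logf = -((X / s1)**2 - 2 * rho * X * Y / (s1 * s2) + (Y / s2)**2) / (2 * (1 - rho**2))
fc = logf - logf.mean()                  # clr transform

fcX = fc.mean(axis=1)                    # clr geometric marginal in x
fcY = fc.mean(axis=0)                    # clr geometric marginal in y
fc_ind = fcX[:, None] + fcY[None, :]     # clr of the independent part
fc_int = fc - fc_ind                     # clr of the interaction part

inner = lambda a, b: (a * b).mean()      # inner product, normalized reference
print(inner(fc_ind, fc_int))                                          # ≈ 0
print(inner(fc, fc) - inner(fc_ind, fc_ind) - inner(fc_int, fc_int))  # ≈ 0
```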

A further important property of independence densities is the following. Call arithmetic marginals the usual marginal distributions (a similar notation being used in the discrete case of compositional tables [7])

$$f_{X,a}(x) = \int_{\Omega_Y} f(x,y)\,\mathrm{d}P_Y, \qquad f_{Y,a}(y) = \int_{\Omega_X} f(x,y)\,\mathrm{d}P_X.$$

It is clear that, if the theory were built on arithmetic marginals, the above decompositions (10) and (11) together with the statement of Theorem 3.1 would not be achieved. On the other hand, there is an interesting link between the two types of marginals (geometric or arithmetic) when the independent part is concerned. Indeed, the following result states that, whenever the random variables are independent, the bivariate PDF coincides with its independent part defined in (8). In this case, if the reference measure is a probability measure (i.e., it is normalized), the arithmetic and the geometric marginals coincide.

###### Theorem 3.2.

Let $f$ be an independence density and let the reference measure $P = P_X \times P_Y$ be the product measure of probability measures $P_X$ and $P_Y$. Then the arithmetic and geometric marginals of $f$ coincide.

As such, the independent part built through the geometric marginals enables one to fully capture the joint distribution of two random variables when these are independent.

The next theorem states the mutual orthogonality between the geometric marginals ($f_X$ and $f_Y$) and the interaction density $f_{\mathrm{int}}$.

###### Theorem 3.3.

The $X$-marginal $f_X$ and the $Y$-marginal $f_Y$ are orthogonal with respect to the Bayes space $B^2(P)$, i.e., $\langle f_X, f_Y\rangle_{B^2(P)} = 0$. Moreover, the marginals are also orthogonal to the interaction density, i.e., $\langle f_X, f_{\mathrm{int}}\rangle_{B^2(P)} = 0$ and $\langle f_Y, f_{\mathrm{int}}\rangle_{B^2(P)} = 0$.

The relations $\langle f_X, f_Y\rangle_{B^2(P)} = 0$, $\langle f_X, f_{\mathrm{int}}\rangle_{B^2(P)} = 0$ and $\langle f_Y, f_{\mathrm{int}}\rangle_{B^2(P)} = 0$ nicely illustrate that the $X$- and $Y$-marginals of the density $f$ represent its orthogonal projections. In addition, the Pythagorean theorem between the independence density and its projections holds, $\|f_{\mathrm{ind}}\|^2_{B^2(P)} = \|f_X\|^2_{B^2(P)} + \|f_Y\|^2_{B^2(P)}$.

As a consequence of Theorems 3.2-3.3, one can conclude that, in case of independence, arithmetic and geometric marginals coincide, and the interaction part is null (i.e., it is the neutral element of perturbation). More generally, the next result states that the geometric marginals are completely determined by the independent part of the bivariate density. Here, the clr marginals of $f_{\mathrm{int}}$ are defined as

$$f_{c,\mathrm{int},X} = \int_{\Omega_Y}\ln f_{\mathrm{int}}(x,y)\,\mathrm{d}P_Y - \frac{P_Y(\Omega_Y)}{P(\Omega)}\iint_{\Omega}\ln f_{\mathrm{int}}(x,y)\,\mathrm{d}P, \qquad f_{c,\mathrm{int},Y} = \int_{\Omega_X}\ln f_{\mathrm{int}}(x,y)\,\mathrm{d}P_X - \frac{P_X(\Omega_X)}{P(\Omega)}\iint_{\Omega}\ln f_{\mathrm{int}}(x,y)\,\mathrm{d}P,$$

and the geometric marginals $f_{\mathrm{int},X}$, $f_{\mathrm{int},Y}$ are the associated densities in $B^2(P_X)$ and $B^2(P_Y)$.

###### Theorem 3.4.

Whenever the reference measure $P = P_X \times P_Y$ is the product measure of probability measures $P_X$, $P_Y$, the geometric marginals $f_{\mathrm{int},X}$, $f_{\mathrm{int},Y}$ of the interaction part coincide with the neutral element of perturbation, i.e., for any $f$ in $B^2(P)$ one has

$$f \oplus f_{\mathrm{int},X} = f; \qquad f \oplus f_{\mathrm{int},Y} = f. \quad (12)$$

Theorem 3.4 motivates the name interaction density. Indeed, decomposition (10) applied to $f_{\mathrm{int}}$ reads

$$f_{\mathrm{int}} = 0 \oplus f_{\mathrm{int}},$$

where $0$ denotes the neutral element of perturbation (with respect to the probability reference measure $P$). Accordingly, the independent part of an interaction density is the null element $0$. On the other hand, for an independence density, the interaction part is null. More generally, for any bivariate density $f$, the nearest independence density is $f_{\mathrm{ind}}$, and the distance of $f$ from it is precisely the norm of $f_{\mathrm{int}}$. The squared norm $\|f_{\mathrm{int}}\|^2_{B^2(P)}$ can thus be taken as a proper measure of dependence. For consistency with the discrete case [7], we shall name it simplicial deviance. Dividing the simplicial deviance by the squared norm of the bivariate density, one obtains a relative measure of dependence, hereafter named relative simplicial deviance,

$$R^2(f) = \frac{\|f_{\mathrm{int}}\|^2_{B^2(P)}}{\|f\|^2_{B^2(P)}}, \qquad 0 \le R^2(f) \le 1. \quad (13)$$

Note that $R^2(f)$ captures the amount of information contained in the interaction part relative to the overall information within the density. If $R^2(f)$ is small ($R^2(f) \approx 0$), most of the density is described by the independent part, and vice versa. A further advantage of the use of $R^2(f)$ is its relative character: $R^2(f)$ does not rely on the norm of the bivariate density, which in practice might be influenced by the sample size of the data being aggregated into the density.
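The relative simplicial deviance is easy to compute on a grid (our own sketch, under the same discrete uniform-reference conventions, not the paper's code); for a truncated bivariate Gaussian it vanishes at $\rho = 0$ and grows with $|\rho|$.

```python
import numpy as np

# Relative simplicial deviance (13) on a grid: the fraction of the squared
# norm of the clr density carried by the interaction part.
def rel_simplicial_deviance(logf):
    fc = logf - logf.mean()
    fc_ind = fc.mean(axis=1)[:, None] + fc.mean(axis=0)[None, :]
    fc_int = fc - fc_ind
    return (fc_int**2).mean() / (fc**2).mean()

x = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = [rel_simplicial_deviance(-(X**2 - 2 * r * X * Y + Y**2) / (2 * (1 - r**2)))
      for r in (0.0, 0.3, 0.6, 0.9)]
print(r2)   # ≈ 0 at rho = 0, then increasing with rho
```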

Further, it can be proven that the interaction part is marginal invariant, i.e., when the bivariate density is perturbed marginally (i.e., by marginal densities $g_X$ and $g_Y$), the interaction part is not changed. This important property [42] is formulated in the next theorem.

###### Theorem 3.5.

Let $P$ be a probability measure, $f \in B^2(P)$ a bivariate density with the orthogonal decomposition $f = f_{\mathrm{ind}} \oplus f_{\mathrm{int}}$, and $g_X$, $g_Y$ marginal densities, in the sense that these latter are bivariate densities in $B^2(P)$, constant in one argument, i.e.,

$$g_X(x,y) = \tilde{g}_X(x), \qquad g_Y(x,y) = \tilde{g}_Y(y), \qquad (x,y) \in \Omega.$$

Then, the marginally perturbed density $h = g_X \oplus g_Y \oplus f$ has the orthogonal decomposition $h = h_{\mathrm{ind}} \oplus h_{\mathrm{int}}$, where $h_{\mathrm{ind}} = g_X \oplus g_Y \oplus f_{\mathrm{ind}}$ and $h_{\mathrm{int}} = f_{\mathrm{int}}$.
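Theorem 3.5 also lends itself to a direct numerical check (again our own discrete sketch with a uniform reference on a grid): adding any log-marginal terms to the log-density leaves the interaction part untouched.

```python
import numpy as np

# Marginal invariance of the interaction part: perturbing f by densities that
# depend on x alone and on y alone does not change f_int.
def interaction(logf):
    fc = logf - logf.mean()
    return fc - fc.mean(axis=1)[:, None] - fc.mean(axis=0)[None, :]

x = np.linspace(-1.0, 1.0, 150)
X, Y = np.meshgrid(x, x, indexing="ij")
logf = -(X**2 - X * Y + Y**2)        # some bivariate log-density
logh = logf + np.cos(X) - np.abs(Y)  # marginally perturbed: h = gX + gY + f (logs)
print(np.max(np.abs(interaction(logh) - interaction(logf))))   # ≈ 0
```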

## 4 An example with a truncated Gaussian Density

For the sake of illustration, we present an example of application of the proposed framework to densities in the Gaussian family, where computations can be made explicitly. Given that, in general, one may not expect to be able to perform this type of computation explicitly, in Section 5 we develop a B-spline basis representation for bivariate densities, from which the interactive and independent parts can be directly computed. We first consider a univariate Gaussian density, similarly as in [13, 4]. For the sake of simplicity, we set the reference measure to the Lebesgue measure $\lambda$, and consider a zero-mean Gaussian density with variance $\sigma^2$, truncated over the interval $I = [-T, T]$, $T > 0$. In this case, the density reads

$$f(x) =_{B^2(\lambda)} \exp\left\{-\frac{x^2}{2\sigma^2}\right\}, \qquad x \in I.$$

The (univariate) clr-transformation of $f$ is defined as

$$f_c(x) = -\frac{x^2}{2\sigma^2} + \frac{T^2}{6\sigma^2}, \qquad x \in I.$$

Increasing the dimensionality of the sample space, we consider a zero-mean bivariate Gaussian density with respect to the (product) Lebesgue measure $P = \lambda \times \lambda$, truncated on a rectangular domain $\Omega = I_1 \times I_2$, with $I_1 = I_2 = [-T, T]$. In this case, the density is defined, for $(x,y) \in \Omega$, as

$$f(x,y) =_{B^2(P)} \exp\left\{-\frac{1}{2}\,\mathbf{x}^{\top}\Sigma^{-1}\mathbf{x}\right\} = \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right]\right\},$$

with $\mathbf{x} = (x,y)^{\top}$, $\Sigma$ the covariance matrix with diagonal entries $\sigma_1^2$, $\sigma_2^2$, and $\rho$ being the correlation coefficient. In this setting, the clr transformation of $f$ is

$$f_c(x,y) = -\frac{1}{2(1-\rho^2)}\left[\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right] + \frac{T^2}{6(1-\rho^2)}\left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right).$$

Marginalizing the clr transformation with respect to $y$ and $x$, respectively, yields the clr-marginals

$$f_{cX}(x) = -\frac{1}{2(1-\rho^2)}\cdot\frac{2T x^2}{\sigma_1^2} + \frac{T^3}{3(1-\rho^2)}\cdot\frac{1}{\sigma_1^2}, \qquad x \in I_1,$$
$$f_{cY}(y) = -\frac{1}{2(1-\rho^2)}\cdot\frac{2T y^2}{\sigma_2^2} + \frac{T^3}{3(1-\rho^2)}\cdot\frac{1}{\sigma_2^2}, \qquad y \in I_2.$$

On this basis, the geometric marginals are easily obtained – following (5) and (6) – as

$$f_X(x) =_{B^2(P_X)} \exp\left\{-\frac{1}{2(1-\rho^2)}\cdot\frac{2T x^2}{\sigma_1^2}\right\}, \qquad x \in I_1,$$
$$f_Y(y) =_{B^2(P_Y)} \exp\left\{-\frac{1}{2(1-\rho^2)}\cdot\frac{2T y^2}{\sigma_2^2}\right\}, \qquad y \in I_2.$$

Note that both marginals still belong to a Gaussian family, with variances $\tilde{\sigma}_i^2 = \sigma_i^2(1-\rho^2)/(2T)$, $i = 1, 2$.

Given the marginals, the independent and interaction parts are built as in (8) and (9), leading to

$$f_{\mathrm{ind}}(x,y) =_{B^2(P)} \exp\left\{-\frac{2T}{2(1-\rho^2)}\left[\frac{x^2}{\sigma_1^2} + \frac{y^2}{\sigma_2^2}\right]\right\};$$
$$f_{\mathrm{int}}(x,y) =_{B^2(P)} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[(1-2T)\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + (1-2T)\frac{y^2}{\sigma_2^2}\right]\right\}.$$

The clr transformations of the latter parts are found as

$$f_{c,\mathrm{ind}}(x,y) = -\frac{T}{1-\rho^2}\left(\frac{x^2}{\sigma_1^2} + \frac{y^2}{\sigma_2^2}\right) + \frac{T^3}{3(1-\rho^2)}\left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right);$$
$$f_{c,\mathrm{int}}(x,y) = -\frac{1}{2(1-\rho^2)}\left[(1-2T)\frac{x^2}{\sigma_1^2} - \frac{2\rho xy}{\sigma_1\sigma_2} + (1-2T)\frac{y^2}{\sigma_2^2}\right] + \frac{T^2(1-2T)}{6(1-\rho^2)}\left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right).$$

Note that, in case of independence ($\rho = 0$),

$$f_{c,\mathrm{int}}(x,y) = \frac{2T-1}{6}\left(\frac{3x^2 - T^2}{\sigma_1^2} + \frac{3y^2 - T^2}{\sigma_2^2}\right),$$

which is non-zero. This does not stand in contradiction with Theorem 3.2, since the previous computations indeed refer to the Lebesgue measure, which is not a probability measure (it is not normalized). Analogous computations made in the case of a uniform reference measure (i.e., the product measure built upon uniform measures on $I_1$ and $I_2$) lead to a null $f_{c,\mathrm{int}}$ for $\rho = 0$. Indeed, in this case one has that the clr geometric marginals are defined as

$$f_{cX}(x) = -\frac{1}{2(1-\rho^2)}\cdot\frac{x^2}{\sigma_1^2} + \frac{T^2}{6(1-\rho^2)}\cdot\frac{1}{\sigma_1^2}, \qquad x \in I_1, \quad (14)$$
$$f_{cY}(y) = -\frac{1}{2(1-\rho^2)}\cdot\frac{y^2}{\sigma_2^2} + \frac{T^2}{6(1-\rho^2)}\cdot\frac{1}{\sigma_2^2}, \qquad y \in I_2, \quad (15)$$

leading to the following forms for the independent and interaction clr-densities

$$f_{c,\mathrm{ind}}(x,y) = -\frac{1}{2(1-\rho^2)}\left(\frac{x^2}{\sigma_1^2} + \frac{y^2}{\sigma_2^2}\right) + \frac{T^2}{6(1-\rho^2)}\left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right); \qquad f_{c,\mathrm{int}}(x,y) = \frac{\rho\,xy}{(1-\rho^2)\,\sigma_1\sigma_2}.$$

It is then clear that the interaction part precisely captures the term of $f_c$ depending on the mixed polynomial $xy$ (i.e., the interaction between $x$ and $y$), and its magnitude is controlled by the magnitude of $\rho$. In case of independence ($\rho = 0$), $f_{c,\mathrm{int}}$ is null, and $f = f_{\mathrm{ind}}$. Moreover, in this case, the geometric marginals and the arithmetic marginals coincide. Note that the former are found by normalizing the exponential of the first terms of $f_{cX}$ and $f_{cY}$ in (14)-(15). In the degenerate case of a perfect linear dependence between the marginal variables ($\rho = \pm 1$), the decomposition is indeed degenerate as well. In fact, for $\rho = \pm 1$ not only is $f_{c,\mathrm{int}}$ not defined, but the densities involved do not belong to the corresponding Bayes spaces (their logarithms are not square-integrable).
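The closed form of the interaction clr-density under the uniform reference can be confirmed numerically (our own grid-based sketch, not the paper's code):

```python
import numpy as np

# Grid check that the interaction clr-density of the truncated Gaussian equals
# rho * x * y / ((1 - rho^2) * sigma1 * sigma2) under the uniform reference.
T, n = 1.0, 400
x = np.linspace(-T, T, n)
X, Y = np.meshgrid(x, x, indexing="ij")
rho, s1, s2 = 0.5, 1.0, 0.8
logf = -((X / s1)**2 - 2 * rho * X * Y / (s1 * s2) + (Y / s2)**2) / (2 * (1 - rho**2))
fc = logf - logf.mean()
fc_int = fc - fc.mean(axis=1)[:, None] - fc.mean(axis=0)[None, :]
closed = rho * X * Y / ((1 - rho**2) * s1 * s2)
err = np.max(np.abs(fc_int - closed))
print(err)   # ≈ 0: the symmetric grid reproduces the closed form
```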

Figure 1 reports the contour plots associated with the bivariate truncated Gaussian density considered above, when the reference measure is the Lebesgue measure. Figure 2 reports the analogous contour plots when the quantities are computed w.r.t. a uniform measure. For the sake of clarity, quantities referred to the uniform reference are reported with a dedicated subscript in Figure 2. The figures clearly show that the scale of the reference measure indeed plays a role, particularly for the shape of the interaction part (Figures 1c-f and 2c-f). This is in agreement with the conclusions of [36], where the effect of the reference measure on the geometry of (univariate) Bayes spaces is discussed. Given the statistical consequences of Theorems 3.2 and 3.4, the representation based on a normalized reference shall be preferred here. In the latter case (Figure 2), the independent part represents the (unique) independence distribution which would be built upon the geometric marginals, each being $B$-equivalent to a truncated Gaussian density. The value of the relative simplicial deviance represents the proportion of the norm of $f$ which can be attributed to the interaction part (i.e., to the deviation from independence). In this example, such proportion is 51%, indicating that the dependence between the two variables is indeed relevant in the definition of the bivariate distribution.

## 5 A spline representation for bivariate densities and their decompositions

Computational methods of FDA for the statistical analysis of datasets of bivariate densities are often based on basis representations of the data. In this section, we develop a spline representation for bivariate densities, based on a B-spline approximation of the clr transformed data. This allows for the smoothing of raw distributional data, and for the direct computation of the geometric marginals, of the independence and interaction parts, as well as of the relative simplicial deviance. On the one hand, this avoids the necessity of developing splines directly in $B^2(P)$; on the other hand, it implies that the zero-integral constraint needs to be taken into account.

This goal is here achieved by using tensor product splines [3, 5, 32], which are an established tool in the field and, in principle, enable a generalization to higher dimensions. However, for the purpose of this paper and for ease of notation, we shall focus on bivariate splines only. We also avoid considering a general reference measure and focus on the Lebesgue measure (or its normalized counterpart, the uniform measure), although the case of a generic reference can be reformulated as well [36]. We shall base our developments on [19, 18] – the same representation being used for the approximation of PDFs, e.g., in [16, 13, 37, 21]; this setting is recalled in the Supplementary Material. Hereafter in this section, we limit ourselves to presenting the key points and results of our construction, leaving the details and proofs to the Supplementary Material.

We consider two strictly increasing sequences of knots,

$$\Delta\lambda := \{\lambda_i\}_{i=0}^{g+1}, \qquad \lambda_0 = a < \lambda_1 < \dots < \lambda_g < \lambda_{g+1} = b, \quad (16)$$
$$\Delta\mu := \{\mu_j\}_{j=0}^{h+1}, \qquad \mu_0 = c < \mu_1 < \dots < \mu_h < \mu_{h+1} = d, \quad (17)$$

and denote by $\mathcal{S}_{kl}(\Delta\lambda, \Delta\mu)$ the vector space of tensor product splines on $\Omega = [a,b] \times [c,d]$ of degree $k$ in $x$ and $l$ in $y$, with knots $\Delta\lambda$ in the $x$-direction and $\Delta\mu$ in the $y$-direction. As usual in spline theory, to get a unique representation additional (coincident) knots are considered, namely

$$\lambda_{-k} = \dots = \lambda_{-1} = \lambda_0 = a, \qquad b = \lambda_{g+1} = \lambda_{g+2} = \dots = \lambda_{g+k+1}, \quad (18)$$
$$\mu_{-l} = \dots = \mu_{-1} = \mu_0 = c, \qquad d = \mu_{h+1} = \mu_{h+2} = \dots = \mu_{h+l+1}. \quad (19)$$

The general goal here explored is that of smoothing the values $f_{ij}$ at points $(x_i, y_j)$, $i = 1, \dots, n$, $j = 1, \dots, m$, using a tensor-product spline. The values $f_{ij}$ will be the clr-transformation of a discrete representation of the bivariate densities (i.e., histogram data), as showcased in Section 6. For the strictly increasing sequences of knots (16) and (17), a smoothing parameter $\alpha \in (0,1)$ and arbitrary derivative orders $u$ and $v$, we aim to find a spline $s_{kl}$ which minimizes the functional

$$J_{uv}(s_{kl}) = \alpha\sum_{i=1}^{n}\sum_{j=1}^{m}\left[f_{ij} - s_{kl}(x_i, y_j)\right]^2 + (1-\alpha)\iint_{\Omega}\left[s_{kl}^{(u,v)}(x,y)\right]^2\,\mathrm{d}x\,\mathrm{d}y, \quad (20)$$

where the upper index $(u,v)$ stands for the mixed partial derivative, specifically

$$s_{kl}^{(u,v)}(x,y) = \frac{\partial^{u+v}}{\partial x^u\,\partial y^v}\, s_{kl}(x,y).$$

Clearly, the choice of the parameter $\alpha$ and of the derivative orders $u$, $v$ affects the smoothness of the resulting spline. For the optimal choice of $\alpha$, the generalized cross-validation (GCV) criterion is used here, similarly as in [17].

Given that we aim to reconstruct clr-transformed PDFs, the zero-integral constraint needs to be incorporated into the tensor product splines. Accordingly, we here aim to find a spline $s_{kl}$, in the tensor product spline space introduced above, which minimizes the functional (20) and satisfies the additional condition

$$\iint_{\Omega} s_{kl}(x,y)\,\mathrm{d}x\,\mathrm{d}y = 0. \quad (21)$$
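For illustration, problem (20)-(21) can be sketched numerically. The following is a minimal sketch under simplifying assumptions of ours, not the paper's implementation: the derivative penalty in (20) is replaced by a P-spline-style difference penalty on the coefficients (in the spirit of Eilers and Marx), the parameter $\alpha$ is fixed rather than chosen by GCV, and constraint (21) is imposed through a Lagrange multiplier; the name `tensor_smooth` and all parameter choices are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def tensor_smooth(x, y, F, knots_x, knots_y, k=3, l=3, alpha=0.99, order=2):
    """Penalized tensor-product B-spline fit of F[i, j] = f_ij at (x_i, y_j),
    subject to a zero-integral constraint (a sketch, not the paper's method)."""
    tx = np.r_[[knots_x[0]] * k, knots_x, [knots_x[-1]] * k]  # coincident knots, cf. (18)
    ty = np.r_[[knots_y[0]] * l, knots_y, [knots_y[-1]] * l]  # cf. (19)
    Bx = BSpline.design_matrix(x, tx, k).toarray()            # collocation in x
    By = BSpline.design_matrix(y, ty, l).toarray()            # collocation in y
    px, py = Bx.shape[1], By.shape[1]
    B = np.kron(By, Bx)                                       # model matrix for vec(b)
    Dx = np.diff(np.eye(px), order, axis=0)                   # difference penalties
    Dy = np.diff(np.eye(py), order, axis=0)
    P = np.kron(np.eye(py), Dx.T @ Dx) + np.kron(Dy.T @ Dy, np.eye(px))
    A = alpha * (B.T @ B) + (1 - alpha) * P
    rhs = alpha * (B.T @ F.flatten(order="F"))
    wx = (tx[k + 1:] - tx[:-(k + 1)]) / (k + 1)               # integrals of the B-splines
    wy = (ty[l + 1:] - ty[:-(l + 1)]) / (l + 1)
    c = np.kron(wy, wx)                                       # integral of each tensor basis
    K = np.block([[A, c[:, None]], [c[None, :], np.zeros((1, 1))]])
    b = np.linalg.solve(K, np.r_[rhs, 0.0])[:-1]              # bordered KKT system
    fitted = (B @ b).reshape(len(x), len(y), order="F")
    return b.reshape(px, py, order="F"), fitted, float(c @ b)

# illustrative use: smooth zero-mean clr-like values on a 25 x 25 grid
x = y = np.linspace(-0.99, 0.99, 25)
F = x[:, None]**2 + y[None, :]**2
F -= F.mean()
coef, fitted, integral = tensor_smooth(x, y, F, np.linspace(-1, 1, 8),
                                       np.linspace(-1, 1, 8))
print(abs(integral))               # ≈ 0: constraint (21) holds at the solution
print(np.max(np.abs(fitted - F)))  # small smoothing residual
```

The bordered system solves the constrained least-squares problem exactly; in the paper's setting, the exact derivative penalty and the GCV-driven choice of $\alpha$ would replace the fixed values used here.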

We thus generalize to tensor product splines the idea presented in [16] for one-dimensional splines. To state the solution of the problem, and the conditions for its well-posedness, we need to introduce additional notation, that follows.

We express the tensor spline appearing in (20) as

$$s_{kl}(x,y) = \sum_{i=-k}^{g}\sum_{j=-l}^{h} b_{ij}\, B_i^{k+1}(x)\, B_j^{l+1}(y), \quad (22)$$

where $B_i^{k+1}$, $B_j^{l+1}$ are (univariate) B-splines defined on the extended sequences of knots (18) and (19), respectively, and $b_{ij}$ are the coefficients of this spline. The tensor spline in (22) can be expressed in matrix notation as $s_{kl}(x,y) = \mathbf{B}_{k+1}(x)\,\mathbf{B}\,\mathbf{B}_{l+1}(y)^{\top}$, where $\mathbf{B} = (b_{ij})$ is the matrix of B-spline coefficients, $\mathbf{B}_{k+1}(x)$ is the collocation (row) vector of the B-splines $B_i^{k+1}(x)$ and $\mathbf{B}_{l+1}(y)$ is the collocation (row) vector of the B-splines $B_j^{l+1}(y)$. This admits also a tensor product representation, as

$$s_{kl}(x,y) = \left(\mathbf{B}_{l+1}(y) \otimes \mathbf{B}_{k+1}(x)\right)\mathbf{b}, \quad (23)$$

where $\otimes$ denotes the Kronecker product and $\mathbf{b} = \mathrm{vec}(\mathbf{B})$ is the vectorized form of the matrix $\mathbf{B}$ (columnwise).

Let , with , , , and

$$\mathbf{L}_{xj} = \begin{pmatrix} -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{pmatrix} \in \mathbb{R}^{(g+k+1-j)\times(g+k+2-j)}.$$

Similarly, let