# Intrinsic Riemannian Functional Data Analysis

In this work we develop a novel and foundational framework for analyzing general Riemannian functional data, in particular a new development of tensor Hilbert spaces along curves on a manifold. Such spaces enable us to derive the Karhunen-Loève expansion for Riemannian random processes. The framework also features an approach to comparing objects from different tensor Hilbert spaces, which paves the way for asymptotic analysis in Riemannian functional data analysis. Built upon intrinsic geometric concepts such as vector fields, the Levi-Civita connection and parallel transport on Riemannian manifolds, the developed framework applies not only to Euclidean submanifolds but also to manifolds without a natural ambient space. As applications of this framework, we develop intrinsic Riemannian functional principal component analysis (iRFPCA) and intrinsic Riemannian functional linear regression (iRFLR) that are distinct from their traditional and ambient counterparts. We also provide estimation procedures for iRFPCA and iRFLR, and investigate their asymptotic properties within the intrinsic geometry. Numerical performance is illustrated by simulated and real examples.


## 1 Introduction

Functional data analysis (FDA) has advanced substantially in the past two decades, as the rapid development of modern technology enables collecting more and more data continuously over time. There is a rich literature spanning more than seventy years on this topic, including developments on functional principal component analysis such as Rao (1958); Kleffe (1973); Dauxois, Pousse and Romain (1982); Silverman (1996); Yao, Müller and Wang (2005a); Hall and Hosseini-Nasab (2006); Zhang and Wang (2016), and advances on functional linear regression such as Yao, Müller and Wang (2005b); Hall and Horowitz (2007); Yuan and Cai (2010); Kong et al. (2016), among many others. For a thorough review of the topic, we refer readers to the review article Wang, Chiou and Müller (2016) and the monographs Ramsay and Silverman (2005); Ferraty and Vieu (2006); Hsing and Eubank (2015); Kokoszka and Reimherr (2017) for comprehensive treatments of classic functional data analysis. Although traditionally functional data take values in a vector space, more data of nonlinear structure arise and should be properly handled in a nonlinear space. For instance, trajectories of bird migration are naturally regarded as curves on a sphere, which is a nonlinear Riemannian manifold, rather than in the three-dimensional vector space ℝ³. Another example is the dynamics of brain functional connectivity. The functional connectivity at a time point is represented by a symmetric positive-definite (SPD) matrix. The dynamics should then be modelled as a curve in the space of SPD matrices, endowed with either the affine-invariant metric (Moakher, 2005) or the Log-Euclidean metric (Arsigny et al., 2007) to avoid the "swelling" effect (Arsigny et al., 2007). Both metrics turn the space of SPD matrices into a nonlinear Riemannian manifold.
In this paper, we refer to this type of functional data as Riemannian functional data: functions taking values on a Riemannian manifold and modelled by Riemannian random processes, i.e., we treat Riemannian trajectories as realizations of a Riemannian random process.
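The swelling effect mentioned above can be observed numerically. The following is a minimal sketch (assuming NumPy and SciPy are available; the two matrices are our own illustrative choice): the entrywise Euclidean average of two SPD matrices can have a much larger determinant than either input, whereas the Log-Euclidean average, computed via matrix logarithms and exponentials, does not.

```python
import numpy as np
from scipy.linalg import expm, logm

# Two anisotropic SPD matrices with unit determinant (illustrative choice).
A = np.diag([10.0, 0.1])
B = np.diag([0.1, 10.0])

# Entrywise (Euclidean) average vs. Log-Euclidean average.
euclid_mean = (A + B) / 2
log_euclid_mean = expm((logm(A) + logm(B)) / 2)

print(np.linalg.det(euclid_mean))      # 25.5025 -- determinant "swells"
print(np.linalg.det(log_euclid_mean))  # 1.0     -- swelling avoided
```

Under the Log-Euclidean metric the average is taken in the (linear) space of matrix logarithms, which is exactly why the determinant does not inflate.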

Analysis of Riemannian functional data is not only challenged by the infinite dimensionality and compactness of the covariance operator from functional data, but is also obstructed by the nonlinearity of the range of the functions, since manifolds are generally not vector spaces, which renders many techniques relying on linear structure ineffective or inapplicable. For instance, if the sample mean curve is computed for bird migration trajectories as if they were sampled from the ambient space ℝ³, this naïve sample mean in general does not fall on the sphere of the earth. For manifolds of tree-structured data studied in Wang and Marron (2007), which are naturally not Euclidean submanifolds (by which we refer in this paper to Riemannian submanifolds of a Euclidean space), the naïve sample mean cannot even be defined from an ambient space, and thus a proper treatment of the manifold structure is necessary. While the literature for Euclidean functional data is abundant, works involving nonlinear manifold structure are scarce. Chen and Müller (2012) and Lin and Yao (2017) respectively investigate representation and regression for functional data living in a low-dimensional nonlinear manifold that is embedded in an infinite-dimensional space, while Lila and Aston (2017) study principal component analysis for functional data whose domain is a two-dimensional manifold. None of these deal with functional data that take values on a nonlinear manifold; Dai and Müller (2017) is the only endeavor in this direction, and it is restricted to Euclidean submanifolds.

As functional principal component analysis (FPCA) is an essential tool for FDA, it is of importance and interest to develop this notion for Riemannian functional data. Since manifolds are in general not vector spaces, classic covariance functions/operators do not exist naturally for a Riemannian random process. A strategy that is often adopted, e.g., by Shi et al. (2009) and Cornea et al. (2017), to overcome the lack of vectorial structure is to map data on the manifold into tangent spaces via the Riemannian logarithm map defined in Section 2.2. As tangent spaces at different points are different vector spaces, in order to handle observations from different tangent spaces, some existing works assume a Euclidean ambient space for the manifold and identify tangent vectors as Euclidean vectors. This strategy is adopted by Dai and Müller (2017), who for the first time apply it to Riemannian functional data such as compositional data modelled on the unit sphere. Specifically, they assume that functional data are sampled from a time-varying geodesic submanifold, where at a given time point the functions take values on a geodesic submanifold of a common manifold. Such a common manifold is further assumed to be a Euclidean submanifold, which allows all tangent spaces to be identified as hyperplanes in a common Euclidean space (endowed with the usual Euclidean inner product). Then, with the aid of the Riemannian logarithm map, Dai and Müller (2017) are able to transform Riemannian functional data into Euclidean ones while accounting for the intrinsic curvature of the underlying manifold.

To avoid confusion, we distinguish two different perspectives on dealing with Riemannian manifolds. One is to work with the manifold under consideration without assuming an ambient space surrounding it or an isometric embedding into a Euclidean space. This perspective is regarded as completely intrinsic, or simply intrinsic. Although generally difficult to work with, it fully respects all geometric structure of the manifold. The other, referred to as ambient here, assumes that the manifold under consideration is isometrically embedded in a Euclidean ambient space, so that geometric objects such as tangent vectors can be processed within the ambient space. For example, from this point of view, the local polynomial regression for SPD matrices proposed by Yuan et al. (2012) is intrinsic, while the aforementioned work by Dai and Müller (2017) takes the ambient perspective.

Although it is possible to account for some of the geometric structure in the ambient perspective, e.g., the curved nature of the manifold via the Riemannian logarithm map, several issues arise from manipulating geometric objects such as tangent vectors in the ambient space. First, the essential dependence on an ambient space restricts potential applications: the approach is not immediately applicable to manifolds that are not Euclidean submanifolds or do not have a natural isometric embedding into a Euclidean space, e.g., the Riemannian manifold of SPD matrices endowed with the affine-invariant metric (Moakher, 2005), which is not compatible with the Euclidean metric of the ambient matrix space. Second, although an ambient space provides a common stage for tangent vectors at different points, operating on tangent vectors from the ambient perspective can violate the intrinsic geometry of the manifold. To illustrate this, consider the comparison of two tangent vectors at different points (this comparison is needed in the asymptotic analysis of Section 3.2; see also Section 2.4). From the ambient perspective, taking the difference of the tangent vectors requires moving one tangent vector in parallel within the ambient space to the base point of the other. However, the resulting vector is generally no longer tangent to the manifold at that base point; see the left panel of Figure 1 for a geometric illustration. In other words, the ambient difference of two tangent vectors at different points is not an intrinsic geometric object on the manifold, and the departure from intrinsic geometry can affect statistical efficacy and/or efficiency. Lastly, since manifolds may be embedded into more than one ambient space, the interpretation of statistical results crucially depends on the ambient space and could be misleading if the ambient space is not chosen appropriately.

In this paper, we develop a completely intrinsic framework that provides a foundational theory for general Riemannian functional data and paves the way for the development of intrinsic Riemannian functional principal component analysis and intrinsic Riemannian functional linear regression, among other potential applications. The key building block is a new concept of tensor Hilbert space along a curve on the manifold, which is described in Section 2. On one hand, our approach faces dramatically elevated technical challenges relative to ambient counterparts; for example, without an ambient space, it is nontrivial to perceive and handle tangent vectors. On the other hand, the advantages of the intrinsic perspective are at least three-fold. First, our results immediately apply to many important Riemannian manifolds that are not naturally Euclidean submanifolds but are commonly seen in statistical analysis and machine learning, such as the aforementioned SPD manifolds and Grassmannian manifolds. Second, our framework features a novel intrinsic proposal for coherent comparison of objects from different tensor Hilbert spaces on the manifold, and hence makes the asymptotic analysis sensible. Third, results produced by our approach are invariant to embeddings and ambient spaces and can be interpreted independently, which avoids potentially misleading interpretations in practice.

As important applications of the proposed framework, we develop intrinsic Riemannian functional principal component analysis (iRFPCA) and intrinsic Riemannian functional linear regression (iRFLR). Specifically, estimation procedures of intrinsic eigenstructure are provided and their asymptotic properties are investigated within the intrinsic geometry. For iRFLR, we study a Riemannian functional linear regression model, where a scalar response intrinsically and linearly depends on a Riemannian functional predictor through a Riemannian slope function, a concept that is formulated in Section 4, along with the concept of linearity in the context of Riemannian functional data. We present an FPCA-based estimator and a Tikhonov estimator for the Riemannian slope function and explore their asymptotic properties, where the proposed framework of tensor Hilbert space again plays an essential role.

The rest of the paper is structured as follows. The foundational framework for intrinsic Riemannian functional data analysis is laid out in Section 2. Intrinsic Riemannian functional principal component analysis is presented in Section 3, while intrinsic Riemannian functional regression is studied in Section 4. In Section 5, numerical performance is illustrated through simulations, and an application to the Human Connectome Project, analyzing functional connectivity and behavioral data, is provided.

## 2 Tensor Hilbert Space and Riemannian Random Process

In this section, we first define the concept of tensor Hilbert space and discuss its properties, including a mechanism for dealing with two different tensor Hilbert spaces at the same time. Then, random elements on tensor Hilbert spaces are investigated, with the proposed intrinsic Karhunen-Loève expansion for such random elements. Finally, practical computation with respect to an orthonormal frame is given. Throughout this section, we consider a d-dimensional, connected and geodesically complete Riemannian manifold M equipped with a Riemannian metric, which defines a scalar product for the tangent space at each point of M. The metric also induces a distance function d_M on M. A primer on Riemannian manifolds can be found in the appendix. For a comprehensive treatment of Riemannian manifolds, we recommend the introductory text by Lee (1997) and also Lang (1995).

### 2.1 Tensor Hilbert Spaces along Curves

Let be a measurable curve on a manifold and parameterized by a compact domain equipped with a finite measure . A vector field along is a map from to the tangent bundle such that for all . It is seen that the collection of vector fields along is a vector space, where the vector addition between two vector fields and is a vector field such that for all , and the scalar multiplication between a real number and a vector field is a vector field such that for all . Let be the collection of (equivalence classes of) measurable vector fields along such that with identification between and in (or equivalently, and are in the same equivalence class) when . Then is turned into an inner product space by the inner product , with the induced norm given by . Moreover, we have that

###### Theorem 1.

For a measurable curve on , is a separable Hilbert space.

We call the space the tensor Hilbert space along the curve, as tangent vectors are a special type of tensor and the above Hilbertian structure can be defined for tensor fields along the curve. The above theorem is of paramount importance, in the sense that it allows the tensor Hilbert space to serve as a cornerstone for Riemannian functional data analysis, for two reasons. First, as shown in Section 2.2, via Riemannian logarithm maps, a Riemannian random process may be transformed into a tangent-vector-valued random process (called the log-process in Section 2.2) that can be regarded as a random element in a tensor Hilbert space. Second, the rigorous theory of functional data analysis formulated in Hsing and Eubank (2015) for random elements in separable Hilbert spaces fully applies to the log-process.
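To make the Hilbertian structure concrete, here is a minimal numerical sketch on the unit sphere S² (the curve, the two vector fields and the trapezoidal quadrature are all our own illustrative choices, not objects from the paper): the inner product of two vector fields along a curve integrates their pointwise metric inner products over the domain.

```python
import numpy as np

T = np.linspace(0.0, 1.0, 1001)  # compact domain with Lebesgue measure

def curve(t):
    """A curve on the unit sphere S^2: a quarter of a great circle."""
    return np.array([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2), 0.0])

def V(t):
    """Unit vector field along the curve, pointing in the direction of travel."""
    return np.array([-np.sin(np.pi * t / 2), np.cos(np.pi * t / 2), 0.0])

def W(t):
    """Constant field normal to the plane of the circle (tangent to S^2)."""
    return np.array([0.0, 0.0, 1.0])

def inner(V, W):
    """<<V, W>> = int_T <V(t), W(t)>_{curve(t)} dt, by the trapezoid rule.
    On S^2 the metric at each point is the Euclidean dot product
    restricted to the tangent plane."""
    vals = np.array([np.dot(V(t), W(t)) for t in T])
    dt = T[1] - T[0]
    return float(np.sum((vals[1:] + vals[:-1]) / 2) * dt)

# Sanity check: V(t) and W(t) are tangent to S^2 at curve(t).
print(np.dot(V(0.3), curve(0.3)), np.dot(W(0.3), curve(0.3)))  # both 0.0

print(inner(V, V))  # squared norm of V: 1.0 (unit field over a unit interval)
print(inner(V, W))  # orthogonal fields: 0.0
```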

One distinct feature of tensor Hilbert spaces is that different curves, even when parameterized by the same domain, give rise to different tensor Hilbert spaces. In practice, one often needs to deal with two different tensor Hilbert spaces at the same time. For example, in the next subsection we will see that, under some conditions, a Riemannian random process can be conceived as a random element on the tensor Hilbert space along the intrinsic mean curve μ. However, the mean curve is often unknown and estimated from a random sample. Since the sample mean curve generally does not agree with the population one, quantities of interest such as the covariance operator and its sample version are defined on two different tensor Hilbert spaces, along the population and sample mean curves respectively. For statistical analysis, one needs to compare the sample quantities with their population counterparts, which involves objects such as covariance operators from two different tensor Hilbert spaces.

In order to intrinsically quantify the discrepancy between objects of the same kind from different tensor Hilbert spaces, we utilize the Levi-Civita connection (p. 18, Lee 1997) associated with the Riemannian manifold M. The Levi-Civita connection is uniquely determined by the Riemannian structure: it is the only torsion-free connection compatible with the Riemannian metric. Associated with this connection is a unique parallel transport operator that smoothly transports tangent vectors along a curve and preserves the inner product. We emphasize that parallel transportation is performed intrinsically; for instance, tangent vectors being transported always stay tangent to the manifold during transportation, as illustrated in the right panel of Figure 1. Although the transport operator depends on the curve connecting two points, there exists a canonical choice of curve, namely the minimizing geodesic (under some conditions, the minimizing geodesic between two points randomly sampled from the manifold is almost surely unique). The smoothness of parallel transport also implies that if the two base points are not far apart, then the initial tangent vector and the transported one stay close (in the tangent bundle endowed with the Sasaki metric (Sasaki, 1958)). This feature is desirable for our purpose: as the sample mean curve approaches the population one, one expects a tangent vector at the former to converge to its transported version at the latter. Owing to these nice properties, parallel transport is an ideal tool for constructing a mechanism to compare objects from different tensor Hilbert spaces, as follows.

Suppose and are two measurable curves on defined on . Let be a family of smooth curves that is parameterized by the interval (the way of parameterization here does not matter) and connects to , i.e., and , such that is measurable for all . Suppose and let be a smooth vector field along such that and , where denotes the Levi-Civita connection of the manifold . The theory of Riemannian manifolds shows that such a vector field uniquely exists. This gives rise to the parallel transporter along , defined by . In other words, parallelly transports to along the curve . As the parallel transporter determined by the Levi-Civita connection, preserves the inner product of tangent vectors along transportation, i.e. for . Then we can define the parallel transport of vector fields from to , denoted by , for all and . One immediately sees that is a linear operator on tensor Hilbert space. Its adjoint, denoted by , is a map from to and is given by for and . Let denote the set of all Hilbert-Schmidt operators on , which is a Hilbert space with the Hilbert-Schmidt norm . We observe that the operator also gives rise to a mapping from to , defined by for and . The operator is called the parallel transporter of Hilbert-Schmidt operators. Below are some important properties of and , where for .

###### Proposition 2.

Suppose , , and for two measurable curves and on . Let and be parallel transporters along a family of smooth curves defined above, such that is smooth and is measurable. Then the following statements regarding and hold.

1. The operator is a unitary transformation from to .

2. . Also, .

3. .

4. If is invertible, then .

5. , where are scalar constants, and .

6. .

We define for and , and for operators and . To quantify the discrepancy between an element in and another one in , we can use the quantity . Similarly, we adopt as the discrepancy measure for two covariance operators and . These quantities are intrinsic, as they are built on intrinsic geometric concepts. In light of Proposition 2, they are symmetric under parallel transport, i.e., transporting to yields the same discrepancy measure as transporting to . We also note that, when , the difference operators and reduce to the regular vector and operator differences, i.e., becomes , while becomes . Therefore, and can be viewed as generalizations of the regular vector and operator differences to the Riemannian setting. One should note that and depend on the choice of the family of curves , a canonical choice of which is discussed in Section 3.2.
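For intuition, parallel transport admits a closed form on the unit sphere along minimizing geodesics. The sketch below (our own illustrative implementation for S², not the paper's construction) transports a tangent vector and verifies the two properties used above: the result is again a tangent vector at the target point, and inner products (hence norms) are preserved.

```python
import numpy as np

def sphere_log(p, q):
    """Riemannian logarithm map on the unit sphere S^2 (round metric)."""
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta * (q - np.cos(theta) * p) / np.sin(theta)

def transport(p, q, v):
    """Parallel transport of tangent vector v at p to q along the
    minimizing geodesic (closed form for the sphere)."""
    w = sphere_log(p, q)
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return v.copy()
    e = w / theta                      # unit direction of the geodesic at p
    a = np.dot(v, e)                   # component of v along the geodesic
    return v + (np.cos(theta) - 1.0) * a * e - np.sin(theta) * a * p

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 0.6, 0.8])                 # tangent at p: <v, p> = 0
pv = transport(p, q, v)
print(np.dot(pv, q))                          # still tangent at q: 0.0
print(np.linalg.norm(pv), np.linalg.norm(v))  # norms agree: isometry
```

Only the component along the geodesic direction rotates; the orthogonal component is carried over unchanged.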

### 2.2 Random Elements on Tensor Hilbert Spaces

Let X be a Riemannian random process taking values on M. In order to introduce the concept of intrinsic mean function for X, we define a family of functions indexed by t ∈ T:

 F(p, t) = E d_M²(X(t), p),  p ∈ M, t ∈ T.  (1)

For a fixed t, if there exists a unique point p that minimizes F(p, t) over all p ∈ M, then this point is called the intrinsic mean (also called the Fréchet mean) at t and denoted by μ(t), i.e.,

 μ(t) = argmin_{p∈M} F(p, t).

As required for intrinsic analysis, we assume the following condition.

A.1 The intrinsic mean function μ exists.

We refer readers to Bhattacharya and Patrangenaru (2003) and Afsari (2011) for conditions under which the intrinsic mean of a random variable on a general manifold exists and is unique. For example, according to the Cartan-Hadamard theorem, if the manifold is simply connected and complete with non-positive sectional curvature, then the intrinsic mean function always exists and is unique as long as F(p, t) < ∞ for all t ∈ T and some p ∈ M.

Since M is geodesically complete, by the Hopf-Rinow theorem (p. 108, Lee 1997), its exponential map Exp_p at each p ∈ M is defined on the entire tangent space. As Exp_p might not be injective, in order to define its inverse, we restrict Exp_p to a subset of the tangent space at p: we exclude every tangent vector v for which the geodesic t ↦ Exp_p(tv) fails to be minimizing on [0, 1+ε] for every ε > 0. The image of the restricted map consists of the points q ∈ M with q = Exp_p(v) for some tangent vector v in this subset. On this image, the inverse of Exp_p exists and is called the Riemannian logarithm map, denoted by Log_p. We shall make the following assumption.

A.2 For each t ∈ T, X(t) almost surely lies in the image of the restricted exponential map at μ(t).

Then, Log_{μ(t)} X(t) is almost surely defined for all t ∈ T. This condition is superfluous if Exp_p is injective for all p ∈ M, as for the manifold of SPD matrices endowed with the affine-invariant metric.
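Condition A.1 can be explored constructively in simple cases. Below is a minimal sketch (assuming the unit sphere S² with its closed-form exponential and logarithm maps; the sample points and function names are our own) of the standard fixed-point iteration for the intrinsic (Fréchet) mean of finitely many points: repeatedly average the log-mapped points in the tangent space and map back.

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere S^2."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    """Riemannian logarithm map on S^2 (q not antipodal to p)."""
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta * (q - np.cos(theta) * p) / np.sin(theta)

def frechet_mean(points, iters=100):
    """Fixed-point iteration: mu <- Exp_mu(average of Log_mu(x_i))."""
    mu = points[0]
    for _ in range(iters):
        g = np.mean([sphere_log(mu, x) for x in points], axis=0)
        mu = sphere_exp(mu, g)
    return mu

# Four points placed symmetrically around the north pole.
n = np.array([0.0, 0.0, 1.0])
pts = [sphere_exp(n, 0.3 * np.array(e, dtype=float))
       for e in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]]
mu = frechet_mean(pts)
print(mu)  # by symmetry, the intrinsic mean is the north pole
```

Note that the naïve Euclidean average of these four points would have norm less than one and hence fall off the sphere, illustrating why the intrinsic definition is needed.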

In the sequel we shall assume X satisfies conditions A.1 and A.2. An important observation is that the log-process Log_μ X is a random vector field along μ. If we assume continuity for the sample paths of X, then the log-process is measurable with respect to the product σ-algebra of the probability space and the Borel σ-algebra on T. Furthermore, if E‖Log_μ X‖² < ∞, then according to Theorem 7.4.2 of Hsing and Eubank (2015), Log_μ X can be viewed as a tensor-Hilbert-space-valued random element. Observing that E(Log_μ X) = 0 according to Theorem 2.1 of Bhattacharya and Patrangenaru (2003), the intrinsic covariance operator C for X is well defined. This operator is nuclear and self-adjoint. It then admits the following eigendecomposition (Theorem 7.2.6, Hsing and Eubank, 2015)

 C = ∑_{k=1}^∞ λ_k ϕ_k ⊗ ϕ_k  (2)

with eigenvalues λ₁ ≥ λ₂ ≥ ⋯ ≥ 0 and orthonormal eigenelements ϕ_k that form a complete orthonormal system for the tensor Hilbert space along μ. Also, with probability one, the log-process of X has the following Karhunen-Loève expansion

 Log_μ X = ∑_{k=1}^∞ ξ_k ϕ_k  (3)

with the ξ_k being uncorrelated and centered random variables. Therefore, we obtain the intrinsic Riemannian Karhunen-Loève (iRKL) expansion for X, given by

 X(t) = Exp_{μ(t)} ∑_{k=1}^∞ ξ_k ϕ_k(t).  (4)

The elements ϕ_k are called intrinsic Riemannian functional principal components (iRFPC), while the variables ξ_k are called iRFPC scores. This result is summarized in the following theorem, whose proof is already contained in the above derivation and hence omitted. We note that the continuity assumption on sample paths can be weakened to piecewise continuity.

###### Theorem 3 (Intrinsic Karhunen-Loève Representation).

Assume that X satisfies assumptions A.1 and A.2. If the sample paths of X are continuous and E‖Log_μ X‖² < ∞, then the intrinsic covariance operator of X admits the decomposition (2), and the random process admits the representation (4).

In practice, the series in (4) is truncated at some positive integer K, resulting in a truncated intrinsic Riemannian Karhunen-Loève expansion X_K(t) = Exp_{μ(t)} ∑_{k=1}^K ξ_k ϕ_k(t). The quality of the approximation of X by X_K is quantified by the expected squared distance between the two processes, and it can be shown, by a method similar to Dai and Müller (2017), that if the manifold has non-negative sectional curvature everywhere, then this quantity is bounded by the tail sum ∑_{k>K} λ_k. For manifolds with negative sectional curvatures, such an inequality in general does not hold. However, for a Riemannian random process that almost surely lies in a compact subset of M, the residual can still be bounded by ∑_{k>K} λ_k up to a scaling constant.
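The eigenstructure behind the truncation can be mimicked in coordinates. The sketch below is a synthetic example with our own choice of eigenfunctions, written with respect to an unspecified orthonormal frame in dimension two: it simulates coordinate log-processes from a rank-two Karhunen-Loève model and recovers the eigenvalues from the sample covariance, so that truncating after the second component loses nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 101, 1000                    # time grid size, sample size
t = np.linspace(0.0, 1.0, m)

# Orthonormal coordinate eigenfunctions phi_k : T -> R^2.
phi1 = np.stack([np.sqrt(2.0) * np.sin(np.pi * t), np.zeros(m)], axis=1)
phi2 = np.stack([np.zeros(m), np.sqrt(2.0) * np.cos(np.pi * t)], axis=1)
lam = np.array([2.0, 0.5])          # eigenvalues lambda_1 > lambda_2

# Simulate coordinate log-processes Z_i = xi_1 phi1 + xi_2 phi2.
xi = rng.normal(size=(n, 2)) * np.sqrt(lam)
Z = xi[:, 0, None, None] * phi1 + xi[:, 1, None, None] * phi2  # (n, m, 2)

# Sample covariance as an (m*2) x (m*2) matrix; the 1/m quadrature weight
# makes its eigenvalues approximate those of the covariance operator.
Zf = Z.reshape(n, -1)
C = Zf.T @ Zf / n
evals = np.linalg.eigvalsh(C / m)[::-1]
print(evals[:3])  # close to (2.0, 0.5, 0.0): a rank-two process
```

Because every sample path lies in the span of the two eigenfunctions, the spectrum vanishes past the second eigenvalue up to numerical error.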

###### Proposition 4.

Assume that conditions A.1 and A.2 hold, and that the sectional curvature of M is bounded from below by . Let be a subset of . If , we let , and if , we assume that is compact. Then, for some constant , for all . Consequently, if almost surely, then .

### 2.3 Computation in Orthonormal Frames

In practical computation, one might want to work with specific orthonormal bases for tangent spaces. A choice of orthonormal basis for each tangent space constitutes an orthonormal frame on the manifold. In this section, we study the representation of the intrinsic Riemannian Karhunen-Loève expansion under an orthonormal frame and formulas for change of orthonormal frames.

Let E = (E_1, …, E_d) be a continuous orthonormal frame, i.e., each E_j is a vector field on M such that, at every point p, the vectors E_1(p), …, E_d(p) form an orthonormal basis for the tangent space at p. The coordinate of the log-process Log_μ X with respect to the frame is denoted by Z_E, with the subscript E indicating its dependence on the frame. The resulting process Z_E is called the E-coordinate process of Log_μ X. Note that Z_E is a regular ℝ^d-valued random process defined on T, and the classic theory in Hsing and Eubank (2015) applies to Z_E. For example, its norm is defined via the canonical norm on ℝ^d, and one can show that ‖Z_E‖ = ‖Log_μ X‖. Therefore, if E‖Z_E‖² < ∞, then the covariance function C_E(s,t) = E{Z_E(s) Z_E(t)ᵀ} exists and is matrix-valued (Kelly and Root, 1960; Balakrishnan, 1960), noting that E Z_E(t) = 0 for all t. Also, the vector-valued Mercer's theorem implies the eigendecomposition

 C_E(s,t) = ∑_{k=1}^∞ λ_k ϕ_{E,k}(s) ϕ_{E,k}(t)ᵀ,  (5)

with eigenvalues λ₁ ≥ λ₂ ≥ ⋯ ≥ 0 and corresponding eigenfunctions ϕ_{E,k}. Here, the subscript E in ϕ_{E,k} is to emphasize the dependence on the chosen frame. One can see that ϕ_{E,k} is the coordinate representation of the eigenelement ϕ_k with respect to the frame E.

The coordinate process Z_E admits the vector-valued Karhunen-Loève expansion

 Z_E(t) = ∑_{k=1}^∞ ξ_k ϕ_{E,k}(t)  (6)

under the assumption of mean square continuity of Z_E, according to Theorem 7.3.5 of Hsing and Eubank (2015). While the covariance function and eigenfunctions of Z_E depend on the frame, the eigenvalues λ_k and scores ξ_k in (4) and (6) do not, which justifies the absence of E in their subscripts and the use of the same notation for eigenvalues and iRFPC scores in (2), (4), (5) and (6). This follows from the formulas for change of frames that we develop below.

Suppose A is another orthonormal frame. The change from E to A can be characterized by a unitary matrix O_p at each point p, so that Z_A(t) = O_{μ(t)} Z_E(t) for all t. Then the covariance function of Z_A is given by

 C_A(s,t) = E{Z_A(s) Z_A(t)ᵀ} = E{O_{μ(s)} Z_E(s) Z_E(t)ᵀ O_{μ(t)}ᵀ} = O_{μ(s)} C_E(s,t) O_{μ(t)}ᵀ,  (7)

and consequently,

 C_A(s,t) = ∑_{k=1}^∞ λ_k {O_{μ(s)} ϕ_{E,k}(s)} {O_{μ(t)} ϕ_{E,k}(t)}ᵀ.

From the above calculation, we immediately see that the λ_k are also eigenvalues of C_A. Moreover, the eigenfunction of C_A associated with λ_k is given by

 ϕ_{A,k}(t) = O_{μ(t)} ϕ_{E,k}(t).  (8)

Also, the variable ξ_k in (4) and (6) is the functional principal component score of Z_A associated with ϕ_{A,k}, since the unitary matrices O_{μ(t)} preserve inner products. The following proposition summarizes the above results.

###### Proposition 5 (Invariance Principle).

Let X be an M-valued random process satisfying conditions A.1 and A.2. Suppose E and A are measurable orthonormal frames that are continuous on a neighborhood of the image of μ, and Z_E denotes the E-coordinate log-process of X. Assume O_p is the unitary matrix, continuously varying with p, that characterizes the change from the frame E to the frame A at p.

1. The norm of the coordinate process is independent of the choice of frame; in particular, ‖Z_E‖ = ‖Log_μ X‖ for all orthonormal frames E.

2. If E‖Log_μ X‖² < ∞, then the covariance function C_E of Z_E exists for every frame E and admits the decomposition (5). Also, (2) and (5) are related through the frame E, and the eigenvalues coincide. Furthermore, the eigenvalues of C and the principal component scores of the Karhunen-Loève expansion of Z_E do not depend on E.

3. The covariance functions C_E and C_A of Z_E and Z_A, respectively, if they exist, are related by (7). Furthermore, their eigendecompositions are related by (8), with shared eigenvalues λ_k for all k.

4. If E‖Log_μ X‖² < ∞ and the sample paths of X are continuous, then the scores in (6) coincide with the iRFPC scores in (4).

We conclude this subsection by emphasizing that the covariance function of the log-process depends on the frame E, while the covariance operator, eigenvalues, eigenelements and iRFPC scores do not. In particular, the scores ξ_k, which are often the input for further statistical analysis such as regression and classification, are invariant to the choice of coordinate frame. An important consequence of the invariance principle is that these scores can be safely computed in any convenient coordinate frame without altering the subsequent analysis.
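The invariance principle is easy to verify numerically: applying a pointwise rotation O(t) to a coordinate log-process changes the covariance function but not the spectrum of the covariance. A minimal sketch with synthetic data (our own choice of process and frame change):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 80, 300                      # grid size, sample size
t = np.linspace(0.0, 1.0, m)

# Coordinate log-processes w.r.t. a frame E (d = 2), as an (n, m, 2) array.
Z_E = rng.normal(size=(n, m, 2)) * np.sin(np.pi * t)[None, :, None]

# A pointwise change of frame: O(t) is a rotation by angle pi * t.
c, s = np.cos(np.pi * t), np.sin(np.pi * t)
O = np.stack([np.stack([c, -s], -1), np.stack([s, c], -1)], -2)  # (m, 2, 2)
Z_A = np.einsum('mij,nmj->nmi', O, Z_E)       # Z_A(t) = O(t) Z_E(t)

def spectrum(Z):
    """Eigenvalues of the discretized sample covariance operator."""
    Zf = Z.reshape(Z.shape[0], -1)
    C = Zf.T @ Zf / Z.shape[0]
    return np.linalg.eigvalsh(C / Z.shape[1])

# The frames differ, but the covariance operator's spectrum does not.
print(np.max(np.abs(spectrum(Z_E) - spectrum(Z_A))))  # numerically zero
```

Flattened, the frame change is an orthogonal block-diagonal conjugation of the covariance matrix, which is exactly why the eigenvalues are untouched.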

### 2.4 Connection to the Special Case of Euclidean Submanifolds

Our framework applies to general manifolds that include Euclidean submanifolds as special examples to which the methodology of Dai and Müller (2017) also applies. When the underlying manifold is a -dimensional submanifold of the Euclidean space with , we recall that the tangent space at each point is identified as a -dimensional linear subspace of . For such Euclidean manifolds, Dai and Müller (2017) treat the log-process of as a -valued random process, and derive the representation for the log-process (Eq. (5) in their paper) within the ambient Euclidean space. This is distinctly different from our intrinsic representation (3) based on the theory of tensor Hilbert spaces, despite their similar appearance. For instance, Eq. (5) in their work can only be defined for Euclidean submanifolds, while ours is applicable to general Riemannian manifolds. Similarly, the covariance function defined in Dai and Müller (2017), denoted by , is associated with the ambient log-process , i.e., . Such an ambient covariance function can only be defined for Euclidean submanifolds but not general manifolds.

Nevertheless, there are connections between the ambient method of Dai and Müller (2017) and our framework when is a Euclidean submanifold. For instance, the mean curve is intrinsically defined in the same way in both works. For the covariance structure, although our covariance function is a matrix-valued function while is a matrix-valued function, they both represent the intrinsic covariance operator when is a Euclidean submanifold. To see this, first, we observe that the ambient log-process as defined in Dai and Müller (2017) at the time , although ambiently -dimensional, lives in a -dimensional linear subspace of . Second, the orthonormal basis for the tangent space at can be realized by a full-rank matrix by concatenating vectors . Then is the -coordinate process of . This implies that . On the other hand, since , one has . Thus, and determine each other and represent the same object. In light of this observation and the invariance principle stated in Proposition 5, when is a Euclidean submanifold, can be viewed as the ambient representation of the intrinsic covariance operator , while is the coordinate representation of with respect to the frame . Similarly, the eigenfunctions of are the ambient representation of the eigenelements of . The above reasoning also applies to sample mean functions and the sample covariance structure. Specifically, when is a Euclidean submanifold, our estimator for the mean function discussed in Section 3 is identical to the one in Dai and Müller (2017), while the estimators for the covariance function and eigenfunctions proposed in Dai and Müller (2017) are the ambient representations of our estimators stated in Section 3.

However, when quantifying the discrepancy between the population covariance structure and its estimator, Dai and Müller (2017) adopt the Euclidean difference as the measure. For instance, they use φ̂_k − φ_k to represent the discrepancy between the sample eigenfunctions and the population eigenfunctions, where φ̂_k is the sample version of φ_k. When μ̂(t), the sample version of μ(t), is not equal to μ(t), the vectors φ̂_k(t) and φ_k(t) belong to different tangent spaces. In such a case, the Euclidean difference φ̂_k(t) − φ_k(t) is a Euclidean vector that belongs to the tangent space at neither μ(t) nor μ̂(t), as illustrated in the left panel of Figure 1. In other words, the Euclidean difference of ambient eigenfunctions does not obey the geometry of the manifold and hence might not properly measure the intrinsic discrepancy. In particular, the measure might result entirely from the departure of the ambient Euclidean geometry from the manifold, rather than from an intrinsic discrepancy between the sample and population eigenfunctions, as demonstrated in the left panel of Figure 1. Similar reasoning applies to the covariance functions. In contrast, building on Proposition 2, we propose an intrinsic measure of the discrepancy between a population quantity and its estimator in Section 3.2.
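To make the issue tangible, here is a minimal sketch on the unit sphere S² (our own illustration; the helper names are hypothetical, not from the paper) comparing the ambient Euclidean difference of two eigenvector-like tangent vectors with an intrinsic discrepancy obtained by parallel transport along the connecting geodesic:

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: Log_p(q), a tangent vector at p."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - c * p
    return theta * w / np.linalg.norm(w)

def sphere_transport(p, q, v):
    """Parallel transport of v in T_p S^2 to T_q S^2 along the minimizing geodesic."""
    u = sphere_log(p, q)
    theta = np.linalg.norm(u)
    if theta < 1e-12:
        return v.copy()
    e = u / theta
    a = np.dot(e, v)
    return v + a * ((np.cos(theta) - 1.0) * e - np.sin(theta) * p)

p = np.array([1.0, 0.0, 0.0])    # plays the role of mu(t)
q = np.array([0.0, 1.0, 0.0])    # plays the role of the estimate mu_hat(t)
v = np.array([0.0, 1.0, 0.0])    # a tangent vector at p, e.g. phi_k(t)
w = np.array([-1.0, 0.0, 0.0])   # a tangent vector at q, e.g. phi_hat_k(t)

pv = sphere_transport(p, q, v)   # move v into the tangent space at q

print(abs(np.dot(pv, q)) < 1e-12)         # True: transported vector is tangent at q
print(np.linalg.norm(pv - w) < 1e-12)     # True: intrinsic discrepancy is zero
print(np.linalg.norm(v - w))              # ambient Euclidean difference: sqrt(2)
```

Here v and w coincide up to parallel transport, so their intrinsic discrepancy vanishes, yet the ambient Euclidean difference is √2: the entire Euclidean measure comes from the curvature of the embedding, echoing the left panel of Figure 1.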

## 3 Intrinsic Riemannian Functional Principal Component Analysis

### 3.1 Model and Estimation

Suppose X admits the intrinsic Riemannian Karhunen-Loève expansion (4), and X_1, …, X_n are a random sample of X. In the sequel, we assume that the trajectories X_i are fully observed.

In the case that data are densely observed, each trajectory can be individually interpolated using regression techniques for manifold-valued data, such as Steinke, Hein and Schölkopf (2010), Cornea et al. (2017) and Petersen and Müller (2017). In this way, densely observed data can be represented by their interpolated surrogates and treated as if they were fully observed curves. When data are sparse, delicate pooling of information across subjects is required. The development of such methods is substantial and beyond the scope of this paper.

In order to estimate the mean function μ(t), we define the finite-sample version of the Fréchet function in (1) by

 F_n(p,t) = \frac{1}{n}\sum_{i=1}^{n} d^2_{\mathcal{M}}(X_i(t),\, p).

Then, an estimator for μ(t) is given by

 \hat{\mu}(t) = \operatorname*{arg\,min}_{p\in\mathcal{M}} F_n(p,t).

The computation of μ̂(t) depends on the Riemannian structure of the manifold. Readers are referred to Cheng et al. (2016) and Salehian et al. (2015) for practical algorithms. For a subset A of M and δ > 0, A_δ denotes the δ-neighborhood of A, i.e., the union of the balls B(p, δ) over p ∈ A, where B(p, δ) is the ball with center p and radius δ in M. In order for μ̂(t) to be well defined, at least with dominant probability for a large sample, we shall assume a slightly stronger condition than 2.2:

[labelwidth=1.2cm,leftmargin=1.4cm,align=left]

A.2

There is some constant such that .

Then, combining this with a fact that we will show later, we conclude that, for a large sample, μ̂(t) is well defined almost surely for all t.
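For intuition, the pointwise minimizer μ̂(t) can be computed by intrinsic gradient descent on F_n, since the negative Riemannian gradient of F_n(·, t) at p is (2/n)∑_i Log_p X_i(t). A minimal sketch at a fixed t on the unit sphere (our own illustration with hypothetical helper names, not one of the cited algorithms):

```python
import numpy as np

def sphere_exp(p, v):
    """Riemannian exponential map on the unit sphere."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    """Riemannian log map: Log_p(q) in the tangent space at p."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - c * p
    return theta * w / np.linalg.norm(w)

def frechet_mean(points, n_iter=200, tol=1e-12):
    """Minimize F_n(p) = (1/n) sum_i d_M^2(x_i, p): step along the mean of the
    log-mapped data, which is proportional to the negative gradient."""
    p = points[0]                               # initialize at a data point
    for _ in range(n_iter):
        g = np.mean([sphere_log(p, x) for x in points], axis=0)
        if np.linalg.norm(g) < tol:
            break
        p = sphere_exp(p, g)
    return p

# Four points placed symmetrically around the north pole, whose Frechet mean
# is the pole itself by symmetry.
pole = np.array([0.0, 0.0, 1.0])
alpha = 0.4
pts = [sphere_exp(pole, alpha * np.array(v, dtype=float))
       for v in ([1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0])]
mu = frechet_mean(pts)
print(np.allclose(mu, pole, atol=1e-6))   # True
```

The same fixed-point iteration underlies many practical Karcher-mean solvers; only the exp/log maps change with the manifold.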

The intrinsic Riemannian covariance operator C is estimated by its finite-sample version

 \hat{\mathcal{C}} = \frac{1}{n}\sum_{i=1}^{n} (\mathrm{Log}_{\hat{\mu}} X_i) \otimes (\mathrm{Log}_{\hat{\mu}} X_i).

This sample intrinsic Riemannian covariance operator also admits an intrinsic eigendecomposition Ĉ = ∑_k λ̂_k φ̂_k ⊗ φ̂_k. Therefore, the estimates for the eigenvalues λ_k are given by λ̂_k, while the estimates for the eigenfunctions φ_k are given by φ̂_k. These estimates can also be conveniently obtained under a frame, due to the invariance principle stated in Proposition 5. Let E be a chosen orthonormal frame and Ĉ_E the sample covariance function based on the coordinate processes Ẑ_1, …, Ẑ_n, where Ẑ_i is the coordinate process of Log_{μ̂} X_i under the frame E with respect to μ̂. We can then obtain the eigendecomposition of Ĉ_E, which yields the coordinate representations of the φ̂_k. Finally, the truncated process for X_i is estimated by

 \hat{X}^{(K)}_i = \mathrm{Exp}_{\hat{\mu}} \sum_{k=1}^{K} \hat{\xi}_{ik}\, \hat{\phi}_k, \qquad (9)

where ξ̂_ik = ⟨⟨Log_{μ̂} X_i, φ̂_k⟩⟩ are the estimated iRFPC scores. The above truncated iRKL expansion can be regarded as a generalization of representation (10) in Dai and Müller (2017) from Euclidean submanifolds to general Riemannian manifolds.
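To illustrate the frame-based computation and the truncated expansion (9), here is a minimal numerical sketch at a single fixed time point on the unit sphere; the helper functions, the frame, and the synthetic data are our own illustrative choices rather than part of the proposed methodology:

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere S^2."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    """Log map on the unit sphere: Log_p(q), a tangent vector at p."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - c * p
    return theta * w / np.linalg.norm(w)

rng = np.random.default_rng(1)
n = 500
mu_hat = np.array([0.0, 0.0, 1.0])            # estimated mean at a fixed time t
frame = np.array([[1.0, 0.0, 0.0],            # orthonormal frame e_1(t), e_2(t)
                  [0.0, 1.0, 0.0]])           # spanning the tangent space at mu_hat

# Synthetic observations X_i(t): tangent coordinates with anisotropic variances
Z = rng.normal(size=(n, 2)) * np.array([0.5, 0.1])
X = np.array([sphere_exp(mu_hat, z @ frame) for z in Z])

# Coordinate processes of Log_mu X_i under the frame, and their sample covariance
U = np.array([sphere_log(mu_hat, x) for x in X])
Uc = U @ frame.T
C_hat = Uc.T @ Uc / n
lam, vecs = np.linalg.eigh(C_hat)
lam, vecs = lam[::-1], vecs[:, ::-1]          # sort eigenvalues in decreasing order
phi = vecs.T @ frame                          # eigenelements as tangent vectors

# Estimated scores and the truncated reconstruction of Eq. (9) with K = 1
xi = U @ phi.T                                # xi[i, k] = <Log_mu X_i, phi_k>
X_K = np.array([sphere_exp(mu_hat, xi[i, :1] @ phi[:1]) for i in range(n)])
X_full = np.array([sphere_exp(mu_hat, xi[i] @ phi) for i in range(n)])

print(np.allclose(np.linalg.norm(X_K, axis=1), 1.0))   # reconstructions stay on S^2
print(np.allclose(X_full, X, atol=1e-6))               # K = d recovers X_i(t)
```

In the genuinely functional case, the coordinate processes over a time grid would be concatenated before the eigendecomposition; the reconstruction step via the exponential map is unchanged.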

### 3.2 Asymptotic Properties

To quantify the difference between μ̂ and μ, it is natural to use the squared geodesic distance d²_M(μ̂(t), μ(t)) as a measure of discrepancy. For the asymptotic properties of μ̂, we need the following regularity conditions.

[labelwidth=1.2cm,leftmargin=1.4cm,align=left]

B.1

The manifold M is connected and complete. In addition, the exponential map Exp_p is surjective at every point p ∈ M.

B.2

The sample paths of X are continuous.

B.3

is finite. Also, for all compact subsets , .

B.4

The image of the mean function is bounded, i.e., the diameter of {μ(t) : t ∈ T} is finite.

B.5

For all , .

To state the next condition, let V_t denote the vector field p ↦ Log_p X(t). The calculus of manifolds suggests that the gradient of ½ d²_M(·, X(t)) at p equals −V_t(p), where the gradient operator is taken at p. For each t, let H_t denote the Hessian of the real function ½ d²_M(·, X(t)), i.e., for vector fields U and W on M,

 \langle \mathcal{H}_t U, W\rangle(p) = \langle -\nabla_U V_t, W\rangle(p) = \mathrm{Hess}_p\left(\tfrac{1}{2}\, d^2_{\mathcal{M}}(p, X(t))\right)(U, W).
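As a concrete illustration (ours, not part of the original text), this Hessian admits a closed form on the unit sphere: if p and q = X(t) are joined by a unique minimizing geodesic of length θ = d_M(p, q) < π, then

```latex
\mathrm{Hess}_p\Bigl(\tfrac{1}{2}\,d_{\mathcal{M}}^{2}(p,q)\Bigr)(U,U)=
\begin{cases}
\lVert U\rVert^{2}, & U \text{ parallel to } \mathrm{Log}_{p}q,\\[2pt]
\theta\cot\theta\,\lVert U\rVert^{2}, & U \text{ orthogonal to } \mathrm{Log}_{p}q,
\end{cases}
```

so the smallest eigenvalue is min(1, θ cot θ) = θ cot θ, which stays positive only while θ < π/2. Positivity-type conditions on the Hessian thus implicitly restrict how widely X(t) may spread around its mean on positively curved manifolds.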

[labelwidth=1.2cm,leftmargin=1.4cm,align=left]

B.6

, where denotes the smallest eigenvalue of an operator or matrix.

B.7

and , where for a real function on .

The assumption B.1 regarding the properties of manifolds is met in general, e.g., by the d-dimensional unit sphere, SPD manifolds, etc. By the Hopf-Rinow theorem, the condition also implies that M is geodesically complete. Conditions similar to several of the above are made in Dai and Müller (2017). The condition B.4 is a weak requirement on the mean function and is automatically satisfied if the manifold is compact, while B.5 is analogous to standard moment conditions in the literature of Euclidean functional data analysis and becomes superfluous when M is compact. If M is a Euclidean space, the latter is equivalent to a moment condition commonly made in the functional data analysis literature, e.g., Hall and Hosseini-Nasab (2006). The following theorem summarizes the asymptotic properties of μ̂ for general Riemannian manifolds. Its proof can be obtained by mimicking the techniques in Dai and Müller (2017), with additional care for the case that M is noncompact. For completeness, we provide the proof in the Appendix. As noted by Dai and Müller (2017), the root-n rate cannot be improved in general.

###### Theorem 6.

Under conditions 2.2, 3.1 and B.1-B.5, the following holds.

1. μ is uniformly continuous, and μ̂ is uniformly continuous with probability one.

2. μ̂ is a uniformly strongly consistent estimator for μ, i.e., sup_t d_M(μ̂(t), μ(t)) → 0 almost surely.

3. If conditions B.6 and B.7 are additionally assumed, then √n Log_μ μ̂ converges in distribution to a Gaussian measure on the tensor Hilbert space.

4. If conditions B.6 and B.7 are additionally assumed, then and .

For the asymptotic analysis of the estimated eigenstructure, as discussed in Sections 2.1 and 2.4, the differences Ĉ − C and φ̂_k − φ_k are not defined in intrinsic Riemannian functional data analysis, since they are objects originating from different tensor Hilbert spaces, induced by the different curves μ and μ̂. Therefore, we shall adopt the intrinsic measures of discrepancy developed in Section 2.1, where the dependence of these measures on μ and μ̂ is suppressed for simplicity. As mentioned at the end of Section 2.1, both measures depend on the choice of the family of curves connecting μ and μ̂. Here, a canonical choice for each t is the minimizing geodesic between μ(t) and μ̂(t). The existence and uniqueness of such geodesics can be deduced from assumptions 2.2 and 3.1. Also, the continuity of μ and μ̂ implies the continuity of these geodesics and hence the measurability of the associated parallel transport for each t. By Proposition 2, one sees that the transported difference is well defined, recalling that it is a vector field along μ. It can also be seen that the transported sample eigenelements are eigenpairs of