Geometric Learning and Filtering in Finance

10/16/2017 · by Anastasis Kratsios, et al. · Concordia University

We develop a method for incorporating relevant non-Euclidean geometric information into a broad range of classical filtering and statistical or machine learning algorithms. We apply these techniques to approximate the solution of the non-Euclidean filtering problem to arbitrary precision. We then extend the particle filtering algorithm to compute our asymptotic solution to arbitrary precision. Moreover, we find explicit error bounds measuring the discrepancy between our locally triangulated filter and the true theoretical non-Euclidean filter. Our methods are motivated by certain fundamental problems in mathematical finance. In particular we apply these filtering techniques to incorporate the non-Euclidean geometry present in stochastic volatility models and optimal Markowitz portfolios. We also extend Euclidean statistical or machine learning algorithms to non-Euclidean problems by using the local triangulation technique, which we show improves the accuracy of the original algorithm. We apply the local triangulation method to obtain improvements of the (sparse) principal component analysis and the principal geodesic analysis algorithms and show how these improved algorithms can be used to parsimoniously estimate the evolution of the shape of forward-rate curves. While focused on financial applications, the non-Euclidean geometric techniques presented in this paper can be employed to provide improvements to a range of other statistical or machine learning algorithms and may be useful in other areas of application.


1 Introduction

Non-Euclidean geometry occurs naturally in problems in finance. In [8], short-rate models consistent with finite-dimensional Heath-Jarrow-Morton (HJM) models are characterized using Lie group methods. In [29, 30], highly accurate stochastic volatility model estimation methods are derived using Riemannian heat-kernel expansions. In [20], the equivalent local martingale measures (ELMMs) of finite-dimensional term-structure models for zero-coupon bonds are characterized using the smooth manifold structure associated with factor models for the forward-rate curve. In [11], information-geometric techniques for yield-curve modeling which consider finite-dimensional manifolds of probability densities are developed. In [26, 25], Riemannian geometric approaches to stochastic volatility models and covariance matrix prediction are employed to successfully predict stock prices. In [37], it is shown that considering a relevant geometric structure on a mathematical finance problem leads to more accurate out-of-sample forecasts. The superior forecasting power of non-Euclidean methods is interpreted as encoding information present in mathematical finance problems which is otherwise overlooked by classical Euclidean methods. Each of these methodologies approaches a distinct problem in mathematical finance using differential geometry.

Conditional expectation and stochastic filtering are among the most fundamental tools used in applied probability and finance. Geometric formulations of conditional expectation, such as those used in [47, 44], are solutions to non-convex optimization problems. This non-convexity makes computing these formulations of non-Euclidean conditional expectation difficult or intractable.

Non-Euclidean filtering formulations such as those of [16], [41], or [18] assume that the signal and/or noise processes are non-Euclidean and estimate functionals of the noisy signal using the classical Euclidean conditional expectation. In [44], dynamics for the intrinsic conditional expectation of a manifold-valued signal were derived using the Le Jan-Watanabe connection, which reduces the intrinsic non-Euclidean filtering problem to a Euclidean filtering problem. However, the authors of [44] remark that implementing their results may be intractable due to the added complexity introduced by the Le Jan-Watanabe connection.

This paper presents an alternative, computationally tractable characterization of intrinsic conditional expectation, called the geodesic conditional expectation, and uses it to produce a computable solution to a non-Euclidean filtering problem similar to that of [44]. The implementation is similar to the non-Euclidean particle filter of [47]; however, in [47] the convergence of the algorithm to the non-Euclidean conditional expectation is left unjustified. The geodesic conditional expectation expresses the intrinsic conditional expectation as a limit of certain transformations of Euclidean conditional expectations associated with the non-Euclidean signal process. Analogously to [44], these transformations reduce the computation of the non-Euclidean problem to the computation of a Euclidean problem, the central difference being that the required transformations are available in closed form. The infinitesimal linearization transformations considered here are similar to those empirically postulated in the engineering, computer-vision, and control literature in [23, 22, 26, 28, 1, 47].

The paper is organized as follows. Section 2 introduces the necessary notation and the general terminology used throughout the paper. In Section 3, the relationship between portfolio selection and non-Euclidean geometry is introduced, and elements of Riemannian geometry are reviewed through the lens of the space of efficient portfolios. Section 4 introduces two natural generalizations of conditional expectation to the non-Euclidean setting. Both formulations are shown to be equivalent in Theorem 4.6. Corollary 4.7 provides non-Euclidean filtering equations which describe the dynamics of the non-Euclidean expectations. Using Corollary 4.7, Section 5 returns to the space of efficient portfolios and numerically illustrates how efficient portfolios, on historical stock data, can be more precisely forecasted by incorporating geometric features into the estimation procedure. Our filtering procedure is benchmarked against other intrinsic filtering algorithms from the engineering and computer vision literature. Section 6 reviews the contributions of this paper.

2 Preliminaries and Notation

In this paper, $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$ denotes a complete stochastic base on which independent Brownian motions, denoted by $W_t$ and $B_t$, are defined. Furthermore, $\mathcal{G}$ will denote a sub-filtration of $(\mathcal{F}_t)_{t\geq 0}$. The vector-valued conditional expectation will be denoted by $\mathbb{E}[\cdot\mid\mathcal{G}]$.

The Lebesgue measure will be denoted by $\lambda$, and $L^{p}(\mathbb{R}^{n};\mathbb{R}^{d})$ will denote the Bochner-Lebesgue space of measurable $\mathbb{R}^{d}$-valued functions which are $p$-integrable with respect to the $n$-fold product of the Lebesgue measure. If $d=1$, the Bochner-Lebesgue spaces will be abbreviated by $L^{p}(\mathbb{R}^{n})$. For a Riemannian manifold $(M,g)$, the intrinsic measure is denoted by $\mu_g$ and the induced distance function is denoted by $d_g$. The disjoint union, or coproduct, of topological spaces will be denoted by $\coprod$. The set of càdlàg paths from $[0,\infty)$ into the metric space induced by $d_g$ is denoted by $D([0,\infty);M)$.

The next section motivates the geometries studied in this paper by introducing and discussing the geometry of efficient portfolios.

3 The Geometry of Efficient Portfolios

A fundamental problem in mathematical finance is choosing an optimal portfolio. Typically, in modern portfolio theory, a portfolio is composed of $d$ predetermined risky assets and a riskless asset. Efficient portfolios are portfolios having the greatest return among those not exceeding a fixed level of risk. Classically, the return level is measured by the portfolio's expected (log-)returns, and the portfolio's risk is quantified by the portfolio's variance. The optimization problem defining efficient portfolios may be formulated in a number of ways; the one considered in this paper is the following Sharpe-type ratio

(3.1)

Here $w$ is the vector of portfolio weights expressed as the proportion of wealth invested in each risky asset, $\mu$ is the vector of the expected log-returns of the risky assets, $\Sigma$ is the covariance matrix of those log-returns, $q$ is a parameter balancing the objectives of maximizing the portfolio return versus minimizing the portfolio variance, $\mathbf{1}$ is the vector with all its components equal to $1$, and $^{\top}$ indicates the matrix transpose operation. If $\Sigma$ is not degenerate, the unique optimal solution to equation (3.1) is

(3.2)

A particular value of $q$ recovers the minimum variance portfolio of [38]. The minimum-variance portfolio may also be derived by minimizing the portfolio variance subject to the budget constraint $w^{\top}\mathbf{1}=1$. By adding a risk-free asset to the portfolio, one can derive similar expressions for the market portfolio and the capital market line (for more details on this approach to portfolio theory, see [5]).
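To make the closed form concrete, the following minimal Python sketch (illustrative code, not from the paper) computes the budget-constrained minimum-variance weights $w=\Sigma^{-1}\mathbf{1}/(\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1})$; the function name and the two-asset covariance matrix are hypothetical.

```python
import numpy as np

def min_variance_weights(sigma):
    """Closed-form minimum-variance weights w = (Sigma^{-1} 1) / (1' Sigma^{-1} 1),
    valid whenever the covariance matrix sigma is non-degenerate."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)  # Sigma^{-1} 1 without forming the inverse
    return w / (ones @ w)             # impose the budget constraint 1' w = 1

# Hypothetical two-asset covariance matrix (illustrative numbers only).
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w = min_variance_weights(sigma)
```

By construction the weights sum to one, and no other fully-invested portfolio attains a lower variance under this $\Sigma$.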

Unlike the returns vector $\mu$, a portfolio's covariance matrix is not meaningfully represented in Euclidean space. That is, a covariance matrix does not scale linearly, and the difference of two covariance matrices need not be a covariance matrix. Therefore, forecasting a future covariance matrix, even through a technique as simple as linear regression applied directly to the components of $\Sigma$, can lead to meaningless forecasts. Using the intrinsic geometry of the set of positive-definite matrices, denoted by $P_d^+$, avoids these issues.

The space $P_d^+$ has a well-studied and rich geometry lying at the junction of Cartan-Hadamard geometry and Lie theory. Empirical exploitation of this geometry has found many applications in mathematical imaging (see [39]), computer vision (see [42]), and signal processing (see [3]). Moreover, connections between this geometry and information theory have been explored in [46], linking it to the Cramér-Rao lower bound.

The set $P_d^+$ is smooth and comes equipped with a natural infinitesimal notion of distance called a Riemannian metric. Denoted by $g$, the Riemannian metric on $P_d^+$ quantifies the difference between making infinitesimal movements in Euclidean space along $P_d^+$ and making infinitesimal movements with respect to the geometry of $P_d^+$. The description of Riemannian manifolds as subsets of Euclidean space is made rigorous by Nash's embedding theorem [40]. Distance between two points of $P_d^+$ is quantified by the length of the shortest path connecting the two points, called a geodesic. On $P_d^+$, any two points can always be joined by a geodesic. The distance function taking two points to the length of the unique most efficient curve joining them can be expressed as

$$d(P,Q)=\left\|\log\!\left(P^{-1/2}QP^{-1/2}\right)\right\|_{F}=\left(\sum_{i=1}^{d}\log^{2}\lambda_{i}\!\left(P^{-1}Q\right)\right)^{1/2}. \qquad (3.3)$$

The function $d$ makes $P_d^+$ into a complete metric space, where the distance between two points corresponds exactly to the length of the unique distance-minimizing geodesic connecting them. Here, $\|\cdot\|_{F}$ is the Frobenius norm, which first treats a matrix as a vector and subsequently computes its Euclidean norm, $P^{1/2}$ is the matrix square-root operator, $\log$ is the matrix logarithm, and $\lambda_i(A)$ denotes the $i^{\text{th}}$ eigenvalue of $A$. Both the $\log$ and square-root operators are well-defined on $P_d^+$.
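The distance in equation (3.3) can be evaluated directly. The Python sketch below (an illustration, not the paper's implementation; helper names and test matrices are hypothetical) computes it both through the matrix logarithm and through the equivalent eigenvalue formula.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_distance(P, Q):
    """Affine-invariant distance d(P, Q) = ||log(P^{-1/2} Q P^{-1/2})||_F."""
    P_inv_sqrt = np.linalg.inv(np.real(sqrtm(P)))
    M = P_inv_sqrt @ Q @ P_inv_sqrt          # SPD whenever P, Q are SPD
    return np.linalg.norm(np.real(logm(M)), 'fro')

def spd_distance_eig(P, Q):
    """Equivalent eigenvalue form: sqrt(sum_i log^2 lambda_i(P^{-1} Q))."""
    lam = np.real(np.linalg.eigvals(np.linalg.solve(P, Q)))  # positive reals
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

Both forms agree because $P^{-1}Q$ is similar to the symmetric positive-definite matrix $P^{-1/2}QP^{-1/2}$, so they share the same (positive, real) eigenvalues.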

The disparity between the two distance measurements is explained by the intrinsic curvature of $P_d^+$. Sectional curvature is a formalism for describing curvature intrinsically to a space such as $P_d^+$. It is measured by sliding a plane tangentially along geodesic paths and measuring the twisting and turning undergone by that tangential plane. A detailed computation shows that the sectional curvature of $P_d^+$ is everywhere non-positive. This means that locally the space is curved somewhat between a pseudo-sphere and Euclidean space. Alternatively, this can be described by stating that $P_d^+$ nowhere bulges out like a sphere but is instead puckered in or flat.

A smooth subspace of Euclidean space having everywhere non-positive curvature when equipped with a Riemannian metric, and for which every pair of points can be joined by a unique distance-minimizing geodesic, is called a Cartan-Hadamard manifold. These spaces possess many well-behaved properties, as studied in [2], but the property of Cartan-Hadamard manifolds most relevant to this paper is the existence of a smooth map $\operatorname{Log}$ from $P_d^+$ onto the Euclidean space of dimension equal to that of $P_d^+$. For every fixed input, this map is infinitely differentiable, has an infinitely differentiable inverse, and therefore puts $P_d^+$ in smooth correspondence with that Euclidean space. The map $\operatorname{Log}$ is called the Riemannian Logarithm. It is related to the distance between two covariance matrices through

$$d(P,Q)=\left\|P^{-1/2}\operatorname{Log}_{P}(Q)\,P^{-1/2}\right\|_{F}. \qquad (3.4)$$

The Riemannian Exponential map, denoted by $\operatorname{Exp}$, is the inverse of $\operatorname{Log}$. The Riemannian Exponential map takes a covariance matrix $P$ and a velocity vector $V$ tangential to $P_d^+$ at $P$, and maps them to the covariance matrix $\operatorname{Exp}_P(V)$ found by traveling along $P_d^+$ on the most efficient path beginning at $P$ with initial velocity $V$ and stopping the movement after one time unit. Geodesics on $P_d^+$ are obtained by scaling the initial velocity vector in the $\operatorname{Exp}$ map, which is expressed as

$$\operatorname{Exp}_{P}(tV)=P^{1/2}\exp\!\left(t\,P^{-1/2}VP^{-1/2}\right)P^{1/2}, \qquad (3.5)$$

where $\exp$ is the matrix exponential.
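The $\operatorname{Exp}$ and $\operatorname{Log}$ maps on $P_d^+$ admit the closed forms above; the following minimal Python sketch (illustrative, with hypothetical helper names) implements them and checks that they are mutually inverse.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def spd_log(P, Q):
    """Riemannian logarithm: Log_P(Q) = P^{1/2} log(P^{-1/2} Q P^{-1/2}) P^{1/2}."""
    s = np.real(sqrtm(P))
    si = np.linalg.inv(s)
    return np.real(s @ logm(si @ Q @ si) @ s)

def spd_exp(P, V):
    """Riemannian exponential: Exp_P(V) = P^{1/2} exp(P^{-1/2} V P^{-1/2}) P^{1/2}."""
    s = np.real(sqrtm(P))
    si = np.linalg.inv(s)
    return np.real(s @ expm(si @ V @ si) @ s)
```

Replacing `V` by `t * V` for `t` in `[0, 1]` in `spd_exp` traces out the geodesic from `P` to `Q = spd_exp(P, spd_log(P, Q))`, matching equation (3.5).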

Returning to portfolio theory, any efficient portfolio in the sense of equation (3.2) is entirely characterized by the log-returns, the non-degenerate covariance structure between the risky assets, and the risk-aversion level. The space parameterizing all the efficient portfolios, which will be called the Markowitz space after [38], has a natural geometric structure.

Definition 3.1 (Markowitz Space)

Let $g_{\mathbb{R}}$ and $g_{\mathbb{R}^d}$ be the Euclidean Riemannian metrics on $\mathbb{R}$ and $\mathbb{R}^d$, and let $g_{P_d^+}$ be the Riemannian metric on $P_d^+$. The Riemannian manifold

$$\mathcal{M}_d\triangleq\left(\mathbb{R}\times\mathbb{R}^{d}\times P_d^{+},\; g_{\mathbb{R}}\oplus g_{\mathbb{R}^d}\oplus g_{P_d^+}\right)$$

is called the ($d$-dimensional) Markowitz space.

Proposition 3.2 (Select Properties of the Markowitz Space).

The Markowitz space is connected, of non-positive curvature, and its associated metric space is complete. The distance function is

$$d_{\mathcal{M}}\big((q_1,\mu_1,\Sigma_1),(q_2,\mu_2,\Sigma_2)\big)=\left((q_1-q_2)^{2}+\left\|\mu_1-\mu_2\right\|^{2}+d(\Sigma_1,\Sigma_2)^{2}\right)^{1/2}. \qquad (3.6)$$

The Riemannian $\operatorname{Exp}$ and $\operatorname{Log}$ maps on $\mathcal{M}_d$ are of the form

$$\begin{aligned}
\operatorname{Exp}_{(q,\mu,\Sigma)}(v,u,V)&=\left(q+v,\;\mu+u,\;\Sigma^{1/2}\exp\!\left(\Sigma^{-1/2}V\Sigma^{-1/2}\right)\Sigma^{1/2}\right),\\
\operatorname{Log}_{(q_1,\mu_1,\Sigma_1)}(q_2,\mu_2,\Sigma_2)&=\left(q_2-q_1,\;\mu_2-\mu_1,\;\Sigma_1^{1/2}\log\!\left(\Sigma_1^{-1/2}\Sigma_2\Sigma_1^{-1/2}\right)\Sigma_1^{1/2}\right). \qquad (3.7)
\end{aligned}$$

Note that the Riemannian exponential and logarithm maps are defined everywhere and put $\mathcal{M}_d$ in smooth correspondence with a Euclidean space of equal dimension.

Proof.

The proof is deferred to the appendix. ∎

The Markowitz space serves as the prototypical example of the geometric spaces considered in the rest of this paper: Riemannian manifolds of non-positive curvature for which every two points can be joined by a unique distance-minimizing geodesic. In the remainder of this paper, all Riemannian manifolds will be Cartan-Hadamard manifolds. Cartan-Hadamard manifolds appear in many places in mathematical finance; for example, in [30] the natural geometries associated with stochastic volatility models with two driving factors are Cartan-Hadamard manifolds.

On Cartan-Hadamard manifolds, such as the Markowitz space, there is no rigorously defined notion of conditional expectation. Therefore rigorous estimation intrinsic to these spaces’ geometries is still a generally unsolved problem. We motivate this problem by discussing a few formulations of intrinsic conditional expectation and related empirical techniques present in the mathematical imaging literature.

The least-squares formulation of conditional expectation is

$$\mathbb{E}\left[X\mid\mathcal{G}\right]=\operatorname*{arg\,min}_{Z\in L^{2}(\Omega,\mathcal{G};\mathbb{R}^{d})}\mathbb{E}\left[\left\|X-Z\right\|^{2}\right].$$

Replacing the expected Euclidean distance by the expected intrinsic distance gives the typical formulation of a non-Euclidean conditional expectation. This formulation will be referred to as the intrinsic conditional expectation.

Alternatively, estimates in a Riemannian manifold are made by locally linearizing the data using the Riemannian Log map, performing the estimate in Euclidean space, and returning the result back onto the manifold. This type of methodology has been used extensively in the computer vision and mathematical imaging literature by [23, 26, 28, 1], and [47]. In [47], the authors empirically support estimating the intrinsic conditional expectation by a procedure which first linearizes the observations using the Riemannian Log transform, subsequently computes the conditional expectation in Euclidean space, and lastly returns the prediction onto the Riemannian manifold using the Riemannian Exp map.
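The linearize-estimate-retract recipe can be sketched for the simplest Euclidean estimator, the sample mean: linearize the observed covariance matrices at a base point with Log, average in the tangent space, and retract with Exp. Iterating this update converges to the Riemannian barycenter. This is an illustrative sketch under the affine-invariant geometry of $P_d^+$, not the cited implementations; all names are hypothetical.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def _sqrt_pair(P):
    s = np.real(sqrtm(P))
    return s, np.linalg.inv(s)

def spd_log(P, Q):
    s, si = _sqrt_pair(P)
    return np.real(s @ logm(si @ Q @ si) @ s)

def spd_exp(P, V):
    s, si = _sqrt_pair(P)
    return np.real(s @ expm(si @ V @ si) @ s)

def linearized_mean(samples, base, iters=30):
    """Linearize the data at `base` with Log, average in the tangent space,
    and retract with Exp; iterating converges to the Riemannian barycenter."""
    for _ in range(iters):
        V = sum(spd_log(base, Q) for Q in samples) / len(samples)
        base = spd_exp(base, V)
    return base
```

For two samples, a single update already lands on the geodesic midpoint, since averaging the two tangent vectors halves the initial velocity of the connecting geodesic.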

This paper provides a rigorous framework for the two methods described above, proves the existence of their optimum, and shows that both formulations agree. The rigorous formulation of the non-Euclidean filtering algorithm of [47] is used to derive non-Euclidean filtering equations. The non-Euclidean filtering problem is implemented and used to accurately forecast efficient portfolios by exploiting the geometry of the Markowitz space.

Empirical evidence for the importance of considering non-Euclidean geometry will be examined in the next section before developing a general theory of non-Euclidean conditional expectation in Section 4.

4 Non-Euclidean Conditional Expectations and Intrinsic Forecasting

Let $\mathbb{E}[\cdot\mid\mathcal{G}]$ denote the vector-valued conditional expectation in $\mathbb{R}^d$. Let $t>0$ and consider

$$\begin{aligned}
\mathbb{E}\left[X_{t}\mid\mathcal{G}\right]&=\lim_{h\to 0^{+}}\mathbb{E}\left[X_{t}\mid\mathcal{G}\right]\\
&=\lim_{h\to 0^{+}}\left(\mathbb{E}\left[X_{t-h}\mid\mathcal{G}\right]+\mathbb{E}\left[X_{t}-\mathbb{E}\left[X_{t-h}\mid\mathcal{G}\right]\,\middle|\,\mathcal{G}\right]\right)\\
&=\lim_{h\to 0^{+}}\operatorname{Exp}_{\mathbb{E}[X_{t-h}\mid\mathcal{G}]}\left(\mathbb{E}\left[\operatorname{Log}_{\mathbb{E}[X_{t-h}\mid\mathcal{G}]}\left(X_{t}\right)\,\middle|\,\mathcal{G}\right]\right). \qquad (4.1)
\end{aligned}$$

The first equality is obtained by taking the limit of a constant sequence, and the second is achieved using the $\mathcal{G}$-measurability of $\mathbb{E}[X_{t-h}\mid\mathcal{G}]$ and the linearity of conditional expectation. The last line of equation (4.1) is obtained by using the fact that the Riemannian Exponential and Logarithm maps in Euclidean space respectively correspond to addition and subtraction.

Equation (4.1) expresses the conditional expectation at time $t$ as moving from the conditional expectation at an arbitrarily close past time $t-h$ along a straight line, with initial velocity determined by the position of $X_t$ and the last computed conditional expectation. The past time period is made arbitrarily small by taking the limit $h\to 0^{+}$.

Equation (4.1) may be generalized and taken to be the definition of conditional expectation in the general Cartan-Hadamard manifold setting. In general, this definition will rely on a particular non-anticipative pathwise extension of a process, similar to the horizontal path extensions introduced in [17]. The extension holds the initial realized value constant before the initial time and holds the last realized value constant thereafter. Formally, the extension is defined pathwise by

Figure 1: Extension of the process .

The next assumption will be made to ensure that the initial conditional probability laws exist on .

Assumption 4.1

Suppose that $X_0$ is $\mathcal{G}_0$-measurable and that its law is absolutely continuous with respect to the intrinsic measure $\mu_g$ on $M$. Denote its density by $f_0$, and assume that there exists at least one point $p\in M$ such that the integral $\int_{M}d_g^{2}(p,x)\,f_0(x)\,\mu_g(dx)$ is finite.

Definition 4.2 (Geodesic Conditional Expectation)

Let $X$ be an $M$-valued càdlàg process and $\mathcal{G}$ be a sub-filtration of $(\mathcal{F}_t)_{t\geq 0}$. The geodesic conditional expectation of $X_t$ given $\mathcal{G}$, denoted by $\mathbb{E}^{g}\left[X_t\mid\mathcal{G}\right]$, is defined to be the solution to the recursive system

$$\mathbb{E}^{g}\left[X_{t}\mid\mathcal{G}\right]=\lim_{h\to 0^{+}}\operatorname{Exp}_{\mathbb{E}^{g}[X_{t-h}\mid\mathcal{G}]}\left(\mathbb{E}\left[\operatorname{Log}_{\mathbb{E}^{g}[X_{t-h}\mid\mathcal{G}]}\left(X_{t}\right)\,\middle|\,\mathcal{G}\right]\right), \qquad (4.2)$$

where the inner Euclidean conditional expectation is computed through the $\mathcal{G}$-optional projection.

The geometric intuition behind equation (4.2) is that the geodesic conditional expectation at time $t$ is computed by first predicting, from the previous estimate at time $t-h$, the infinitesimal velocity describing the current state on $M$, and then moving across the infinitesimal geodesic in that direction. The computational implication of equation (4.2) is that all the classical tools for computing the Euclidean conditional expectation may be used to compute the geodesic conditional expectation, once the Riemannian Exp and Log maps are computed.

Lemma 4.3 (Existence of Initial Condition).

Under Assumption 4.1, $\mathbb{E}^{g}\left[X_0\mid\mathcal{G}_0\right]$ exists and is $\mathbb{P}$-a.s. unique.

Proof.

Under Assumption 4.1, [2, Exercise 5.11] guarantees the existence of $\mathbb{E}^{g}\left[X_0\mid\mathcal{G}_0\right]$. ∎

The geodesic conditional expectation is an atypical formulation of non-Euclidean conditional expectation. Typically, non-Euclidean conditional expectation is defined as the $M$-valued random element minimizing the expected intrinsic distance to $X$.

Following [34], $M$ is first isometrically embedded into a large Euclidean space $\mathbb{R}^{D}$; the space $L^{2}(\Omega;M)$ is subsequently defined as the subset of the Bochner-Lebesgue space $L^{2}(\Omega;\mathbb{R}^{D})$ consisting of the equivalence classes of measurable maps $Z$ which are $\mathbb{P}$-a.s. supported on $M$, and for which there exists some $p\in M$ for which

$$\mathbb{E}\left[d_g^{2}\left(Z,p\right)\right]<\infty. \qquad (4.3)$$

The set $L^{2}(\Omega;M)$ is a Banach manifold (see [43] for more general results).

Definition 4.4 (Intrinsic Conditional Expectation)

The intrinsic conditional expectation, with respect to the $\sigma$-subalgebra $\mathcal{G}$ of $\mathcal{F}$, of an $M$-valued stochastic process $X$ is defined as the optimal Bayesian action

$$\mathbb{E}^{i}\left[X_{t}\mid\mathcal{G}\right]\in\operatorname*{arg\,min}_{Z\in L^{2}(\Omega,\mathcal{G};M)}\mathbb{E}\left[d_g^{2}\left(X_{t},Z\right)\right].$$

When $\mathcal{G}=\mathcal{F}$, we will simply write $\mathbb{E}^{i}\left[X_{t}\right]$.

Intuition about intrinsic conditional expectation is gained by turning to the Markowitz space.

Example 4.5.

Let the risk-aversion level $q$ be fixed and constant. Let $Z_t=(q,\mu_t,\Sigma_t)$ be a process taking values in the Markowitz space, with distance function given by equation (3.6). Then the intrinsic conditional expectation of $Z_t$ given $\mathcal{G}$ is

(4.4)

The conditional expectation intrinsic to the Markowitz space seeks portfolio weights which give the most likely log-returns given the information in $\mathcal{G}$, while penalizing for the variance taken on by following that path.

In the case where $\mu_t$ is independent of $\mathcal{G}$ and $\Sigma_t$ is $\mathcal{G}$-measurable, equation (4.4) simplifies. Since the first term does not depend on $\Sigma_t$ and the latter is $\mathcal{G}$-measurable, $\Sigma_t$ may be substituted into the second term, which sets it to zero. Therefore, in this simplified scenario, the least-squares property of Euclidean conditional expectation (see [32, Page 80]) implies that

(4.5)

There is a natural topology on $L^{2}(\Omega;M)$, characterized as the weakest topology in which convergent sequences of càdlàg processes in $L^{2}(\Omega;M)$ converge (see A.2 for a rigorous discussion). For any two elements $X$ and $Y$ of $L^{2}(\Omega;M)$ with this topology, we will write

$$X\approx Y$$

if $X$ and $Y$ are indistinguishable in this topology. Intuitively, this means that they cannot be separated in the topology. For example, in $\mathbb{R}^{d}$ two points are indistinguishable if and only if they are equal; the same is true in metric spaces. Whereas in the space of square-integrable measurable functions equipped with its usual topology, two functions are indistinguishable if and only if they are equal at almost all points (see [33] for details on topological indistinguishability).

Under mild assumptions, the geodesic conditional expectation and intrinsic conditional expectation agree on Cartan-Hadamard spaces as shown in the following theorem.

Theorem 4.6 (Unified Conditional Expectations).

Let $X$ be an $M$-valued process with càdlàg paths which lies in $L^{2}(\Omega;M)$ for $\lambda$-a.e. $t\geq 0$ and is such that Assumptions 4.1 and A.7 hold. For every $t\geq 0$, the intrinsic conditional expectation exists. Moreover,

$$\mathbb{E}^{i}\left[X_{t}\mid\mathcal{G}\right]\approx\mathbb{E}^{g}\left[X_{t}\mid\mathcal{G}\right], \qquad (4.6)$$

where the left-hand side of equation (4.6) is the intrinsic conditional expectation and its right-hand side is the geodesic conditional expectation.

Theorem 4.6 justifies the particle filtering algorithm of [47]. Before proving Theorem 4.6 and developing the required theory, a few implications and examples will be explored.

4.1 Filtering Equations

Theorem 4.6 has computational implications for forecasting the optimal intrinsic conditional expectation using the geodesic conditional expectation. These implications take the form of computable solutions to certain filtering problems.

Instead of describing the dynamics of a coupled pair of $M$-valued signal and observation processes $(X_t,Y_t)$ intrinsically to $M$, Theorem 4.6 justifies locally linearizing $X_t$ and $Y_t$, then describing their Euclidean dynamics, before finally returning them onto $M$. More specifically, assume that

(4.7)

where $W_t$ and $B_t$ are independent Brownian motions and the coefficients satisfy the usual existence and uniqueness conditions (see [12, Chapter 22.1] for example). This implies that $X_t$ depends only on itself and that $Y_t$ depends only on $X_t$ and itself. In particular, this implies that

(4.8)

where $\mathcal{F}^{Y}_{t}$ is the filtration generated only by $Y$. Using these dynamics, asymptotic local filtering equations for the dynamics of the geodesic conditional expectation in terms of the observations can be deduced; they are summarized in the following Corollary of Theorem 4.6.

Corollary 4.7 (Asymptotic Non-Euclidean Filter).

Let $t>0$, denote the $i^{\text{th}}$ coordinate of $X_t$ by $X_t^{i}$, and suppose that Assumptions 4.1 and A.7 hold, as well as the assumptions on the coefficients made in [12, Chapter 22.1]. If the intrinsic conditional expectation is $\mathbb{P}$-a.e. unique, a version of it must satisfy the SDE

(4.9)

where the limit is taken with respect to the metric topology on $M$ and the auxiliary processes are defined by

Proof.

The proof is deferred to the appendix. ∎

Corollary 4.7 gives a way to use classical Euclidean filtering methods to obtain arbitrarily precise approximations to an SDE for the non-Euclidean conditional expectation. It is two-fold recursive, as it requires the previous non-Euclidean conditional expectation to compute the next update. In practice, the base point will be taken to be the previous asymptotic estimate.
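The recursive update pattern can be sketched as follows (a schematic illustration, not the paper's filter): linearize the new observation at the previous estimate, apply a Euclidean correction in the tangent space (a fixed gain stands in here for the Euclidean filtering update), and retract with Exp. Crucially, the base point of the Log/Exp transformations is updated at every step. All names and numbers are hypothetical.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def spd_log(P, Q):
    s = np.real(sqrtm(P)); si = np.linalg.inv(s)
    return np.real(s @ logm(si @ Q @ si) @ s)

def spd_exp(P, V):
    s = np.real(sqrtm(P)); si = np.linalg.inv(s)
    return np.real(s @ expm(si @ V @ si) @ s)

def filter_step(prev_estimate, observation, gain=0.3):
    """One schematic update: linearize the observation at the previous
    estimate, scale the tangent-space innovation by a fixed gain (a stand-in
    for the Euclidean filtering update), and retract onto the manifold."""
    v = spd_log(prev_estimate, observation)   # innovation in the tangent space
    return spd_exp(prev_estimate, gain * v)   # move part-way along the geodesic

prev = np.diag([1.0, 2.0])
obs = np.array([[1.5, 0.2],
                [0.2, 1.0]])
new = filter_step(prev, obs)
```

Because the update is a point on the geodesic from `prev` to `obs`, the result is always symmetric positive-definite, which a componentwise Euclidean update cannot guarantee.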

The next section investigates the numerical performance of the non-Euclidean filtering methodology.

5 Numerical Results

To evaluate the empirical performance of the filtering equations of Corollary 4.7, 1000 successive closing prices of Apple and Google stock, ending on September 4, 2018, are considered. The unobserved signal process $X_t$ is the covariance matrix between the closing prices at time $t$, and the observation process $Y_t$ is the empirical covariance matrix generated on moving windows of fixed length.

The signal and observation processes $X_t$ and $Y_t$ are assumed to be coupled by equation (4.7). The drift functions are modeled as deterministic linear functions and the diffusion coefficients are modeled as constants:

(5.1)

where $A$, $B$, $C$, $H$, and $K$ are invertible diagonal matrices.

Analogous dynamics are assumed for the benchmark methods, ensuring that the Kalman filter is the solution to the stochastic filtering problem. The values of $A$, $B$, $C$, $H$, and $K$ are estimated using maximum likelihood estimation.

Both the classical (EUC) and proposed (N-KF) methods are also benchmarked against the non-Euclidean Kalman filtering algorithm of [28] (N-KF-int and N-KF-ext). This algorithm proposes that the dynamics of $X_t$ and $Y_t$ be modeled in Euclidean space using the transformations

where the base point is the intrinsic Riemannian barycenter (see [6] for detailed properties of the intrinsic mean), chosen by sequential validation, and the Riemannian Log and Exp functions are derived from the geometry of the underlying matrix space. Unlike equations (4.7), the Riemannian Log and Exp maps are always performed about the same point and do not update. This will be reflected in the estimates, whose performance progressively degrades over time.

The Riemannian barycenter is computed both intrinsically and extrinsically using the first observed empirical covariance matrices. The extrinsic Riemannian barycenter on $P_d^+$ is defined to be the minimizer of

The extrinsic formulation of the Kalman filtering algorithm of [28] (N-KF-ext) models the linearized signal and observation processes by

The length of the moving window was calibrated so as to maximize the performance of the standard Kalman filter performed componentwise. The number of observed covariance matrices used to compute the intrinsic mean was chosen by sequential validation on the initial portion of the data. The findings are reported in the following table.

N-KF 3.706e-01 3.366e-01 5.024e-01 4.613e-01 4.945e-01 4.540e-01
EUC 6.174e-01 5.507e-01 7.662e-01 6.863e-01 7.690e-01 6.890e-01
N-KF-int 5.724e-01 5.455e-01 7.769e-01 7.407e-01 7.776e-01 7.412e-01
N-KF-ext 5.244e-01 4.804e-01 7.078e-01 6.515e-01 7.051e-01 6.497e-01
Table 1: Efficient Portfolio One-Day Ahead Forecasts

Table 1 examines the one-day-ahead predictive power by evaluating the accuracy of the forecasted portfolio weights. N-KF is the proposed algorithm. N-KF-int is the algorithm of [28], based on the methods of [24], without the unscented transform; it computes the Riemannian Exp and Log maps about the intrinsic mean of the first 15 observed covariance matrices. N-KF-ext is the same with the mean computed extrinsically (see [6] for a detailed study of intrinsic and extrinsic means on Riemannian manifolds). The one-day-ahead predicted weights are evaluated against the next day's optimal portfolio weights using two norms, for portfolios with three risk-aversion levels.

According to each of the performance metrics, the efficient portfolios forecasted using the intrinsic conditional expectation introduced in this paper perform best. An interpretation is that the Euclidean method disregards all the geometric structure, while the competing non-Euclidean methods do not update their reference points for the Log and Exp transformations. The failure to update the reference point results in progressively degrading performance. This effect is not as noticeable when the data is static, as in [24, 23]; however, the time-series nature of the present data makes the need to update the reference point for the transformations numerically apparent.

Frob. Max Modulus Inf. Spectral
N-KF 2.425e-04 1.988e-04 2.700e-04 2.345e-04
EUC 5.043e-04 4.041e-04 5.525e-04 4.772e-04
N-KF-int 4.321e-04 3.524e-04 4.817e-04 4.224e-04
N-KF-ext 5.342e-04 4.200e-04 6.006e-04 5.219e-04
Table 2: Comparison of Covariance Matrix Prediction

Table 2 examines the covariance matrix forecasts of all four methods directly. The performance metrics considered are the Frobenius, maximum modulus, infinity, and spectral matrix norms of the difference between the forecasted covariance matrix and the realized future covariance matrix of the two stocks' closing prices.

In Table 2, all the non-Euclidean methods out-perform the component-wise classical Euclidean forecasts of the one-day-ahead predicted covariance matrix. The prediction of covariance matrices is less sensitive than that of the efficient portfolio weights; this is most likely due to a term appearing in equation (3.2) which is sensitive to small changes because of its observably small value.

95 L mean 95 U
Frob. 4.70e-04 5.04e-04 5.43e-04
Max. Mod. 3.73e-04 4.04e-04 4.37e-04
Inf. 5.13e-04 5.52e-04 5.99e-04
Spec. 4.42e-04 4.77e-04 5.16e-04
(a) Euclidean Kalman Filter
95 L mean 95 U
Frob. 2.02e-04 2.42e-04 3.04e-04
Max. Mod. 1.61e-04 1.98e-04 2.53e-04
Inf. 2.25e-04 2.70e-04 3.36e-04
Spec. 1.97e-04 2.34e-04 2.92e-04
(b) Asymptotic Non-Euclidean Kalman Filter
95 L mean 95 U
Frob. 3.91e-04 4.32e-04 4.68e-04
Max. Mod. 3.18e-04 3.52e-04 3.91e-04
Inf. 4.38e-04 4.81e-04 5.26e-04
Spec. 3.82e-04 4.22e-04 4.64e-04
(c) Non-Updating Intrinsic Barycenter
95 L mean 95 U
Frob. 4.94e-04 5.34e-04 5.75e-04
Max. Mod. 3.88e-04 4.20e-04 4.56e-04
Inf. 5.55e-04 6.00e-04 6.46e-04
Spec. 4.82e-04 5.21e-04 5.65e-04
(d) Non-Updating Extrinsic Barycenter
Table 3: Bootstrapped Adjusted Confidence Intervals for Performance Metrics

Tables 3 and 4 report confidence intervals about the estimated mean of the one-day-ahead mean error of each respective distance measure. The error distribution of the performance metrics is non-Gaussian according to the Shapiro-Wilk test for normality (see [45] for details). The bootstrap adjusted confidence (BAC) interval method of [15] is used instead to non-parametrically generate the confidence intervals. The BAC method is chosen since it does not assume that the underlying distribution is Gaussian, it corrects for bias, and it corrects for skewness in the data. The bootstrapping was performed by re-sampling from the realized error distributions of the performance metrics.

Tables 2 and 3 show that the N-KF method is the most accurate and has the lowest variance amongst all the methods according to the Frobenius, maximum modulus, infinity, and spectral matrix norms.

95 L Mean 95 U
5.93e-01 6.17e-01 6.40e-01
5.32e-01 5.51e-01 5.70e-01
7.38e-01 7.66e-01 7.98e-01
6.59e-01 6.86e-01 7.14e-01
7.37e-01 7.69e-01 8.00e-01
6.61e-01 6.89e-01 7.15e-01
(a) Euclidean Kalman Filter
95 L Mean 95 U
3.49e-01 3.71e-01 3.92e-01
3.18e-01 3.37e-01 3.55e-01
4.71e-01 5.02e-01 5.33e-01
4.33e-01 4.61e-01 4.89e-01
4.66e-01 4.94e-01 5.25e-01
4.26e-01 4.54e-01 4.82e-01
(b) Non-Euclidean Kalman Filter
95 L Mean 95 U
5.51e-01 5.72e-01 5.95e-01
5.25e-01 5.45e-01 5.67e-01
7.45e-01 7.77e-01 8.07e-01
7.10e-01 7.41e-01 7.73e-01
7.45e-01 7.78e-01 8.13e-01
7.13e-01 7.41e-01 7.74e-01
(c) Non-Updating Intrinsic Barycenter
95 L Mean 95 U
6.70e-01 6.89e-01 7.07e-01
5.97e-01 6.14e-01 6.31e-01
8.21e-01 8.48e-01 8.76e-01
7.29e-01 7.55e-01 7.78e-01
8.15e-01 8.43e-01 8.70e-01
7.26e-01 7.51e-01 7.76e-01
(d) Non-Updating Extrinsic Barycenter
Table 4: Bootstrapped Adjusted Confidence Intervals for Performance Metrics of Portfolio Weights One-Day Ahead Predictions

Tables 1 and 2 reflect that the forecasting performance of the N-KF method for the efficient portfolio weights is more accurate than that of the other methods. This is again seen in the lower bias and tighter confidence intervals reported in Table 4. The numerics presented reflect the importance of incorporating relevant geometry into mathematical finance problems. The manner in which non-Euclidean geometry is incorporated into numerical procedures influences the effectiveness of the algorithms, as indicated by the superior performance of the non-Euclidean Kalman filter over the other benchmark methods.

The next section summarizes the contributions made in this paper.

6 Summary

In this paper we have considered non-Euclidean generalizations of conditional expectation which naturally model non-Euclidean features present in probabilistic models. The need to incorporate relevant geometric information into probabilistic estimation procedures within mathematical finance was motivated by the geometry of efficient portfolios. The connection between geometry and mathematical finance has also been explored in [7], [19, 21], [27], [29], [26], [4], [11], and [35, 36]. Non-Euclidean filtering was seen to outperform traditional Euclidean filtering methods, with the estimates exhibiting lower prediction errors.
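As a concrete toy illustration of the intrinsic (Karcher/Fréchet) barycenter computed by the benchmark methods above, consider the unit circle, where the Riemannian Exp and Log maps have closed forms. The fixed-point iteration below is a generic sketch under that simplifying choice of manifold, not the paper's implementation:

```python
import math

def log_map(p, q):
    """Riemannian logarithm on the unit circle S^1 (angles in radians):
    the signed geodesic displacement from p to q, wrapped to (-pi, pi]."""
    d = (q - p) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d

def exp_map(p, v):
    """Riemannian exponential on S^1: move distance v along the circle."""
    return (p + v) % (2 * math.pi)

def intrinsic_mean(angles, tol=1e-10, max_iter=100):
    """Karcher/Frechet mean: iterate p <- Exp_p(average of Log_p(q_i))
    until the tangent-space update is negligible."""
    p = angles[0]
    for _ in range(max_iter):
        step = sum(log_map(p, q) for q in angles) / len(angles)
        p = exp_map(p, step)
        if abs(step) < tol:
            break
    return p
```

For two angles just on either side of zero, say 2π − 0.1 and 0.1, the intrinsic mean is (up to rounding) zero, whereas naively averaging the representatives 6.18 and 0.1 in the ambient coordinates gives a point near π, on the opposite side of the circle; this is the bias that intrinsic averaging removes.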

The numerical procedure was justified by Theorem 4.6, which proved the equivalence and existence of common formulations of intrinsic conditional expectation as transformations of a specific Euclidean conditional expectation. These results were established using the variational-calculus theory of Γ-convergence, introduced in [14] and subsequently developed in [10], by temporarily passing through larger ambient function spaces. To our knowledge, these are novel proof techniques within the field of mathematical finance and applied probability theory.

A central consequence of Theorem 4.6 is the possibility of writing down computable stochastic filtering equations for the dynamics of the intrinsic conditional expectation using classical Euclidean filtering equations. Our results differ from those of [41], [16], and [18] in that we forecast the dynamics of an intrinsic conditional expectation, and not the dynamics of the Euclidean conditional expectation of a function of a non-Euclidean signal and/or observation process. Likewise, our results do not rely on the Le Jan-Watanabe connection, as those of [44] do, and the only computational bottleneck may be computing the Riemannian logarithm and Riemannian exponential maps. However, these are readily available in many well-studied geometries not discussed in this paper, for example the hyperbolic geometry used to study the λ-SABR models in [30].
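To illustrate how the exponential and logarithm maps enter a filter's correction step, consider the simplest possible covariance geometry: positive reals (1×1 positive-definite "matrices") under the affine-invariant metric, for which Log and Exp are elementary. This is a schematic example of the mechanism, not the filter derived in the paper:

```python
import math

# Affine-invariant geometry on the positive reals (1x1 SPD matrices):
#   Log_p(q) = p * log(q / p),    Exp_p(v) = p * exp(v / p).
def spd_log(p, q):
    return p * math.log(q / p)

def spd_exp(p, v):
    return p * math.exp(v / p)

def geometric_update(p, q, gain):
    """One correction step of a manifold filter: pull the observation q
    into the tangent space at the forecast p, scale by the gain, and map
    back with Exp.  Here this equals p**(1-gain) * q**gain, so the
    updated estimate can never leave the positive cone."""
    return spd_exp(p, gain * spd_log(p, q))
```

With forecast 4.0, observation 1.0, and gain 0.5 the update returns 2.0, the geodesic midpoint, whereas the Euclidean blend would give 2.5; more importantly, the geometric update preserves positivity for any gain in [0, 1], which is the practical payoff of filtering through Exp and Log.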

Many other naturally occurring spaces in mathematical finance have the properties required for the central theorems of this paper to apply; for instance, the geometry of the two-factor stochastic volatility models developed in [30] does. The techniques developed here can find applications to that geometry, to other relevant geometries in mathematical finance, and to many other areas of applied probability theory where standard machine learning methods have been used extensively.

References

  • Allerhand and Shaked [2011] L. I. Allerhand and U. Shaked. Robust stability and stabilization of linear switched systems with dwell time. IEEE Trans. Automat. Contr., 56(2):381–386, 2011.
  • Ballmann [1995] W. Ballmann. Lectures on spaces of nonpositive curvature, volume 25 of DMV Seminar. Birkhäuser Verlag, Basel, 1995.
  • Barbaresco [2008] F. Barbaresco. Innovative tools for radar signal processing based on Cartan’s geometry of SPD matrices and information geometry. In Radar Conference, pages 1–6. IEEE, 2008.
  • Bayraktar et al. [2006] E. Bayraktar, L. Chen, and H. V. Poor. Projecting the forward rate flow onto a finite dimensional manifold. International Journal of Theoretical and Applied Finance, 9(05):777–785, 2006.
  • Best [2010] M. J. Best. Portfolio Optimization. Chapman and Hall/CRC, first edition, 2010.
  • Bhattacharya and Patrangenaru [2003] R. Bhattacharya and V. Patrangenaru. Large sample theory of intrinsic and extrinsic sample means on manifolds. I. Ann. Statist., 31(1):1–29, 2003.
  • Björk and Christensen [1999] T. Björk and B. J. Christensen. Interest rate dynamics and consistent forward rate curves. Math. Finance, 9(4):323–348, 1999.
  • Björk and Gaspar [2010] T. Björk and R. M. Gaspar. Interest rate theory and geometry. Port. Math., 67(3):321–367, 2010.
  • Bonnabel and Sepulchre [2009] S. Bonnabel and R. Sepulchre. Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM J. Matrix Anal. Appl., 31(3):1055–1070, 2009.
  • Braides [2006] A. Braides. A handbook of Γ-convergence. In M. Chipot and P. Quittner, editors, Handbook of Differential Equations: Stationary Partial Differential Equations, volume 3 of Handbook of Differential Equations, pages 101–213. Elsevier/North-Holland, Amsterdam, 2006.
  • Brody and Hughston [2004] D. C. Brody and L. P. Hughston. Chaos and coherence: a new framework for interest–rate modelling. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 460, pages 85–110. The Royal Society, 2004.
  • Cohen and Elliott [2015] S. N. Cohen and R. J. Elliott. Stochastic calculus and applications. Probability and its Applications. Birkhäuser, New York, NY, second edition, 2015.
  • Dal Maso [1993] G. Dal Maso. An introduction to Γ-convergence, volume 8 of Progress in Nonlinear Differential Equations and their Applications. Birkhäuser Boston, Inc., Boston, MA, 1993.
  • De Giorgi [1975] E. De Giorgi. Sulla convergenza di alcune successioni d’integrali del tipo dell’area. Rend. Mat., 8:277–294, 1975.
  • DiCiccio and Efron [1996] T. J. DiCiccio and B. Efron. Bootstrap confidence intervals. Statist. Sci., 11(3):189–228, 1996.
  • Duncan [1977] T. E. Duncan. Some filtering results in Riemann manifolds. Information and Control, 35(3):182–195, 1977.
  • Dupire [1994] B. Dupire. Pricing with a Smile. Risk Magazine, 7(1):18–20, 1994.
  • Elworthy et al. [2010] K. D. Elworthy, Y. Le Jan, and X.-M. Li. The geometry of filtering. Frontiers in Mathematics. Birkhäuser Verlag, Basel, 2010.
  • Filipović [2000] D. Filipović. Exponential-polynomial families and the term structure of interest rates. Bernoulli, 6(6):1081–1107, 2000.
  • Filipović [2001] D. Filipović. Consistency Problems for Heath-Jarrow-Morton Interest Rate Models, volume 1760 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2001.
  • Filipović and Teichmann [2004] D. Filipović and J. Teichmann. On the geometry of the term structure of interest rates. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 460, pages 129–167. The Royal Society, 2004.
  • Fletcher [2013] P. T. Fletcher. Geodesic regression and the theory of least squares on Riemannian manifolds. Int. J. Comput. Vis., 105(2):171–185, 2013.
  • Fletcher et al. [2004] P. T. Fletcher, C. Lu, S. M. Pizer, and S. Joshi. Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Trans. Med. Imaging, 23(8):995–1005, 2004.
  • Fletcher et al. [2006] P. T. Fletcher, S. M. Pizer, and S. C. Joshi. Shape variation of medial axis representations via principal geodesic analysis on symmetric spaces. In Statistics and analysis of shapes, Model. Simul. Sci. Eng. Technol., pages 29–59. Birkhäuser Boston, Boston, MA, 2006.
  • Han and Park [2016] C. Han and F. C. Park. A geometric GARCH framework for covariance dynamics. SSRN Preprints, 2016.
  • Han et al. [2017] C. Han, F. C. Park, and J. Kang. A geometric treatment of time varying volatilities. Rev. Quant. Finance Account., 49:1121–1141, 2017.
  • Harms et al. [2018] P. Harms, D. Stefanovits, J. Teichmann, and M. Wüthrich. Consistent Recalibration of Yield Curve Models. Math. Finance, 28(3):757–799, 2018.
  • Hauberg et al. [2013] S. Hauberg, F. Lauze, and K. S. Pedersen. Unscented Kalman filtering on Riemannian manifolds. J. Math. Imaging Vis., 46(1):103–120, 2013.
  • Henry-Labordère [2005] P. Henry-Labordère. A general asymptotic implied volatility for stochastic volatility models. ArXiv e-prints, 2005.
  • Henry-Labordère [2009] P. Henry-Labordère. Analysis, Geometry, and Modeling in Finance. Chapman & Hall/CRC Financial Mathematics Series. CRC Press, Boca Raton, FL, 2009.
  • Jost [2011] J. Jost. Riemannian Geometry and Geometric Analysis. Universitext. Springer, Heidelberg, sixth edition, 2011.
  • Kallenberg [2002] O. Kallenberg. Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.
  • Kelley [1975] J. L. Kelley. General topology. Springer-Verlag, New York-Berlin, 1975.
  • Korevaar and Schoen [1993] N. J. Korevaar and R. M. Schoen. Sobolev spaces and harmonic maps for metric space targets. Comm. Anal. Geom., 1(3-4), 1993.
  • Kratsios and Hyndman [2017] A. Kratsios and C. B. Hyndman. Arbitrage-Free Regularization. ArXiv e-prints, 2017.
  • Kratsios and Hyndman [2018] A. Kratsios and C. B. Hyndman. The NEU Meta-Algorithm for Geometric Learning with Applications in Finance. ArXiv e-prints, 2018.
  • Markowitz [1959] H. M. Markowitz. Portfolio selection: Efficient diversification of investments. John Wiley & Sons, Inc., New York, 1959.
  • Moakher and Zéraï [2011] M. Moakher and M. Zéraï. The Riemannian geometry of the space of positive-definite matrices and its application to the regularization of positive-definite matrix-valued data. J. Math. Imaging Vis., 40(2):171–187, 2011.
  • Nash [1956] J. Nash. The imbedding problem for Riemannian manifolds. Ann. of Math. (2), 63:20–63, 1956.
  • Ng and Caines [1985] S. K. Ng and P. E. Caines. Nonlinear filtering in Riemannian manifolds. IMA J. Math. Control Inform., 2(1):25–36, 1985.
  • Pennec et al. [2006] X. Pennec, P. Fillard, and N. Ayache. A Riemannian framework for tensor computing. Int. J. Comput. Vis., 66(1):41–66, 2006.
  • Piccione and Tausk [2001] P. Piccione and D. V. Tausk. On the Banach differential structure for sets of maps on non-compact domains. Nonlinear Anal., 46(2, Ser. A: Theory Methods):245–265, 2001.
  • Said and Manton [2013] S. Said and J. H. Manton. On filtering with observation in a manifold: reduction to a classical filtering problem. SIAM J. Control Optim., 51(1):767–783, 2013.
  • Shapiro and Wilk [1965] S. S. Shapiro and M. B. Wilk. An analysis of variance test for normality: Complete samples. Biometrika, 52:591–611, 1965.
  • Smith [2005] S. T. Smith. Covariance, subspace, and intrinsic Cramer-Rao bounds. IEEE Trans. Signal Process., 53(5):1610–1630, 2005.
  • Snoussi [2013] H. Snoussi. Particle filtering on Riemannian manifolds. Application to covariance matrices tracking. In F. Nielsen and R. Bhatia, editors, Matrix information geometry, pages 427–449. Springer, Berlin, Heidelberg, 2013.

Appendix A Proofs

In this section, technical proofs of results from the main body of the paper are given.

A.1 Markowitz Space Proof

Proof of Proposition 3.2.

In general, for any two Riemannian manifolds $(M_1, g_1)$ and $(M_2, g_2)$ there is a natural bundle isomorphism $T(M_1 \times M_2) \cong TM_1 \oplus TM_2$ (see [31, Section 2.1] for a discussion of vector bundles). Under this identification, define the metric $g$ on $M_1 \times M_2$ by
$$ g_{(x_1, x_2)}\big((u_1, u_2), (v_1, v_2)\big) = (g_1)_{x_1}(u_1, v_1) + (g_2)_{x_2}(u_2, v_2) $$
for each $(x_1, x_2) \in M_1 \times M_2$ and all $(u_1, u_2), (v_1, v_2) \in T_{x_1}M_1 \oplus T_{x_2}M_2$.

Let $\nabla$ be the Levi-Civita connection on the product of the two Riemannian manifolds; then $\nabla = \nabla^1 \oplus \nabla^2$, where $\nabla^i$ is the Levi-Civita connection on $M_i$. Therefore, if $\gamma_1$ and $\gamma_2$ are geodesics on $M_1$ and $M_2$, respectively, then
$$ \nabla_{(\dot\gamma_1, \dot\gamma_2)} (\dot\gamma_1, \dot\gamma_2) = \big(\nabla^1_{\dot\gamma_1} \dot\gamma_1, \; \nabla^2_{\dot\gamma_2} \dot\gamma_2\big) = (0, 0), $$
whence the $M_1 \times M_2$-valued curve $(\gamma_1, \gamma_2)$ is a geodesic on the product Riemannian manifold. Therefore geodesics, and hence the $\operatorname{Exp}$ as well as the $\operatorname{Log}$ maps, can be expressed component-wise on the product Riemannian manifold. Particularizing