I. Introduction
As successful forecasting of financial time series can often be turned into profit, market self-regulation makes it difficult to predict the following value better than with just the previous value. However, this self-regulatory mechanism of the market does not restrict prediction of the probability distribution of values, which is crucial for example for risk evaluation or Monte Carlo simulations.
Standard approaches to predicting the probability distribution of values, like ARIMA, usually model this distribution as Gaussian, often of constant width: they predict some value and its inaccuracy (standard deviation). In contrast, we will model this probability distribution using a large number of independent coefficients describing the joint distribution as a polynomial, which turns out to lead to a very different and more complex distribution than the standardly assumed Gaussian, for example a multimodal one in the discussed example.
Specifically, as it is difficult to obtain a better prediction than just the previous value, we will focus on the sequence of differences between two succeeding values. In the discussed example it will be the 3-dimensional space of parameters of Yield Curves of the Diebold-Li model [diebold] (for fixed $\lambda$), whose dimensionality can be further increased for improved prediction by operating on a time window: using a few previous values as context for prediction.
For convenience of fitting a polynomial, we will first normalize each variable to a nearly uniform distribution on $[0,1]$. It can be done by transforming the variable with the CDF (cumulative distribution function) of an approximated distribution of this variable, for which we will use Laplace distributions, as they agree well with the empirical CDF (Fig. 2). Taking such normalized variables, e.g. for different parameters at a given or neighboring times, if uncorrelated they would come from a nearly uniform distribution on $[0,1]^d$. We will model the perturbation from this uniform density as a linear combination of orthonormal polynomials $f_j$. This makes MSE-optimal estimation very inexpensive [2]: the coefficient $a_j$ is just the average of $f_j$ over the sample. Coefficients for different $j$ are independent and have a multivariate cumulant-like specific interpretation; they can be used for describing statistical dependencies between the tested variables.
This article extends the methodology from [1] for a 1D variable (on the example of the Dow Jones Industrial Average time series) to the case of multidimensional random variables.
II. Normalization to nearly uniform density
We will use as a working example the time series of 6470 (from 1993 to 2018) daily Yield Curve parameters $(\beta_1, \beta_2, \beta_3)$ of the Diebold-Li model.
Time series are usually normalized, for example to allow the assumption of a stationary process: such that the joint probability distribution does not change while shifting in time. The standard approach, especially for Gaussian distributions, is to subtract the mean value, then divide by the standard deviation. However, such normalization does not exploit local dependencies between values, which is what we are interested in here.
Hence we will work on the sequence of differences (errors, residues) between the current value and its prediction based on previous values, which can be taken for example from ARIMA-like models. For simplicity we will use here the previous value as the predictor: operate on the sequence $(x_t)_{t=2,\ldots,n}$ where $x_t = v_t - v_{t-1}$. In practical applications this predictor can be replaced with a more sophisticated one, for example exploiting long-range dependencies.
As shown in Fig. 2, such sequences of differences from the predictor turn out to have a nearly Laplace distribution:

$$g(x) = \frac{1}{2b}\, \exp\!\left(-\frac{|x - \mu|}{b}\right) \qquad (1)$$

where the maximum likelihood estimation of parameters is just: $\mu$ = median of $(x_t)$, $b$ = mean of $(|x_t - \mu|)$.
For simplicity we use Laplace distributions here to normalize the variables to nearly uniform on $[0,1]$, with separate parameters $(\mu, b)$ for different variables:

$$y_t = F_{\mu,b}(x_t) \qquad (2)$$

where $F_{\mu,b}$ is the CDF of the used distribution (Laplace here).
We will search for the density of the normalized variables $y$. To invert transformation (2) and retrieve the final density of $x$, observe that $\Pr(X \leq x) = \Pr(Y \leq F_{\mu,b}(x))$. Differentiating over $x$, we get $\rho_X(x) = \rho_Y(F_{\mu,b}(x)) \cdot g(x)$.
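To make this normalization concrete, below is a minimal Python sketch of this section under the stated assumptions; the array names (`v`, `x`, `y`) and the synthetic stand-in series are illustrative, not from the original experiments.

```python
import numpy as np

def laplace_fit(x):
    """ML estimation for Laplace: mu = median of x, b = mean of |x - mu|."""
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def laplace_pdf(x, mu, b):
    """Density g(x) of (1); needed to map a density on [0,1] back to x."""
    return np.exp(-np.abs(x - mu) / b) / (2.0 * b)

def laplace_cdf(x, mu, b):
    """CDF F(x), used as the normalizing transformation (2)."""
    return np.where(x < mu,
                    0.5 * np.exp((x - mu) / b),
                    1.0 - 0.5 * np.exp(-(x - mu) / b))

# Illustrative usage on a synthetic series standing in for one parameter:
v = np.cumsum(np.random.laplace(size=1000))  # random walk with Laplace steps
x = np.diff(v)                               # differences x_t = v_t - v_{t-1}
mu, b = laplace_fit(x)
y = laplace_cdf(x, mu, b)                    # nearly uniform on [0,1]
# A fitted density rho on [0,1] maps back as rho_X(x) = rho(F(x)) * g(x).
```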
III. Hierarchical correlation reconstruction
After normalization we have a time series with nearly uniform density of the separate variables. Taking a few of its values, for different coordinates or neighboring in time, if uncorrelated they would come from a nearly uniform distribution on $[0,1]^d$; the difference from the uniform distribution describes statistical dependencies in our time series. We will use a polynomial to describe this difference: estimate the joint density $\rho$ for such vectors of neighboring values.
Assuming we have a sequence of vectors $y \in [0,1]^d$ of neighboring values (we will discuss various possibilities later), we would like to model the density of such vectors as a polynomial. It turns out [2] that using an orthonormal basis, which for the multidimensional case can be products of 1D orthonormal polynomials, mean square (MSE, $L^2$) optimization leads to an extremely simple formula for the estimated coefficients:

$$\rho(y) = \sum_{j} a_j f_j(y), \qquad a_j = \frac{1}{|S|} \sum_{y \in S} f_j(y), \qquad f_j(y) = \prod_{i=1}^{d} f_{j_i}(y_i) \qquad (3)$$
The basis used this way has $(m+1)^d$ functions, where $m$ is the maximal degree used for each variable. Beside inexpensive calculation, this simple approach also has the very convenient property of the coefficients being independent, giving each $j$ a unique value and interpretation. Independence also allows for flexibility of the considered basis: instead of considering all $j$, we can focus on the more promising ones, for example those with larger absolute values of coefficients, replacing the negligible ones with zero. Instead of MSE optimization, we can use the often preferred likelihood maximization [3], but it requires additional iterative optimization and introduces dependencies between coefficients.
The above 1D polynomials are orthonormal in $L^2([0,1])$: $\int_0^1 f_i(y)\, f_j(y)\, dy = \delta_{ij}$, getting (rescaled Legendre): $f_0 = 1$ and for $j = 1, 2, 3, 4$ correspondingly:

$$f_1(y) = \sqrt{3}(2y - 1), \qquad f_2(y) = \sqrt{5}(6y^2 - 6y + 1),$$
$$f_3(y) = \sqrt{7}(20y^3 - 30y^2 + 12y - 1), \qquad f_4(y) = 3(70y^4 - 140y^3 + 90y^2 - 20y + 1).$$
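These four polynomials can be transcribed directly; below is a sketch with a quick numerical check of orthonormality (the helper name `f` is illustrative).

```python
import numpy as np

def f(j, y):
    """Orthonormal (rescaled Legendre) polynomials on [0,1], j = 0..4."""
    y = np.asarray(y, dtype=float)
    if j == 0: return np.ones_like(y)
    if j == 1: return np.sqrt(3) * (2*y - 1)
    if j == 2: return np.sqrt(5) * (6*y**2 - 6*y + 1)
    if j == 3: return np.sqrt(7) * (20*y**3 - 30*y**2 + 12*y - 1)
    return 3.0 * (70*y**4 - 140*y**3 + 90*y**2 - 20*y + 1)

# Numerical check: integral of f_i * f_j over [0,1] should be delta_ij.
y = np.linspace(0.0, 1.0, 200001)
for i in range(5):
    for j in range(5):
        dot = np.trapz(f(i, y) * f(j, y), y)
        assert abs(dot - float(i == j)) < 1e-6
```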
They are plotted in the top of Fig. 3. $f_0$ corresponds to normalization. The $a_1$ coefficient decides about reducing or increasing the mean: it has a similar interpretation as the expected value. Analogously the $a_2$ coefficient decides about focusing or spreading of a given variable, similarly as the variance. And so on: further coefficients have a similar interpretation as cumulants; however, while reconstructing a density from moments is a difficult moment problem, the presented description directly gives the coefficients of a polynomial estimating the density.
For multiple variables, $f_j = \prod_i f_{j_i}$ with at least two nonzero indices describes only correlations between the corresponding coordinates and does not affect the remaining ones, as we can see in the bottom of Fig. 3. Each coefficient also has a specific interpretation here: for example $a_{11}$ decides between increase and decrease of the second variable with increase of the first, and analogously $a_{12}$ decides about focus or spread of the second variable with increase of the first.
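A minimal sketch of formula (3) for a $d$-dimensional sample, reusing `f` from the previous snippet; `estimate_coefficients` and `density` are illustrative names, not from the original paper.

```python
import itertools
import numpy as np

def estimate_coefficients(Y, m):
    """MSE-optimal coefficients (3): Y is an (n, d) sample in [0,1]^d;
    returns {(j_1,...,j_d): a_j} for all multi-indices with entries <= m."""
    n, d = Y.shape
    a = {}
    for j in itertools.product(range(m + 1), repeat=d):
        fj = np.ones(n)
        for i, ji in enumerate(j):
            fj *= f(ji, Y[:, i])   # product of 1D orthonormal polynomials
        a[j] = fj.mean()           # a_j = average of f_j over the sample
    return a

def density(a, y):
    """Evaluate the polynomial density estimate at a single point y."""
    return sum(aj * np.prod([f(ji, yi) for ji, yi in zip(j, y)])
               for j, aj in a.items())
```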
Errors of such estimated coefficients come from an approximately Gaussian distribution:

$$\tilde{a}_j \sim \mathcal{N}(a_j, \sigma_j), \qquad \sigma_j^2 \approx \frac{1}{|S|} \int_{[0,1]^d} (f_j(y))^2\, \rho(y)\, dy \qquad (4)$$

For $\rho = 1$ the integral has value 1, getting $\sigma_j \approx 1/\sqrt{|S|} \approx 0.0124$ in our case. As we can see in Fig. 4, many coefficients are more than tenfold larger here: they can be considered essential, not a result of noise.
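Since $\sigma_j \approx 1/\sqrt{|S|}$ for a nearly uniform density, the coefficients can be filtered against this noise level; a hedged sketch, where the tenfold threshold follows the rule of thumb above:

```python
import numpy as np

def significant_coefficients(a, n_sample, factor=10.0):
    """Keep normalization plus coefficients exceeding factor * sigma."""
    sigma = 1.0 / np.sqrt(n_sample)   # ~0.0124 for |S| = 6469
    return {j: aj for j, aj in a.items()
            if all(ji == 0 for ji in j) or abs(aj) > factor * sigma}
```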
IV. Context-free modeling
We will work analogously to Markov modeling: model the probability distribution of a new value based on one or a few previous values, referred to as the context for the prediction. In contrast to the standard Markov situation, the values we are using are continuous: from $[0,1]$, $[0,1]^2$ or $[0,1]^3$ here. The number of previous values considered for prediction is referred to as the order of the model. Order 0, or context-free, models values as independent random variables: all from the same probability density. Order 1 is a standard Markov process: it uses one previous value as the context to model the probability distribution of the current value. Analogously for higher order models: using a few previous values as the context for prediction.

Let us start with the basic context-free approach: just model the joint distribution of observed values, not looking for statistical dependencies with neighboring values. In the discussed example the dimension is $d = 3$: the normalized differences of the $(\beta_1, \beta_2, \beta_3)$ parameters at a given time.
We could also use only a subset of these variables; e.g. for $d = 2$ we have three possible pairs here: $(\beta_1, \beta_2)$, $(\beta_1, \beta_3)$ and $(\beta_2, \beta_3)$. Reducing to $d = 1$ should lead to a nearly uniform density due to the used normalization. Imperfection of e.g. the assumed Laplace distribution used for this purpose will be corrected while fitting the polynomial; in the multidimensional case by coefficients $j$ with $j_i = 0$ for all but a given coordinate.
The top of Fig. 1 contains evaluation for fitting a polynomial of fixed degree (for each variable) in 4 cases: all three variables and all 3 pairs. It shows the sorted predicted densities $\rho$ for the actually observed values: the higher they are, the better the prediction. Beside the used model, efficiency of such prediction strongly relies on objective statistical dependencies between these variables; for example it will fail if they are uncorrelated. Surprisingly, we can see from this plot that using just the first two variables gives a better prediction than using all three here: the third variable is weakly correlated, more strongly with one of the remaining two (red plot) than with the other (green).
Such plots evaluating prediction also allow to calibrate the density plots, including the interpretation of negative densities, which are an artifact of the presented method: from the plot we can read what fraction of cases here got negative density, hence the $\rho \leq 0$ region is expected to indicate the proper value in exactly this fraction of cases.
Analogously we can interpret further lines of these graphs; for example we can read in which fraction of cases the blue and orange lines in Fig. 1 exceed a given density threshold. The lower 3D density plot shows such a high-density region in $[0,1]^3$: while it contains only a small fraction of the volume, which would be its probability for uncorrelated variables, it contains a much larger fraction of the points from the sample.
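This calibration can be computed directly: sort the predicted densities of the observed points and read off the fractions. A sketch reusing the earlier `estimate_coefficients`/`density` helpers:

```python
import numpy as np

def calibration(Y, a):
    """Sorted predicted densities at observed points, plus the fraction of
    points with non-positive predicted density (the method's artifact)."""
    rho = np.sort(np.array([density(a, y) for y in Y]))
    frac_negative = np.mean(rho <= 0)
    return rho, frac_negative

# The fraction of observed points inside a predicted region {rho >= c} is
# np.mean(rho >= c); comparing it with the region's volume shows how much
# better the model does than the uniform (uncorrelated) baseline.
```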
2D regions for multiple isolines of constant density $\rho$ for all 3 pairs are presented in Fig. 4; alternatively they can be obtained as marginalizations of the 3D density. This gives a better visualization of the strong statistical dependence between $\beta_1$ and $\beta_2$, and the much weaker ones with $\beta_3$. Percentages indicate the probability of cases observed in a given region, for example for the light blue regions. The missing probability is localized in further regions. Tiny black points are the actual 6469 data points; the presented density region plots are MSE fits of a polynomial to a sum of Dirac deltas at all points of the sample.
Figure 4 also contains the largest positive coefficients (left, always starting with the one corresponding to normalization) and the largest negative ones (right). They provide a unique, independent, cumulant-like description of the statistical dependencies in the modeled sample. For example the largest positive one corresponds to parabolas ($f_2$) in the first and second variable: a statistical avoidance of both being near the center. The largest negative one corresponds to $f_1$ in both variables, saying that with growth of the first variable there comes a reduction of the second.
V. Context-dependent modeling
The next step is trying to exploit statistical dependencies between values neighboring in time: basing the prediction on a context representing the history, for example a few previous values, or crucial information about the past extracted e.g. with some dimensionality reduction method like PCA: projections corresponding to the largest eigenvalues of the covariance matrix.
For simplicity and to reduce the dimension we will work on the $(\beta_1, \beta_2)$ pairs, as $\beta_3$ has much weaker correlation. We have considered one previous pair as the context (order 1), for which $d = 4$, or two previous pairs (order 2), for which $d = 6$.
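Constructing such context-extended vectors is a simple sliding-window concatenation; a sketch, where `P` is an assumed (n, 2) array of the normalized $(\beta_1, \beta_2)$ pairs:

```python
import numpy as np

def with_context(P, order):
    """Concatenate each pair with its `order` predecessors:
    rows (pair_{t-order}, ..., pair_t), shape (n - order, 2 * (order + 1))."""
    return np.hstack([P[k : len(P) - order + k] for k in range(order + 1)])

# Y1 = with_context(P, 1)   # order 1: d = 4
# Y2 = with_context(P, 2)   # order 2: d = 6
# Prediction: substitute the observed context into the fitted polynomial
# and normalize over the remaining (current-pair) coordinates.
```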
The most significant 100 coefficients for the largest considered model ($d = 6$, $m = 9$, $10^6$ coefficients) are presented in Fig. 5. Each is independent and has a specific meaning: a correction to the initially uniform density on $[0,1]^6$, providing a unique description of statistical dependencies in the observed data sample. For certainty that they are not just a result of random noise, here $\sigma \approx 1/\sqrt{|S|} \approx 0.0124$ for $\rho = 1$ (uniform density on $[0,1]^6$), which the presented coefficients exceed a few dozen times.
The results of this order 2 model (right, $10^6$ coefficients) and analogously the order 1 model (left, $10^4$ coefficients) are presented in Fig. 6. Especially the order 2 model gives nearly perfect agreement: in nearly all cases the actually observed value is in the smallest predicted high-density region (red boundary). However, this is fitting a million-coefficient model to just 6467 data points: the polynomial is approaching spikes at the data points.
The proper prediction evaluation should test generalization capabilities, which is presented in Fig. 7. These tests of 27 models first randomly split the data sample into two disjoint subsets, use the first one to calculate the coefficients, and test on the second subset. We see that the million-coefficient model ($d = 6$, $m = 9$) in a large fraction of cases gives negative density: it has strong overfitting. However, focusing on the predicted high-density regions, it most frequently gives the proper prediction.
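A sketch of such a generalization test using the assumed helpers from above; the half/half split proportion is an assumption, as the original split is not specified here:

```python
import numpy as np

def generalization_test(Y, m, seed=0):
    """Fit on a random half of the sample, evaluate on the other half."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    train, test = Y[idx[: len(Y) // 2]], Y[idx[len(Y) // 2:]]
    a = estimate_coefficients(train, m)
    rho = np.sort(np.array([density(a, y) for y in test]))
    return np.mean(rho <= 0), rho   # many rho <= 0 signals overfitting
```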
Finally, we see that the choice of the most appropriate model is a difficult question; it might be worth considering a few models and somehow mixing their predictions.
VI. Conclusion and further perspectives
While a Gaussian distribution is usually assumed for financial data, in reality the distribution is often much more complicated, including being multimodal. We have presented the basics of a systematic approach for modeling such joint distributions with a polynomial, which allows to effectively find and work with a parametrization using thousands of unique and independent cumulant-like coefficients, each having a specific interpretation and being inexpensive to calculate.
The used example applied the basic methodology for educative reasons; we plan to investigate its extensions in the future, for example:

Selective choice of basis: we have used the complete basis of polynomials, which makes its size impractically large, especially for high dimensions. However, usually only a small percentage of coefficients is above the noise; we can selectively choose and use a sparse basis of significant values instead, describing the real statistical dependencies. Alternatively, we can selectively reduce the polynomial degree for some of the variables.

Adaptive choice of coefficients: we have assumed that coefficients are constant in time, which corresponds to stationarity of the time series. However, in practice it is often non-stationary, which can be modeled using coefficients that are not averages over all values of a given function as here, but some local averages instead, for example with exponentially decaying weights [3] (see the sketch after this list).

Long-range value prediction: combination with state-of-the-art prediction models exploiting long-range dependencies, for example using a more sophisticated predictor of the current value (rather than just the previous value).

Improving the information content of the context used for prediction: instead of using a few previous values as the context, we can use some features e.g. describing long-range behavior, like an average over a time window, or for example obtained from dimensionality reduction methods like PCA (principal component analysis).
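For the adaptive variant mentioned in the second item above, the global average (3) can be replaced by an exponential moving average; a minimal sketch, with the decay rate `lam` as an assumed free parameter:

```python
import numpy as np

def adaptive_step(a, y_t, lam=0.99):
    """Update all coefficients with one new normalized vector y_t:
    a_j <- lam * a_j + (1 - lam) * f_j(y_t), i.e. an exponentially
    decaying average instead of the uniform average of (3)."""
    for j in a:
        fj = np.prod([f(ji, yi) for ji, yi in zip(j, y_t)])
        a[j] = lam * a[j] + (1.0 - lam) * fj
    return a
```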
While the approach used here was analogous to Markov modeling, an alternative approach to consider in the future is using time as one of the coordinates, e.g. fitting the polynomial to triples in a moving time window. It would require a much lower dimension, allowing to directly model longer correlations. It would also allow working with continuous time.
References
 [1] J. Duda, “Exploiting statistical dependencies of time series with hierarchical correlation reconstruction,” arXiv preprint arXiv:1807.04119, 2018.
 [2] ——, “Rapid parametric density estimation,” arXiv preprint arXiv:1702.02144, 2017.
 [3] ——, “Hierarchical correlation reconstruction with missing data,” arXiv preprint arXiv:1804.06218, 2018.