1 Introduction
The interdisciplinary framework of nonlinear dynamical systems has been used extensively for modeling time-varying phenomena in physics, chemistry, biology, economics, and other fields that exhibit complex and irregular behavior (Ott, 2002). The apparently random and unpredictable behavior of deterministic chaotic dynamics prompted, from the early days of the theory, the use of random or probabilistic methods (Berliner, 1992; Chatterjee and Yilmaz, 1992). At the same time, the ubiquitous effect of different kinds of noise in experimental or real data reinforced the interaction between nonlinear dynamics and statistics (Mees, 2012).
When the nonlinear process is influenced by the uncertainty of the measurement procedure, the resulting timeseries can be thought of as the corruption of the true system states by observational noise. The noise in this case is added after the time evolution of the trajectories under consideration, inducing a blurring effect on the true evolution of the process. The dynamics of the process are not influenced, and the invariant measure of the process is the convolution of the unperturbed measure and the noise distribution. Dynamical systems corrupted by observational noise are often confronted with time delay embedding techniques and related methods (Ruelle and Takens, 1971; Abarbanel, 2012; Kantz and Schreiber, 2004).
In the case of dynamical or interactive noise, the noise is incorporated at each step of the time evolution of the trajectories. For example, consider a situation in which, at each discrete time, the state of the system is reached with some error. Then the constructed predictive model consists of two parts, the nonlinear deterministic component and the random noise. In such cases, where the noise acts as a driving force, the underlying deterministic dynamics can be drastically modified (Jaeger and Kantz, 1997), and the predictive model constitutes what is known as a random dynamical system (Arnold, 1998; Smith and Mees, 2000). From a modeling perspective, the existence of a stochastic forcing term can be thought of as representing the error in the assumed model, mimicking the aggregate action of variables not included in the model and compensating for a small number of degrees of freedom. In fact, segregating a small number of degrees of freedom from a larger coupled system usually yields reduced equations with deterministic and stochastic components (He and Habib, 2013).

Methods based on deterministic inference are inefficient in the presence of dynamical noise, and many methods have been proposed to address the various aspects of the problem. A theorem formulated to cope with the embedding problem for random dynamical systems is given in Muldoon et al. (1998). In Siegert, Friedrich, and Peinke (1998) and Siefert and Peinke (2004), the issue of dynamical reconstruction is addressed for stochastic systems described by Langevin-type equations under different types of random noise. Because the various noise types have different impacts, the goal of estimating the noise density directly from the data is highly significant (Heald and Stark, 2000; Strumik and Macek, 2008; Siefert et al., 2004).
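The two noise mechanisms can be contrasted in a few lines of code; the quadratic map and noise level below are illustrative choices, not the systems studied later in this paper.

```python
import random

def quad_map(x, a=1.4):
    # Illustrative chaotic deterministic part: x -> 1 - a*x^2.
    return 1.0 - a * x * x

def observational_noise(x0, n, sigma=0.01, seed=1):
    # Noise is added after the time evolution: the true orbit is
    # unchanged and the measurements are a blurred version of it.
    rng = random.Random(seed)
    x, ys = x0, []
    for _ in range(n):
        x = quad_map(x)
        ys.append(x + rng.gauss(0.0, sigma))
    return ys

def dynamical_noise(x0, n, sigma=0.01, seed=1):
    # Noise enters the recurrence itself, so each perturbation is
    # propagated by the map at every subsequent step.
    rng = random.Random(seed)
    x, ys = x0, []
    for _ in range(n):
        x = quad_map(x) + rng.gauss(0.0, sigma)
        ys.append(x)
    return ys
```

For small noise levels the two series may look similar pointwise, but only the second construction changes the invariant measure of the process.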
The Bayesian framework (Robert, 2007) was initially put into the context of nonlinear noise reduction by Davies (1998). Meyer and Christensen (2000, 2001) applied MCMC methods to the parametric estimation of state-space nonlinear models, extending maximum-likelihood-based methods (McSharry and Smith, 1999). Later, Smelyanskiy et al. (2005) reconstructed stochastic nonlinear dynamical models from trajectory measurements using path integral representations of the likelihood function, extended to nonstationary systems in Luchinsky et al. (2008). Matsumoto et al. (2001) introduced a hierarchical Bayesian approach and, later, Nakada et al. (2005) applied a hybrid Monte Carlo scheme for the reconstruction and prediction of nonlinear dynamical systems. More recently, in Molkov et al. (2012), a Bayesian technique was proposed for predicting the qualitative behavior of random dynamical systems from timeseries.
In the literature of stochastically perturbed dynamical systems, error processes are frequently modeled via zero mean Gaussian distributions. When violated, such an assumption can cause inferential problems. For example, when the noise process produces outlying errors, the estimated variance under the normality assumption is artificially enlarged, causing poor inference for the system parameters. Alternatively, we could assume the existence of two sources of random perturbations. For example, an environmental source, caused perhaps by spatiotemporal inhomogeneities (Strumik and Macek, 2008), produces weak and frequent perturbations, while stronger but less frequent perturbations, in the form of outlying errors, come from a higher dimensional deterministic component. Other cases could include systems exerting noise at random time intervals and impulsive noise (Shinde and Gupta, 1974; Middleton, 1977). These are situations where the noise probability density function does not decay in the tails like a Gaussian. Also, when the system under consideration is coupled to multiple stochastic environments, the driving noise term may exhibit non-Gaussian behavior; see for example Kanazawa et al. (2015a) and Kanazawa et al. (2015b).

A number of approaches for modeling timeseries in a Bayesian nonparametric context have been proposed in the literature. For example, an infinite mixture of timeseries models has been proposed in Rodriguez and Ter Horst (2008). A Markov-switching finite mixture of independent Dirichlet process mixtures has been proposed by Taddy and Kottas (2009). More recently, Jensen and Maheu (2010) and Griffin (2010) considered Dirichlet process mixtures for stochastic volatility models in discrete and continuous time, respectively. An approach for continuous timeseries modeling based on time dependent Geometric Stick-Breaking process mixtures can be found in Mena, Ruggiero, and Walker (2011). For a Bayesian nonparametric nonlinear noise reduction approach, see Kaloudis and Hatjispyros (2018).
Recently there has been growing research interest in Bayesian nonparametric modeling in the context of multiple timeseries. In Fox et al. (2009), a Bayesian nonparametric model based on the Beta process was introduced in order to model dynamical behavior shared among a number of timeseries. They represented the behavioral set with an attribute list encoded by a binary matrix, with rows indexing the timeseries and columns the features. Their approach allowed for a potentially infinite number of behaviors. This was an improvement over the similar approach of a previous work, Fox et al. (2008), where the timeseries shared exactly the same set of behaviors. In Nieto-Barajas and Quintana (2016), a Bayesian nonparametric dynamic autoregressive model for the analysis of multiple timeseries was introduced. They considered an autoregressive model of fixed order for each of the timeseries in the collection, and a Bayesian nonparametric prior based on dependent Pólya trees. The dependent prior, with its median fixed at zero, was used for the modeling of the errors.

In a previous work, Merkatas, Kaloudis, and Hatjispyros (2017), we dealt with the problem of simultaneously identifying, from data, the deterministic part of a stochastic dynamical system and estimating the associated unknown density of the dynamical perturbations, which is perhaps non-Gaussian, via Geometric Stick-Breaking (GSB) mixture processes. In this work we generalize the so-called Geometric Stick-Breaking Reconstruction (GSBR) model to a multidimensional setting, in order to jointly reconstruct and predict an arbitrary number of discrete time dynamical systems. More specifically, given a collection of noisy chaotic timeseries, we propose a Bayesian nonparametric mixture model for the joint reconstruction and prediction of dynamical equations.
Our method of joint reconstruction and prediction is primarily based on the existence of a multivariate Bayesian nonparametric prior over the collection of the unknown dynamical noise processes. It is based on the Pairwise Dependent Geometric Stick-Breaking Process mixture priors developed in Hatjispyros et al. (2017), under the following assumptions:

The dynamical equations have deterministic parts that belong to known families of functions; for example, they can be polynomial or rational functions.

Apriori, we assume knowledge that the noise processes dynamically corrupting the observed multiple timeseries have possibly common characteristics; for example, the error processes could exhibit similar tail behavior and/or have common variances, or they may simply come from the same, perhaps non-Gaussian, noise process.
Our contention is that, whenever there is at least one sufficiently large data set, using borrowing-of-strength prior specifications we will be able to recover the dynamical process for which we have insufficient information, i.e. the process for which the sample size is inadequate for an independent GSBR reconstruction and prediction.
This paper is organized as follows. In Sec. II, we give some preliminary notions on GSB mixture priors applied to a single discretized random dynamical system. In Sec. III, we introduce the joint probability model for the multiple timeseries observations and derive the PDGSBR model. We describe the associated joint nonparametric likelihood for the model and, as a special case, derive the joint parametric likelihood corresponding to the assumption of common Gaussian noise along the multiple timeseries observations. We also provide the PDGSBR-based Gibbs sampler for the estimation of the unknown error processes, the control parameters, the initial conditions, and the out-of-sample predictions. In Sec. IV, we resort to simulation: we apply the PDGSBR model to the reconstruction and prediction of two pairs and one triple of random polynomial maps that are dynamically perturbed, additively, by non-Gaussian noise processes. Finally, conclusions and directions for future research are discussed.
2 Preliminaries
For , we consider the following assemblage of the decoupled random recurrences
(1)  
where , for some compact subsets of , and
are real random variables over some probability space
; we denote by any dependence of the deterministic map on parameters. is a nonlinear map, for simplicity continuous in the variable . We assume that the random variables are independent of each other, and independent of the states for all . In addition, we assume that the additive perturbations are identically distributed from zero mean symmetric distributions with unknown densities defined over the real line, so that . Finally, notice that the lag-one stochastic process formed by the time-delayed values of the process is Markovian over .

We assume that there is no observational noise. We denote by the set of observations along the timeseries, and by the set of observations in the th timeseries. These are realizations of the nonlinear stochastic processes defined in (1) for some unknown initial conditions . The collection of timeseries observations depends solely on the initial distribution of the variable , the values of the control parameters , and the particular realization of the noise processes.
In Merkatas, Kaloudis, and Hatjispyros (2017), a Bayesian nonparametric methodology was proposed for the estimation and prediction of a single discretized random dynamical system from an observed noisy timeseries of length . It relaxes the assumption of normal perturbations by assuming that the prior over the unknown density of the additive dynamical errors is a random infinite mixture of zero mean Gaussian kernels. More specifically, apriori we set
where is a GSB random measure. The random measure is closely related to the well known Dirichlet random measure (Ferguson, 1973; Sethuraman, 1994). The ’s are Dirac measures concentrated on the random precisions ’s, which in turn are drawn independently (i.i.d.) from the mean parametric distribution , being the prior guess of , i.e. for measurable subsets of . The probability-weights are stick-breaking, in the sense that and , and random because follows a beta density with mean . We define the GSB random measure as with , hence removing a hierarchy from the random measure (Fuentes-García, Mena, and Walker, 2010). Then for , we have and . Finally, we randomize the probability-weights by letting ; then, aposteriori, is again beta with its parameters updated by a sufficient statistic of the data. In Merkatas, Kaloudis, and Hatjispyros (2017) it is shown that a GSB-based Bayesian nonparametric framework for dynamical system estimation is efficient, faster, and less complicated when compared to Bayesian nonparametric modeling via the Dirichlet process.
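As a sketch of this construction (with illustrative hyperparameters and base measure, since the paper's own symbols were lost in extraction), a sample from a GSB mixture of zero mean Gaussian kernels can be drawn as follows:

```python
import random

def sample_gsb_mixture(n, a=0.5, b=0.5, seed=0):
    """Draw n values from a Geometric Stick-Breaking mixture
    f(x) = sum_j w_j N(x | 0, 1/tau_j), with w_j = lam*(1-lam)**(j-1)."""
    rng = random.Random(seed)
    lam = rng.betavariate(a, b)   # single stick-breaking probability
    taus = {}                     # precisions tau_j drawn lazily from P0
    draws = []
    for _ in range(n):
        # Component label j is geometric with success probability lam;
        # the cap guards against a numerically tiny lam.
        j = 1
        while rng.random() > lam and j < 1000:
            j += 1
        if j not in taus:
            # Base measure P0: here an illustrative Gamma(2, 1).
            taus[j] = rng.gammavariate(2.0, 1.0)
        draws.append(rng.gauss(0.0, taus[j] ** -0.5))
    return draws
```

Note the single geometric weight sequence: unlike the Dirichlet process, only one beta random variable governs all the weights, which is what makes the associated samplers simpler.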
To sample, in a finite number of steps, from the posterior of , the control parameters of the deterministic part, the initial condition, and the future observations, given the noisy timeseries, we have to:

Introduce the infinite mixture allocation variables , such that P for , indicating the component of the infinite mixture from which the th observation came.

Augment the random density with the auxiliary variables , such that the ’s are identically distributed from the specific negative binomial distribution . Then, conditionally on , attains the discrete uniform distribution over the random set .
Thereby, the dimension of the Gibbs sampler will be of order .
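The role of the two auxiliary quantities can be sketched as follows; the negative binomial draw for the slice variable and the uniform allocation given it form the standard GSB decomposition, with the symbols below assumed for illustration:

```python
import random

def sample_allocation(lam, rng):
    """One augmented draw for a GSB mixture with weights
    w_j = lam * (1 - lam)**(j - 1).

    d ~ NegBin(2, lam), i.e. P(d = k) = k * lam**2 * (1 - lam)**(k - 1),
    and u | d is discrete uniform on {1, ..., d}; marginally,
    P(u = j) = w_j, which is exactly what the slice construction needs.
    """
    d = 1          # d - 1 counts the failures before the 2nd success
    successes = 0
    while successes < 2:
        if rng.random() < lam:
            successes += 1
        else:
            d += 1
    u = rng.randint(1, d)
    return d, u
```

Conditioning on the finite variable d is what truncates the infinite mixture to a finite number of components at each Gibbs iteration.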
3 The PDGSBR model
We will model apriori the errors in the multiple recurrence relation (1) with a multivariate distribution over the space of densities. More specifically, we are interested in constructing for any finite integer
(2) 
where denotes transposition, each is a random density function, and we are able to understand the dependence mechanism between pairs for each .
We will allow pairwise dependence between any two and , so that there is a unique common component for each pair . For example, consider such a dependence structure for the random variables , and , where all the random variables are mutually independent. Then the dependence between and is created via their common summand, and it is easy to show that the covariance of the pair equals the variance of this common part. Therefore, the independent variables and play the rôle of common parts for the pairs and , respectively. On the other hand, the independent variables and serve as idiosyncratic parts of the variables and , respectively. In a more compact notation, we set , where is the column vector of ’s, is a random symmetric matrix of independent random variables, and is a matrix of ones.

We will use this basic plan, but instead of the real valued random vector we have the vector of random density functions . We set . In this case, is a symmetric matrix of independent random zero mean mixture densities, is a random stochastic matrix (its row elements add up to one a.s.), and is a matrix of ones. The Hadamard product of the two matrices and is defined as , whence with and . We will model the densities via

where, for the random selection-probabilities, it is that a.s. and . The random probability-weights satisfy a.s. with
(3) 
The ’s are random geometric-probabilities with for fixed hyperparameters and . Then, the nonparametric prior over the error in (1) attains the representation

where each is an infinite mixture of normal zero mean kernels via the random mixing measure , i.e.
Clearly, because it is that .
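The pairwise construction above can be made concrete for the simplest case of two densities; the notation below is assumed for illustration, since the paper's own symbols did not survive extraction.

```latex
% Sketch for N = 2 (notation p_{jl}, f^*_{jl} assumed).  With
% independent zero-mean GSB mixtures f^*_{11}, f^*_{12} = f^*_{21},
% f^*_{22}, and selection-probabilities satisfying p_{j1} + p_{j2} = 1:
\[
  f_1 \;=\; p_{11}\, f^*_{11} + p_{12}\, f^*_{12}, \qquad
  f_2 \;=\; p_{21}\, f^*_{12} + p_{22}\, f^*_{22}.
\]
% The shared component f^*_{12} induces the dependence between f_1
% and f_2, while f^*_{11} and f^*_{22} are purely idiosyncratic.
```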
We have the following:

Aposteriori, given the observed timeseries, the random infinite mixtures will capture common characteristics among the pairs of noise densities .

Aposteriori, the mixtures will describe idiosyncratic characteristics of the noise densities .
It follows that the model of the timeseries observations, conditional on the unknown initial conditions, is given in a hierarchical fashion by
(4)  
While our method for pairwise dependent joint reconstruction and prediction can be used for dynamical systems where each state depends on the previous states , for simplicity and ease of exposition, in the sequel we will focus on the special case for all . Also, we will denote by the future unobserved observations along the multiple timeseries, and by the future unobserved observations of the th timeseries.
3.1 The nonparametric posterior
Using Bayes’ theorem, it is that
(5) 
where is the prior density over the unknown error processes , the control parameters , and the initial conditions . We define the random set that contains the selection-probabilities , the geometric-probabilities , and the infinite sequences of the locations of the GSB random measures . Clearly, we can represent as the union of ’s for , with and , , and . Because the estimation of the noise density is equivalent to the estimation of the variables in , the right hand side of equation (5) becomes
with the density given by
(6) 
For a finite dimensional Gibbs sampler, we will augment the random densities , with the following sets of variables for and :

The GSB-mixture selection variables ; for an observation that comes from , selects the specific GSB-mixture from which the observation came. It is that P.

The geometric-slice variables , such that follows the negative binomial distribution , with P for all .

The clustering variables ; for an observation that comes from , given , allocates the component of the GSB-mixture from which came. Also, given , the variable follows a discrete uniform distribution over the random set .
Then the augmented Gibbs sampler will have a dimension of order .
We have the following proposition:
Proposition 1. Augmenting the random densities given in (6) with we have
(7)  
The proof is given in the Appendix A.
From now on, and until the end of this subsection, we will leave the auxiliary variables , and unspecified; especially for the ’s we use the notation , where denotes the usual basis vector having its only nonzero component, equal to 1, at position , and P. In fact, follows a generalized Bernoulli distribution over the outcomes , whence

(8)
We have the following proposition:
Proposition 2.

The likelihood conditionally on , is proportional to the triple product:
(9) 
For the special case of Gaussian noise with common precision , the likelihood simplifies to:
The proof is given in the Appendix A.
The full conditionals for the PDGSBR Gibbs sampler are given in Appendix B.
4 Numerical illustrations
In this section, we will demonstrate the efficiency of the proposed PDGSBR sampler for the cases . Using mixture noise processes with pairwise common characteristics, we will illustrate different scenarios in which joint reconstruction can be beneficial, in terms of modeling accuracy, for underrepresented timeseries for which an independent nonparametric GSBR reconstruction turns out to be problematic.
The synthetic timeseries: We will generate observations via non-Gaussian quadratic and cubic autoregressive processes of order one, with chaotic deterministic parts given by , with , and , with , respectively.
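A trajectory of the kind used in these experiments can be simulated along the following lines; the polynomial coefficients and the noise mixture below are hypothetical stand-ins, since the paper's actual values were lost in extraction.

```python
import random

def simulate_map(g, x0, n, noise, seed=0):
    """Iterate x_{i+1} = g(x_i) + e_i, with dynamical noise e_i ~ noise."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n):
        x = g(x) + noise(rng)
        path.append(x)
    return path

# Illustrative stand-ins for the deterministic parts (hypothetical
# chaotic choices, not the paper's actual coefficients):
quadratic = lambda x: 1.0 - 1.4 * x * x
cubic = lambda x: 2.0 * x - 2.1 * x ** 3

# Two-component zero-mean Gaussian mixture noise: weak frequent
# perturbations plus rarer, stronger ones (non-Gaussian tails).
def mixture_noise(rng):
    sigma = 0.01 if rng.random() < 0.9 else 0.05
    return rng.gauss(0.0, sigma)

series = simulate_map(quadratic, 0.1, 200, mixture_noise)
```

The same driver with different deterministic parts and shared or idiosyncratic noise components generates the multiple timeseries used below.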
In the sequel, we will denote by , the fact that the multiple timeseries , with respective sample sizes , has been generated via the dynamical systems with deterministic parts and noise processes distributed as .
Prior specifications: Attempting a noninformative prior specification over the geometric-probabilities, we set . Then all the ’s apriori will follow the arcsine density, which coincides with the associated Jeffreys prior. Previously, the density of the mean measure was set to . Here we fix the hyperparameters to . Then the prior density over the ’s will be very close to a noninformative scale-invariant prior. On the control parameters and the initial condition variables, we assign the noninformative translation-invariant priors and , respectively. Although such priors are improper (they do not integrate to 1), they lead to proper full conditionals. The hyperparameters , of the Dirichlet priors over the selection-probabilities , will be defined separately for each numerical example.
We will model the unknown deterministic parts via the quintic polynomials . For simplicity, we choose to sample only one out-of-sample point, i.e. . In all cases, we ran the PDGSBR Gibbs sampler for iterations after a burn-in period of iterations.
4.1 Borrowing from a cubic to a quadratic map
For our first numerical example, we have generated multiple timeseries via
(10) 
The first timeseries has idiosyncratic noise . The density , with , is common to both timeseries, so that the noise components perturbing the first and second timeseries have been sampled from and , respectively. In this example, we took as initial conditions .
In Fig. 1(a), we depict the perturbed cubic trajectory. It can be seen that the timeseries experiences noise induced jumps, approximately from the interval , containing a chaotic attractor, to the interval , containing a chaotic repellor; see Merkatas, Kaloudis, and Hatjispyros (2017). The quadratic dynamical system experiences a noise induced escape. In Fig. 1(b), we can see that, under the intense perturbations of the noise, the quadratic trajectory escapes its deterministic invariant set after the first 46 iterations.
Weak borrowing: To force a weak borrowing scenario apriori we set
(11) 
with . For , we quantify the borrowing of information (BoI) from the cubic to the quadratic map with the posterior mean BoI. The prior and posterior means of the matrix of the selection-probabilities are given by

respectively. In this case, the larger data set influences the quadratic estimation by BoI.
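Given posterior samples of the matrix of selection-probabilities from the Gibbs output, the BoI figure reported above reduces to a posterior mean; the indexing convention below is an assumption, as the paper's symbols were lost in extraction.

```python
def borrowing_of_information(p_samples, j, l):
    """Posterior mean E(p_{jl} | data), used here as the BoI measure:
    the weight that the noise model of timeseries j places on the
    component it shares with timeseries l."""
    vals = [sample[j][l] for sample in p_samples]
    return sum(vals) / len(vals)
```

For instance, with two stored samples of a 2x2 selection-probability matrix, `borrowing_of_information(p_samples, 1, 0)` averages the entry in row 2, column 1 over the posterior draws.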
In Fig. 2(a)-(f), we display in black solid curves the ergodic averages of the estimated control parameters, based on quintic polynomial modeling, under the weak prior specification in (11). We can see that the ergodic averages based on the short timeseries, given in Fig. 2(g)-(l), converge to biased estimates.

The associated percentage absolute relative errors (PAREs) of the estimated control parameters, with respect to the true values, are given in the first two lines of Table I. We can see that the estimates based on the short timeseries exhibit large errors, hindering the identification of the map .
Strong borrowing: To force an apriori strong borrowing from the map to the map and, at the same time, to be noninformative on the selection-probabilities of , we set
(12) 
where denotes the uniform distribution over the interval . We have the following prior and posterior means
The prior specification (12) increases the borrowing from BoI to . We remark that the posterior mean of the selection-probabilities for the noise process of , in the second row of , is now much closer to the true selection-probabilities.

In Fig. 2(a)-(f), we can see (in red solid curves) the ergodic averages of the control parameters under the strong borrowing scenario. We can now see that the ergodic averages based on the short timeseries, given in Fig. 2(g)-(l), converge fast to the true values. In the last two lines of Table I, we can see that strong borrowing reduces the average PARE of the control parameters of the short timeseries from 2.67% to a mere 0.37%, enabling the identification of the map .
[Table I: PAREs of the estimated control parameters for each map, under the weak and strong borrowing prior specifications.]
In Fig. 3(a)-(b), we present kernel density estimates (KDEs) of the marginal noise densities based on the noise predictive samples, under the weak and strong prior specifications, in black and red, respectively. True noise densities are represented by solid blue curves.

In Fig. 3(c)-(d), we display predictive-based KDEs of the marginal posteriors of the initial conditions and . The estimates under the two prior configurations are nearly indistinguishable.

In Fig. 3(e)-(f), we present predictive-based KDEs of the marginal posteriors of the out-of-sample variables and . True future values are represented by vertical dotted blue lines. In Fig. 3(f), we can see how much more accurate the estimate of the predictive density of the first future observation based on the short timeseries, lying outside the invariant set, becomes under the strong borrowing prior (solid red curve).
4.2 Borrowing between two cubic maps
Here we have generated a pair of cubic timeseries via
(13) 
The mixture , with , plays the rôle of the common noise process. In this example, we took as initial conditions .
The perturbed cubic trajectories are depicted in Fig. 4(a)-(b). It can be seen that both timeseries experience noise induced jumps.
Weak borrowing: Using the weak prior configuration given in (11), the posterior mean of the matrix of the selection-probabilities is approximated by
In this case the large cubic timeseries influences the short cubic timeseries by only BoI.
In Fig. 5(a)-(f), we display in solid black curves the ergodic averages of the estimated control parameters, under the weak prior specification (11). We can see that the ergodic averages based on the short timeseries, given in Fig. 5(g)-(l), exhibit slow convergence.

The associated PAREs of the estimated control parameters, with respect to the true values, are given in the first two lines of Table II. We can see that the estimates based on the short timeseries exhibit large errors, hindering the identification of the map .
Strong borrowing: To force an apriori strong borrowing from the map to the map , we set
(14) 
We have the following prior and posterior means
The prior specification (14) increases the borrowing considerably, from 10% to . We remark that the posterior mean of the selection-probabilities for the noise process of , in the second row of , is close to the true selection-probabilities.

In Fig. 5(a)-(f), we can see in solid red curves the ergodic averages of the control parameters under the strong borrowing scenario. We can now see that the ergodic averages based on the short timeseries, given in Fig. 5(g)-(l), converge fast to the true values. In the last two lines of Table II, we can see that strong borrowing reduces the average PARE of the control parameters of the short cubic timeseries from 1.14% to a mere 0.10%, thus enabling the identification of the map .
[Table II: PAREs of the estimated control parameters for each map, under the weak and strong borrowing prior specifications.]
In Fig. 6(a)-(b), we present the KDEs of the marginal noise densities based on the noise predictive samples, under the weak and strong prior specifications, in black and red solid curves, respectively. True noise densities are represented by solid blue curves. We remark on the similarity of the estimated noise densities under the strong prior configuration (14).

In Fig. 6(c)-(d), we display the predictive-based KDEs of the marginal posteriors of the initial points and . The estimated marginal posterior density of the variable under the weak borrowing prior has five modes. The two spurious modes disappear after the introduction of strong borrowing (solid curve in red). We remark that the three remaining modes are very close to the three real roots of the polynomial equation .

In Fig. 6(e)-(f), we present the predictive-based KDEs of the marginal posteriors of the first out-of-sample variables and . The point estimates of the first out-of-sample values are of the same quality; yet, under strong borrowing, the predictive density associated with the short cubic timeseries exhibits a 95% highest posterior density interval (HPDI) shrinkage factor of 0.45. Namely, the weak borrowing HPDI of the variable shrinks to the strong borrowing HPDI .
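The HPDI shrinkage factor quoted above can be computed from posterior predictive samples along these lines; the sliding-window estimator below is a standard sketch, not the paper's own implementation.

```python
def hpdi(samples, mass=0.95):
    """Shortest interval containing roughly `mass` of the sampled
    values (appropriate for unimodal marginals)."""
    xs = sorted(samples)
    n = len(xs)
    k = max(1, int(mass * n))
    # Slide a window of k consecutive order statistics; keep the
    # narrowest one.
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]

def shrinkage_factor(weak, strong, mass=0.95):
    """Ratio of strong- to weak-borrowing HPDI widths; values below 1
    indicate that borrowing tightened the predictive interval."""
    lw, uw = hpdi(weak, mass)
    ls, us = hpdi(strong, mass)
    return (us - ls) / (uw - lw)
```

A reported factor of 0.45 means the strong borrowing interval is less than half as wide as the weak borrowing one.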
4.3 Borrowing between three quadratic maps
For this example we have generated a multiple perturbed quadratic timeseries via
(15) 
The first two timeseries have the common part and no idiosyncratic parts. The second and third timeseries have the common part . The third timeseries has the idiosyncratic part . In this example, we took as initial conditions and .
The three perturbed quadratic trajectories are displayed in Fig. 7(a)(c).
Weak borrowing: To force a weak borrowing scenario, apriori we set for
(16) 
with prior mean . Then the posterior mean, after stationarity, is approximated by
For , we quantify the borrowing of information from the first and third timeseries to the second with the posterior mean BoI. Because BoI is small, the two large timeseries have a very small effect on the central short timeseries.
In Fig. 8(a)-(r), we display in solid black curves the ergodic averages of the estimated control parameters, under the weak prior specification (16). We can see that the ergodic chains based on the short timeseries, Fig. 8(g)-(l), exhibit serious mixing issues.

The associated PAREs of the estimated control parameters, with respect to the true values, are given in the first three lines of Table III. We can see that the estimates based on the short timeseries exhibit large errors, hindering the identification of the map . This situation can be corrected by the introduction of a strong borrowing prior configuration.
Strong borrowing: To force an apriori strong borrowing from the maps and to the map and, at the same time, to be noninformative on the selection-probabilities of , we set for
(17) 
where . We have the following prior and posterior means
and
The prior specification (17) increases the borrowing from BoI to . We remark how close the posterior mean of the selection-probabilities for the noise process of , in the second row of , is to the true selection-probabilities in (15).

In Fig. 8(a)-(r), we can see in solid red curves the ergodic averages of the control parameters under the strong borrowing scenario. We can now see that the ergodic averages based on the short timeseries, given in Fig. 8(g)-(l), converge fast to the true values. In the last three lines of Table III, we can see that strong borrowing reduces the average PARE of the control parameters of the short timeseries from 12.87% to a mere 0.17%, enabling the identification of the map .
[Table III: PAREs of the estimated control parameters for each map, under the weak and strong borrowing prior specifications.]
In Fig. 9(a)-(c), we display the KDEs based on the marginal noise predictive samples. In Fig. 9(d)-(f), we display the KDEs based on the marginal initial conditions variable samples. In Fig. 9(g)-(i), we exhibit the KDEs based on the marginal posterior samples of the first out-of-sample variables. Black and red solid curves refer to the weak and strong borrowing priors, respectively. In Fig. 9(b), we have superimposed the noise predictives coming from the weak and strong borrowing scenarios, together with the true density of the noise component, given by , in black, red and blue solid curves, respectively. We note how close the density estimated under strong borrowing is to the true noise density. In Fig. 9(e), the KDE based on the marginal posterior predictive of the initial condition sample under the strong borrowing prior has its modes very close to and 1. In Fig. 9(h), the estimate of the first out-of-sample value under the strong borrowing prior exhibits a shrunken 95% HPDI at .

5 Conclusions
We have proposed a new Bayesian nonparametric model for the joint, pairwise dependent, reconstruction of dynamical equations, based on observed chaotic timeseries data contaminated by dynamical noise. We have also introduced a joint parametric Gibbs sampler for the case where the dynamical noise is assumed to be Gaussian and to come from the same noise source for each timeseries; there, the borrowing of strength comes from the full conditional of the common precision.
Our numerical experiments indicate that, when the densities of the noise processes have common characteristics, underrepresented timeseries, for which an independent Bayesian nonparametric estimation is problematic, can benefit in terms of model estimation accuracy. This can be done by imposing strong borrowing prior specifications between the selection-probabilities of the noise processes of the short timeseries and those of the timeseries with an adequate number of observations for independent Bayesian nonparametric estimation.
Our model can be generalized to include all possible dependencies between the components of the noise processes. For example, consider the set of the first natural numbers, except , for . We define the set , of combinations without replacement of the symbols taken at a time, with , and . Now, to each combination we add the symbol and order the resulting sequence of numbers to ; we set . The set contains the indexes of all possible interactions of the noise process of order . Then the nonparametric prior over the th noise process can be written as , with