Optimal model-free prediction from multivariate time series

06/18/2015, by Jakob Runge, et al.

Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here, a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal pre-selection step that drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used sub-optimal schemes like forward selection. The method suggests a general framework for applying the optimal model-free approach to select variables and subsequently fitting a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.


I Introduction

Predicting the future behavior of complex systems from measured time series constitutes a major goal in many fields of science. Traditionally, this problem has been mainly addressed in terms of model-based approaches, often using linear models Brockwell2002 . As an alternative, since the late 1980s also model-free predictions have been developed using nearest neighbors in state space Sidorowich1987 ; Abarbanel1990 ; Giona1991 ; Alparslan1998 ; Pompe2001 ; Ragwitz2002 or neural networks Eisenstein1995 ; Szpiro1997 ; Small2002 . In the nearest-neighbor technique, states similar to the present state are searched for in the past of a time series, and a future value at prediction step h is forecasted by simply averaging the values h steps ahead of the nearby past states or by using local-linear models Sidorowich1987 . Model-free predictions have so far been mostly univariate, where states are usually reconstructed by embedding a single time series using Takens' theorem Takens1981 ; Sauer1991 ; Ragwitz2002 . However, the intertwined nature of complex systems calls for multivariate approaches taking into account more of the available information. Now the problem is that the curse of dimensionality makes useful nearest-neighbor predictions almost impossible for more than a few predictors Hastie2009 , especially if many of these predictors carry redundant information.

From an information-theoretic perspective, the minimal set of variables that maximizes the (multivariate) mutual information Cover2006 with a target variable Y is most predictive Groth2001 ; Pompe2001 . Minimality is required to avoid the curse of dimensionality. It is important to note that this set of variables can be different from the set of variables with individually large mutual information with the target variable. Indeed, sometimes the right combination of predictors matters. For example, if Y is driven multiplicatively by two variables X^1 and X^2, the mutual information of each of these predictors with Y can be very low, while the mutual information of the combined set (X^1, X^2) with Y is very high. In general, such synergetic sets can only be detected by searching through all subsets of variables. However, the number of possible combinations grows exponentially when taking into account more variables and larger time lags, making such a search strategy prohibitive due to computational constraints.
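The synergy argument can be made concrete with a short, self-contained toy example (an illustration, not from the paper): two independent binary drivers and a multiplicatively driven target, for which each individual mutual information is near zero while the joint mutual information is near one bit. Plug-in entropy estimates on discrete samples suffice here.

```python
# Toy illustration (not from the paper): two independent binary drivers
# and a multiplicatively driven target exhibit synergy.
import random
from collections import Counter
from math import log2

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

random.seed(0)
n = 100000
x1 = [random.choice([-1, 1]) for _ in range(n)]
x2 = [random.choice([-1, 1]) for _ in range(n)]
y = [a * b for a, b in zip(x1, x2)]              # multiplicative driving

print(mutual_information(x1, y))                 # ~0 bits: X1 alone is useless
print(mutual_information(list(zip(x1, x2)), y))  # ~1 bit: the pair fixes Y
```

Ranking predictors individually would discard both drivers here; only testing the pair as a set reveals their joint predictive power.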

Therefore, simple search strategies such as ranking the predictors by their mutual information with a target variable, or forward selection using conditional mutual information (CMI), have been proposed recently Kugiumtzis2013 . Here we demonstrate that such approaches can fail already in simple cases where one cannot avoid testing different subsets of predictors. However, we prove information-theoretically that the search can be restricted to causal drivers. To obtain these drivers, we exploit a recently developed model-free algorithm to consistently reconstruct causal drivers from multivariate time series that keeps the estimation dimension as low as possible with typically low computational complexity Runge2012prl . The much smaller set of causal drivers then allows for a globally optimizing search strategy which is optimal also for synergetic cases. In this contribution, we additionally provide a practical criterion for selecting the optimal size of the subset of predictors which compares well even with computationally expensive cross-validation approaches. In numerical experiments we found that our optimal scheme yields much improved predictions and often even runs faster than forward selection.

Our method suggests a general framework not only for prediction, but also for general statistical inference problems for datasets (not only time series) where the underlying mechanisms are poorly understood: Firstly, the optimal model-free approach can be applied for selecting not only causal, but also possibly synergetic driving variables and, secondly, these variables can be used to fit a model to learn the functional form of the dependencies on these causal predictors. This approach combines the advantage of a model-free approach to detect relevant variables with the advantage of model-based methods to efficiently harness these variables to further improve predictions or understand mechanisms. Our framework is illustrated on a sea-surface temperature based index of the El Niño Southern Oscillation (ENSO) in the tropical Pacific.

This paper is organized as follows. After deriving the optimality of causal predictors in Sect. II, we discuss common approaches for information-theoretic variable selection for predictions in Sect. III. The optimal scheme is explained in Sect. IV including the causal pre-selection algorithm and selection criteria. Section V discusses the computational complexity of the different schemes. In Sect. VI we compare our scheme with other approaches in a model example. Extensive numerical experiments on multivariate nonlinear stochastic delay processes are conducted in Sect. VII. In Sect. VIII we analyze the combination of the model-free selection with a model-based prediction scheme which is applied in Sect. IX to predict future values of the considered ENSO index.

II Optimal prediction

The discrete-time evolution of a subprocess Y of a multivariate N-dimensional stochastic process X can be described by

Y_{t+1} = f(P_{Y_{t+1}}, η_{t+1})    (1)

with some function f, where P_{Y_{t+1}} ⊂ X^-_{t+1} = (X_t, X_{t-1}, …) are the deterministically driving variables, including the past of Y at possibly different time lags, and η_{t+1} represents stochastic noise driving Y. The uncertainty about the outcome of Y_{t+1} can be quantified by the Shannon entropy Cover2006 , which decomposes into

H(Y_{t+1}) = I(Y_{t+1}; X^-_{t+1}) + H(Y_{t+1} | X^-_{t+1})    (2)

where the latter term is the source entropy Shannon1948 ; Pompe2011 . This conditional entropy quantifies the minimum level of uncertainty that cannot be predicted even if the whole past (and present) X^-_{t+1} is known. If the dependency of Y on the noise term is additive, the source entropy equals the entropy of the noise: H(Y_{t+1} | X^-_{t+1}) = H(η_{t+1}). The infinite-dimensional multivariate mutual information (MMI) I(Y_{t+1}; X^-_{t+1}), on the other hand, quantifies the predictable part by measuring by how much the uncertainty about the outcome of Y_{t+1} can be maximally reduced if X^-_{t+1} was perfectly measured.

In practice, a prediction using the entire past X^-_{t+1} (truncated at some maximal lag) would severely suffer from the curse of dimensionality and overfitting Hastie2009 : many variables do not actually carry useful information, but merely fit the noise in the time series, and the prediction – trained on a learning set – would perform poorly on a test set. The goal is, thus, to select a small subset of predictors from X^-_{t+1} that carries a maximum of information about Y_{t+1} and still generalizes well on new data. However, for an N-variate process truncated at a maximum delay τ_max, the number of possible subsets of the N·τ_max candidate predictors grows exponentially in N and τ_max.
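The combinatorial explosion is easy to quantify; a short Python illustration with arbitrary example values for N and τ_max:

```python
# Number of candidate predictors and nonempty predictor subsets for an
# N-variate process truncated at maximal lag tau_max (example values).
N, tau_max = 10, 5
candidates = N * tau_max
subsets = 2**candidates - 1
print(candidates, subsets)   # 50 candidates, 1125899906842623 subsets
```

Even this modest setting already yields about 10^15 subsets, far beyond what any estimator could evaluate exhaustively.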

To avoid this combinatorial explosion, simple search strategies such as ranking the individual predictors by their mutual information with the target variable (MI-scheme) can be used, which, however, are prone to include redundant variables that do not improve a prediction. Forward selection, a more advanced technique, iteratively determines predictors based on how much information they contain in addition to the already chosen variables, using conditional mutual information (CMI-scheme) Kugiumtzis2013 , leading to a polynomial computational complexity. These strategies will be discussed in Sect. III. But forward selection is not a globally optimal strategy, one reason being that it might select variables that are not deterministically driving Y, the other that it fails to detect synergetic cases, as demonstrated in our model example in Sect. VI.

The unknown deterministic drivers P_{Y_{t+1}} in Eq. (1) are, however, key to arriving at optimal predictors, as can be shown by decomposing the MMI in Eq. (2) using the chain rule Cover2006

I(Y_{t+1}; X^-_{t+1}) = I(Y_{t+1}; P_{Y_{t+1}}) + I(Y_{t+1}; X^-_{t+1} \ P_{Y_{t+1}} | P_{Y_{t+1}})    (3)

where \ denotes the set subtraction. The second term is zero for processes satisfying the Markov property, which states that Y_{t+1} is independent of the remaining past given its causal parents P_{Y_{t+1}}, a term that originates from the theory of graphical models lauritzen1996graphical . For multivariate time series these are called time series graphs Eichler2011 ; Runge2012prl ; Runge2014a . This proves that, theoretically, all information is already contained in the causal parents. Adding a variable from X^-_{t+1} \ P_{Y_{t+1}} would not increase the information, but removing a parent from P_{Y_{t+1}} would decrease the MMI in Eq. (3). The causal parents can be efficiently estimated by the algorithm described in Sect. IV.1. The Markovity rests on the assumption that the noise term driving Y is independent of the noise terms driving the other variables. While this assumption is not strictly fulfilled in many real-world systems, it often at least approximately holds. This assumption is also not as crucial for the prediction task as it is for the causal inference problem.

However, for finite time series, some predictors in P_{Y_{t+1}}, even though causal, could be only weakly driving and lead to overfitting since they do not generalize well on new data. It is, therefore, crucial to optimize the selection of a minimal subset of causal parents. Since the set of causal parents is much smaller than the truncated past X^-_{t+1}, this can now be done using a global optimization strategy. In Sect. IV.2 we present such a strategy to select the optimal subset of causal predictors.

Figure 1: (Color online) Optimal prediction scheme. (i) Causal pre-selection in an example time series graph (see text) for predicting the target variable. Even though a variable could have a high mutual information with the target, its influence can be only indirect through the target's parents. Among these, a parent may already lie in the unobserved future, but part of its information can still be recovered by measuring its observed ancestors, which share information along the paths marked with black arrows. These variables form the causal predictors (blue boxes) for the target, which can be found by determining the Markov set. A suitable algorithm for this task will be discussed in Sect. IV.1. All paths from nodes further in the past to the target have to pass through this set. (ii) Selection of the optimal subset from the causal predictors. For every number of predictors p, the multivariate mutual information of all possible subsets is estimated. (iii) The optimal predictors are those where the estimated MMI takes its maximum, or can be obtained from cross-validation. Only the time series samples (blue shaded) of the optimal predictors are then used with a nearest-neighbor (Sect. IV.3) or model-based (Sect. VIII) prediction scheme to forecast the future value at time t+h of the time series of Y.

For predictions h steps into the future, the set of causal predictors for Y_{t+h} is not identical with the parents anymore, as it is for h = 1, because predictors can only be chosen among the variables observed up to time t. Still, the same algorithm as for the causal parents can be used to obtain the set of variables that separates Y_{t+h} from the remaining observed past in the time series graph. In Fig. 1(i) an example of such a graph is given. As defined in Runge2012prl , each node in that graph represents a subprocess of a multivariate discrete-time process at a certain time t. Nodes are connected by a directed link if they are not independent conditionally on the past of the whole process, which implies a lag-specific Granger causality with respect to the whole process Eichler2011 . Using these causal predictors, the only uncertainty left comes from the source entropy of Y and the entropy from the unobserved ancestors of Y_{t+h} between times t+1 and t+h (see Fig. 1(i)).

In the following sections we discuss and numerically compare the four prediction schemes mentioned above: (1) MI selection, (2) CMI-forward selection, (3) CMI-forward selection of only causal predictors, and (4) our optimal scheme.

III MI and CMI prediction schemes

In the first prediction scheme, MI-selection, the respective MI of each variable in X up to a maximum lag τ_max with the target variable is estimated. Then the predictors S_1, S_2, … are ranked by their MI: I(Y; S_1) ≥ I(Y; S_2) ≥ … . To determine the best number p̂ of the ranked predictors that should be used, one can either apply a heuristic criterion or cross-validation Hastie2009 . In the model experiments in Sects. VI and VII we evaluate both approaches, employing the heuristic criterion that the MI of the p-th ranked predictor should be at least a fraction r of the MI of the previous predictor, i.e., I(Y; S_p) ≥ r · I(Y; S_{p-1}) with 0 < r < 1. The last of the ranked predictors that satisfies this criterion determines the selected number of predictors. This scheme has the drawback that two or more predictors with high MI with the target variable might contain highly redundant information. Then the inclusion of redundant predictors leads to overfitting, which will be discussed in Sect. VI.
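The MI-selection ranking with the heuristic stopping criterion can be sketched as follows (illustrative Python; the predictor names and MI estimates in the dictionary are hypothetical):

```python
# Sketch of the MI-selection scheme: rank candidates by estimated MI and
# keep predictors while each retains at least a fraction r of the
# previous one's MI.
def mi_select(mi, r=0.2):
    ranked = sorted(mi, key=mi.get, reverse=True)
    selected = [ranked[0]]
    for prev, cur in zip(ranked, ranked[1:]):
        if mi[cur] >= r * mi[prev]:
            selected.append(cur)
        else:
            break
    return selected

mi = {"X1(t-1)": 0.8, "X2(t-3)": 0.5, "X3(t-2)": 0.02, "X1(t-4)": 0.01}
print(mi_select(mi, r=0.2))   # ['X1(t-1)', 'X2(t-3)']
```

Note that nothing in this ranking prevents the two retained predictors from carrying the same information, which is exactly the redundancy drawback discussed above.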

The second prediction scheme, CMI-forward selection, overcomes this drawback by excluding information already contained in the previously selected predictors Kugiumtzis2013 : First the MIs I(Y; S) for all candidate predictors S are estimated. The first predictor S_1 is the one that maximizes the MI with the target variable (i.e., the same one as in the MI-selection scheme). The next predictor S_2, however, is chosen according to the maximal CMI I(Y; S | S_1) among all remaining predictors, the third predictor according to the maximal CMI conditional on the two previously selected predictors, etc. In each step p, the CMI gives the gain in MMI if this predictor is included, because

I(Y; S_1, …, S_p) = I(Y; S_1, …, S_{p-1}) + I(Y; S_p | S_1, …, S_{p-1})    (4)

Here the heuristic criterion Kugiumtzis2013 is to select as the best number p̂ the last p with at least a gain of

I(Y; S_p | S_1, …, S_{p-1}) ≥ r · I(Y; S_{p-1} | S_1, …, S_{p-2})    (5)

where 0 < r < 1 as before. In Kugiumtzis2013 also an adaptive choice of p̂ using a shuffle test is discussed. This scheme has been proposed to infer causal drivers in Kugiumtzis2013 . However, as shown in Sect. VI, it can fail at this task already in simple cases.
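A minimal sketch of CMI-forward selection with the heuristic stopping rule (illustrative Python; the estimator `cmi(y, s, cond)` and the toy values below are hypothetical stand-ins for a real CMI estimator):

```python
# Sketch of CMI-forward selection: greedily add the predictor with the
# largest CMI given the already selected ones, stopping once the gain
# falls below a fraction r of the previous gain (cf. Eq. (5)).
def forward_select(candidates, y, cmi, r=0.2, p_max=10):
    selected, gains = [], []
    candidates = list(candidates)
    while candidates and len(selected) < p_max:
        best = max(candidates, key=lambda s: cmi(y, s, selected))
        gain = cmi(y, best, selected)
        if gains and gain < r * gains[-1]:
            break                          # gain fell below fraction r
        selected.append(best)
        gains.append(gain)
        candidates.remove(best)
    return selected

# toy CMI surface: X2 is fully redundant with X1, X3 adds new information
def toy_cmi(y, s, cond):
    if s == "X2" and "X1" in cond:
        return 0.0
    if s == "X1" and "X2" in cond:
        return 0.0
    return {"X1": 0.6, "X2": 0.6, "X3": 0.3}[s]

print(forward_select(["X1", "X2", "X3"], "Y", toy_cmi))   # ['X1', 'X3']
```

The redundant predictor is correctly skipped here, but the greedy order also means that a purely synergetic pair, each member of which has zero individual (C)MI, would never be entered.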

Rather than with a heuristic criterion, at the cost of additional computation time, the best number of predictors can also be chosen by cross-validation. Here we use an f-fold cross-validation where the available observed set of time indices is partitioned into f complementary segments. For each validation round, one fold is retained as the validation set on which the prediction performance is evaluated. The nearest neighbors are searched for in the complementary set, from which also the prediction estimate is generated. Then the number of predictors with the minimal average prediction error across all folds is chosen.
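The f-fold choice of the number of predictors can be sketched as follows (illustrative Python; `error` is a hypothetical callable standing in for an actual out-of-sample prediction-error evaluation):

```python
# Sketch of choosing the number of predictors by f-fold cross-validation:
# average the validation error of each candidate p over all folds and
# return the minimizer.
def cv_best_p(n_samples, p_max, error, folds=5):
    idx = list(range(n_samples))
    size = n_samples // folds
    def avg_err(p):
        total = 0.0
        for f in range(folds):
            valid = idx[f * size:(f + 1) * size]
            train = idx[:f * size] + idx[(f + 1) * size:]
            total += error(p, train, valid)
        return total / folds
    return min(range(1, p_max + 1), key=avg_err)

# toy error surface with overfitting beyond p = 3
print(cv_best_p(100, 6, lambda p, tr, va: abs(p - 3) + 0.1))   # 3
```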

IV Optimal prediction scheme

For the prediction of Y_{t+h} given the multivariate time series X, our proposed optimal prediction scheme consists of the following steps (Fig. 1): (i) estimate the causal predictors using the causal algorithm described in Sect. IV.1, (ii) check all subsets (except the empty one) and select the causal predictors with the maximal estimate of the MMI with the target variable as the optimal ones (Sect. IV.2), and (iii) use these predictors to forecast the target variable with nearest-neighbor prediction (Sect. IV.3). In the following, we explain the causal pre-selection algorithm and discuss criteria for selecting the optimal subset. While here the actual prediction is conducted using a nearest-neighbor scheme Sidorowich1987 , in Sect. VIII we will also discuss how a model-based prediction based on the inferred optimal predictors can further improve a forecast.

IV.1 Causal pre-selection algorithm

The causal pre-selection algorithm is a modification of the algorithm introduced in Runge2012prl , which is based on the PC algorithm Spirtes1991 ; Spirtes2000 (named after its inventors Peter Spirtes and Clark Glymour). The main idea is to iteratively unveil the causal predictors by testing for independence between pairs of nodes in the time series graph conditional on a subset of the remaining nodes. Since these conditions are efficiently chosen, the dimension stays as low as possible in every iteration step. This important feature helps to alleviate the curse of dimensionality in estimating CMIs Runge2012prl , affecting the computation time as well as the reliability of the conditional independence tests. Under some mild assumptions discussed below, the algorithm yields a consistent estimate of the causal predictors. Instead of the commonly used binning estimators, for which the curse of dimensionality is especially severe, here we utilize an advanced nearest-neighbor estimator Frenzel2007 that is most suitable for variables taking on a continuous range of values. This estimator has as a free parameter the number of nearest neighbors k, which determines the size of the hyper-cubes around each (high-dimensional) sample point. Small values of k lead to a lower estimation bias but higher variance, and vice versa. Therefore, we choose different values of k in the causal algorithm and in the subsequent selection schemes. Note that for an estimation from (multivariate) time series, stationarity is required.

The algorithm starts with no a priori knowledge about the drivers and iteratively learns the set of predictors of Y: First, estimate the unconditional dependencies I(Y; S) for all candidate predictors S and initialize the preliminary set of predictors P̂ with all significantly dependent ones. This set also contains non-causal predictors, which are now iteratively removed by testing whether the dependence between Y and each S ∈ P̂, conditioned on an incrementally increased subset of P̂, vanishes:

  1. Iterate over an increasing number n of conditions, starting with some n_0:

    a. Iterate through all combinations of picking n nodes from P̂ to define the conditions in this step, and estimate the CMI between Y and S given these conditions for all S ∈ P̂. After each step the nodes S with vanishing CMI are removed from P̂, and the iteration over combinations stops once all possible combinations have been tested. (In the implementation we limit the number of combinations and check relevant combinations first, see Runge2012prl ; Runge2014d for details.)

    b. If the cardinality |P̂| ≤ n, the algorithm has converged; else, increase n by one and iterate again. (In the implementation we limit the dimensionality up to some n_max. If the initial number of conditions is n_0 > 1 to speed up the algorithm, previously skipped combinations with n < n_0 also need to be checked before convergence can be assessed.)
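The structure of this loop can be sketched as follows (a simplified Python illustration of the iteration logic only, assuming a hypothetical conditional-independence test `indep`; the full algorithm of Runge2012prl additionally orders and limits the tested combinations):

```python
# Simplified sketch of the causal pre-selection loop.  indep(target, s,
# cond) is a hypothetical test returning True when the CMI of s with the
# target given cond is compatible with zero.
from itertools import combinations

def preselect(candidates, target, indep, n0=1, n_max=4):
    # initialize with all unconditionally dependent candidates
    preliminary = [s for s in candidates if not indep(target, s, [])]
    n = n0
    # increase the number of conditions until |preliminary| <= n
    while n < len(preliminary) and n <= n_max:
        for s in list(preliminary):
            others = [c for c in preliminary if c is not s]
            # remove s if some n-dimensional condition renders it independent
            if any(indep(target, s, list(cond))
                   for cond in combinations(others, n)):
                preliminary.remove(s)
        n += 1
    return preliminary

# toy ground truth: X1 and X3 are parents, X2 is linked only through X1,
# X4 is independent of the target
def toy_indep(y, s, cond):
    if s == "X4":
        return True
    if s == "X2":
        return "X1" in cond
    return False

print(preselect(["X1", "X2", "X3", "X4"], "Y", toy_indep))   # ['X1', 'X3']
```

The indirectly linked candidate is removed as soon as its mediating parent enters the condition, which is the mechanism that keeps the estimation dimension low throughout.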

The main assumptions underlying the identification of the conditional independence structure with the PC algorithm are the Causal Markov Condition, i.e., Markovity of the process of any finite order, and Faithfulness, which guarantees that the graph entails all conditional independence relations true for the underlying process and can be violated only in certain rather pathological cases Spirtes2000 . If these assumptions are fulfilled, the causal algorithm is universally consistent, implying that the algorithm will converge to the true causal predictors with probability 1 for infinite sample size Spirtes2000 . On the other hand, the CMI-forward selection algorithm proposed for causal inference in Kugiumtzis2013 is not consistent since it yields non-causal drivers already in simple cases, as will be analyzed in Sect. VI. But the CMI-forward selection scheme can be 'repaired' by a further backward elimination step Sun2014 .

Practically, the causal algorithm involves conditional independence tests of whether the CMI between Y and S given the chosen conditions vanishes. In Runge2012prl a shuffle test is proposed for this purpose: An ensemble of CMI values is generated where the sample of S is shuffled, i.e., with its time indices permuted. Then these surrogate CMI values are sorted and, for a test at a given α-level, the value at the (1−α)-quantile is taken as a significance threshold. In Runge2012prl a numerical study of the detection and false positive rates of the algorithm is given. The shuffle test comes at the additional cost that for each conditional independence test an ensemble of surrogate CMIs has to be estimated. An alternative is to apply a fixed threshold, which has the drawback that it does not adapt to the negative bias of higher-dimensional CMIs Runge2012b ; Runge2014d . The algorithm yields different numbers of predictors for different chosen fixed thresholds or significance levels; the value should be low enough to include weak but possibly synergetic causal predictors, but high enough to limit the computational complexity of the optimization step (Sect. V).
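The shuffle test can be sketched as follows (illustrative Python; `cmi` is a hypothetical estimator and the surrogate count is an arbitrary choice):

```python
# Sketch of the shuffle significance test for a CMI estimate: permuting
# the samples of the first argument destroys its dependence with the
# second while preserving the marginal distribution.
import random

def shuffle_threshold(x, y, z, cmi, n_surrogates=100, alpha=0.05):
    surrogates = []
    for _ in range(n_surrogates):
        xs = x[:]
        random.shuffle(xs)                 # permute the time indices of x
        surrogates.append(cmi(xs, y, z))
    surrogates.sort()
    # value at the (1 - alpha)-quantile of the surrogate distribution
    return surrogates[int((1 - alpha) * n_surrogates) - 1]

# an estimated CMI is deemed significant if it exceeds this threshold
```

Because the threshold is rebuilt from surrogates for every test, it automatically adapts to the dimension-dependent bias of the estimator, unlike a fixed threshold.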

IV.2 Optimal selection criteria

Figure 2: (Color online) Multivariate mutual information (MMI, bottom red bars) and standardized root mean squared prediction error (value proportional to the lower end of the grey bars at the top) for all subsets of causal predictors for the model example of Sect. VI. For each number of predictors p, the number of possible combinations varies according to the binomial coefficient. The predictor combinations are sorted by their prediction error. The maximum of the MMI and also slightly smaller values match the minimum prediction error very well. Note that the MMI is estimated on the learning set while the prediction error is evaluated out-of-sample on the test set, indicating that the bias of higher-dimensional MMIs here serves as a good proxy for overfitting, as discussed in the text.

Once the causal predictors are determined, the optimal subset needs to be chosen (possibly containing all causal predictors). Here we also discuss a scheme using forward selection on the causal drivers (causal CMI-scheme).

For the third prediction scheme, causal CMI-selection, the forward-selection ranking discussed in Sect. III is applied not to the entire set of candidate predictors, but only to the pre-selected causal predictors. Also here, the same heuristic criterion as for the non-causal forward selection, or cross-validation, can be used.

For the optimal scheme (Fig. 1(ii)), the MMI is estimated for all subsets of the causal predictors. In Fig. 2 the MMI values (ensemble average) are plotted for the model example discussed later in Sect. VI with seven causal predictors. Even though all seven predictors are causal drivers, the estimated MMI is highest for just three of them and even decreases for more predictors (according to Eq. (3), the theoretical MMI should be maximal for all seven predictors). The reason is that the estimated MMI is negatively biased in higher dimensions Kraskov2004a ; Runge2012b ; Runge2014d , and if the additional information contained in the predictors does not outweigh this bias, the MMI decreases. The bias of the MMI estimator, therefore, implies a penalty that avoids overfitting. In our heuristic criterion we exploit this property and simply select the subset with the maximal estimated MMI. This model-free, data-based criterion could be seen as an analogue of model-based criteria like the Akaike Information Criterion (AIC) Akaike1974 , where the penalty is derived from some measure of model complexity, but our criterion needs to be further investigated. We choose the nearest-neighbor parameter in the MMI estimator in accordance with the nearest-neighbor parameter used in the prediction (see Sect. IV.3).
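The exhaustive selection step is straightforward to express (illustrative Python; `mmi` is a hypothetical estimator, and the toy MMI surface mimics a synergetic pair whose joint estimate exceeds any single predictor while the biased estimate drops again when a weak third predictor is added):

```python
# Exhaustive selection step: estimate the MMI of every nonempty subset of
# the (few) causal predictors and keep the maximizing subset.
from itertools import chain, combinations

def optimal_subset(causal_predictors, mmi):
    subsets = chain.from_iterable(
        combinations(causal_predictors, p)
        for p in range(1, len(causal_predictors) + 1))
    return max(subsets, key=mmi)

toy = {("X1",): 0.1, ("X2",): 0.1, ("X3",): 0.2,
       ("X1", "X2"): 0.9, ("X1", "X3"): 0.35, ("X2", "X3"): 0.35,
       ("X1", "X2", "X3"): 0.8}
print(optimal_subset(["X1", "X2", "X3"], toy.get))   # ('X1', 'X2')
```

The estimator's negative high-dimensional bias acts here as the implicit complexity penalty: the full triple scores below the synergetic pair.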

In our numerical experiments in Sect. VII, we additionally test a combination of the MMI-criterion with cross-validation: For each number of predictors p we apply the MMI-criterion to select the optimal combination and then use cross-validation to pick the optimal p. This approach always gives a slightly better prediction performance than using the maximum criterion alone – at the cost of much longer computation time. Asymptotically, for model-based predictions the AIC criterion is equivalent to cross-validation, and in our numerical experiments (see appendix Fig. 9) we also find that our heuristic MMI-criterion matches the cross-validation choice well for longer time series. Note that our use of cross-validation treats p as a tuning parameter, as is typically done Hastie2009 . One could also treat the choice of a subset as a tuning parameter and run steps (i)–(iii) of the prediction scheme for every fold. A subset is, however, not a numeric 'tuning parameter', and different folds in the cross-validation might not contain the same subsets. As a result, the variance across the folds is considerable and it is hard to find the subset with minimal cross-validation error.

IV.3 Nearest-neighbor prediction

Once the optimal predictors are selected, the actual prediction is conducted here using a scheme with a fixed number k of nearest neighbors Sidorowich1987 : For the optimal set of predictors, we first determine the distances between the present predictor vector and the predictor vectors at all past times t' in the learning set,

d_{t'} = || P(t) − P(t') ||    (6)

where || · || denotes some norm. Here we apply the maximum norm, as in the nearest-neighbor estimator of the (conditional) mutual information Frenzel2007 ; τ_max is the maximum lag used in estimating the predictors. Next, we sort the distances in increasing order, yielding an index sequence t'_1, t'_2, … . Now there are two approaches to use these distances: (i) A fixed distance ε is chosen and all points with a distance smaller than ε are taken into account to predict Y. Then the coarse-graining level is consistent for all sample points, but sometimes there might not be any point within the distance ε (Groth2001, ; Pompe2001, ). (ii) Here we use a fixed number of nearest neighbors, which has the advantage that the same number of points contributes to every prediction, making the estimate more reliable. For a chosen fixed number k of nearest neighbors, the future value y_{t+h} is then estimated by the conditional expectation and its prediction interval by the corresponding standard deviation:

ŷ_{t+h} = (1/k) Σ_{j=1}^{k} y_{t'_j + h}    (7)

Another option, instead of the expectation, is to fit an autoregressive model, giving a local-linear prediction (Sidorowich1987, ). The summation can also be weighted with a function of the distance of the neighbors, and different norms or kernel-based methods can be chosen Hastie2009 .
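A minimal sketch of the nearest-neighbor prediction step of Eqs. (6)–(7) under the maximum norm (illustrative Python, not the authors' implementation):

```python
# k-nearest-neighbor prediction: average the h-step-ahead target values
# of the k past states closest (in maximum norm) to the present
# predictor vector; the neighbor spread gives a prediction interval.
from statistics import mean, pstdev

def knn_predict(pred_vectors, targets, query, k=5):
    """pred_vectors[i]: predictor vector at a past time t_i;
    targets[i]: target value h steps ahead of t_i; query: present state."""
    dist = [max(abs(a - b) for a, b in zip(v, query)) for v in pred_vectors]
    order = sorted(range(len(dist)), key=dist.__getitem__)[:k]
    neigh = [targets[i] for i in order]
    return mean(neigh), pstdev(neigh)   # point forecast and its spread
```

A weighted average or a local-linear fit on the same neighbors would be drop-in refinements of the final two lines.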

For the number of nearest neighbors k in the prediction, we choose a value in accordance with the nearest-neighbor parameter used in the estimation of CMI or MMI in the selection schemes. This choice approximately yields a consistent level of coarse-graining in the information-theoretic selection step and the nearest-neighbor prediction. Alternatively, at the cost of computation power, one can also utilize cross-validation for this choice Hastie2009 . The number of nearest neighbors needs to be balanced to guarantee that only nearby values are used as predictors, while still enough values are available to confidently estimate ŷ_{t+h} and possibly the prediction interval. The best value will typically depend strongly on the data. As a skill metric we compute the standardized root mean squared error

ρ = sqrt( ⟨ (ŷ_{t+h} − y_{t+h})² ⟩ / σ²_Y )    (8)

where σ²_Y is the variance of Y in the testing set.
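The skill metric of Eq. (8) in code form (illustrative Python):

```python
# Standardized root mean squared error: the RMSE divided by the target's
# standard deviation on the test set, so a value of 1 corresponds to
# merely predicting the test-set mean.
from math import sqrt
from statistics import pvariance

def srmse(predictions, truths):
    mse = sum((p - y) ** 2 for p, y in zip(predictions, truths)) / len(truths)
    return sqrt(mse / pvariance(truths))

print(srmse([1, 1, 1, 1], [0, 2, 0, 2]))   # 1.0
```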

V Computational complexity

Figure 3: (Color online) Scaling of the computational complexity with the cardinality p_C of the estimated set of causal predictors. Solid and dashed lines denote two different parameter settings. The curve for the causal CMI-selection scheme (black) is only slightly different from the constant curve for the MI-selection scheme (blue). The causal schemes have the advantage that their complexity depends rather on how many causal drivers are found, while the complexity of the MI- and CMI-selection schemes depends on the number of processes and the maximum lag. For the non-causal CMI-selection scheme (grey lines) we fixed the maximum number of predictors. To include the complexity due to the pre-selection step in the causal schemes, we added the cost of the causal algorithm according to Eq. (9) for the parameter settings of our numerical experiments (see Sect. VII). The plot shows that if the number of causal predictors can be reduced sufficiently, the optimal scheme even takes less computation time than the non-causal CMI-selection scheme.

An important criterion for the practical applicability of the different prediction schemes discussed in Sects. III and IV is their respective computational complexity. The (C)MI estimator employed here (nearest-neighbor technique with maximum norm Frenzel2007 ) has a computational complexity that grows with the time series length T and linearly with the dimensionality D of the respective variable Kraskov2004a . Fast neighbor-searching algorithms can further reduce the dependency on T, but here we are interested in the relative complexity of the different predictor-selection schemes and, therefore, only consider the linear scaling with the number of dimensions. In this case the first prediction scheme, MI-selection, is clearly the cheapest option. For an N-dimensional process truncated at lag τ_max, this procedure involves just N·τ_max estimates of two-dimensional MIs.

The second scheme, CMI-forward selection, is more demanding the more possible predictors are included. For cross-validation, a maximum number p_max of predictors has to be selected, increasing the dimension of the estimates up to p_max + 1 due to the growing number of conditions in Eq. (4). The CMI-forward selection then involves on the order of p_max · N · τ_max estimates of CMI with iteratively increasing dimensionality, so the complexity of the CMI-selection scheme scales polynomially in p_max with a strong linear dependency on N·τ_max. Note that for an f-fold cross-validation step using nearest-neighbor prediction, an additional computational cost has to be added.

In Fig. 3 we compare the complexity of the different prediction schemes for the parameter settings of our numerical experiments in Sect. VII. While one can fix p_max to a small number for which nearest-neighbor predictions yield acceptable results, the main problem here is that the computation time quickly increases with N·τ_max (linearly, but with a large pre-factor). One advantage of a causal pre-selection step is to reduce this number before the computationally expensive selection procedure is invoked.

Since typically the set of causal predictors is small, i.e., p_C ≪ N·τ_max, the third prediction scheme, causal CMI-selection, has a drastically smaller computational complexity than the non-causal CMI-selection scheme if the additional complexity due to the causal algorithm is not too large. If we rank all causal predictors (not limiting their number to some p_max as in the non-causal scheme), the complexity scales polynomially with the number p_C of causal predictors. In Fig. 3, this complexity (black lines) shows only a very moderate increase with p_C.

For the optimal scheme the computational complexity grows exponentially, since 2^{p_C} − 1 subsets have to be evaluated. However, Fig. 3 shows that if the number of causal predictors can be restricted, the optimal scheme even takes less computation time than the non-causal CMI-selection scheme. The number of causal predictors can be reduced by adjusting the significance level or the fixed threshold in the conditional independence tests of the causal pre-selection algorithm. Most importantly, the complexity of the causal schemes scales only with the number of causal drivers and not directly with the number of processes or the maximal lag, as the non-causal schemes do (dashed lines in Fig. 3). The dependence of the causal schemes on N and the maximal lag τ_max enters only via the pre-selection algorithm.
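The crossover between forward selection and the exhaustive search can be illustrated with a back-of-the-envelope count of estimator calls (Python, with arbitrary example values for N, τ_max, and p_max; this is not the paper's exact cost accounting):

```python
# Rough count of estimator calls: forward selection needs roughly
# sum_p (N * tau_max - p) CMI estimates, while the exhaustive search
# needs 2**p_C - 1 MMI estimates over the p_C causal predictors.
N, tau_max, p_max = 10, 5, 10
forward_calls = sum(N * tau_max - p for p in range(p_max))
for p_C in (4, 8, 12):
    print(p_C, 2**p_C - 1, forward_calls)
```

With these example numbers the exhaustive search is cheaper than forward selection for p_C up to about 8 and more expensive beyond, mirroring the crossover visible in Fig. 3.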

The additional time complexity of the causal algorithm varies with the graph structure. The number of iterations can be limited by starting with a higher number of initial conditions and limiting the maximum dimensionality . This number determines up to which dimension conditional independence is checked. Also the number of combinations in the -loop can be restricted. In a worst-case scenario where the spurious links only vanish if the maximum number of conditions is used, the computational complexity scales as

However, for sparse graphs and the conditional set being efficiently chosen Runge2012prl , typically links get removed already for an -dimensional condition with a complexity of

(9)

and or . This is also confirmed in numerical experiments in Runge2012prl and Sect. VII. Often the complexity is even lower because the MI value in the first iteration is already non-significant.

VI Model example

Figure 4: (Color online) Comparison of prediction schemes for an ensemble of realizations of model (VI) for with time series graph given in the inset (learning set length , test set length , nearest-neighbor prediction with neighbors, (C)MIs estimated from the learning set with parameter in the algorithm and for the optimization). The box and whiskers plots give the ensemble median and the interquartile range of the standardized root mean squared prediction errors in the test set for each iteration step in the four schemes (from left to right: MI, CMI, causal-CMI, optimal scheme). The black line gives the median of the true minimal prediction error obtained by minimizing the out-of-sample prediction error for each number of predictors taken from the true causal drivers. For , only the optimal scheme (red) selects the best (synergetic) predictors and reaches the minimum possible error while the causal forward selection (black) first picks one of the less predictive and the pure forward selection (grey) and MI-based schemes first pick the two non-causal drivers . The non-optimal schemes include the synergetic predictors only when the higher dimensionality is already worsening the prediction due to overfitting.

The following nonlinear discrete-time stochastic delay equation provides an illustrative example where a simple MI- or CMI-forward selection procedure yields non-causal variables that deteriorate a prediction. Consider

(10)

where the causal drivers , , and are independent Gaussian processes with zero mean and unit variance []. This model illustrates why the MI and CMI prediction schemes fail to yield good predictions due to (1) selecting non-causal predictors and (2) missing out synergetic predictors. Here the are synergetic causal drivers, which – for certain parameters – are individually less predictive than the drivers . But selected all together, their combined information is much larger than the single mutual informations. This synergetic effect can only be detected if all subsets of causal drivers are tested in a globally optimal scheme. In the following we analyze why the MI- and CMI-selection schemes fail to provide good predictions for such cases.
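A minimal toy illustration of synergy, independent of the specific model above: with binary drivers and an XOR-type target, each driver alone carries (almost) no mutual information with the target, while the pair together is fully informative. The plug-in estimator `mi_discrete` is a simplified stand-in for the nearest-neighbor (C)MI estimators used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x1 ^ x2  # synergetic target: each driver alone is uninformative

def mi_discrete(a, b):
    """Plug-in mutual information (in bits) between two discrete arrays."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

# Individually near zero, jointly about one full bit:
print(round(mi_discrete(x1, y), 2))           # ~0.0
print(round(mi_discrete(x2, y), 2))           # ~0.0
print(round(mi_discrete(2 * x1 + x2, y), 2))  # ~1.0 (pair encoded as one variable)
```

Any scheme that ranks predictors one at a time by MI will discard both drivers here; only testing the subset jointly reveals their predictive power.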

Regarding the problem of selecting non-causal drivers, for certain parameter combinations of the mutual information is larger than any or for all . The non-causal schemes based on iteratively selecting predictors with maximal MI or CMI (blue and grey box plots in Fig. 4) will, therefore, choose a non-causal prior to the true causal predictors and . Since the predictors have the largest MI after the , these are included next in the MI-selection scheme. In the CMI-forward selection scheme, on the other hand, the synergetic variables are selected after the second iteration step. This leads to a drastic decrease in the prediction error at (grey box plot in Fig. 4). The problem is that now the dimension of the predictors is already and the two spuriously causal variables deteriorate the prediction.

The causal pre-selection avoids this pitfall. The black and red box plots in Fig. 4 denote the schemes based on causal predictors using forward selection (black) and the optimal scheme (red). Also for causal drivers the forward selection scheme is not optimal because the selection of one predictor at a time fails for synergetic cases and selects the weak drivers prior to the synergetic drivers . Only the optimal scheme correctly identifies these drivers for the dimension and reaches the minimal prediction error possible for this model (black line) for each . In Fig. 2 we show that the three synergetic drivers are chosen with our heuristic optimal criterion since they have the largest MMI with the target variable.
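The failure mode of forward selection versus the globally optimal subset search can also be sketched with a synthetic score function; the "synergetic bonus" below stands in for the joint information of the synergetic drivers, and all names and values are illustrative:

```python
from itertools import combinations

def forward_select(candidates, score, k):
    """Greedy forward selection: repeatedly add the single candidate
    that most improves the score of the current set."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: score(chosen + [c]))
        chosen.append(best)
    return set(chosen)

def optimal_select(candidates, score, k):
    """Globally optimal scheme: score every size-k subset."""
    return set(max(combinations(candidates, k), key=lambda s: score(list(s))))

# Synthetic synergy: 'a' and 'b' are only informative together,
# while 'c' is a moderately informative individual driver.
def score(subset):
    s = sum({'a': 0.1, 'b': 0.1, 'c': 0.4}.get(v, 0.0) for v in subset)
    if 'a' in subset and 'b' in subset:
        s += 1.0  # synergetic bonus
    return s

print(forward_select(['a', 'b', 'c'], score, 2))  # greedy locks in 'c', misses the pair
print(optimal_select(['a', 'b', 'c'], score, 2))  # finds {'a', 'b'}
```

The greedy scheme commits to the individually strongest candidate and can never recover the synergetic pair within the same budget, mirroring the behavior of the grey and black box plots in Fig. 4.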

VII Numerical experiments

Figure 5: (Color online) Results of numerical experiments for an ensemble of trials of the synergetic model class (VII) with time series length ( in the test set) for the four different prediction schemes (from left to right in the panels: MI in blue, CMI in gray, causal-CMI in black, optimal in red). (a) The four box plots show the ensemble interquartile range of (i) computational complexity (the green box on the right shows the added complexity due to the causal algorithm), (ii) the range of numbers of predictors selected by cross-validation (CV, here the green bar shows the number of causal predictors in the pre-selection step), (iii) the true positive rate (TPR) and (iv) false discovery rate (FDR). The latter are evaluated for each scheme at , corresponding to the true number of causal drivers (or less if fewer causal drivers are estimated in the pre-selection step). (b) Box plots showing the median and interquartile range of the prediction error relative to the true minimal error obtained by minimizing the out-of-sample prediction error over all subsets of true causal drivers. On the left are the results if cross-validation is used to optimize

for each scheme (whiskers show the 5% and 95% quantiles). The red box in the center shows the result for the heuristic optimality criterion. The range of boxes on the right shows the results for different thresholds

for the other schemes (only interquartile range). (c) Box and whiskers plots (showing the 5% and 95% quantiles) for the absolute prediction error of the optimal scheme at the cross-validated choice of for different numbers of nearest neighbors .
Figure 6: (Color online) Numerical experiments on a class of non-synergetic, but still nonlinearly coupled generalized additive models as analyzed in the supplement of Ref. Runge2012prl . Parameters as in Fig. 5, but with processes and . The orange box plot in (b) shows the relative prediction error if the optimal predictors from the model-free selection scheme are used in conjunction with the linear auto-regressive prediction model (12) (only the interquartile range shown).

We next compare the four schemes including the causal pre-selection algorithm on a larger class of synergetic nonlinear discrete-time stochastic processes with different coupling configurations:

(11)

where we are interested in predicting (i.e., ). The linear function is simply the sum of randomly chosen subprocesses (excluding process ) at random lags . The nonlinear function , on the other hand, is the product of randomly chosen subprocesses (excluding process and the ones already included in the linear term). The other subprocesses for are linearly driven by other randomly chosen subprocesses, also at random lags. The coefficients are fixed to , and . With this setup we generate an ensemble of realizations.

We run the four schemes at different choices of the heuristic parameter and using cross-validation checking up to predictors in the non-causal schemes. For the causal schemes, we use cross-validation for all estimated causal predictors (up to maximally , ranked by their CMI value in the algorithm Runge2012prl ). The causal drivers are estimated using the algorithm parameters , and with a fixed significance threshold (analyses for other thresholds are shown in the appendix).

In Fig. 5, we show various statistics comparing the computational complexities and the prediction errors. Here the number of possible predictors is yielding a computational complexity of for the MI-selection scheme (blue) and for the CMI-selection scheme (grey) using . The causal algorithm reduces the number of possible predictors to about (median). This corresponds to a true positive rate (TPR) of roughly (there are true drivers in model (VII), but several are only weakly driving) and a zero false discovery rate (FDR), while the MI- and CMI-selection schemes detect fewer causal drivers and many more false positives. Fewer predictors result in a lower computational complexity for the causal prediction schemes. The causal CMI-selection scheme runs extremely fast (black), and the complexity of the optimal scheme (red) strongly varies among the different realizations, since it depends exponentially on how many causal predictors are pre-selected (Fig. 3), but typically still stays below that of the non-causal CMI-selection scheme. Using cross-validation, the MI-selection scheme typically uses (median) , the CMI-selection schemes both , and the optimal scheme only out of the true causal drivers for this model.

Finally, the relative prediction errors show that only the optimal scheme reaches the lowest possible errors with a median of zero relative error and even 90% of the ensemble below an error of . This demonstrates the large improvements due to the global optimization scheme that is only possible after reducing the set of variables to the few causal predictors.

The aforementioned results have been obtained using cross-validation to select the optimal . The computationally cheaper alternative using a heuristic criterion here yields drastic differences in the prediction performance depending on the choice of . While here values in the range give good results, in another experiment (Fig. 6) we found good predictions only in a different range, making it hard to provide rules of thumb in practical applications. In the appendix we show that also the length of the time series results in different optimal ranges for . On the other hand, for the optimal scheme the heuristic choice leads to almost the same minimal errors as cross-validation.
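The cross-validation step for choosing the number of predictors can be sketched as follows, assuming the predictors are already ranked; `ols_predict` is an illustrative stand-in for the nearest-neighbor predictor used in the paper, and all names and parameter values are hypothetical:

```python
import numpy as np

def ols_predict(X_train, y_train, X_test):
    """Illustrative predictor: ordinary least squares fit and predict."""
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ beta

def cv_select_p(X_ranked, y, predict, n_folds=4):
    """Choose the number p of (ranked) predictor columns by k-fold
    cross-validation: keep the p with the lowest mean squared error."""
    n = len(y)
    folds = np.array_split(np.arange(n), n_folds)
    errors = []
    for p in range(1, X_ranked.shape[1] + 1):
        mse = 0.0
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(n), test_idx)
            pred = predict(X_ranked[train_idx, :p], y[train_idx],
                           X_ranked[test_idx, :p])
            mse += np.mean((pred - y[test_idx]) ** 2)
        errors.append(mse / n_folds)
    return int(np.argmin(errors)) + 1

# Toy data with two true drivers among five ranked candidates:
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
print(cv_select_p(X, y, ols_predict))  # includes at least the two true drivers
```

The heuristic criterion discussed in the text replaces this loop over folds by a single in-sample information-theoretic score, trading accuracy for a large saving in computation.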

To test the robustness of our results, we also compare the prediction schemes on a class of non-synergetic, but still nonlinearly coupled models (generalized additive models Hastie2009 with processes and polynomials of linear and quadratic degree) as analyzed in the supplement of Ref. Runge2012prl . For each ensemble member, we choose as a target variable the one with the largest sum of ‘incoming’ coefficients (absolute values). The results shown in Fig. 6 demonstrate that for this case also the causal CMI-selection scheme reaches optimal prediction errors. In the appendix, we show that the optimality is robust also for different significance thresholds and other time series lengths.

In Figs. 5(c) and 6(c) we evaluate the prediction for different phase space resolutions. To this end we use the causal predictors and run steps (ii) and (iii) of the prediction scheme from Fig. 1 (using cross-validation to choose ) for varying nearest-neighbor parameters and MMI estimation parameter , both set to the same value for consistency. While in the first ensemble (Fig. 5(c)) the error is minimal for very few neighbors and sharply rises if too many neighbors are used, in the second ensemble (Fig. 6(c)) too few neighbors yield worse results. In practice, the choice will very much depend on the process under study, but here we use a value in the range which constitutes a balance between local information and enough neighbors to reliably estimate .
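The nearest-neighbor prediction itself can be sketched as an average over the targets of the k closest training points in predictor space. This is a simplified stand-in for the estimator used in the paper, and the toy function and parameter values are illustrative:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=5):
    """Minimal nearest-neighbor prediction: average the targets of the
    k training points closest (in Euclidean distance) to the query
    predictor vector."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    return y_train[idx].mean()

# Toy check on a smooth noiseless function of two predictors:
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 1]
pred = knn_predict(X, y, np.array([0.2, 0.2]), k=5)
print(round(pred, 2))  # close to the true value 0.14
```

Larger k smooths over a wider neighborhood (lower variance, higher bias), which is exactly the trade-off probed by varying the neighbor parameter in Figs. 5(c) and 6(c).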

VIII Model-free selection combined with model-based prediction

Up to now we have stayed in a model-free framework with information-theoretic optimal selection of predictors and a nearest-neighbor prediction. While nearest-neighbor prediction is a flexible method that will adapt to any function in Eq. (1), it will in many cases be outperformed by a model-based prediction – if the right model class is chosen. If a misspecified model is chosen for variable selection and fitting, it might miss out nonlinear combinations of predictors. For example, in our synergetic model (VI) a linear selection method would only include the weakly predictive variables and largely miss out the highly predictive variables . The functional dependency on the is, on the other hand, much better fitted with a linear model than with nearest neighbors.

To take advantage of improved model-based predictions and at the same time not miss out synergetic predictor combinations, we propose to apply our optimal predictor selection scheme and conduct the final prediction step by fitting a model on the optimal set of predictors. Here we demonstrate this approach on the non-synergetic model ensemble from Ref. Runge2012prl . To predict from the optimal subset of predictors

(chosen by cross-validation or the heuristic criterion), we use the ordinary least squares regression technique. Then the prediction interval is given by the variance

of the regression residual plus the errors in the estimated regression coefficients Brockwell2002 :

(12)

In Fig. 6(b) the results for this approach are shown as the orange box plot. The prediction improvement varies strongly for the different realizations which include nonlinear and linear drivers. About half of the realizations are better fitted using the optimized linear approach with prediction improvements of up to 10% compared to the nearest-neighbors prediction. More advanced techniques such as generalized additive models can further improve a prediction Hastie2009 . In addition to facilitating the prediction task, the knowledge of the functional forms of dependencies can also help to better understand coupling mechanisms.
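A minimal sketch of this combined step, assuming the optimal predictor subset has already been selected: fit ordinary least squares, estimate the residual variance, and form an approximate Gaussian prediction interval. The coefficient-uncertainty term of Eq. (12) is omitted for brevity, and all data and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))              # selected predictor subset (illustrative)
beta_true = np.array([0.8, -0.5, 0.3])
y = X @ beta_true + 0.4 * rng.normal(size=n)

# Ordinary least squares fit on the selected predictors
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid.var(ddof=X.shape[1])      # residual variance estimate

# Point prediction and approximate 2-sigma prediction interval
x_new = np.array([1.0, 0.0, -1.0])
y_hat = x_new @ beta
lo, hi = y_hat - 2 * np.sqrt(sigma2), y_hat + 2 * np.sqrt(sigma2)
print(round(y_hat, 2), round(lo, 2), round(hi, 2))
```

If the selected subset contains synergetic nonlinear combinations, a generalized additive or other nonlinear model would replace the `lstsq` fit while keeping the same two-stage structure.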

IX Predicting ENSO

Figure 7: (Color online) Prediction of the El Niño Southern Oscillation (ENSO) index Nino3.4 in the period 2003-2014 (up to December) using 1951-2002 as a learning set, causal algorithm run with significance threshold testing up to months. (a) Prediction error using nearest-neighbor prediction (solid red line) and linear prediction (dotted orange line) versus prediction step . For both approaches the same optimal predictors obtained from the model-free scheme with cross-validated (5-fold within the learning set) choice of predictors are used. (b) Nino3.4 index with El Niño and La Niña events marked in red and blue, respectively. The black lines denote selected hindcasts and their -prediction intervals (grey) using the linear prediction. The dots mark the starting time in May of each year, and the predicted values range from June () to October (). The arrows mark correct (red), missed (grey) and false (black) hindcasts of the Nino3.4 index exceeding with more than 49% probability. The green line marks a real forecast starting in December 2014 giving a probability for the Nino3.4 index to stay above of 55-70% for the months until May 2015.

The combined framework developed in the last section is now illustrated on a sea-surface temperature index of the El Niño Southern Oscillation (ENSO) in the tropical Pacific which has been the focus of prediction research for many decades due to its far-reaching climatic and economic impacts Cane1986 ; Barnston2012 . The Nino3.4 index is defined as the average sea-surface temperature over the region N-S, -W Al2008

. As another possible predictor variable, we use an atmospheric index based on sea-level pressure, the Southern Oscillation Index (SOI), which is computed from the surface air pressure difference between Tahiti and Darwin, Australia.

In Fig. 7(a), we employ the model-free causal algorithm and predictor selection (steps (i)-(ii) in Fig. 1, here optimized using cross-validation) to obtain the optimal predictors and compare the skill of the nearest-neighbor and the linear prediction using the auto-regressive model (12) fitted on the optimal predictors. Trained on the period 1951-2002, we test the prediction on the last decade 2003-2014. From the 24 possible predictors, the optimal predictor for month is only Nino3.4 at one month lag, while for months the three predictors are relevant, indicating that the atmospheric coupling, including a long memory, constitutes an important predictive mechanism. Here, the linear auto-regressive model significantly reduces the prediction error by about compared to the nearest-neighbor approach using the same predictors, at least for a few months ahead. For steps larger than 5 months, the error in both approaches quickly reaches 1, which implies that the prediction is merely a persistence forecast. The better linear prediction is a sign that exploiting the nonlinearities in Nino3.4 Dommenget2013 does not improve the prediction much, while the linear fit using the optimal predictors better harnesses the linear drivers of ENSO – at least on these time scales Gamez2004a .

To give an impression of selected predictions from the linear model (actually hindcasts), we show in Fig. 7(b) the predictions up to 5 months ahead starting from May in each year. The onsets of El Niño events are typically determined by expert assessment, but one definition is the 3-month-running-mean smoothed Nino3.4 index exceeding , here marked by a red line (La Niñas, where the index decreases below are marked in blue). With our hindcasts starting in May of each year, one can compute the probability of an El Niño event as the part of the prediction distribution exceeding

(assuming a Gaussian distribution with mean and standard deviation given by Eq. (

12)). If this probability is larger than 49% for any of the 5 months ahead (until October), we predict an El Niño event. With this scheme we would have correctly predicted the moderate El Niño event in 2009 and the onset of the weak El Niño in the current 2014-2015 season (red arrows), but missed the weak events of 2004 and 2006 (grey arrows, the latter being almost predicted with a probability of 48%). On the other hand, in 2011 (black arrow) a false alarm is given. The overall weak predictability of the recent El Niño events is also found in other studies using statistical as well as physical model predictions Barnston2012 and suggests that the mechanism of ENSO could be changing. Finally, our real forecast (green line) starting from December 2014 suggests that the weak current El Niño condition persists with a probability of 55-70% for the months until May 2015.
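The event probability follows from the tail mass of the Gaussian forecast distribution above the event threshold. A minimal sketch, where the function name, threshold, and forecast numbers are all illustrative rather than the values used in the paper:

```python
from math import erf, sqrt

def exceedance_probability(mean, std, threshold):
    """P(index > threshold) for a Gaussian forecast distribution with
    the given mean and standard deviation, via the standard normal CDF
    Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    z = (threshold - mean) / std
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# E.g., a forecast mean of 0.6 with standard deviation 0.4 gives roughly
# a 60% chance of exceeding an event threshold of 0.5:
print(round(exceedance_probability(0.6, 0.4, threshold=0.5), 2))  # 0.6
```

Declaring an event whenever this probability exceeds a fixed cutoff for any lead month then turns the probabilistic forecast into the categorical alarms marked by the arrows in Fig. 7(b).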

X Discussion and conclusions

In this article we have shown that the combinatorial explosion of searching for globally optimal subsets of predictors can be overcome by restricting the search to causal drivers. Globally optimal predictors also detect synergetic mechanisms, where the combination of multiple predictors strongly improves a prediction. Analytical considerations and numerical experiments indicate that such an approach is superior to schemes using MI-ranking or forward selection with conditional mutual information. Another advantage is that the computational complexity only scales with the number of causal predictors and not directly with the number of processes included in the analysis. If the set of causal predictors is not too large, the optimal scheme is even computationally less expensive than the non-causal CMI-selection scheme. To determine the optimal size of this set, we have found that a parameter-free heuristic criterion performs almost as well as a computationally much more demanding cross-validation.

Note that, even though theoretically only causal drivers can yield optimal predictions, non-causal variables could still be better predictors. Consider the case that a very high-dimensional process drives and . Then the prediction of from the causal drivers is deteriorated due to the curse of dimensionality for finite samples, while the non-causal process could potentially better aggregate this information. The same effect also explains why in Fig. 4 the CMI prediction using the non-causal together with the synergetic drivers has a slightly smaller prediction error than the causal CMI-selection scheme for (grey box plot).

While we propose the model-free selection of predictors for processes where the underlying mechanisms are poorly understood, the actual prediction can be much improved using suitable model-based techniques compared to a pure model-free nearest-neighbor prediction. This approach combines the advantage of a model-free approach to detect relevant variables with the smaller prediction variance of model-based methods and can also be used to better understand coupling mechanisms. The application of this combined approach significantly improves the prediction of an ENSO index compared to a nearest-neighbor scheme. The combined approach can be further improved by optimizing the number of predictors with a different criterion than the model-free criteria discussed in Sect. IV.2. Linear models in particular can harness many more predictors before the problem of overfitting becomes severe.

Here the scope of application was the prediction of future values of a time series. In a forthcoming paper we will investigate how the scheme can be adapted if, for example, only forecasts for the emergence of extreme events like El Niños Ludescher2013 are needed. A Python script to estimate the causal predictors can be obtained from the author’s website at www.pik-potsdam.de/members/jakrunge.

Acknowledgments

We acknowledge the financial support by the German National Academic Foundation (Studienstiftung des deutschen Volkes), the Humboldt Graduate School, and the German Federal Ministry of Education and Research (Young Investigators Group CoSy-CC, grant no. 01LN1306A).

Appendix

I Robustness

Figure 8: (Color online) As in Fig. 6(a,b), but for a larger range of significance thresholds (row-wise from top left to bottom right).
Figure 9: (Color online) As in Fig. 5 (left column) and Fig. 6 (right column) but for a larger range of time-series lengths (top to bottom, test set lengths ).

In Fig. 8 we show the results as in Fig. 6 for different values of the significance threshold . Obviously this threshold affects the true positive and false discovery rates, which are, however, not directly of interest for the prediction task (as opposed to the causal inference problem). A significance level that is too low in the causal pre-selection algorithm, however, leads to a high computational complexity and also increases the variance in the optimal subset selection step, which results in higher prediction errors. If, on the other hand, the significance level is too high, too few predictors are available to optimize the prediction, such that the resulting optimal predictors equal the pre-selected causal predictors . If the significance level is adjusted to yield just a few more predictors than the number of optimal predictors (obtained through cross-validation or the heuristic criterion), the prediction error is minimal and the computational complexity is also lower than for the non-causal CMI-forward selection scheme.

We also evaluate the prediction schemes for time series lengths and . The results shown in Fig. 9 demonstrate that the optimal scheme also works for very short time series and is even better for longer time series. For and the synergetic model (VII) the optimal scheme even results in 75% of the realizations reaching the true minimal prediction error.

References

  • [1] P. J. Brockwell and R. A. Davis. Introduction to time series and forecasting. Springer, New York, 2nd edition, 2002.
  • [2] J. D. Farmer and J. J. Sidorowich. Predicting chaotic time series. Physical Review Letters, 59:845–848, 1987.
  • [3] H. D. I. Abarbanel, R. Brown, and J. B. Kadtke. Prediction in chaotic nonlinear systems: Methods for time series with broadband Fourier spectra. Physical Review A, 41(4):1782–1807, 1990.
  • [4] M. Giona, F. Lentini, and V. Cimagalli. Functional reconstruction and local prediction of chaotic time series. Physical Review A, 44(6):3496–3502, 1991.
  • [5] A. K. Alparslan, M. Sayar, and A. R. Atilgan. State-space prediction model for chaotic time series. Physical Review E, 58(2):2640–2643, 1998.
  • [6] B. Pompe. Mutual information and relevant variables for predictions. In Modelling and Forecasting Financial Data, volume 2, pages 61–92. Springer US, New York, 2002.
  • [7] M. Ragwitz and H. Kantz. Markov models from data by simple nonlinear time series predictors in delay embedding spaces. Physical Review E, 65(5):056201, 2002.
  • [8] E. Eisenstein, I. Kanter, D. A. Kessler, and W. Kinzel. Generation and prediction of time series by a neural network. Physical Review Letters, 74(1):6–9, 1995.
  • [9] G. G. Szpiro. Forecasting chaotic time series with genetic algorithms. Physical Review E, 55(3):2557–2568, 1997.
  • [10] M. Small and C. Tse. Minimum description length neural networks for time series prediction. Physical Review E, 66(6):066701, 2002.
  • [11] F. Takens. Detecting strange attractors in turbulence. In D. A. Rand and L.-S. Young, editors, Dynamical systems and turbulence, Warwick 1980: Proceedings of a symposium held at the University of Warwick 1979-80, volume 898 of Lecture Notes in Mathematics, pages 366–381. Springer, New York, 1981.
  • [12] T. Sauer, J. A. Yorke, and M. Casdagli. Embedology. Journal of Statistical Physics, 65(3-4):579–616, 1991.
  • [13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, New York, 2nd edition, 2009.
  • [14] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Hoboken, 2006.
  • [15] A. Groth. Das Prinzip der maximalen Transinformation zur statistischen Modellierung. Diploma thesis, Ernst-Moritz-Arndt-Universität Greifswald, 2001.
  • [16] D. Kugiumtzis. Direct-coupling information measure from nonuniform embedding. Physical Review E, 87(6):062918, 2013.
  • [17] J. Runge, J. Heitzig, V. Petoukhov, and J. Kurths. Escaping the Curse of Dimensionality in Estimating Multivariate Transfer Entropy. Physical Review Letters, 108:258701–1–5, 2012.
  • [18] C. E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423, 1948.
  • [19] B. Pompe and J. Runge. Momentary information transfer as a coupling measure of time series. Physical Review E, 83:051122, May 2011.
  • [20] S. L. Lauritzen. Graphical Models, volume 16. Clarendon Press, Oxford, 1996.
  • [21] M. Eichler. Graphical modelling of multivariate time series. Probability Theory and Related Fields, 153(1):233–268, 2012.
  • [22] J. Runge, V. Petoukhov, and J. Kurths. Quantifying the strength and delay of climatic interactions: the ambiguities of cross correlation and a novel measure based on graphical models. Journal of Climate, 27(2):720–739, 2014.
  • [23] P. Spirtes and C. Glymour. An algorithm for fast recovery of sparse causal graphs. Social Science Computer Review, 9(1):62–72, 1991.
  • [24] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search, volume 81. The MIT Press, Boston, 2000.
  • [25] S. Frenzel and B. Pompe. Partial mutual information for coupling analysis of multivariate time series. Physical Review Letters, 99(20):204101, 2007.
  • [26] J. Runge. Detecting and quantifying causality from time series of complex systems. Phd thesis, Humboldt University, 2014.
  • [27] J. Sun and E.M. Bollt. Causation entropy identifies indirect influences, dominance of neighbors and anticipatory couplings. Physica D, 267:49–57, 2014.
  • [28] J. Runge, J. Heitzig, N. Marwan, and J. Kurths. Quantifying Causal Coupling Strength: A Lag-specific Measure For Multivariate Time Series Related To Transfer Entropy. Physical Review E, 86(6):1–15, 2012.
  • [29] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, 2004.
  • [30] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 1974.
  • [31] M. A. Cane, S. E. Zebiak, and S. C. Dolan. Experimental forecasts of El Niño. Nature, 321:827–832, 1986.
  • [32] A. G. Barnston, Michael K. Tippett, Michelle L. L’Heureux, Shuhua Li, and David G. DeWitt. Skill of Real-Time Seasonal ENSO Model Predictions during 2002-11: Is Our Capability Increasing? Bulletin of the American Meteorological Society, 93(5):631–651, 2012.
  • [33] T. M. Smith, R. W. Reynolds, T. C. Peterson, and J. Lawrimore. Improvements to NOAA’s historical merged land-ocean surface temperature analysis (1880-2006). Journal of Climate, 21(10):2283–2296, 2008.
  • [34] D. Dommenget, T. Bayr, and C. Frauen. Analysis of the non-linearity in the pattern and time evolution of El Nino southern oscillation. Climate Dynamics, 40(11-12):2825–2847, 2013.
  • [35] A. J. Gámez, C. S. Zhou, A. Timmermann, and J. Kurths. Nonlinear dimensionality reduction in climate data. Nonlinear Processes in Geophysics, 11:393–398, 2004.
  • [36] J. Ludescher, A. Gozolchiani, M. I. Bogachev, A. Bunde, S. Havlin, and Hans Joachim Schellnhuber. Improved El Niño forecasting by cooperativity detection. Proceedings of the National Academy of Sciences, 110(29):11742–11745, 2013.