1 Introduction
Predicting a hierarchy of variables is an interesting application of machine learning. For instance, in hierarchical multilabel classification the classes are organized in a hierarchy: an example that belongs to some class automatically belongs to all its superclasses (Vens et al., 2008). In this paper we focus on the problem of predicting time series that are organized into a hierarchy. For example, the total visitors of a country can be disaggregated by region; the visitors of each region can be further disaggregated by subregion. When forecasting, the summing constraints should be satisfied: the forecasted visitors of a region should match the sum of the forecasted visitors of its subregions.
Within the hierarchy, the most disaggregated time series are called bottom time series. Each aggregated time series (upper time series) is the sum of some bottom time series. We call base forecasts the forecasts computed independently for each time series, ignoring the summing constraints. Reconciliation algorithms (Hyndman et al., 2011, 2016; Wickramasuriya et al., 2018) adjust the base forecasts in order to respect the summing constraints. Such methods have closed-form solutions, and the reconciled forecasts are usually also more accurate than the base forecasts. A limit of these methods is that they return point forecasts, but not a full predictive distribution. The only method for probabilistic reconciliation so far has been proposed by Taieb et al. (2017); due to the added complexity, it does not have a closed-form solution.
We present a novel algorithm based on Bayes' rule, which yields probabilistic reconciliation in closed form. We treat the bottom time series as random variables and assume them to be jointly Gaussian. We define their prior distribution based on the means and variances of the base forecasts. We then update the distribution of the bottom time series based on the information contained in the forecasts of the upper time series; each upper time series is the sum of some bottom ones. Within this framework, we can adopt Bayes' rule for linear-Gaussian models (Bishop, 2006, Chap. 2.3.3) in order to update the distribution of the bottom time series, treating the forecasts of the upper time series as noisy observations of sums of the bottom time series. We then derive the joint predictive distribution for the whole hierarchy from the updated distribution of the bottom time series. We provide analytical formulae for the posterior distribution of the reconciled forecasts of the whole hierarchy. The reconciled bottom forecasts are a linear combination of the forecasts available for the whole hierarchy; the weights of the combination depend on the variances of the forecasts. Assuming that the noise affecting the forecasts of the upper time series is independent of the bottom time series, we prove that our reconciliation minimizes the expected mean squared error among all reconciliations which are linear in the upper time series values.
Reconciliation algorithms can also be applied to the related problem of reconciling forecasts referring to the same variable but computed at different frequencies; these are temporal hierarchies (Athanasopoulos et al., 2017). Also in this case there are summing constraints (e.g., the monthly forecasts for the next year should add up to the forecast produced at the yearly scale), which can be satisfied by applying reconciliation algorithms to the base forecasts (in this case the base forecasts are produced by models trained at different time frequencies).
We compare our novel approach to state-of-the-art reconciliation methods in extensive experiments, involving both hierarchical time series and temporal hierarchies. We show that in general our approach compares favorably to the competitors.
The paper is organized as follows. Section 2 introduces time series reconciliation and the state-of-the-art MinT algorithm. Section 3 presents our algorithm and proves its optimality. In Section 4 we analytically compare MinT and our approach. In Section 5 we present the problem of reconciling temporal hierarchies; we present our experiments in Section 6 and the conclusions in Section 7.
2 Time series reconciliation
We consider a group of time series organized in a hierarchy. A reconciliation algorithm takes as input the base forecasts for the whole hierarchy and returns the reconciled forecasts for the whole hierarchy. Fig. 1 shows an example of hierarchy. We could interpret it as the revenues of a company, which are disaggregated first by product (A and B) and then by region (r1 and r2). The bottom time series are shaded.
We now introduce our notation. The vector of observations at time $t$ is $\mathbf{y}_t$. The vector is composed of two parts, namely $\mathbf{y}_t = [\mathbf{u}_t', \mathbf{b}_t']'$; $\mathbf{b}_t$ contains the observations of the bottom time series while $\mathbf{u}_t$ contains the observations of the upper time series (each upper time series is the sum of some bottom time series). The hierarchy in Fig. 1 is characterized by the vectors:

$$\mathbf{u}_t = [y_{T,t},\; y_{A,t},\; y_{B,t}]', \qquad \mathbf{b}_t = [y_{A r_1,t},\; y_{A r_2,t},\; y_{B r_1,t},\; y_{B r_2,t}]',$$

where $y_{T,t}$ denotes the total, $y_{A,t}$ and $y_{B,t}$ the two products, and $y_{A r_1,t}, \ldots, y_{B r_2,t}$ the four bottom (product, region) combinations.
The structure of the hierarchy is encoded by the summing matrix $\mathbf{S}$, such that:

$$\mathbf{y}_t = \mathbf{S}\,\mathbf{b}_t. \qquad (1)$$
The matrix $\mathbf{S}$ for the previous example is:

$$\mathbf{S} = \begin{bmatrix} \mathbf{A} \\ \mathbf{I}_4 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad (2)$$

where the submatrix $\mathbf{A}$ encodes which bottom time series should be summed up in order to obtain each upper time series, and $\mathbf{I}_4$ is the $4 \times 4$ identity matrix.
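A minimal numeric sketch of Eqs. (1)–(2) may help; the Python snippet below is our illustration (with made-up bottom values), not part of the paper's implementation. It builds $\mathbf{S}$ for the hierarchy of Fig. 1 and computes the whole hierarchy from the bottom series:

```python
import numpy as np

# Hypothetical encoding of the hierarchy of Fig. 1: four bottom series
# (A_r1, A_r2, B_r1, B_r2) and three upper series (Total, A, B).
A = np.array([[1, 1, 1, 1],   # Total = A_r1 + A_r2 + B_r1 + B_r2
              [1, 1, 0, 0],   # A     = A_r1 + A_r2
              [0, 0, 1, 1]])  # B     = B_r1 + B_r2
S = np.vstack([A, np.eye(4, dtype=int)])  # summing matrix: y = S b

b = np.array([10, 20, 30, 40])            # made-up bottom observations
y = S @ b                                 # whole hierarchy, upper part first
print(y)  # -> [100  30  70  10  20  30  40]
```

Any vector of bottom values produces a coherent hierarchy in this way; reconciliation methods exploit this by mapping the base forecasts back to a suitable bottom vector.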
We assume to be at time $t$ and that the base forecasts refer to time $t+h$; in order to keep the notation simple, we do not show $h$ in our notation. If forecasts for different time horizons are needed (e.g., $h = 1, 2, 3, \ldots$), the reconciliation is performed independently for each $h$. The vector of base forecasts for the whole hierarchy is $\hat{\mathbf{y}} = [\hat{\mathbf{u}}', \hat{\mathbf{b}}']'$, where $\hat{\mathbf{b}}$ and $\hat{\mathbf{u}}$ collect the base forecasts of the bottom and of the upper time series respectively. Each base forecast is accompanied by its variance.
Grouped time series.
The hierarchy of Fig.1 aggregates the bottom values by first summing the sales of the same product across regions, and then summing the sales of the two products in order to obtain the total. There is hence a unique aggregation path.
Grouped time series contain multiple aggregation paths. For instance, a grouped time series could contain some additional nodes, which first sum the sales of the different products in the same region, and eventually aggregate them into the total. These additional aggregations can be encoded by adding the relevant rows to the matrix $\mathbf{S}$. Once $\mathbf{S}$ has been enriched in this way, reconciliation algorithms can be applied.
Temporal hierarchies.
Temporal hierarchies constitute another reconciliation problem. As an example, we could imagine the four bottom nodes of the hierarchy to contain the forecasts for the quarters of the next year; the two intermediate nodes to contain the forecasts for the two semesters of the next year; and the total node to contain the forecast at the yearly level. The forecasts for the next quarters, semesters and year are produced by models trained independently at different time scales; hence they do not satisfy the summing constraints. Also in this case, reconciliation algorithms can be applied in order to satisfy the summing constraints.
Reconciliation.
The first step of reconciliation is to update the bottom series' forecasts on the basis of the forecasts available for the whole hierarchy. It is usually assumed (Hyndman et al., 2011, 2016; Wickramasuriya et al., 2018) that the updated bottom forecasts are a linear combination of the forecasts available for the whole hierarchy. The goal is hence to find a matrix $\mathbf{P}$ such that:

$$\tilde{\mathbf{b}} = \mathbf{P}\,\hat{\mathbf{y}}, \qquad (3)$$

where $\tilde{\mathbf{b}}$ denotes the updated forecasts for the bottom series. The reconciled forecasts for the whole hierarchy are then obtained as $\tilde{\mathbf{y}} = \mathbf{S}\,\tilde{\mathbf{b}}$.
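As a concrete instance of Eq. (3), bottom-up reconciliation corresponds to the special case $\mathbf{P} = [\mathbf{0} \mid \mathbf{I}]$, which keeps the bottom base forecasts and discards the upper ones. A sketch (ours, with made-up incoherent base forecasts):

```python
import numpy as np

# Any linear reconciliation has the form  b_tilde = P y_hat,  y_tilde = S b_tilde.
# Bottom-up reconciliation is the special case P = [0 | I].
A = np.array([[1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
S = np.vstack([A, np.eye(4)])

# Incoherent base forecasts, upper part first: [Total, A, B, bottom...]
y_hat = np.array([95.0, 28.0, 70.0, 10.0, 20.0, 30.0, 40.0])
P_bu = np.hstack([np.zeros((4, 3)), np.eye(4)])   # bottom-up P

b_tilde = P_bu @ y_hat
y_tilde = S @ b_tilde
print(y_tilde)  # coherent by construction: [100. 30. 70. 10. 20. 30. 40.]
```

Note that $\mathbf{S}\mathbf{P}\mathbf{S} = \mathbf{S}$ holds for this choice of $\mathbf{P}$, a property that becomes relevant in Sec. 2.1.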
The state-of-the-art approach for estimating $\mathbf{P}$ is MinT (Wickramasuriya et al., 2018), which we briefly review in the following.

2.1 The MinT reconciliation
We denote by $\hat{\boldsymbol{\varepsilon}}_{t+h} = \mathbf{y}_{t+h} - \hat{\mathbf{y}}_{t+h}$ the vector of forecast errors at time $t+h$. The base forecasts are assumed to be unbiased. Under this assumption, the vector of reconciled forecasts is unbiased if and only if $\mathbf{S}\mathbf{P}\mathbf{S} = \mathbf{S}$. Wickramasuriya et al. (2018) build a generalized least squares model that minimizes the error of the reconciled forecasts, $\mathbf{y}_{t+h} - \tilde{\mathbf{y}}_{t+h}$. Recall that observations and forecasts refer to $h$ steps ahead, even if this is not shown in the notation. Assuming $\mathbf{S}\mathbf{P}\mathbf{S} = \mathbf{S}$, Wickramasuriya et al. (2018) show that the variance of the error is:

$$\operatorname{Var}[\mathbf{y}_{t+h} - \tilde{\mathbf{y}}_{t+h}] = \mathbf{S}\mathbf{P}\mathbf{W}_h\mathbf{P}'\mathbf{S}',$$

where $\mathbf{W}_h$ is the covariance matrix of the base forecast errors.
The optimal reconciliation matrix $\mathbf{P}$, which minimizes the trace of $\mathbf{S}\mathbf{P}\mathbf{W}_h\mathbf{P}'\mathbf{S}'$ under the constraint $\mathbf{S}\mathbf{P}\mathbf{S} = \mathbf{S}$, is:

$$\mathbf{P} = (\mathbf{S}'\mathbf{W}_h^{-1}\mathbf{S})^{-1}\mathbf{S}'\mathbf{W}_h^{-1}. \qquad (4)$$
Understanding the estimation carried out by the above formula is not immediate; we provide a clarifying example in Sec. 4, which deals with a simple hierarchy. The performance of MinT depends crucially on the estimation of the covariance matrix $\mathbf{W}_h$; five different covariance estimators are tested in Wickramasuriya et al. (2018) to this end.
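Eq. (4) can also be checked numerically; the sketch below (ours) uses an arbitrary positive-definite matrix as a stand-in for $\mathbf{W}_h$ and verifies that the unbiasedness constraint holds by construction:

```python
import numpy as np

# Sketch of Eq. (4) on the hierarchy of Fig. 1, with a made-up W_h.
A = np.array([[1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
S = np.vstack([A, np.eye(4)])

rng = np.random.default_rng(0)
M = rng.standard_normal((7, 7))
W = M @ M.T + 7 * np.eye(7)      # a positive-definite stand-in for W_h

Winv = np.linalg.inv(W)
P = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)   # (S' W^-1 S)^-1 S' W^-1

# Unbiasedness constraint S P S = S holds for any positive-definite W:
assert np.allclose(S @ P @ S, S)
```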
3 Reconciliation via Bayes’ rule
We propose a novel reconciliation algorithm based on Bayes' rule. We treat the bottom time series as random variables, whose prior means and variances equal the estimated moments of the base forecasts. It is common to assume the predictive distribution of a model trained on a single time series to be Gaussian (Hyndman and Khandakar, 2008). We make however a more demanding assumption, namely that the joint predictive distribution of the bottom time series is Gaussian. Random variables that are marginally Gaussian are not necessarily jointly Gaussian: this consideration is for instance at the basis of the probabilistic reconciliation algorithm of Taieb et al. (2017), which is based on copulas. While our assumption is more restrictive, we will show experimentally that it does not hinder the robustness of our method. Moreover, it allows deriving the probabilistic reconciliation in closed form. We assume the prior distribution of the bottom time series to be a multivariate normal:

$$\mathbf{b} \sim N(\hat{\mathbf{b}}, \boldsymbol{\Sigma}_b),$$

whose mean $\hat{\mathbf{b}}$ is the vector of base forecasts of the bottom series and whose covariance matrix $\boldsymbol{\Sigma}_b$ is the covariance of those base forecasts; $\boldsymbol{\Sigma}_b$ can be either diagonal or full, as we discuss in Section 3.2.
We then denote by $\hat{\mathbf{u}}$ the point forecasts for the upper time series and by $\boldsymbol{\Sigma}_u$ their covariance. We assume the joint predictive distribution of the upper forecasts to be Gaussian as well. We regard $\hat{\mathbf{u}}$ as a vector of noisy observations of sums of the bottom time series, which we use to update our prior belief about the bottom time series.
Recall that the hierarchy is defined by the matrix $\mathbf{S}$, whose submatrix $\mathbf{A}$ indicates which bottom time series should be summed in order to obtain each upper time series. We model the base forecasts of the upper time series as noisy observations of sums of the bottom time series:

$$\hat{\mathbf{u}} = \mathbf{A}\mathbf{b} + \boldsymbol{\varepsilon},$$

where $\boldsymbol{\varepsilon}$ is a zero-mean normally distributed vector with covariance $\boldsymbol{\Sigma}_u$, independent of $\mathbf{b}$. Hence $\hat{\mathbf{u}} \mid \mathbf{b} \sim N(\mathbf{A}\mathbf{b}, \boldsymbol{\Sigma}_u)$. We now obtain the posterior distribution for the bottom time series according to the linear-Gaussian model (Bishop, 2006, Chap. 2.3.3). Using Bayes' rule, we update the distribution of some random variables given (noisy) observations of linear combinations of them. The posterior distribution of the bottom time series is:

$$\mathbf{b} \mid \hat{\mathbf{u}} \sim N(\bar{\mathbf{b}}, \bar{\boldsymbol{\Sigma}}_b), \qquad \bar{\mathbf{b}} = \hat{\mathbf{b}} + \mathbf{G}\,(\hat{\mathbf{u}} - \mathbf{A}\hat{\mathbf{b}}), \qquad (5)$$

where $\mathbf{G} = \boldsymbol{\Sigma}_b\mathbf{A}'(\boldsymbol{\Sigma}_u + \mathbf{A}\boldsymbol{\Sigma}_b\mathbf{A}')^{-1}$. Hence the reconciled bottom forecasts are a weighted combination of the base forecasts and of the difference between the base forecasts of the upper time series ($\hat{\mathbf{u}}$) and the bottom-up forecasts ($\mathbf{A}\hat{\mathbf{b}}$). The weights depend on the variances of the forecasts, as we show in Sec. 4.
Interestingly, Eq. (5) has the same structure as the update step of a Kalman filter (Durrant-Whyte and Henderson, 2008, Sec. 3.1), where $\mathbf{G}$ is referred to as the gain matrix. Indeed, the Kalman filter updates the prior distribution of the state variables (the bottom time series in our case) on the basis of noisy observations (the forecasts for the upper time series in our case) of linear combinations of the state variables. The posterior covariance is:
$$\bar{\boldsymbol{\Sigma}}_b = \boldsymbol{\Sigma}_b - \mathbf{G}\mathbf{A}\boldsymbol{\Sigma}_b. \qquad (6)$$
Eventually we obtain the reconciled forecasts for the whole hierarchy as $\tilde{\mathbf{y}} = \mathbf{S}\bar{\mathbf{b}}$. The covariance of the reconciled forecasts is obtained by using Eq. (6): $\tilde{\boldsymbol{\Sigma}} = \mathbf{S}\bar{\boldsymbol{\Sigma}}_b\mathbf{S}'$.
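The update of Eqs. (5)–(6) can be sketched in a few lines; the numbers below are made up and anticipate the two-bottom example of Sec. 4:

```python
import numpy as np

# Minimal sketch of Eqs. (5)-(6) on a two-bottom hierarchy (made-up numbers).
A = np.array([[1.0, 1.0]])       # one upper series: u = b1 + b2
b_hat = np.array([10.0, 20.0])   # prior mean: bottom base forecasts
Sigma_b = np.diag([4.0, 1.0])    # prior covariance of the bottom forecasts
u_hat = np.array([36.0])         # base forecast of the upper series
Sigma_u = np.array([[1.0]])      # its variance

# Gain matrix and posterior moments
G = Sigma_b @ A.T @ np.linalg.inv(Sigma_u + A @ Sigma_b @ A.T)
b_bar = b_hat + G @ (u_hat - A @ b_hat)   # posterior mean, Eq. (5)
Sigma_bar = Sigma_b - G @ A @ Sigma_b     # posterior covariance, Eq. (6)

# Reconciled hierarchy: coherent by construction
S = np.vstack([A, np.eye(2)])
y_tilde = S @ b_bar
print(b_bar, y_tilde[0])
```

Here the bottom forecasts 10 and 20 are pulled towards the upper forecast 36: the posterior means are 14 and 21, whose sum (35) lies between the bottom-up value (30) and $\hat{u}$ (36), with the less reliable series ($\sigma_1^2 = 4$) adjusted more.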
3.1 Optimality of Gaussian Bayesian reconciliation
In the Gaussian case, the conditional expectation of $\mathbf{b}$ given $\hat{\mathbf{u}}$ is also the best linear unbiased predictor of $\mathbf{b}$ given $\hat{\mathbf{u}}$. We can write this optimality as:

$$E\big[\|\mathbf{b} - \bar{\mathbf{b}}\|^2\big] \le E\big[\|\mathbf{b} - \mathbf{b}^*\|^2\big], \qquad (7)$$

where $\mathbf{b}^*$ is any predictor of $\mathbf{b}$ which is a linear function of $\hat{\mathbf{u}}$.
Our objective is to find the vector of reconciled predictions $\tilde{\mathbf{y}}$ that is as close as possible to the actual observations $\mathbf{y}$.
In the following proposition we show that our reconciled predictions minimize the mean squared error among all possible linear reconciliation methods.
Proposition 1
Let $\bar{\mathbf{b}}$ denote the prediction of the bottom time series reconciled via the Bayesian approach of Eq. (5). The forecasts for the whole hierarchy, obtained as $\tilde{\mathbf{y}} = \mathbf{S}\bar{\mathbf{b}}$, minimize the expectation of the mean squared error, i.e.,

$$E\big[\|\mathbf{y} - \mathbf{S}\bar{\mathbf{b}}\|^2\big] \le E\big[\|\mathbf{y} - \mathbf{S}\mathbf{b}^*\|^2\big],$$

among all possible reconciled vectors $\mathbf{S}\mathbf{b}^*$, with $\mathbf{b}^*$ linear in $\hat{\mathbf{u}}$.
3.2 The covariance matrices $\boldsymbol{\Sigma}_b$, $\boldsymbol{\Sigma}_u$
The prior distribution for $\mathbf{b}$ and the observation model for $\hat{\mathbf{u}}$ require the definition of the covariance matrices $\boldsymbol{\Sigma}_b$ and $\boldsymbol{\Sigma}_u$. In the following we show that the $(i,j)$ element of $\boldsymbol{\Sigma}_b$ equals the covariance of the residuals of the base forecasts for the $i$-th and $j$-th bottom time series; we can thus estimate the covariance of the forecasts with the covariance of the residuals. As diagonal elements, for example, we obtain the variances of the residuals, i.e., the variances of the base forecasts. We denote by $e_i$ the residual for the base forecast of the $i$-th bottom time series. The covariance of the residuals then equals the covariance of the corresponding forecasts; the derivation follows from the law of total expectation and from the conditional independence of the forecasts.
Hence we can estimate the covariance matrix $\boldsymbol{\Sigma}_b$ with the sample covariance of the residuals. The diagonal elements thus contain the variances of the residuals. The same result also holds for $\boldsymbol{\Sigma}_u$. In our implementation we assume $\boldsymbol{\Sigma}_b$ and $\boldsymbol{\Sigma}_u$ to be either diagonal or full; in the latter case we compute them using the graphical lasso procedure of Zhao et al. (2012).
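A sketch of this estimation step (ours; synthetic residuals stand in for the in-sample residuals of real bottom-series models):

```python
import numpy as np

# Sigma_b estimated as the sample covariance of the residuals.
# Here we draw synthetic residuals from a known covariance instead of
# collecting them from fitted forecasting models.
rng = np.random.default_rng(1)
true_cov = np.array([[2.0, 0.6], [0.6, 1.0]])
residuals = rng.multivariate_normal(np.zeros(2), true_cov, size=5000)

Sigma_b_hat = np.cov(residuals, rowvar=False)   # estimate of Sigma_b
print(Sigma_b_hat)  # close to true_cov for a long residual history
```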
4 Analytical reconciliation of a simple hierarchy
In this section we analytically compare MinT and the Bayesian reconciliation on a simple hierarchy constituted by two bottom time series ($b_1$ and $b_2$) and an upper time series $u$.
4.1 Bayesian reconciliation
The base forecasts for the bottom time series are constituted by the point forecasts $\hat{b}_1$ and $\hat{b}_2$, with variances $\sigma_1^2$ and $\sigma_2^2$. Before reconciliation, our beliefs about $b_1$ and $b_2$ are hence:

$$\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \sim N\!\left(\begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix}, \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}\right).$$

For the moment we assume $\boldsymbol{\Sigma}_b$ to be diagonal; later we discuss the case of a full $\boldsymbol{\Sigma}_b$. The summing matrix is:

$$\mathbf{S} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{A} = [1 \;\; 1].$$
The updated forecasts for the bottom time series are:

$$\bar{\mathbf{b}} = \hat{\mathbf{b}} + \mathbf{G}\,(\hat{u} - \hat{b}_1 - \hat{b}_2), \qquad (8)$$

where $\hat{u}$ is the base forecast for $u$. Since $\mathbf{A}\boldsymbol{\Sigma}_b\mathbf{A}' = \sigma_1^2 + \sigma_2^2$ and $\boldsymbol{\Sigma}_b\mathbf{A}' = [\sigma_1^2 \;\; \sigma_2^2]'$, we get:

$$\mathbf{G} = \frac{1}{\sigma_u^2 + \sigma_1^2 + \sigma_2^2} \begin{bmatrix} \sigma_1^2 \\ \sigma_2^2 \end{bmatrix},$$

where $\sigma_u^2$ denotes the variance of $\hat{u}$.
We can expand equation (8) as:

$$\bar{b}_1 = \hat{b}_1 + \frac{\sigma_1^2}{\sigma_u^2 + \sigma_1^2 + \sigma_2^2}\,(\hat{u} - \hat{b}_1 - \hat{b}_2), \qquad (9)$$

$$\bar{b}_1 = \frac{(\sigma_u^2 + \sigma_2^2)\,\hat{b}_1 + \sigma_1^2\,(\hat{u} - \hat{b}_2)}{\sigma_u^2 + \sigma_1^2 + \sigma_2^2}, \qquad (10)$$

and analogously for $\bar{b}_2$. Equation (9) shows that if the base forecasts are already coherent (i.e., $\hat{u} = \hat{b}_1 + \hat{b}_2$), they remain unchanged. Otherwise, they are shifted by an amount which is proportional to the incoherence of the base forecasts. Equation (10) shows that the reconciled bottom forecast is a weighted average of $\hat{b}_1$ and $(\hat{u} - \hat{b}_2)$, with weights proportional to $(\sigma_u^2 + \sigma_2^2)$ and $\sigma_1^2$ respectively.
The updated covariance matrix of the bottom time series can be computed with Eq. (6). In the example above, in particular:

$$\bar{\Sigma}_{12} = -\frac{\sigma_1^2\,\sigma_2^2}{\sigma_u^2 + \sigma_1^2 + \sigma_2^2}.$$

This value is smaller than the base covariance (zero), as a result of the additional information which is now available. In particular, this off-diagonal element is negative because, when a bottom time series increases, the other one should decrease in order to be compatible with the (noisy) forecast available for the upper time series.
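The identities above can be verified numerically; the snippet below (ours, with made-up variances and forecasts) checks that Eqs. (9) and (10) agree and that the posterior cross-covariance is negative:

```python
import numpy as np

# Numeric check on the simple hierarchy (all numbers made up):
var_u, var_1, var_2 = 1.0, 4.0, 1.0        # sigma_u^2, sigma_1^2, sigma_2^2
b1_hat, b2_hat, u_hat = 10.0, 20.0, 36.0   # base forecasts
denom = var_u + var_1 + var_2

# Eq. (9): shift proportional to the incoherence u_hat - b1_hat - b2_hat
b1_bar = b1_hat + var_1 / denom * (u_hat - b1_hat - b2_hat)

# Eq. (10): the same value, as a weighted average of b1_hat and (u_hat - b2_hat)
b1_bar_alt = ((var_u + var_2) * b1_hat + var_1 * (u_hat - b2_hat)) / denom
assert np.isclose(b1_bar, b1_bar_alt)      # both equal 14.0

# Posterior cross-covariance from Eq. (6): negative, as discussed above
cov12 = -var_1 * var_2 / denom
```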
Since the reconciled bottom forecasts are a linear combination of the base forecasts of the whole hierarchy, we can make explicit the equivalent matrix $\mathbf{P}_B$ of the Bayesian reconciliation. If $\boldsymbol{\Sigma}_b$ is diagonal, we get:

$$\mathbf{P}_B = \begin{bmatrix} g_1 & 1 - g_1 & -g_1 \\ g_2 & -g_2 & 1 - g_2 \end{bmatrix},$$

where $g_1 = \sigma_1^2 / (\sigma_u^2 + \sigma_1^2 + \sigma_2^2)$ and $g_2 = \sigma_2^2 / (\sigma_u^2 + \sigma_1^2 + \sigma_2^2)$.
If instead $\boldsymbol{\Sigma}_b$ is full, we get the same structure with:

$$g_1 = \frac{\sigma_1^2 + \sigma_{12}}{\sigma_u^2 + \sigma_1^2 + \sigma_2^2 + 2\sigma_{12}}, \qquad g_2 = \frac{\sigma_2^2 + \sigma_{12}}{\sigma_u^2 + \sigma_1^2 + \sigma_2^2 + 2\sigma_{12}},$$

where we denote by $\sigma_{12}$ the covariance $\operatorname{Cov}(b_1, b_2)$.
4.2 The MinT reconciliation
We now make explicit the matrix $\mathbf{P}$ of MinT for the same example. As is clear from equation (4), the choice of the matrix $\mathbf{W}$ affects the value of $\mathbf{P}$. Wickramasuriya et al. (2018) consider different options for the estimation of $\mathbf{W}$. The best results are obtained using a full covariance estimated via shrinkage; hence in the following we assume $\mathbf{W}$ to be full.
We thus have:

$$\mathbf{W} = \begin{bmatrix} \sigma_u^2 & \sigma_{u1} & \sigma_{u2} \\ \sigma_{u1} & \sigma_1^2 & \sigma_{12} \\ \sigma_{u2} & \sigma_{12} & \sigma_2^2 \end{bmatrix},$$

where $\sigma_{u1}$, $\sigma_{u2}$, $\sigma_{12}$ are the covariances between $(u, b_1)$, $(u, b_2)$ and $(b_1, b_2)$ respectively. We can now use equation (4) to compute $\mathbf{P}_M$, obtaining:

$$\mathbf{P}_M = \begin{bmatrix} m_1 & 1 - m_1 & -m_1 \\ m_2 & -m_2 & 1 - m_2 \end{bmatrix},$$

where $m_1 = (\sigma_1^2 + \sigma_{12} - \sigma_{u1})/c$, $m_2 = (\sigma_2^2 + \sigma_{12} - \sigma_{u2})/c$ and $c = \sigma_u^2 + \sigma_1^2 + \sigma_2^2 + 2\sigma_{12} - 2\sigma_{u1} - 2\sigma_{u2}$.
Hence the weights of MinT depend also on the covariances between the forecasts of the bottom layer and those of the upper layer, while the Bayesian updating assumes conditional independence between those two pieces of information.
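The two reconciliations can be compared numerically. The sketch below (ours, with made-up variances) computes MinT's $\mathbf{P}$ via Eq. (4) using an error covariance with zero upper/bottom covariances, and checks that it returns the same reconciled bottom forecasts as the Bayesian update:

```python
import numpy as np

# With zero upper/bottom error covariances, MinT's Eq. (4) and the
# Bayesian update coincide (made-up variances).
S = np.array([[1.0, 1.0],   # u = b1 + b2 (upper series first)
              [1.0, 0.0],
              [0.0, 1.0]])
A = S[:1, :]                                  # A = [1, 1]
var_u = 2.0
Sigma_b = np.array([[4.0, 0.5], [0.5, 1.0]])  # full bottom covariance

W = np.zeros((3, 3))                          # base-forecast error covariance
W[0, 0] = var_u
W[1:, 1:] = Sigma_b                           # sigma_u1 = sigma_u2 = 0

Winv = np.linalg.inv(W)
P_mint = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)       # Eq. (4)

y_hat = np.array([36.0, 10.0, 20.0])          # [u_hat, b1_hat, b2_hat]
G = Sigma_b @ A.T / (var_u + (A @ Sigma_b @ A.T).item())   # Bayesian gain
b_bayes = y_hat[1:] + G @ (y_hat[:1] - A @ y_hat[1:])      # Eq. (5)

assert np.allclose(P_mint @ y_hat, b_bayes)
```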
4.3 Discussion
Even though MinT and our approach are derived from different starting points (frequentist multivariate regression with correlated errors for the former; Bayesian linear-Gaussian model for the latter), the resulting reconciliations look similar. An advantage of our method is that it yields a full posterior predictive distribution besides the point forecasts.
Another difference is that MinT estimates a larger number of correlations. Thus we expect the empirical performance of our approach to be characterized by smaller variance but higher bias than MinT.
5 Reconciling temporal hierarchies
In temporal hierarchies the same variables is forecasted at different scales. With reference to Fig. 1, imagine the bottom nodes to contain the forecasts for every quarter of next year; the intermediate nodes to contain the forecasts for the two semesters of next year; the top node to contain the yearly forecast. If the forecast at the quarter, semester and yearly level are generated by independent models, they do not satisfy the summing constraints; hence they need to be reconciled.
Athanasopoulos et al. (2017) reconcile temporal hierarchies by using MinT. They consider three different approaches for estimating the covariance matrix, each yielding a different diagonal matrix. Off-diagonal elements are not considered, as it is not possible to reliably estimate the covariance between the errors made by the same model at different numbers of steps ahead.
The approach called structural scaling achieves the best empirical results. This technique assigns the same variance to forecasts which have the same frequency and thus belong to the same layer of the hierarchy. On the hierarchy shown in Fig. 1 with the temporal interpretation explained above, structural scaling hence assigns the same variance to all the quarterly forecasts, even though they refer to different horizons (1 quarter ahead, 2 quarters ahead, 3 quarters ahead, etc.). Moreover, structural scaling assumes the variance of the aggregated time series to be a multiple of the variance of the bottom time series. For instance, if we denote by $\sigma^2$ the variance of the (bottom) quarterly forecasts, structural scaling assigns variance $2\sigma^2$ to the semester forecasts and $4\sigma^2$ to the yearly forecast.
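A sketch of the variances assigned by structural scaling on a quarterly hierarchy (ours; the bottom variance is made up):

```python
import numpy as np

# Structural scaling: variance of an aggregate = number of bottom
# nodes it sums, times the (common) bottom variance.
var_quarter = 3.0                                 # made-up bottom variance
n_bottom = {"quarter": 1, "semester": 2, "year": 4}
variances = {k: n * var_quarter for k, n in n_bottom.items()}

# Diagonal of the covariance matrix for the whole hierarchy,
# upper levels first: [year, sem1, sem2, q1, q2, q3, q4]
diag_W = var_quarter * np.array([4, 2, 2, 1, 1, 1, 1])
print(variances)  # {'quarter': 3.0, 'semester': 6.0, 'year': 12.0}
```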
We can easily apply our Bayesian reconciliation method to temporal hierarchies. We adopt diagonal prior covariances for $\boldsymbol{\Sigma}_b$ and $\boldsymbol{\Sigma}_u$, with the variances originally returned by the models for the different time horizons. In general such variances increase with the horizon $h$; our covariance matrix is therefore in principle more informative than that of structural scaling.
6 Experimental results
6.1 Implementation
We run our experiments in R and every package mentioned is available on CRAN. Our experiments can be reproduced by using the code available at https://github.com/gcorani/hierTs.
We compare our Bayesian approach against MinT, using the implementation available from hts (Hyndman et al., 2018). We run MinT adopting the shrinkage approach for estimating the full covariance matrix, as it yields the best performance in Wickramasuriya et al. (2018). We do not include previous methods (Hyndman et al., 2011, 2016), as they are outperformed by MinT.
We consider two variants of the Bayesian reconciliation, which we call Bayesdiag and Bayescor. The approach Bayesdiag assumes both $\boldsymbol{\Sigma}_b$ and $\boldsymbol{\Sigma}_u$ to be diagonal; the diagonal elements correspond to the variances of the base forecasts, as estimated by the models. The approach Bayescor estimates also the off-diagonal elements of $\boldsymbol{\Sigma}_b$ and $\boldsymbol{\Sigma}_u$. We estimate $\boldsymbol{\Sigma}_b$ and $\boldsymbol{\Sigma}_u$ as the covariance matrices of the residuals of the different models, as discussed in Sec. 3.2.
We consider two algorithms for generating the base forecasts. The first is ets, namely state-space exponential smoothing (Hyndman and Khandakar, 2008). The ets algorithm estimates several exponential smoothing models, characterized by different types of trend (none, multiplicative, additive, damped), seasonality (additive, multiplicative, none) and noise (additive or multiplicative). Model selection is eventually performed using the AIC criterion. The whole procedure is automatic.
Another method suitable for automatic forecasting is auto.arima (Hyndman and Khandakar, 2008). The algorithm first checks whether the time series should be differenced in order to be made stationary. The decision is based on hypothesis tests, which assess the presence of unit roots and seasonal unit roots. Once the relevant differencing has been applied, the time series is assumed to be stationary. At this point, the optimal orders of the autoregressive and of the moving-average parts are obtained through a search heuristic. We use the implementations of ets and auto.arima available from the forecast package (Hyndman and Khandakar, 2008).

We measure the mean squared error (mse) of the $h$-step ahead forecasts over the time series of the whole hierarchy:

$$\text{mse} = \frac{1}{n}\sum_{i=1}^{n} \big(y_i - \tilde{y}_i\big)^2,$$

where $n$ is the number of time series in the hierarchy. We report in particular the median of the mse ratio over each set of experiments, for each forecasting horizon $h$. The mse ratio is defined as:

$$\text{mse ratio} = \frac{\text{mse}_{\text{MinT}}}{\text{mse}_{\text{Bayes}}},$$

where it will be clear from the context whether we refer to Bayesdiag or to Bayescor. Medians larger than 1 imply a better performance of the Bayesian approach and vice versa.
While the median of the mse ratio over a set of experiments is a robust indicator, the mse ratio itself has a skewed distribution, being lower-bounded by 0 and having no upper bound. In order to check significance we therefore consider the logarithm of the mse ratio, which has a symmetric distribution. We adopt in particular the Bayesian signed-rank test (Benavoli et al., 2014), which yields posterior probabilities that are numerically equivalent to the p-values of the one-sided signed-rank test. They can be interpreted as the posterior probability of the median of the log of the mse ratio being positive or negative. We treat posterior probabilities exceeding 90% as positive evidence and posterior probabilities exceeding 95% as strong evidence.

Runtime.
The runtime of Bayesdiag is negligible; it only requires inverting a matrix, which is immediate even for hierarchies containing hundreds of time series. The runtime of Bayescor can reach half a minute (on a standard laptop, for the tourism hierarchy, which contains more than 500 time series) due to the computation of the covariance matrix with the graphical lasso and the selection of the optimal amount of shrinkage.
6.2 Simulation study
We run a simulation study on the two-level hierarchy described in Section 4. We generate the bottom time series using two ARMA(2,2) models, drawing their parameters uniformly from the invertible and stationary region. The noise of the two bottom time series is multivariate normal, with correlation ranging between +0.8 and −0.8. We use both auto.arima and ets as estimators and we consider sample sizes of 20 and 200. For each setting we run 500 simulations. When ets is used as estimator, the model is misspecified. In this case no significant difference can be detected between MinT, Bayescor and Bayesdiag (the medians of the mse ratios are always very close to 1).
When instead auto.arima is used as estimator and a long time series is available (n=200 in our simulations), MinT reduces the mse by about 4% compared to Bayesdiag and by about 3% compared to Bayescor. In this setting the correct data-generating model can be retrieved and, since the time series are long, all the covariances can be accurately estimated. This helps MinT perform better than our approach. Yet this setting is not very realistic. In the next sections we analyze complex hierarchies of real time series.
6.3 Grouped time series
We consider two real hierarchical data sets (Hyndman et al., 2018): infantgts and tourism. The infantgts data set is available within the hts package, while tourism is available in raw format from https://robjhyndman.com/publications/MinT/. We summarize their main properties in Tab. 1; we are unaware of other public hierarchical data sets.
data set  frequency  length  number of TS  layers 

infantgts  yearly  71  27  4 
tourism  monthly  228  555  8 
The infantgts data set contains infant mortality counts in Australia, disaggregated by sex and by eight different states. Each time series contains 71 yearly observations, covering the period 1933–2003. This is a grouped time series, whose hierarchy contains two possible aggregation paths. The bottom layer contains 16 time series (8 states × 2 sexes). The first path aggregates the counts of the same sex across states, yielding 2 nodes (one for each sex) in the intermediate level; a further aggregation yields the total. The second path sums males and females in each state, yielding 8 nodes (one for each state) as a first aggregation; a further aggregation yields the total.
The tourism data set regards the number of nights spent by Australians away from home. The time series cover the period 1998–2016 with monthly frequency. There are 304 bottom time series, referring to 76 regions and 4 purposes. The first aggregation path sums the purposes within each region, yielding 76 time series (one for each region); such values are further aggregated into zones (27) and states (7), before reaching the total (1). A second path aggregates the bottom time series of the same zone (yielding 108 intermediate time series: 27 zones × 4 purposes), which are then aggregated into states (yielding 28 time series: 7 states × 4 purposes) and then over states (yielding 4 time series: one for each purpose). The last aggregation yields the total. Overall the hierarchy contains 555 time series.
We reconcile each hierarchy 50 times: each time we extend the training data by one time step. Each time we reconcile the forecasts referring to 1 step, 2 steps, 3 steps and 4 steps ahead. There are no particular patterns across the results obtained at different forecast horizons (Tab. 2 and 3); hence we comment on the results averaged over the different time horizons.
From Tables 2 and 3 it is clear that modelling the correlation between time series is important: both MinT and Bayescor outperform Bayesdiag. In the following we thus focus on comparing Bayescor and MinT.
On the infantgts data set, the median mse ratios are 1.05 and 1.02 in the arima and in the ets case respectively (hence both in favor of Bayescor). The Bayesian signed-rank test reports strong evidence in favor of Bayescor in the arima case but no conclusive evidence in the ets case.
dset  h  estimator  Bayesdiag  Bayescor 

infantgts  1  arima  1.02  1.11 
infantgts  2  arima  1.04  1.06 
infantgts  3  arima  1.01  1.01 
infantgts  4  arima  1.01  1.01 
avg.  1.02  1.05  
p (median (mse ratio) >1)  >.99  >.99  
p (median (mse ratio) <1)  <.01  <.01  
infantgts  1  ets  0.97  1.03 
infantgts  2  ets  0.99  1.06 
infantgts  3  ets  0.99  1.00 
infantgts  4  ets  0.99  0.99 
avg.  0.98  1.02  
p (median (mse ratio) >1)  0.09  0.66  
p (median (mse ratio) <1)  0.91  0.34 
dset  h  estimator  Bayesdiag  Bayescor 

tourism  1  arima  0.94  0.97 
tourism  2  arima  0.91  1.01 
tourism  3  arima  0.90  0.99 
tourism  4  arima  0.95  0.98 
avg.  0.92  0.99  
p (median (mse ratio) >1)  <0.01  0.09  
p (median (mse ratio) <1)  >0.99  0.91  
tourism  1  ets  0.94  1.07 
tourism  2  ets  0.98  1.06 
tourism  3  ets  0.97  1.04 
tourism  4  ets  0.95  1.06 
avg  0.96  1.06  
p (median (mse ratio) >1)  <0.01  >.99  
p (median (mse ratio) <1)  >0.99  <0.01 
On the tourism data set, the median mse ratios are 0.99 and 1.06 in the arima and in the ets case respectively. The Bayesian signed-rank test shows positive evidence in favor of MinT in the arima case, and strong evidence in favor of Bayescor in the ets case. Overall, these experiments show a slight advantage of Bayescor over MinT in terms of median performance.
We further compare the two methods by analysing the variability of the results across experiments and by comparing them also to the base forecasts. In each experiment we compute the log of the relative mean squared error of each method, defined as:

$$\log(\text{relative mse}) = \log\frac{\text{mse}_{\text{method}}}{\text{mse}_{\text{base}}},$$

where method can be MinT or Bayescor. For the statistical analysis we merge the log(relative mse) values referring to different forecast horizons. Fig. 2 shows the boxplots referring to the different cases (infantgts/tourism, ets/arima). The area under the blue dashed line is the region where the reconciliation methods have a lower mse than the base forecasts, and vice versa. The most extreme values correspond to large reductions of mse compared to the base forecasts.
Bayescor has much lower variance than MinT. Moreover, its 75th percentile (meaning that it improves over the base forecasts in more than 75% of the reconciliations) is below 0 in three experimental studies out of four; MinT never achieves such a result. Also the 95th percentiles of Bayescor (its 95th-percentile worst case when compared to the base forecasts) are much lower than those of MinT. On the other hand, MinT achieves lower values of the low percentiles (5th and 25th) than Bayescor.
We argue that in general Bayescor compares favorably to MinT: it slightly reduces the median error and it largely reduces the variance of the performance; moreover it yields the predictive distribution of the reconciled forecasts and not only the point forecasts.
6.4 Temporal Hierarchies
The application of MinT to temporal hierarchies is due to Athanasopoulos et al. (2017), who called the resulting algorithm thief; it is available from the thief package (Hyndman and Kourentzes, 2018). We run thief with structural scaling and we compare it with Bayesdiag. The covariance matrix of Bayesdiag contains the variances estimated by the models for the different forecast horizons. For instance, if a hierarchy contains 12 monthly forecasts in the bottom layer and 1 yearly forecast in the upper layer, the covariance matrix contains the variances of the monthly forecasts 1 step up to 12 steps ahead, and the variance of the yearly forecast. On each time series we measure the mse over the whole hierarchy yielded by thief and by Bayesdiag. We repeat the experiments using arima and ets as base forecast methods.
frequency  monthly (M3)  quarterly (M3)  weekly (AE)  

fmethod  ets  arima  ets  arima  arima 
num time series  1428  1428  756  756  13 (×20) 
median  1.01  0.99  1.09  1.06  1.14 
p(median mse ratio >1)  0.96  0.05  >0.99  >0.99  >0.99 
p(median mse ratio <1)  0.04  0.95  <0.01  <0.01  <0.01 
We run the experiments on temporal hierarchies using the time series of the M3 competition, available from the Mcomp package (Hyndman, 2018); it contains 1428 monthly time series and 756 quarterly time series. Moreover we consider the 13 weekly time series provided within the thief package. We reconcile each time series using thief and Bayesdiag, keeping track of their mse.
For the monthly series, we build the temporal hierarchy by aggregating them into 2-month, 4-month and 6-month aggregates. Each time series of the Mcomp package is split into training and test set. We produce forecasts for one year ahead, and we use the first year of the test set to evaluate the reconciled forecasts.
We report the results in Table 4. The performance of thief and Bayesdiag is balanced on the monthly time series: Bayesdiag prevails in the ets case (median mse ratio 1.01, with strong evidence in its favor), and thief prevails in the arima case (median mse ratio 0.99, with positive evidence in favor of thief).
We then run experiments on the 756 quarterly time series. The training length varies between 16 and 64 quarters. We aggregate each quarterly time series to the semiannual and to the annual level. We produce forecasts for one year ahead, reconcile them, and use the first year of the test set in order to measure the mse of the reconciled forecasts. Bayesdiag consistently outperforms thief in both the ets case (median mse ratio 1.09, with strong evidence in favor of Bayesdiag) and the arima case (median mse ratio 1.06, with strong evidence in favor of Bayesdiag).
We finally consider the AE dataset provided with the thief package. It contains the weekly demand of Accident & Emergency departments in the UK, from November 2010 to June 2015, in 13 different weekly time series. We aggregate each time series to different frequencies (weekly, 2-weekly, 4-weekly, quarterly, biannual, annual). As in the previous cases, we evaluate the reconciled forecasts over a test set of one year. We run only the arima model, as the ets algorithm does not support this type of periodicity. To obtain a larger sample of results we reconcile each time series 20 times, each time sliding both training and test set one week ahead; we thus perform 20 × 13 = 260 reconciliations. Bayes-diag prevails also in this case: the median mse ratio is 1.14, and the signed-rank test reports strong statistical evidence in favor of Bayes-diag over thief.
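The sliding-window evaluation described above can be sketched as follows. The helper `reconcile_and_mse` is a hypothetical stand-in for running either reconciliation method and measuring its mse; the train/test lengths are illustrative (three years of weekly training data, one year of test data).

```python
import numpy as np

def sliding_mse_ratios(series, n_slides=20, train_len=156, test_len=52,
                       reconcile_and_mse=None):
    """Reconcile a series n_slides times, sliding training and test set
    one week ahead each time, and return the median mse ratio."""
    ratios = []
    for s in range(n_slides):
        train = series[s : s + train_len]
        test = series[s + train_len : s + train_len + test_len]
        mse_thief = reconcile_and_mse("thief", train, test)
        mse_bayes = reconcile_and_mse("bayes-diag", train, test)
        ratios.append(mse_thief / mse_bayes)
    return np.median(ratios)
```

A median ratio above 1 means thief incurred the larger mse, i.e. evidence in favor of Bayes-diag.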
Finally, we assess the variability of the performance of both thief and Bayes-diag, using the log of the relative mse. The boxplots are given in Fig. 3. The scale of the graphs is quite large, as the whiskers of the boxplots generally extend beyond ±1, which represents a 10-fold increase or decrease of mse compared to the base forecasts. In two cases out of three the 75th percentile of both approaches is below 0, indicating a large improvement over the base forecasts. The thief boxplots generally have smaller variance than those of Bayes-diag, as a result of the simpler approach adopted for estimating the covariance matrix. Hence Bayes-diag is better in terms of the median and the lower percentiles, while thief is better in terms of the upper percentiles. This means that in the worst cases, which imply a worsening compared to the base forecasts, thief limits the damage better. Yet in the weekly case study, which contains plenty of data, the 95th percentile of Bayes-diag is about the same as that of thief, while all the other percentiles are in favor of Bayes-diag. This is probably due to the fact that, thanks to the large sample size, the variances estimated by the models are reliable, which is an advantage for Bayes-diag. We conclude that in general Bayes-diag compares favorably to thief for the reconciliation of temporal hierarchies.
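The quantity shown in the boxplots is the log (base 10, under our reading of the ±1 whiskers) of the mse of the reconciled forecasts relative to the mse of the base forecasts; the function name below is ours.

```python
import numpy as np

def log_relative_mse(mse_reconciled, mse_base):
    """log10 of the mse relative to the base forecasts.
    Values below 0 mean the reconciliation improved on the base forecasts;
    -1 and +1 correspond to a 10-fold decrease and increase of mse."""
    return np.log10(np.asarray(mse_reconciled, dtype=float) /
                    np.asarray(mse_base, dtype=float))

print(log_relative_mse([0.5, 5.0], [5.0, 0.5]))  # a 10-fold decrease and increase
```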
7 Conclusions
We have presented a novel approach for reconciling hierarchical time series: it yields probabilistic reconciliation in closed form and it empirically compares favorably to state-of-the-art methods. We recommend Bayes-corr for the reconciliation of grouped time series and Bayes-diag for the reconciliation of temporal hierarchies. In the future it would be interesting to adopt Bayesian time series models to produce the base forecasts: they could provide a more reliable estimation of the variances, which could make the reconciliation more accurate.
Acknowledgements
We thank Philipp Hennig for a discussion via email about Bayesian updating. We acknowledge support from the Swiss NSF grants nos. IZKSZ2–162188 and 167199–NRP 75 Big Data.
References
Athanasopoulos G, Hyndman RJ, Kourentzes N, Petropoulos F (2017) Forecasting with temporal hierarchies. European Journal of Operational Research 262(1):60–74
Benavoli A, Mangili F, Corani G, Zaffalon M, Ruggeri F (2014) A Bayesian Wilcoxon signed-rank test based on the Dirichlet process. In: Proceedings of the 31st International Conference on Machine Learning (ICML 2014), pp 1026–1034
Bishop CM (2006) Pattern Recognition and Machine Learning. Springer
Durrant-Whyte H, Henderson TC (2008) Multisensor data fusion. Springer Handbook of Robotics, pp 585–610
Hyndman R (2018) Mcomp: Data from the M-Competitions. URL https://CRAN.R-project.org/package=Mcomp, R package version 2.8
Hyndman R, Lee A, Wang E, Wickramasuriya S (2018) hts: Hierarchical and Grouped Time Series. URL https://CRAN.R-project.org/package=hts, R package version 5.1.5
Hyndman RJ, Khandakar Y (2008) Automatic time series forecasting: the forecast package for R. Journal of Statistical Software 27(3):1–22
Hyndman RJ, Kourentzes N (2018) thief: Temporal HIErarchical Forecasting. URL http://pkg.robjhyndman.com/thief, R package version 0.3
Hyndman RJ, Ahmed RA, Athanasopoulos G, Shang HL (2011) Optimal combination forecasts for hierarchical time series. Computational Statistics & Data Analysis 55(9):2579–2589
Hyndman RJ, Lee AJ, Wang E (2016) Fast computation of reconciled forecasts for hierarchical and grouped time series. Computational Statistics & Data Analysis 97:16–32
Taieb SB, Taylor JW, Hyndman RJ (2017) Coherent probabilistic forecasts for hierarchical time series. In: Precup D, Teh YW (eds) Proceedings of the 34th International Conference on Machine Learning, vol 70, pp 3348–3357
Vens C, Struyf J, Schietgat L, Džeroski S, Blockeel H (2008) Decision trees for hierarchical multi-label classification. Machine Learning 73(2):185
Wickramasuriya SL, Athanasopoulos G, Hyndman RJ (2018) Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization. Journal of the American Statistical Association (accepted):1–45
Zhao T, Liu H, Roeder K, Lafferty J, Wasserman L (2012) The huge package for high-dimensional undirected graph estimation in R. Journal of Machine Learning Research 13(1):1059–1062
Appendix A Proof of Proposition 1
Lemma 1
Consider , , if minimizes , then minimizes .
Proof
Note that the first norm is on and the second on .
where is the Frobenius norm of a matrix. Since does not depend on , the minimizer of minimizes