# Reconciling Hierarchical Forecasts via Bayes' Rule

When time series are organized into a hierarchy, their forecasts must satisfy some summing constraints. Forecasts generated independently for each time series (base forecasts) generally do not satisfy these constraints. Reconciliation algorithms adjust the base forecasts so that they satisfy the summing constraints; in general they also improve accuracy. We present a novel reconciliation algorithm based on Bayes' rule; we discuss under which assumptions it is optimal and we show in extensive experiments that it compares favorably to state-of-the-art reconciliation methods.

## 1 Introduction

Predicting a hierarchy of variables is an interesting application of machine learning. For instance, in hierarchical multi-label classification the classes are organized in a hierarchy: an example that belongs to some class automatically belongs to all its superclasses (Vens et al., 2008).

In this paper we focus on the problem of predicting time series that are organized into a hierarchy. For example, the total visitors of a country can be disaggregated by region; moreover, the visitors of each region can be further disaggregated by sub-region. When forecasting, the summing constraints should be satisfied: the forecasted visitors of a region should match the sum of the forecasted visitors of its sub-regions.

Within the hierarchy, the most disaggregated time series are called bottom time series. Each aggregated time series (upper time series) is the sum of some bottom time series. We call base forecasts the forecasts which are computed independently for each time series, ignoring the summing constraints. Reconciliation algorithms (Hyndman et al., 2011, 2016; Wickramasuriya et al., 2018) adjust the base forecasts in order to respect the summing constraints. Such methods have closed-form solutions. The reconciled forecasts are usually also more accurate than the base forecasts. A limit of these methods is that they return point forecasts, but not a full predictive distribution. The only method for probabilistic reconciliation so far has been proposed by Taieb et al. (2017); due to the added complexity, it does not have a closed-form solution.

We present a novel algorithm based on Bayes' rule, which yields probabilistic reconciliation in closed form. We treat the bottom time series as random variables and we assume them to be jointly Gaussian. We define their prior distribution based on the means and the variances of the base forecasts. We then update the distribution of the bottom time series based on the information contained in the forecasts of the upper time series. In particular, each upper time series is the sum of some bottom ones. Within this framework, we can adopt Bayes' rule for linear-Gaussian models (Bishop, 2006, Chap. 2.3.3) in order to update the distribution of the bottom time series, treating the forecasts of the upper time series as noisy observations of the sums of the bottom time series. We then derive the joint predictive distribution for the whole hierarchy based on the updated distribution of the bottom time series.

We provide analytical formulae for the posterior distribution of the reconciled forecasts of the whole hierarchy. The reconciled bottom forecasts are a linear combination of the forecasts available for the whole hierarchy; the weights of the combination depend on the variances of the forecasts. Assuming that the noise characterizing the forecasts of the upper time series is independent of the bottom time series, we prove that our reconciliation minimizes the expected mean squared error among all possible reconciliations which are linear in the upper time series values.

Reconciliation algorithms can also be applied to the related problem of reconciling forecasts that refer to the same variable but are computed at different frequencies; these are temporal hierarchies (Athanasopoulos et al., 2017). Also in this case there are summing constraints (e.g., the monthly forecasts for the next year should add up to the forecast produced at the yearly scale), which can be satisfied by applying reconciliation algorithms to the base forecasts (in this case the base forecasts are produced by models trained at different time frequencies).

We compare our novel approach to state-of-the-art reconciliation methods in extensive experiments, involving both hierarchical time series and temporal hierarchies. We show that in general our approach compares favorably to the competitors.

The paper is organized as follows. Section 2 introduces time series reconciliation and the state-of-the-art minT algorithm. Section 3 presents our algorithm and proves its optimality. In Section 4 we analytically compare minT and our approach. In Section 5 we present the problem of reconciling temporal hierarchies; we present our experiments in Section 6 and the conclusions in Section 7.

## 2 Time series reconciliation

We consider a group of time series organized in a hierarchy. A reconciliation algorithm takes as input the base forecasts for the whole hierarchy and returns the reconciled forecasts for the whole hierarchy. Fig. 1 shows an example of hierarchy. We could interpret it as the revenues of a company, which are disaggregated first by product (A and B) and then by region (r1 and r2). The bottom time series are shaded.

We now introduce our notation. The vector of observations at time $t$ is $y_t$. It is composed of two parts, namely $y_t = [u_t, b_t]^T$; $b_t$ contains the observations of the bottom time series, while $u_t$ contains the observations of the upper time series (each upper time series is the sum of some bottom time series).

The hierarchy in Fig. 1 is characterized by the vectors:

$$y_t=[y_{\text{Total}},\,y_A,\,y_B,\,y_{A1},\,y_{A2},\,y_{B1},\,y_{B2}]^T=[u_t, b_t]^T, \quad \text{where: } u_t=[y_{\text{Total}},\,y_A,\,y_B]^T, \quad b_t=[y_{A1},\,y_{A2},\,y_{B1},\,y_{B2}]^T.$$

The structure of the hierarchy is encoded by the summing matrix $S$, such that:

$$y_t = S b_t. \tag{1}$$

The matrix $S$ for the previous example is:

$$S=\begin{bmatrix}1&1&1&1\\1&1&0&0\\0&0&1&1\\1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}=\begin{bmatrix}A\\ \hline I\end{bmatrix}, \tag{2}$$

where the sub-matrix $A$ encodes which bottom time series should be summed up in order to obtain each upper time series.
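As a concrete sketch (in Python rather than the R used for the paper's experiments), the summing matrix of this example can be written down explicitly; the bottom values are arbitrary illustrative numbers:

```python
import numpy as np

# Summing matrix S for the example hierarchy: upper rows (Total, A, B)
# over the bottom series (A1, A2, B1, B2), followed by the identity block.
A = np.array([[1, 1, 1, 1],   # Total = A1 + A2 + B1 + B2
              [1, 1, 0, 0],   # A = A1 + A2
              [0, 0, 1, 1]])  # B = B1 + B2
S = np.vstack([A, np.eye(4, dtype=int)])

b_t = np.array([10, 20, 30, 40])   # illustrative bottom observations
y_t = S @ b_t                      # full hierarchy: [Total, A, B, A1, A2, B1, B2]
print(y_t)                         # [100  30  70  10  20  30  40]
```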

We assume to be at time $t$ and that the base forecasts refer to time $t+h$; in order to keep the notation simple, we do not show $h$. If forecasts for different time horizons are needed (e.g., $h=1,2,3,\dots$), the reconciliation is performed independently for each $h$. The vector of base forecasts for the whole hierarchy is $\hat y_t$; we separate the base forecasts of the bottom and of the upper time series, namely $\hat y_t=[\hat u_t, \hat b_t]^T$. Each base forecast is accompanied by its variance.

#### Grouped time series.

The hierarchy of Fig.1 aggregates the bottom values by first summing the sales of the same product across regions, and then summing the sales of the two products in order to obtain the total. There is hence a unique aggregation path.

Grouped time series contain multiple aggregation paths. For instance, a grouped time series could contain some additional nodes, which first sum the sales of different products in the same region, and eventually aggregate them into the total. These additional aggregations can be encoded by adding the relevant rows to the matrix $S$. Once $S$ has been enriched in this way, reconciliation algorithms can be applied.

#### Temporal hierarchies.

Temporal hierarchies constitute another reconciliation problem. As an example, we could imagine the four bottom nodes of the hierarchy to contain the forecast for the quarters of the next year; the two intermediate nodes to contain the forecast for the two semesters of the next year; and the total node to contain the forecast at the yearly level. The forecasts for the next quarters, semesters and year are produced by models trained independently at different time scales; hence they do not satisfy the summing constraints. Also in this case, reconciliation algorithms can be applied in order to satisfy the summing constraints.

#### Reconciliation.

The first step of reconciliation is to update the forecasts of the bottom series on the basis of the forecasts available for the whole hierarchy. It is usually assumed (Hyndman et al., 2011, 2016; Wickramasuriya et al., 2018) that the updated bottom forecasts are a linear combination of the forecasts available for the whole hierarchy. The goal is hence to find a matrix $P$ such that:

$$\tilde b_t = P \hat y_t, \tag{3}$$

where $\tilde b_t$ denotes the updated forecasts of the bottom series. The reconciled forecasts for the whole hierarchy are then obtained as

$$\tilde y_t = S \tilde b_t = S P \hat y_t.$$

The state-of-the-art approach for estimating $P$ is MinT (Wickramasuriya et al., 2018), which we briefly review in the following.

### 2.1 The MinT reconciliation

We denote by $\hat e_t := y_t - \hat y_t$ the vector of forecast errors at time $t$. The base forecasts are assumed to be unbiased. Under this assumption, the vector of reconciled forecasts is unbiased if and only if $SPS = S$. Wickramasuriya et al. (2018) build a generalized least squares model that minimizes the reconciled forecast error:

$$\tilde e_t := y_t - \tilde y_t.$$

Recall that observations and forecasts refer to $h$ steps ahead, even if this is not shown in the notation. Assuming unbiasedness, Wickramasuriya et al. (2018) show that the variance of the error is:

$$\mathrm{Var}[\tilde e \mid I] = S P W P^T S^T,$$

where $W$ is the covariance matrix of the base forecast errors.

The optimal reconciliation matrix, which minimizes $\mathrm{Var}[\tilde e \mid I]$ under the constraint $SPS = S$, is:

$$P = (S^T W^{-1} S)^{-1} S^T W^{-1}. \tag{4}$$

Understanding the estimation carried out by the above formula is not immediate; we provide a clarifying example in Sec. 4, which deals with a simple hierarchy. The performance of MinT depends crucially on the estimation of the covariance matrix $W$; five different covariance estimators are tested in Wickramasuriya et al. (2018) to this end.
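As an illustration, the MinT formula of Eq. (4) can be applied to a minimal hierarchy with one upper and two bottom series. This is a Python sketch with assumed toy numbers, not the paper's R implementation:

```python
import numpy as np

def mint_matrix(S, W):
    # MinT reconciliation matrix of Eq. (4): P = (S' W^{-1} S)^{-1} S' W^{-1}
    Winv = np.linalg.inv(W)
    return np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)

S = np.array([[1.0, 1.0],          # upper series = bottom1 + bottom2
              [1.0, 0.0],
              [0.0, 1.0]])
W = np.diag([2.0, 1.0, 1.0])       # assumed base-forecast error covariance
y_hat = np.array([12.0, 4.0, 5.0]) # incoherent base forecasts: 12 != 4 + 5

P = mint_matrix(S, W)
y_tilde = S @ (P @ y_hat)          # reconciled forecasts, now coherent
print(y_tilde)                     # -> [10.5, 4.75, 5.75]
```

Note that the first reconciled entry equals the sum of the other two, and that $SPS=S$ holds, so unbiased base forecasts stay unbiased after reconciliation.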

## 3 Reconciliation via Bayes’ rule

We propose a novel reconciliation algorithm based on Bayes' rule. We treat the bottom time series as random variables, whose prior means and variances equal the estimated moments of the base forecasts. It is common to assume the predictive distribution of a model trained on a single time series to be Gaussian (Hyndman and Khandakar, 2008). We make however a more demanding assumption: we assume the joint predictive distribution of the bottom time series to be Gaussian. Random variables that are marginally Gaussian are not necessarily jointly Gaussian: this consideration is for instance at the basis of the probabilistic reconciliation algorithm of Taieb et al. (2017), which is based on copulas. While our assumption is more restrictive, we will show experimentally that it does not hinder the robustness of our method. Moreover, it allows deriving the probabilistic reconciliation in closed form.

We assume the prior distribution of the bottom time series to be a multivariate normal:

$$p(B) = N(B;\, \hat b,\, \hat\Sigma_B),$$

whose mean $\hat b$ is the vector of base bottom forecasts and whose covariance matrix $\hat\Sigma_B$ is the covariance of the base forecasts; $\hat\Sigma_B$ can be either diagonal or full, as we discuss in Section 3.2.

We then denote by $\hat u$ the point forecasts of the upper time series and by $\hat\Sigma_U$ their covariance. We assume the joint predictive distribution of the upper forecasts to be Gaussian as well. We consider $\hat u$ as a vector of noisy observations of sums of the bottom time series, which we use to update our prior beliefs about the bottom time series.

Recall that the hierarchy is defined by the matrix $S$, whose sub-matrix $A$ indicates which bottom time series should be summed in order to obtain each upper time series. We model the base forecasts of the upper time series as noisy observations of sums of the bottom time series:

$$\hat U = A B + \varepsilon,$$

where $\varepsilon$ is a zero-mean, normally distributed vector with covariance $\hat\Sigma_U$, independent of $B$. Hence:

$$p(\hat U \mid B) = N(\hat U;\, A B,\, \hat\Sigma_U).$$

We now obtain the posterior distribution for the bottom time series according to the linear-Gaussian model (Bishop, 2006, Chap.2.3.3). Using Bayes’ rule, we update the distribution of some random variables given (noisy) observations of linear combinations of them. The posterior distribution of the bottom time series is:

$$\tilde b := E[B \mid \hat u] = \hat b + \hat\Sigma_B A^T (\hat\Sigma_U + A \hat\Sigma_B A^T)^{-1} (\hat u - A \hat b) = \hat b + G(\hat u - A \hat b), \tag{5}$$

where $G := \hat\Sigma_B A^T (\hat\Sigma_U + A \hat\Sigma_B A^T)^{-1}$. Hence the reconciled bottom forecasts are a weighted linear combination of the base forecasts and of the difference between the bottom-up forecasts ($A\hat b$) and the base forecasts of the upper time series ($\hat u$). The weights depend on the variances of the forecasts, as we show in Sec. 4.

Interestingly, Eq. (5) has the same structure as the update step of a Kalman filter (Durrant-Whyte and Henderson, 2008, Sec. 3.1), where $G$ is referred to as the gain matrix. Indeed the Kalman filter updates the prior distribution of the state variables (the bottom time series in our case) on the basis of noisy observations (the forecasts of the upper time series in our case) of linear combinations of the state variables.

The posterior covariance is:

$$\mathrm{Var}[B \mid \hat u] = \hat\Sigma_B - \hat\Sigma_B A^T (\hat\Sigma_U + A \hat\Sigma_B A^T)^{-1} A \hat\Sigma_B = \hat\Sigma_B - G (\hat\Sigma_U + A \hat\Sigma_B A^T) G^T. \tag{6}$$

Eventually we obtain the reconciled time series for the whole hierarchy as $\tilde y = S \tilde b$. The covariance of the reconciled forecasts is obtained by using Eq. (6): $\mathrm{Var}[\tilde y] = S\, \mathrm{Var}[B \mid \hat u]\, S^T$.
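A minimal numerical sketch of Eqs. (5)-(6), in Python with assumed values (the paper's implementation is in R):

```python
import numpy as np

A = np.array([[1.0, 1.0]])       # one upper series = sum of two bottom series
b_hat = np.array([4.0, 5.0])     # prior mean: base bottom forecasts (assumed)
Sigma_B = np.diag([1.0, 1.0])    # prior covariance of the bottom series
u_hat = np.array([12.0])         # base forecast of the upper series
Sigma_U = np.array([[2.0]])      # its variance

M = Sigma_U + A @ Sigma_B @ A.T                  # innovation covariance
G = Sigma_B @ A.T @ np.linalg.inv(M)             # gain matrix
b_tilde = b_hat + G @ (u_hat - A @ b_hat)        # posterior mean, Eq. (5)
Var_B = Sigma_B - G @ M @ G.T                    # posterior covariance, Eq. (6)

S = np.vstack([A, np.eye(2)])
y_tilde = S @ b_tilde                            # reconciled hierarchy
Var_y = S @ Var_B @ S.T                          # its covariance
print(b_tilde)                                   # -> [4.75, 5.75]
```

Note that the reconciled hierarchy is coherent and that the posterior covariance has a negative off-diagonal entry, anticipating the discussion of Sec. 4.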

### 3.1 Optimality of Gaussian Bayesian reconciliation

In the Gaussian case, the conditional expectation of $B$ given $\hat u$ is also the best linear unbiased predictor of $B$ given $\hat u$. We can write this optimality as

$$\tilde b_t(h) = \operatorname*{arg\,min}_{\beta(\hat u)} E\left[\lVert B_{t+h} - \beta(\hat u) \rVert_2^2 \mid \hat u\right], \tag{7}$$

where $\beta(\hat u)$ is any predictor of $B_{t+h}$ which is a linear function of $\hat u$.

Our objective is to find the vector of reconciled predictions $\tilde y_t(h)$ that is as close as possible to $y_{t+h}$.

In the following proposition we show that our reconciled predictions minimize the mean squared error among all possible linear reconciliation methods.

###### Proposition 1

Let $\tilde b_t(h)$ denote the prediction of the bottom time series reconciled via the Bayesian approach of Eq. (5). The forecasts for the whole hierarchy, obtained as $\tilde y_t(h) = S \tilde b_t(h)$, minimize the expectation of the mean squared error, i.e.

$$\tilde y_t(h) = \operatorname*{arg\,min}_{\zeta(\hat u)} E\left[\lVert y_{t+h} - \zeta(\hat u) \rVert_2^2 \mid \hat u\right],$$

among all possible reconciled vectors $\zeta(\hat u)$.

###### Proof (Sketch)

We first prove that $\tilde b_t(h)$ is unbiased, from which it follows that $\tilde y_t(h)$ is unbiased too. By exploiting the optimality in (7) we obtain the result. We give the complete proof in Appendix A.

### 3.2 The covariance matrices $\hat\Sigma_B$, $\hat\Sigma_U$

The prior distribution for $B$ and the observation model for $\hat U$ require the definition of the covariance matrices $\hat\Sigma_B$ and $\hat\Sigma_U$. In the following we show that the $(i,j)$ element of $\hat\Sigma_B$ equals the covariance of the residuals; we can thus estimate the covariance of the forecasts with the covariance of the residuals. As diagonal elements, for example, we obtain the variances of the residuals, i.e., the variances of the base forecasts. We denote by $\hat e(i)$ the residual of the base forecast of the $i$-th bottom time series. The covariance of the residuals is:

$$\mathrm{Cov}(\hat e(i), \hat e(j) \mid I) = E[(B_i - \hat B_i)(B_j - \hat B_j) \mid I] = E[B_i B_j \mid I] - E[B_i \mid I]\,E[B_j \mid I] = \mathrm{Cov}(B_i, B_j \mid I) = \mathrm{Cov}(\hat B_i, \hat B_j),$$

where the third equality follows from the law of total expectation and the conditional independence of the forecasts.

Hence we can estimate the covariance matrix with the sample covariance of the residuals. The diagonal elements hence contain the variances of the residuals. The same result also holds for $\hat\Sigma_U$. In our implementation we assume $\hat\Sigma_B$, $\hat\Sigma_U$ to be either diagonal or full; in the latter case we compute them using the graphical lasso procedure developed by Zhao et al. (2012).
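A sketch of this estimation step in Python (random numbers stand in for actual model residuals; the paper additionally regularizes the full estimate via graphical lasso):

```python
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.normal(size=(100, 4))   # placeholder: T x n matrix of in-sample
                                        # residuals, one column per bottom model

# diagonal variant: keep only the residual variances
Sigma_B_diag = np.diag(residuals.var(axis=0, ddof=1))

# full variant: sample covariance of the residuals
# (regularized via graphical lasso in the paper's implementation)
Sigma_B_full = np.cov(residuals, rowvar=False)
```

Both estimators share the same diagonal; the full variant adds the off-diagonal covariance terms.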

## 4 Analytical reconciliation of a simple hierarchy

In this section we analytically compare MinT and the Bayesian reconciliation on a simple hierarchy constituted by two bottom time series ($B_1$ and $B_2$) and an upper time series $U$.

### 4.1 Bayesian reconciliation

The base forecasts for the bottom time series are constituted by the point forecasts $\hat b_1$ and $\hat b_2$, with variances $\sigma_1^2$ and $\sigma_2^2$. Before reconciliation, our beliefs about $B_1$ and $B_2$ are hence:

$$\begin{pmatrix}B_1\\B_2\end{pmatrix} \sim N\left[\begin{pmatrix}\hat b_1\\\hat b_2\end{pmatrix},\; \begin{pmatrix}\sigma_1^2 & 0\\ 0 & \sigma_2^2\end{pmatrix}\right].$$

For the moment we assume $\hat\Sigma_B$ to be diagonal; later we discuss the case of a full $\hat\Sigma_B$. The summing matrix is:

$$S=\begin{bmatrix}1&1\\ \hline 1&0\\0&1\end{bmatrix}=\begin{bmatrix}A\\ \hline I\end{bmatrix}.$$

The updated forecasts for the bottom time series are:

$$\tilde b = \hat b + G(\hat u - A\hat b), \tag{8}$$

where $\hat u$ is the base forecast for $U$, with variance $\sigma_u^2$. Since $A = [1 \;\; 1]$ and $\hat\Sigma_B = \mathrm{diag}(\sigma_1^2, \sigma_2^2)$ we get:

$$A\hat\Sigma_B A^T = \sigma_1^2+\sigma_2^2, \qquad \hat\Sigma_B A^T = [\sigma_1^2 \;\; \sigma_2^2]^T, \qquad \hat\Sigma_U + A\hat\Sigma_B A^T = \sigma_u^2+\sigma_1^2+\sigma_2^2,$$

hence $G = [\sigma_1^2 \;\; \sigma_2^2]^T / (\sigma_u^2+\sigma_1^2+\sigma_2^2)$. We can expand equation (8) as

$$\tilde b = \begin{bmatrix}\hat b_1\\\hat b_2\end{bmatrix} + \begin{bmatrix}\sigma_1^2\\\sigma_2^2\end{bmatrix}\frac{\hat u - A\hat b}{\sigma_u^2+\sigma_1^2+\sigma_2^2} \tag{9}$$

$$= \begin{bmatrix}\hat b_1\left(1-\dfrac{\sigma_1^2}{\sigma_u^2+\sigma_1^2+\sigma_2^2}\right)+(\hat u-\hat b_2)\dfrac{\sigma_1^2}{\sigma_u^2+\sigma_1^2+\sigma_2^2}\\[8pt]\hat b_2\left(1-\dfrac{\sigma_2^2}{\sigma_u^2+\sigma_1^2+\sigma_2^2}\right)+(\hat u-\hat b_1)\dfrac{\sigma_2^2}{\sigma_u^2+\sigma_1^2+\sigma_2^2}\end{bmatrix} = \begin{bmatrix}\hat b_1(1-g_1)+(\hat u-\hat b_2)\,g_1\\\hat b_2(1-g_2)+(\hat u-\hat b_1)\,g_2\end{bmatrix}, \tag{10}$$

with $g_1 = \sigma_1^2/(\sigma_u^2+\sigma_1^2+\sigma_2^2)$ and $g_2 = \sigma_2^2/(\sigma_u^2+\sigma_1^2+\sigma_2^2)$.

Equation (9) shows that if the base forecasts are already coherent (i.e., $\hat u = \hat b_1 + \hat b_2$), they remain unchanged. Otherwise, they are shifted by an amount which is proportional to the incoherence of the base forecasts. Equation (10) shows that each reconciled bottom forecast is a weighted average of $\hat b_i$ and $(\hat u - \hat b_j)$, with weights proportional to $(1-g_i)$ and $g_i$ respectively.

The updated covariance matrix of the bottom time series can be computed with Eq. (6). In the example above we obtain in particular

$$\tilde\sigma_{1,2} = \mathrm{Cov}(\tilde B_1, \tilde B_2) = -\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2+\sigma_u^2}.$$

This value is smaller than the base covariance (zero) as a result of the additional information which is now available. In particular, this off-diagonal element is negative because when a bottom time series increases the other one should decrease in order to be compatible with the (noisy) forecast available for the upper time series.
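The negative posterior covariance can be checked numerically; a Python sketch with arbitrary variances:

```python
import numpy as np

s1, s2, su = 1.0, 1.0, 2.0       # sigma_1^2, sigma_2^2, sigma_u^2 (assumed)
Sigma_B = np.diag([s1, s2])
A = np.array([[1.0, 1.0]])

M = np.array([[su]]) + A @ Sigma_B @ A.T
G = Sigma_B @ A.T @ np.linalg.inv(M)
Var_post = Sigma_B - G @ M @ G.T             # posterior covariance, Eq. (6)

# matches the closed form -s1*s2 / (s1 + s2 + su), i.e. -0.25 here
print(Var_post[0, 1])
```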

Since the reconciled bottom forecasts are a linear combination of the base forecasts of the whole hierarchy, we can make explicit the equivalent matrix $P$ of the Bayesian reconciliation. If $\hat\Sigma_B$ is diagonal, we get:

$$P_{\text{Bayes, diagonal}} = \begin{bmatrix} p^{\text{diag}}_1 & 1-p^{\text{diag}}_1 & -p^{\text{diag}}_1 \\ p^{\text{diag}}_2 & -p^{\text{diag}}_2 & 1-p^{\text{diag}}_2 \end{bmatrix},$$

where $p^{\text{diag}}_1 = \sigma_1^2/(\sigma_u^2+\sigma_1^2+\sigma_2^2)$ and $p^{\text{diag}}_2 = \sigma_2^2/(\sigma_u^2+\sigma_1^2+\sigma_2^2)$.

If instead $\hat\Sigma_B$ is originally full, we get:

$$P_{\text{Bayes, full}} = \begin{bmatrix} p^{\text{full}}_1 & 1-p^{\text{full}}_1 & -p^{\text{full}}_1 \\ p^{\text{full}}_2 & -p^{\text{full}}_2 & 1-p^{\text{full}}_2 \end{bmatrix},$$

with $p^{\text{full}}_1 = (\sigma_1^2+\sigma_{1,2})/(\sigma_u^2+\sigma_1^2+\sigma_2^2+2\sigma_{1,2})$ and $p^{\text{full}}_2 = (\sigma_2^2+\sigma_{1,2})/(\sigma_u^2+\sigma_1^2+\sigma_2^2+2\sigma_{1,2})$, where we denote by $\sigma_{1,2}$ the covariance $\mathrm{Cov}(\hat B_1, \hat B_2)$.
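The entries of the gain with a full prior covariance can also be verified numerically; a Python sketch with arbitrary numbers:

```python
import numpy as np

s1, s2, s12, su = 1.0, 2.0, 0.5, 3.0        # sigma_1^2, sigma_2^2, sigma_{1,2}, sigma_u^2
Sigma_B = np.array([[s1, s12], [s12, s2]])  # full prior covariance
A = np.array([[1.0, 1.0]])

D = su + s1 + s2 + 2 * s12                  # sigma_u^2 + A Sigma_B A'
G = Sigma_B @ A.T / D                       # gain for a single upper series

# the gain entries equal (sigma_i^2 + sigma_{1,2}) / D
print(G[0, 0], (s1 + s12) / D)
print(G[1, 0], (s2 + s12) / D)
```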

### 4.2 The MinT reconciliation

We now make explicit the matrix $P$ of MinT for the same example. As is clear from equation (4), the choice of the matrix $W$ affects the value of $P$. Wickramasuriya et al. (2018) consider different options for the estimation of $W$. The best results are obtained using a full covariance estimated via shrinkage; hence in the following we assume $W$ to be full.

We thus have:

$$W=\begin{bmatrix}\sigma_u^2 & \sigma_{u,1} & \sigma_{u,2}\\ \sigma_{u,1} & \sigma_1^2 & \sigma_{1,2}\\ \sigma_{u,2} & \sigma_{1,2} & \sigma_2^2\end{bmatrix},$$

where $\sigma_{u,1}$, $\sigma_{u,2}$, $\sigma_{1,2}$ are the covariances between $\hat u$ and $\hat b_1$, between $\hat u$ and $\hat b_2$, and between $\hat b_1$ and $\hat b_2$ respectively. We can now use equation (4) to compute $P$, obtaining:

$$P_{\text{MinT}} = \begin{bmatrix} p_1 & 1-p_1 & -p_1 \\ p_2 & -p_2 & 1-p_2 \end{bmatrix},$$

where $p_1$ and $p_2$ are functions of all the entries of $W$, including the covariances $\sigma_{u,1}$ and $\sigma_{u,2}$ between the upper and the bottom forecasts.

Hence the weights of MinT depend also on the covariance between the forecasts of the bottom layer and of the upper layers. Instead the Bayesian updating assumes conditional independence between those two pieces of information.

### 4.3 Discussion

Even though MinT and our approach are derived from different starting points (frequentist multivariate regression with correlated errors for the former; Bayesian linear-Gaussian model for the latter), the resulting reconciliations look similar. An advantage of our method is that it yields a full posterior predictive distribution besides the point forecasts.

Another difference is that MinT estimates a larger number of correlations. Thus we expect the empirical performance of our approach to be characterized by smaller variance but higher bias than MinT.

## 5 Reconciling temporal hierarchies

In temporal hierarchies the same variable is forecast at different time scales. With reference to Fig. 1, imagine the bottom nodes to contain the forecasts for every quarter of the next year; the intermediate nodes to contain the forecasts for the two semesters of the next year; the top node to contain the yearly forecast. If the forecasts at the quarterly, semester and yearly level are generated by independent models, they do not satisfy the summing constraints; hence they need to be reconciled.

Athanasopoulos et al. (2017) reconcile temporal hierarchies by using MinT. They consider three different approaches for estimating the covariance matrix, each yielding a different diagonal matrix. Off-diagonal elements are not considered, as it is not possible to reliably estimate the covariance between the errors made by the same model at different horizons.

The approach called structural scaling achieves the best empirical results. This technique assigns the same variance to forecasts which have the same frequency and thus belong to the same layer of the hierarchy. On the hierarchy shown in Fig. 1 with the temporal interpretation explained above, structural scaling hence assigns the same variance to all the quarterly forecasts, even though they refer to different horizons (1 quarter ahead, 2 quarters ahead, 3 quarters ahead, etc.). Moreover, structural scaling assumes the variance of each aggregated time series to be a multiple of the variance of the bottom time series. For instance, if we denote by $\sigma^2$ the variance of the (bottom) quarterly forecasts, structural scaling assigns variance $2\sigma^2$ to the semester forecasts and $4\sigma^2$ to the yearly forecast.
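A sketch of structural scaling for the quarterly/semester/year hierarchy described above (Python; $\sigma^2 = 1$ assumed for illustration):

```python
import numpy as np

sigma2 = 1.0                          # common variance of the quarterly forecasts (assumed)
n_aggregated = [4, 2, 2, 1, 1, 1, 1]  # year, 2 semesters, 4 quarters:
                                      # number of bottom nodes each node sums
variances = sigma2 * np.array(n_aggregated)
Sigma = np.diag(variances)            # diagonal covariance used for reconciliation
print(variances)                      # [4. 2. 2. 1. 1. 1. 1.]
```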

We can easily apply our Bayesian reconciliation method to temporal hierarchies. We adopt diagonal prior covariances for $B$ and $\hat U$, with the variances originally returned by the models for the different time horizons. In general such variances increase with the horizon $h$; our covariance matrix is therefore in principle more informative than that of structural scaling.

## 6 Experimental results

### 6.1 Implementation

We run our experiments in R and every package mentioned is available on CRAN. Our experiments can be reproduced by using the code available at https://github.com/gcorani/hierTs.

We compare our Bayesian approach against MinT, using the implementation available from the hts package (Hyndman et al., 2018). We run MinT adopting the shrinkage approach for estimating the full covariance matrix, as it yields the best performance in Wickramasuriya et al. (2018). We do not include previous methods (Hyndman et al., 2011, 2016), as they are outperformed by MinT.

We consider two variants of the Bayesian reconciliation, which we call Bayes-diag and Bayes-corr. Bayes-diag assumes both $\hat\Sigma_B$ and $\hat\Sigma_U$ to be diagonal; the diagonal elements correspond to the variances of the base forecasts, as estimated by the models. Bayes-corr estimates also the off-diagonal elements of $\hat\Sigma_B$ and $\hat\Sigma_U$: we estimate $\hat\Sigma_B$ and $\hat\Sigma_U$ as the covariance matrices of the residuals of the different models, as discussed in Sec. 3.2.

We consider two algorithms for generating the base forecast. The first is ets, namely state-space exponential smoothing (Hyndman and Khandakar, 2008). The ets algorithm estimates several exponential smoothing models, characterized by different types of trend (none, multiplicative, additive, damped), seasonality (additive, multiplicative, none) and noise (additive or multiplicative). Model selection is eventually performed using the AIC criterion. The whole procedure is automatic.

Another method suitable for automatic forecasting is auto-arima (Hyndman and Khandakar, 2008). The algorithm first checks whether the time series should be differenced in order to be made stationary; the decision is based on hypothesis tests which assess the presence of unit roots and seasonal unit roots. Once the relevant differencing operations are done, the time series is supposed to be stationary. At this point, the optimal orders of the autoregressive and of the moving-average part are obtained through a search heuristic. We use the implementations of ets and auto-arima available from the forecast package (Hyndman and Khandakar, 2008).

We measure the mean squared error of the $h$-step-ahead forecasts over the $m$ time series of the whole hierarchy, which is:

$$\mathrm{mse}(h) = \frac{1}{m}\,[y_{t+h} - \tilde y_t(h)]^T [y_{t+h} - \tilde y_t(h)].$$

We report in particular the median of the mse ratio over each set of experiments, for each forecast horizon $h$. The mse ratio is defined as:

$$\text{mse ratio}(h) = \frac{\mathrm{mse}(\text{MinT}, h)}{\mathrm{mse}(\text{Bayes}, h)},$$

where it will be clear from the context whether we are referring to Bayes-diag or to Bayes-cor. Medians larger than 1 imply a better performance of the Bayesian approach and vice versa.
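The two indicators can be sketched as follows (Python, with arbitrary toy forecasts):

```python
import numpy as np

def mse(y_true, y_rec):
    # mean squared error over the m series of the hierarchy
    e = y_true - y_rec
    return (e @ e) / len(y_true)

y_true = np.array([10.0, 4.0, 6.0])
y_mint = np.array([9.0, 4.0, 6.0])     # hypothetical MinT reconciliation
y_bayes = np.array([10.0, 4.5, 5.5])   # hypothetical Bayesian reconciliation

ratio = mse(y_true, y_mint) / mse(y_true, y_bayes)   # mse ratio(h)
print(ratio > 1)   # True here: the Bayesian forecasts are more accurate
```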

While the median of the mse ratio over a set of experiments is a robust indicator, the mse ratio itself has a skewed distribution, being lower-bounded by 0 and having no upper bound. In order to check significance we therefore consider the logarithm of the mse ratio, which has a symmetric distribution. We adopt in particular the Bayesian signed-rank test (Benavoli et al., 2014), which yields posterior probabilities that are numerically equivalent to the p-values of the one-sided signed-rank test. They can be interpreted as the posterior probability of the median of the log of the mse ratio being positive or negative. We treat posterior probabilities exceeding 90% as positive evidence and posterior probabilities exceeding 95% as strong evidence.

#### Runtime.

The runtime of Bayes-diag is negligible: it only requires inverting a matrix, which is immediate even for hierarchies containing hundreds of time series. The runtime of Bayes-corr can reach half a minute (on a standard laptop, for the tourism hierarchy, which contains more than 500 time series) due to the computation of the covariance matrix with the graphical lasso and the selection of the optimal amount of shrinkage.

### 6.2 Simulation study

We run a simulation study on the two-level hierarchy described in Section 4. We generate the bottom time series using two ARMA(2,2) models, drawing their parameters uniformly from the invertible and stationary region. The noise of the two bottom time series is multivariate normal, with correlation ranging between -0.8 and +0.8. We use both auto-arima and ets as estimators and we consider sample sizes of 20 and 200. For each setting we run 500 simulations. When ets is used as estimator, the model is misspecified. In this case no significant difference can be detected between MinT, Bayes-corr and Bayes-diag (the medians of the mse ratios are always very close to 1).

When instead auto-arima is used as estimator and a long time series is available (n=200 in our simulations), MinT reduces the mse by about 4% compared to Bayes-diag and by about 3% compared to Bayes-corr. In this setting the correct data-generating model can be retrieved and, since the time series are long, all the covariances can be accurately estimated. This helps MinT perform better than our approach. Yet this setting is not very realistic. In the next sections we analyze complex hierarchies of real time series.

### 6.3 Grouped time series

We consider two real hierarchical data sets (Hyndman et al., 2018): infantgts and tourism. The infantgts data set is available within the hts package, while tourism is available in raw format from https://robjhyndman.com/publications/MinT/. We summarize their main properties in Tab. 1; we are unaware of other public hierarchical data sets.

The infantgts data set contains infant mortality counts in Australia, disaggregated by sex and by eight different states. Each time series contains 71 yearly observations, covering the period 1933-2003. This is a grouped time series, whose hierarchy contains two possible aggregation paths. The bottom layer contains 16 time series (8 states x 2 sexes). The first path aggregates the counts of the same sex across states, yielding 2 nodes (one for each sex) in the intermediate level; a further aggregation yields the total. The second path sums males and females in each state, yielding 8 nodes (one for each state) as a first aggregation; a further aggregation yields the total.

The tourism data set regards the number of nights spent by Australians away from home. The time series cover the period 1998-2016 with monthly frequency. There are 304 bottom time series, referring to 76 regions and 4 purposes of travel. The first aggregation path sums the purposes within each region, yielding 76 time series (one for each region); such values are further aggregated into zones (27) and states (7), before reaching the total (1). A second path aggregates the bottom time series of the same zone (yielding 108 intermediate time series: 27 zones x 4 purposes), which are then aggregated into states (yielding 28 time series: 7 states x 4 purposes) and then over states (yielding 4 time series: 4 purposes). The last aggregation yields the total. Overall the hierarchy contains 555 time series.

We reconcile each hierarchy 50 times, every time extending the training data by one time step. Each time we reconcile the forecasts referring to 1, 2, 3 and 4 steps ahead. There are no particular patterns across the results obtained at different forecast horizons (Tab. 2 and 3); hence we comment on the results averaged over the different time horizons.

From Tables 2 and 3, it is clear that modelling the correlation between time series is important: both MinT and Bayes-corr are superior to Bayes-diag. Thus in the following we focus on comparing Bayes-corr and MinT.

On the infantgts data set, the median mse ratios are 1.05 and 1.02 in the arima and in the ets case respectively (hence both in favor of Bayes-corr). The Bayesian signed-rank test reports strong evidence in favor of Bayes-corr in the arima case but no conclusive evidence in the ets case.

On the tourism data set, the median mse ratios are 0.99 and 1.06 in the arima and in the ets case respectively. The Bayesian signed-rank test shows positive evidence in favor of MinT in the arima case, and strong evidence in favor of Bayes-corr in the ets case. These experiments overall show a slight advantage of Bayes-corr over MinT in terms of median performance.

We further compare the two methods by analysing the variability of the results across experiments and by comparing them also to the base forecasts. In each experiment, we compute the log of the relative mean squared error of each method, defined as:

$$\log(\text{relative mse}) = \log_{10}\frac{\mathrm{mse}(\text{method})}{\mathrm{mse}(\text{base})},$$

where method can be MinT or Bayes-corr.

For the statistical analysis we merge the log (relative mse) values referring to different forecast horizons. Fig. 2 shows the boxplots referring to the different cases (infantgts/tourism, ets/arima). The area under the blue dashed line is the region where the reconciliation methods have a lower mse than the base forecasts, and vice versa. The extreme values are around , which corresponds to a reduction of % of mse compared to the base forecast.

Bayes-corr has much lower variance than MinT. Moreover, its 75th percentile (improving on the base forecasts in more than 75% of the reconciliations) is below 0 in three experimental studies out of four; MinT never achieves such a result. Also the 95th percentiles of Bayes-corr (its 95th-percentile worst cases compared to the base forecasts) are much lower than those of MinT. On the other hand, MinT achieves lower values at the low percentiles (5th and 25th) than Bayes-corr.

We argue that Bayes-corr generally compares favorably to minT: it slightly reduces the median error and largely reduces the variance of the performance; moreover, it yields the predictive distribution of the reconciled forecasts, not only the point forecasts.

### 6.4 Temporal Hierarchies

The application of MinT to temporal hierarchies is due to Athanasopoulos et al. (2017), who named the resulting algorithm thief; it is available from the thief package (Hyndman and Kourentzes, 2018). We run thief with structural scaling and compare it with Bayes-diag. The covariance matrix of Bayes-diag contains the variances estimated by the models for the different forecast horizons. For instance, if a hierarchy contains 12 monthly forecasts in the bottom layer and 1 yearly forecast in the upper layer, the covariance matrix contains the variances of the monthly forecasts from 1-step up to 12-steps ahead, and the variance of the yearly forecast. On each time series we measure the mse over the whole hierarchy yielded by thief and by Bayes-diag. We repeat the experiments using arima and ets as the base forecast methods.
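For the 12-monthly-plus-1-yearly example just described, the diagonal covariance matrix used by Bayes-diag could be assembled as follows (an illustrative Python sketch with made-up variances; the ordering of the nodes, upper layer first, is our assumption):

```python
import numpy as np

# Hypothetical variances estimated by the base models:
monthly_var = [1.0 + 0.1 * h for h in range(1, 13)]  # 1- to 12-steps-ahead monthly forecasts
yearly_var = [15.0]                                  # 1-step-ahead yearly forecast

# Diagonal covariance over the whole temporal hierarchy (yearly node first).
cov = np.diag(yearly_var + monthly_var)
# cov has shape (13, 13); all off-diagonal entries are 0 by construction.
```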

We run the temporal hierarchy experiments on the time series of the M3 competition, available from the Mcomp package (Hyndman, 2018), which contains 1428 monthly and 756 quarterly time series. Moreover, we consider the 13 weekly time series provided with the thief package. We reconcile each time series using thief and Bayes-diag, keeping track of their mse.

As for the monthly series, we build the temporal hierarchy by aggregating them into 2-month, 4-month and 6-month aggregates. Each time series of the Mcomp package is split into training and test set. We produce forecasts one year ahead, and we use the first year of the test set to evaluate the reconciled forecasts.
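The temporal aggregation itself is a simple block sum; a minimal Python sketch (the `aggregate` helper and the sample data are ours):

```python
def aggregate(series, k):
    """Sum non-overlapping blocks of length k.

    The series length must be a multiple of k, as is the case when
    aggregating whole years of monthly data.
    """
    assert len(series) % k == 0
    return [sum(series[i:i + k]) for i in range(0, len(series), k)]

monthly = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]  # one year of monthly data
two_month = aggregate(monthly, 2)   # 6 values
four_month = aggregate(monthly, 4)  # 3 values
six_month = aggregate(monthly, 6)   # 2 values
```

Every aggregation level preserves the total, which is exactly the summing constraint the reconciliation must enforce on the forecasts.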

Table 4 reports the results. The performance of thief and Bayes-diag is balanced on the monthly time series: Bayes-diag prevails in the ets case (median mse ratio 1.01, with strong evidence in favor of Bayes-diag), while thief prevails in the arima case (median mse ratio 0.99, with positive evidence in favor of thief).

We then run experiments on the 756 quarterly time series, whose training length varies between 16 and 64 quarters. We aggregate each quarterly time series to the semi-annual and to the annual level. We produce forecasts one year ahead, reconcile them, and use the first year of the test set to measure the mse of the reconciled forecasts. Bayes-diag consistently outperforms thief in both the ets case (median mse ratio: 1.09, with strong evidence in favor of Bayes-diag reported by the test) and the arima case (median mse ratio: 1.06, with strong evidence in favor of Bayes-diag reported by the test).

We finally consider the AE dataset provided with the thief package. It contains the weekly demand of Accident & Emergency departments in the UK, from November 2010 to June 2015, for 13 different weekly time series. We aggregate each time series to different frequencies (weekly, 2-weekly, 4-weekly, quarterly, biannual, annual). As in the previous case, we evaluate the reconciled forecasts over a test set of one year. We run only the arima model, as the ets algorithm does not support this type of periodicity. To obtain a large sample of results we reconcile each time series 20 times, each time sliding both training and test set one week ahead; we thus perform 20 × 13 = 260 reconciliations. Bayes-diag prevails also in this case: the median mse ratio is 1.14, and the signed-rank test reports strong statistical evidence in favor of Bayes-diag over thief.
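The sliding evaluation scheme can be sketched as a rolling-origin split (illustrative Python; the training length is our assumption, while the 52-week test set and the 20 splits come from the setup above):

```python
def rolling_splits(series, train_len, test_len, n_splits):
    """Yield (train, test) windows, sliding the origin one step ahead each time."""
    for s in range(n_splits):
        train = series[s : s + train_len]
        test = series[s + train_len : s + train_len + test_len]
        yield train, test

# 20 reconciliations per weekly series, each with a one-year (52-week) test set:
weekly = list(range(200))  # placeholder for one weekly series
splits = list(rolling_splits(weekly, train_len=128, test_len=52, n_splits=20))
# 20 splits per series; over 13 series this gives 20 * 13 = 260 reconciliations.
```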

Finally, we assess the variability of the performance of both thief and Bayes-diag, using the log of the relative mse. The boxplots are given in Fig. 3. The scale of the graphs is quite large, as the whiskers of the boxplots are generally beyond , which represents a 10-fold increase or decrease of mse compared to the base forecasts. In two cases out of three the 75th percentile of both approaches is below 0, indicating a large improvement over the base forecasts. In this case the thief boxplots generally have smaller variance than those of Bayes-diag, as a result of the simpler approach adopted for estimating the covariance matrix. Hence Bayes-diag is better in terms of the median and the lower percentiles, while thief is better in terms of the upper percentiles. This means that in the worst cases, which imply a worsening compared to the base forecasts, thief better limits the damage. Yet in the weekly case study, which contains plenty of data, the 95th percentile of Bayes-diag is about the same as that of thief, while all the other percentiles are in favor of Bayes-diag. This is probably because, thanks to the large sample size, the variances estimated by the models are reliable, which benefits Bayes-diag. We conclude that in general Bayes-diag compares favorably to thief for the reconciliation of temporal hierarchies.

## 7 Conclusions

We have presented a novel approach for reconciling hierarchical time series: it yields probabilistic reconciliation in closed form and empirically compares favorably to state-of-the-art methods. We recommend Bayes-corr for the reconciliation of grouped time series and Bayes-diag for the reconciliation of temporal hierarchies. In the future it would be interesting to adopt Bayesian time series models to produce the base forecasts: they could provide a more reliable estimate of the variance, which could make the reconciliation more accurate.

## Acknowledgements

We thank Philipp Hennig for a discussion via e-mail about Bayesian updating. We acknowledge support from the Swiss NSF grants ns. IZKSZ2–162188 and 167199–NRP 75 Big Data.

## References

• Athanasopoulos et al. (2017) Athanasopoulos G, Hyndman RJ, Kourentzes N, Petropoulos F (2017) Forecasting with temporal hierarchies. European Journal of Operational Research 262(1):60–74
• Benavoli et al. (2014) Benavoli A, Mangili F, Corani G, Zaffalon M, Ruggeri F (2014) A Bayesian Wilcoxon signed-rank test based on the Dirichlet process. In: Proceedings of the 31st International Conference on Machine Learning (ICML 2014), pp 1026–1034
• Bishop (2006) Bishop CM (2006) Pattern Recognition and Machine Learning. Springer
• Durrant-Whyte and Henderson (2008) Durrant-Whyte H, Henderson TC (2008) Multisensor data fusion. Springer handbook of Robotics pp 585–610
• Hyndman (2018) Hyndman R (2018) Mcomp: Data from the M-Competitions. URL https://CRAN.R-project.org/package=Mcomp, R package version 2.8
• Hyndman et al. (2018) Hyndman R, Lee A, Wang E, Wickramasuriya S (2018) hts: Hierarchical and Grouped Time Series. URL https://CRAN.R-project.org/package=hts, R package version 5.1.5
• Hyndman and Khandakar (2008) Hyndman RJ, Khandakar Y (2008) Automatic time series forecasting: the forecast package for R. Journal of Statistical Software 27(3):1–22
• Hyndman and Kourentzes (2018) Hyndman RJ, Kourentzes N (2018) thief: Temporal HIErarchical Forecasting. URL http://pkg.robjhyndman.com/thief, R package version 0.3
• Hyndman et al. (2011) Hyndman RJ, Ahmed RA, Athanasopoulos G, Shang HL (2011) Optimal combination forecasts for hierarchical time series. Computational Statistics & Data Analysis 55(9):2579 – 2589
• Hyndman et al. (2016) Hyndman RJ, Lee AJ, Wang E (2016) Fast computation of reconciled forecasts for hierarchical and grouped time series. Computational Statistics & Data Analysis 97:16–32
• Taieb et al. (2017) Taieb SB, Taylor JW, Hyndman RJ (2017) Coherent probabilistic forecasts for hierarchical time series. In: Precup D, Teh YW (eds) Proceedings of the 34th International Conference on Machine Learning, vol 70, pp 3348–3357
• Vens et al. (2008) Vens C, Struyf J, Schietgat L, Džeroski S, Blockeel H (2008) Decision trees for hierarchical multi-label classification. Machine Learning 73(2):185
• Wickramasuriya et al. (2018) Wickramasuriya SL, Athanasopoulos G, Hyndman RJ (2018) Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization. Journal of the American Statistical Association (accepted):1–45
• Zhao et al. (2012) Zhao T, Liu H, Roeder K, Lafferty J, Wasserman L (2012) The Huge Package for High-dimensional Undirected Graph Estimation in R. Journal of Machine Learning Research 13(1):1059–1062

## Appendix A Proof of Proposition 1

###### Lemma 1

Consider a random vector $y$ taking values in $\mathbb{R}^m$ and a matrix $S \in \mathbb{R}^{n \times m}$; if $z^*$ minimizes $E[\|y - z\|^2]$ over $z \in \mathbb{R}^m$, then $z^*$ also minimizes $E[\|Sy - Sz\|^2]$.

###### Proof

Note that the first norm is on $\mathbb{R}^m$ and the second on $\mathbb{R}^n$. We have

$$E[\|Sy - Sz\|^2] = E[\|S(y-z)\|^2] \leq \|S\|_F^2\, E[\|y-z\|^2],$$

where $\|S\|_F$ is the Frobenius norm of the matrix $S$. Since $\|S\|_F$ does not depend on $z$, the minimizer of $E[\|y-z\|^2]$ minimizes $E[\|Sy - Sz\|^2]$.

###### Proof (Proposition 1)

The predictor defined in (5) is unbiased: applying the law of iterated expectation twice, and using the unbiasedness of the base forecasts, we obtain $E[\zeta(\hat{u})] = E[Y_{t+h}]$. Moreover

$$E\left[\|Y_{t+h} - \zeta(\hat{u})\|^2 \mid \hat{u}\right] = E\left[\|S B_{t+h} - S\beta(\hat{u})\|^2 \mid \hat{u}\right].$$

By Lemma 1 we obtain that the minimizer is $\beta(\hat{u}) = E[B_{t+h} \mid \hat{u}]$, i.e., the posterior mean of the bottom time series.
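The final step rests on the standard fact that the conditional mean minimizes expected squared error; in our notation, with $\beta^* = E[B_{t+h} \mid \hat{u}]$, the cross term vanishes and

```latex
E\big[\|B_{t+h}-\beta\|^2 \mid \hat{u}\big]
  = E\big[\|B_{t+h}-\beta^*\|^2 \mid \hat{u}\big] + \|\beta^*-\beta\|^2
  \;\geq\; E\big[\|B_{t+h}-\beta^*\|^2 \mid \hat{u}\big],
```

with equality if and only if $\beta = \beta^*$.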