Dynamic principal component regression for forecasting functional time series in a group structure

09/01/2019 ∙ by Han Lin Shang, et al. ∙ Australian National University

When generating social policies and pricing annuities at national and subnational levels, it is essential both to forecast mortality accurately and to ensure that forecasts at the subnational level add up to the forecasts at the national level. This has motivated recent developments in forecasting functional time series in a group structure, in which static principal component analysis is used. In the presence of moderate to strong temporal dependence, static principal component analysis, designed for independent and identically distributed functional data, may be inadequate. Thus, using dynamic functional principal component analysis, we consider a functional time series forecasting method with static and dynamic principal component regression to forecast each series in a group structure. Using regional age-specific mortality rates in Japan obtained from the Japanese Mortality Database (2019), we investigate the point and interval forecast accuracies of our proposed extension, and subsequently make recommendations.


1 Introduction

In recent years, rapidly aging populations and increased longevity in many developed countries have been a growing concern for governments and societies. These concerns center on the sustainability of pensions and of health and aged-care systems. The importance of mortality has resulted in a surge of interest among government policymakers and urban and regional planners in accurately modeling and forecasting age-specific mortality at national and regional levels. An improvement in the forecast accuracy of mortality would be greatly beneficial for policy decisions regarding the allocation of current and future resources across regions. Further, future mortality rates serve as an input for determining life, fixed-term, and deferred annuity prices, and thus they are of great interest to the insurance and pension industries.

Many statistical methods have been proposed to forecast age-specific mortality rates and their associated life expectancies (cf. Booth, 2006; Booth and Tickle, 2008; Currie et al., 2004; Girosi and King, 2008; Tickle and Booth, 2014, for earlier reviews). Among these, a significant milestone in demographic forecasting was the work of Lee and Carter (1992). The strengths of the Lee–Carter (LC) method are its simplicity and robustness in cases where age-specific log mortality rates have linear trends (Booth et al., 2006). Its weakness is that it attempts to capture mortality patterns using only one principal component and its associated scores. To overcome this deficiency, the LC method has been extended and modified in two streams. From the perspective of a time series of matrices, Booth et al. (2002), Renshaw and Haberman (2003), and Cairns et al. (2006, 2009) proposed the use of more than one component in the LC method to model age-specific mortality. Cairns et al. (2006) used a logistic transformation to model the relationship between the probability of death and age observed over time, and Cairns et al. (2009) extended this model by incorporating the cohort effect. Girosi and King (2008) and Wiśniowski et al. (2015) considered the Bayesian paradigm for parameter estimation and forecasting in the LC method.

A common drawback of these works is that a static (functional) principal component analysis is often used to decompose a time series of data matrices or curves. For independent data, the eigendecomposition is usually conducted on the (sample) variance function, seeking the optimal linear combinations of observations that maximize the explained variance (cf. Ramsay and Silverman, 2005). When the temporal dependence in a functional time series is moderate or strong, a functional version of the autocorrelation function may lie significantly above the critical threshold at various lags. In this case, the principal components extracted from an estimated variance operator are not consistent because of the temporal dependence, leading to erroneous estimators and a loss of information about the sample dynamics. To rectify this issue, it is more sensible to use the long-run covariance function, which accounts for autocorrelation (cf. Horváth and Kokoszka, 2012; Hörmann et al., 2015). The long-run covariance includes the variance as a component, yet it also measures the autocovariance at various positive and negative lags. From an estimated long-run covariance, we apply a dynamic approach to extract principal components.

This paper makes three contributions. First, we demonstrate the improvement in point and interval forecast accuracies that the dynamic functional principal component analysis (DFPCA) entails, when compared with a functional time series method based on static functional principal component analysis (FPCA), for modeling and forecasting individual age-specific mortality series at short- to moderate-term forecast horizons. Second, in a group structure, we demonstrate the improvement in point and interval forecast accuracies that the DFPCA produces when compared with the functional time series method based on the static FPCA. Third, we present a procedure for constructing pointwise prediction intervals.

The remainder of this paper is organized as follows. In Section 2, we present the age-specific mortality rates obtained from the Japanese Mortality Database (2019). Section 3 describes a kernel sandwich estimator for estimating the long-run covariance. Based on the estimated long-run covariance, we perform an eigendecomposition to extract dynamic principal components and their associated scores. In Section 3.2, we adopt the procedure of Aue et al. (2015) to construct prediction intervals. In Section 4, we revisit two methods considered by Shang and Haberman (2017) for reconciling forecasts in a group structure. In the group structure, we also find that the dynamic functional principal component decomposition produces more accurate forecasts than the static functional principal component decomposition. In Sections 5 and 6, we evaluate and compare the short- to moderate-term point and interval forecast accuracies between the FPCA and DFPCA within a group structure. Conclusions are presented in Section 7.

2 Japanese age-specific mortality rates for 47 prefectures

We study Japanese age-specific mortality rates from 1975 to 2016, obtained from the Japanese Mortality Database (2019). We consider ages from zero to 99 in single years of age, while the last age group contains all ages at and above 100 years. In Table 1, we present the data group structure, where each row denotes a level of disaggregation. The top level displays the total age-specific mortality rates for Japan. The overall mortality rates can be split into different sex, region, or prefecture groups. There are eight regions in Japan, which contain a total of 47 prefectures. The most disaggregated data arise when we consider the mortality rates for each prefecture and each sex, giving a total of 2 × 47 = 94 series. In total, across all levels of disaggregation, there are 168 series.

Level               Number of series
Japan                              1
Sex                                2
Region                             8
Sex × Region                      16
Prefecture                        47
Sex × Prefecture                  94
Total                            168
Table 1: Group structure of Japanese mortality rates.

To view the time-series evolution of Japanese age-specific mortality, we present functional time series plots of the logarithm (base 10) of female mortality rates in Japan and Hokkaido in Figures 5(a) and 5(c), while the functional time series plots of the smoothed data are shown in Figures 5(b) and 5(d). To smooth these functional time series, we use penalized regression splines with a monotonic constraint, where monotonicity is imposed for ages at and above 65 (see, e.g., Hyndman and Ullah, 2007).

(a) Observed female mortality in Japan
(b) Smoothed female mortality in Japan
(c) Observed female mortality in Hokkaido
(d) Smoothed female mortality in Hokkaido
Figure 5: Functional time series graphical displays. Logarithm base 10 has a simpler interpretation than the natural logarithm.

As shown in Figure 5, by analyzing the changes in mortality as a function of both age and year, it is evident that mortality rates have gradually declined over the years. Mortality rates dip from their early-childhood high, climb in the teenage years, stabilize in the early twenties, and then steadily increase with age. For both females and males, log mortality rates have been declining over the years, especially at the younger and older ages. Further, the observed Japanese national female mortality rates contain less noise than the observed Hokkaido female mortality rates. Thus, the nonparametric smoothing technique is more useful for the noisier Hokkaido series than for the less noisy national series.
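To make the smoothing step concrete, the following minimal Python sketch (not the authors' code) fits a penalized spline to the log mortality rates and then imposes monotonicity for ages 65 and above with a simple running-maximum adjustment. The constrained penalized regression splines of Hyndman and Ullah (2007) use a more refined, weighted construction, so this is only an approximation of that idea; the function name and smoothing parameter are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_log_mortality(ages, log_rates, monotone_from=65, s=1.0):
    """Fit a smoothing spline to log mortality rates, then enforce
    non-decreasing smoothed rates for ages >= monotone_from via a
    running maximum. A rough stand-in for the monotonically constrained
    penalized regression splines of Hyndman and Ullah (2007)."""
    spline = UnivariateSpline(ages, log_rates, s=s)   # penalized spline fit
    fitted = spline(ages)
    old = ages >= monotone_from
    fitted[old] = np.maximum.accumulate(fitted[old])  # monotonic constraint at old ages
    return fitted
```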

3 Dynamic functional principal component analysis

The estimation of long-run covariance has not been considered in actuarial science, except for the work of Shang (2019). Using mortality rates in developed countries, Shang (2019) found that the dynamic principal components extracted from the long-run covariance function better capture temporal dependency than do the static principal components extracted from the variance function.

Using the bandwidth estimation procedure of Rice and Shang (2017) and the kernel sandwich estimator, we obtain an estimate of the long-run covariance, $\widehat{c}(u, v)$. We then apply functional principal component decomposition to the estimated long-run covariance and obtain the functional principal components and their associated scores. Through the Karhunen–Loève expansion, a stochastic process $\mathcal{X}_t(u)$ can be expressed as
$$\mathcal{X}_t(u) = \mu(u) + \sum_{k=1}^{\infty}\beta_{t,k}\,\phi_k(u),$$
where $\mu(u)$ denotes the mean function, $\{\phi_k(u)\}$ denotes a set of orthonormal eigenfunctions, $\beta_{t,k} = \sqrt{\lambda_k}\,\xi_{t,k}$ with $\lambda_k$ the $k$th eigenvalue, and $\xi_{t,k}$ is an uncorrelated random variable with zero mean and unit variance. The principal component scores, $\beta_{t,k}$, are given by the projection of $\mathcal{X}_t(u) - \mu(u)$ in the direction of the $k$th eigenfunction $\phi_k(u)$, that is, $\beta_{t,k} = \langle \mathcal{X}_t(u) - \mu(u), \phi_k(u)\rangle$.

The DFPCA summarizes the primary features of an infinite-dimensional object through a finite number of key elements, and it forms the basis of the dynamic functional principal component regression in Section 3.1. For theoretical, methodological, and applied aspects of the DFPCA, consult Hörmann et al. (2015) and Rice and Shang (2017).
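As a rough illustration of the estimation steps described above, the following Python sketch computes a kernel sandwich estimate of the long-run covariance with Bartlett weights and extracts dynamic principal components and scores from its eigendecomposition. It assumes the curves are observed on a common grid, ignores the grid-spacing factor in the inner products, and takes the bandwidth as given; the paper selects the bandwidth with the plug-in procedure of Rice and Shang (2017), which is not reproduced here.

```python
import numpy as np

def long_run_covariance(X, bandwidth):
    """Kernel sandwich estimator of the long-run covariance of a functional
    time series X (n curves x p grid points), using Bartlett weights.
    Simplified sketch; the plug-in bandwidth is assumed to be supplied."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                # centre the curves
    C = Xc.T @ Xc / n                      # lag-0 autocovariance
    for lag in range(1, int(bandwidth) + 1):
        w = 1.0 - lag / (bandwidth + 1.0)  # Bartlett kernel weight
        gamma = Xc[lag:].T @ Xc[:-lag] / n # lag autocovariance
        C += w * (gamma + gamma.T)         # add symmetrised lagged terms
    return C

def dynamic_fpca(X, bandwidth, K):
    """Extract K dynamic functional principal components and their scores
    from the estimated long-run covariance."""
    C = long_run_covariance(X, bandwidth)
    eigval, eigvec = np.linalg.eigh(C)     # eigendecomposition
    order = np.argsort(eigval)[::-1]       # sort eigenvalues, descending
    phi = eigvec[:, order[:K]]             # first K (discretised) eigenfunctions
    scores = (X - X.mean(axis=0)) @ phi    # principal component scores
    return X.mean(axis=0), phi, scores, eigval[order]
```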

3.1 Dynamic functional principal component regression

Conditional on the observed time series of smooth functions, $\mathcal{X}_1(u), \dots, \mathcal{X}_n(u)$, the $h$-step-ahead forecast of $\mathcal{X}_{n+h}(u)$ can be given by
$$\widehat{\mathcal{X}}_{n+h|n}(u) = \widehat{\mu}(u) + \sum_{k=1}^{K}\widehat{\beta}_{n+h|n,k}\,\widehat{\phi}_k(u),$$
where $\widehat{\mu}(u)$ denotes the estimated mean function; $\{\widehat{\phi}_1(u), \dots, \widehat{\phi}_K(u)\}$ denotes a set of estimated functional principal components; $\widehat{\beta}_{n+h|n,k}$ denotes the forecast principal component score obtained from a univariate time series forecasting method for the $k$th component; and $K$ denotes the retained number of components.

The number of components, $K$, can be selected as the minimum number for which the cumulative proportion of variance (CPV) explained by the first $K$ leading components reaches a certain level $\delta$. Thus:
$$K = \operatorname*{argmin}_{K \ge 1}\left\{\sum_{k=1}^{K}\widehat{\lambda}_k \Big/ \sum_{k=1}^{\infty}\widehat{\lambda}_k \ge \delta\right\}, \qquad (1)$$
where $\widehat{\lambda}_k$ denotes the $k$th estimated eigenvalue and $\delta$ denotes a pre-specified proportion of total variance (see, e.g., Horváth and Kokoszka, 2012, p. 41).

The criterion in (1) remains fixed as the sample size $n$ grows, although the optimal order of $K$ should increase to infinity with $n$. One way to accommodate this aspect is to use the eigenvalue-based criterion of Hörmann and Kidziński (2015), which explicitly incorporates the behavior of the estimated eigenvalues $\widehat{\lambda}_k$; we denote the number of components it selects by $K_{\text{HK}}$, and the number selected by (1) by $K_{\text{CPV}}$.

As pointed out by Hyndman et al. (2013), there is little influence on forecast accuracy if we select a number of components greater than the optimal (yet unknown) one. Thus, the selected value of $K$ is the maximum of the numbers selected by the CPV criterion and the criterion of Hörmann and Kidziński (2015), that is:
$$K = \max\left(K_{\text{CPV}}, K_{\text{HK}}\right). \qquad (2)$$
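A minimal sketch of the resulting dynamic principal component regression forecast, assuming the mean function, components, scores, and eigenvalues produced by the previous sketch: the number of components is chosen by the CPV rule in (1), and each retained score series is forecast with a univariate ARIMA as one possible choice of univariate forecasting method. The fixed ARIMA order and the 0.95 CPV threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def select_K_cpv(eigval, delta=0.95):
    """Smallest K whose cumulative proportion of variance reaches delta
    (the 0.95 threshold is an illustrative assumption)."""
    positive = eigval[eigval > 0]
    cpv = np.cumsum(positive) / positive.sum()
    return int(np.argmax(cpv >= delta)) + 1

def forecast_curve(mu, phi, scores, h, K):
    """h-step-ahead forecast: forecast each retained score series with a
    univariate ARIMA and recombine with the dynamic principal components."""
    beta_fc = np.empty(K)
    for k in range(K):
        fit = ARIMA(scores[:, k], order=(1, 1, 0)).fit()  # illustrative fixed order
        beta_fc[k] = fit.forecast(steps=h)[-1]            # h-step-ahead score forecast
    return mu + phi[:, :K] @ beta_fc                      # reconstructed forecast curve
```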

3.2 Constructing pointwise prediction intervals

To construct pointwise prediction intervals, we adopt the method of Aue et al. (2015). The method can be summarized in the following steps:

  1. Using all observations, we compute the $K$-variate score vectors $(\widehat{\beta}_{t,1}, \dots, \widehat{\beta}_{t,K})$, for $t = 1, \dots, n$, and the sample (dynamic) functional principal components $[\widehat{\phi}_1(u), \dots, \widehat{\phi}_K(u)]$. We then compute the in-sample point forecasts
    $$\widehat{\mathcal{X}}_{\zeta+h}(u) = \widehat{\beta}_{\zeta+h,1}\widehat{\phi}_1(u) + \dots + \widehat{\beta}_{\zeta+h,K}\widehat{\phi}_K(u),$$
    where $(\widehat{\beta}_{\zeta+h,1}, \dots, \widehat{\beta}_{\zeta+h,K})$ are the elements of the $h$-step-ahead prediction obtained from the score vectors by a univariate time series forecasting method, for forecasting origins $\zeta$ within the sample.

  2. With the in-sample point forecasts, we compute the in-sample point forecast errors
    $$\widehat{\epsilon}_{\varsigma}(u_i) = \mathcal{X}_{\zeta+h}(u_i) - \widehat{\mathcal{X}}_{\zeta+h}(u_i), \qquad (3)$$
    where $\varsigma \in \{1, 2, \dots, M\}$ and $M$ denotes the number of in-sample forecast errors at horizon $h$.

  3. Based on the in-sample forecast errors, we take quantiles to obtain lower and upper bounds, denoted by $\widehat{\gamma}_{\text{l}}(u)$ and $\widehat{\gamma}_{\text{u}}(u)$, respectively, for a nominal coverage probability. Then, we seek a tuning parameter, $\pi_{\alpha}$, such that $\alpha \times 100\%$ of the residual functions satisfy
    $$\pi_{\alpha} \times \widehat{\gamma}_{\text{l}}(u) \le \widehat{\epsilon}_{\varsigma}(u) \le \pi_{\alpha} \times \widehat{\gamma}_{\text{u}}(u).$$
    Based on the law of large numbers, the residuals $\{\widehat{\epsilon}_1(u), \dots, \widehat{\epsilon}_M(u)\}$ are expected to be approximately stationary and to satisfy
    $$\Pr\left[\pi_{\alpha} \times \widehat{\gamma}_{\text{l}}(u) \le \mathcal{X}_{n+h}(u) - \widehat{\mathcal{X}}_{n+h}(u) \le \pi_{\alpha} \times \widehat{\gamma}_{\text{u}}(u)\right] \approx \alpha,$$
    where $\pi_{\alpha}$ calibrates the difference between the empirical and nominal coverage probabilities.

Step 3 can be extended to pointwise prediction intervals, where we seek $\pi_{\alpha}$ such that $\alpha \times 100\%$ of the residual data points satisfy
$$\pi_{\alpha} \times \widehat{\gamma}_{\text{l}}(u_i) \le \widehat{\epsilon}_{\varsigma}(u_i) \le \pi_{\alpha} \times \widehat{\gamma}_{\text{u}}(u_i). \qquad (4)$$

Then, the $h$-step-ahead pointwise prediction intervals of $\mathcal{X}_{n+h}(u_i)$ are given as
$$\pi_{\alpha} \times \widehat{\gamma}_{\text{l}}(u_i) \le \mathcal{X}_{n+h}(u_i) - \widehat{\mathcal{X}}_{n+h}(u_i) \le \pi_{\alpha} \times \widehat{\gamma}_{\text{u}}(u_i),$$
where $i = 1, \dots, p$ indexes the discretized data points.
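The following Python sketch mirrors the pointwise interval construction above: it takes a matrix of in-sample h-step-ahead forecast errors, computes lower and upper quantiles at each age, and searches over a grid for the tuning parameter that attains the nominal pointwise coverage. The grid of candidate values and the 80% default coverage are assumptions for illustration only.

```python
import numpy as np

def pointwise_prediction_intervals(errors, forecast, alpha=0.8):
    """Pointwise prediction intervals in the spirit of Aue et al. (2015).
    errors:   (M, p) array of in-sample h-step-ahead forecast errors
    forecast: (p,) h-step-ahead point forecast curve
    alpha:    nominal coverage probability (assumed 80% here)
    Returns lower and upper bounds on the p grid points."""
    # lower and upper quantiles of the in-sample errors at each grid point
    gamma_l = np.quantile(errors, (1 - alpha) / 2, axis=0)
    gamma_u = np.quantile(errors, (1 + alpha) / 2, axis=0)

    def empirical_coverage(pi):
        # proportion of residual data points inside the scaled band
        inside = (errors >= pi * gamma_l) & (errors <= pi * gamma_u)
        return inside.mean()

    # smallest tuning parameter on a candidate grid reaching nominal coverage
    candidates = np.linspace(0.5, 3.0, 251)
    pi_alpha = min((pi for pi in candidates if empirical_coverage(pi) >= alpha),
                   default=candidates[-1])

    return forecast + pi_alpha * gamma_l, forecast + pi_alpha * gamma_u
```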

4 Grouped functional time series forecasting

4.1 Notation

For ease of explanation, we introduce the notation using the Japanese example presented in Section 2. The Japanese data follow a multi-level geographical group structure coupled with a sex grouping variable. The group structure is shown in Figure 6. Japan is split into eight regions, which in turn can be divided into 47 prefectures.

[Tree diagram: Japan at the top level, split into regions R1, R2, …, R8, which are in turn split into prefectures P1, P2, …, P47.]
Figure 6: Japanese geographical tree diagram, with eight regions and 47 prefectures; each node has female, male, and total age-specific mortality rates.

The data can also be split by sex. Thus, each of the nodes in the geographical hierarchy can also be split into males and females. We refer to a particular disaggregated series using the notation X*S, meaning geographical area X and sex S, where X can take the values shown in Figure 6 and S can take the values M (males), F (females), or T (total). For example, R1*F denotes females in Region 1, P1*T denotes females and males (total) in Prefecture 1, Japan*M denotes males in Japan, and so on.

Let $E_{X*S}(x, t)$ denote the exposure-at-risk for series X*S in year $t$ and age $x$, and let $D_{X*S}(x, t)$ be the number of deaths for series X*S in year $t$ and age $x$. The age-specific mortality rate is expressed as
$$m_{X*S}(x, t) = \frac{D_{X*S}(x, t)}{E_{X*S}(x, t)}.$$

To simplify expressions, we drop the age argument of the mortality rates. Then, for a given age, we can write the group structure in matrix notation as $\mathbf{R}_t = \mathbf{S}_t \mathbf{b}_t$, where $\mathbf{R}_t$ is a vector containing all series at all levels of disaggregation, $\mathbf{b}_t$ is a vector of the most disaggregated (sex-by-prefecture) series, and the summing matrix $\mathbf{S}_t$ shows how the two are connected.

As evident in Figure 6, the group structure is not unique, since Japan can also first be disaggregated by sex. As a result of this non-uniqueness, we consider the bottom-up and optimal combination methods of Shang and Haberman (2017), which are reviewed in Sections 4.2 and 4.3, respectively. Their point and interval forecast accuracies are compared with those of the independent forecasts (without reconciliation) in Sections 5 and 6.

4.2 Bottom-up method

The bottom-up method first generates independent forecasts for each series at the most disaggregated level and then aggregates these to produce all required forecasts. The bottom-up method performs well when the bottom-level series have a high signal-to-noise ratio. However, the bottom-up method may lead to inaccurate forecasts of the top-level series when the series contains missing or noisy data at the bottom level.
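As a toy illustration of the bottom-up step (not the authors' implementation), the sketch below aggregates two hypothetical bottom-level mortality forecasts to a national total through an exposure-weighted summing matrix; the exposure shares and forecast values are made up for the example.

```python
import numpy as np

def bottom_up(bottom_forecasts, S):
    """Bottom-up method: forecast only the most disaggregated series and
    aggregate them through the (exposure-weighted) summing matrix S to
    obtain forecasts at every level of the group structure."""
    return S @ bottom_forecasts

# toy structure: a national total formed from two prefecture-level rates,
# weighted by hypothetical exposure-to-risk shares
E_share = np.array([0.7, 0.3])          # E_1/(E_1+E_2), E_2/(E_1+E_2)
S = np.vstack([E_share, np.eye(2)])     # rows: total, prefecture 1, prefecture 2
b_hat = np.array([0.012, 0.009])        # independent bottom-level forecasts
print(bottom_up(b_hat, S))              # forecasts for the total and both prefectures
```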

4.3 Optimal combination method

A disadvantage of the bottom-up method is that only the bottom-level series are considered. This disadvantage motivated Hyndman et al. (2011) to propose the optimal combination method, in which independent forecasts for all series are computed first, and the resultant forecasts are then reconciled to satisfy the aggregation constraints via the summing matrix. The method is derived by expressing the independent forecasts as the response variable of the linear regression
$$\widehat{\mathbf{R}}_{n+h} = \mathbf{S}_{n+h}\,\boldsymbol{\beta}_{n+h} + \boldsymbol{\varepsilon}_{n+h},$$
where $\widehat{\mathbf{R}}_{n+h}$ is a vector of $h$-step-ahead independent forecasts for all series, stacked in the same order as the original data; $\boldsymbol{\beta}_{n+h}$ is the unknown mean of the independent forecasts of the most disaggregated series; and $\boldsymbol{\varepsilon}_{n+h}$ represents the forecast reconciliation errors.

To estimate the unknown regression coefficient, Hyndman et al. (2011) and Hyndman et al. (2016) proposed a weighted least squares solution
$$\widehat{\boldsymbol{\beta}}_{n+h} = \left(\mathbf{S}_{n+h}^{\top}\mathbf{W}_h^{-1}\mathbf{S}_{n+h}\right)^{-1}\mathbf{S}_{n+h}^{\top}\mathbf{W}_h^{-1}\widehat{\mathbf{R}}_{n+h},$$
where $^{\top}$ denotes the matrix transpose and $\mathbf{W}_h$ denotes a diagonal weight matrix. Assuming that $\mathbf{W}_h = c_h\mathbf{I}$, where $\mathbf{I}$ denotes the identity matrix and $c_h$ is an arbitrary positive constant, the revised forecasts are given by
$$\overline{\mathbf{R}}_{n+h} = \mathbf{S}_{n+h}\widehat{\boldsymbol{\beta}}_{n+h} = \mathbf{S}_{n+h}\left(\mathbf{S}_{n+h}^{\top}\mathbf{S}_{n+h}\right)^{-1}\mathbf{S}_{n+h}^{\top}\widehat{\mathbf{R}}_{n+h}.$$
These reconciled forecasts are aggregate consistent, involve a combination of all the independent forecasts, and are computationally fast. From the perspective of statistical properties, the reconciled forecasts are unbiased. Note that it is possible to consider the weighted least squares estimator of Wickramasuriya et al. (2019), yet it may be computationally slower.
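A minimal sketch of the OLS reconciliation implied by the revised-forecast expression above: the stacked independent forecasts are projected onto the column space of the summing matrix, which guarantees aggregate consistency. The toy summing matrix and forecast values below are hypothetical.

```python
import numpy as np

def ols_reconcile(R_hat, S):
    """Optimal combination (OLS) reconciliation: project the stacked
    independent forecasts R_hat onto the space spanned by the summing
    matrix S, so that the revised forecasts obey the group structure."""
    # beta_hat = (S'S)^{-1} S' R_hat : revised bottom-level forecasts
    beta_hat, *_ = np.linalg.lstsq(S, R_hat, rcond=None)
    return S @ beta_hat                        # revised forecasts for all series

# toy example: one total made of two bottom series, with exposure weights
E = np.array([0.6, 0.4])                       # hypothetical exposure-to-risk shares
S = np.vstack([E, np.eye(2)])                  # rows: total, bottom 1, bottom 2
R_hat = np.array([0.011, 0.013, 0.008])        # independent (incoherent) forecasts
print(ols_reconcile(R_hat, S))                 # aggregate-consistent revised forecasts
```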

A crucial part of the improvement in forecast accuracy enjoyed by the reconciliation methods relies on accurately forecasting the summing matrix $\mathbf{S}_{n+h}$. Recall that this matrix includes ratios of forecast exposure-to-risk. We make a reasonable cohort assumption: for age 0, the observed ratios that form the matrix are forecast using the automatic autoregressive integrated moving average algorithm of Hyndman and Khandakar (2008) (cf. Shang and Haberman, 2017); for an age $x$ above 0, the exposure-to-risk of age $x+1$ in year $t+1$ is taken to be the same as the exposure-to-risk of age $x$ in year $t$. Although we assume the population is closed (i.e., no external migration in Japan), this assumption is reasonable, and we only need to forecast the ratio of two exposure-to-risks.

5 Results: point forecasts

5.1 Functional time series model fitting

In the first row of Figure 7, we present the mean function for male log mortality rates in Hokkaido, and the first dynamic functional principal component, accounting for 95.4% of the total variation, and its associated principal component scores. In the second row, we present the first static functional principal component, accounting for 90.4% of the total variation, and its associated principal component scores.

Figure 7: Dynamic and static functional principal component decompositions for male log mortality rates in Hokkaido. The top panel shows the mean function and the first dynamic and static functional principal components, while the bottom panel shows the first dynamic and static functional principal component scores.

Based on the historical mortality from 1975 to 2016, we produce the point forecasts of age-specific mortality rates from 2017 to 2036. As shown in Figure 10, the mortality rates are continuing to decline.

(a) Dynamic functional principal component decomposition
(b) Static functional principal component decomposition
Figure 10: Point forecast of age-specific smoothed male mortality rates from 2017 to 2036 for the Japanese prefecture of Hokkaido. The historical smoothed functional time series is shown in gray and the forecasts are highlighted in rainbow colors.

In Table 2, we tabulate the percentage of total variation explained by the first dynamic or static functional principal component for the national and subnational smoothed log mortality rates. The first dynamic functional principal component always explains more variability than does the first static functional principal component.

                Female          Male                          Female          Male
Prefecture      FPCA   DFPCA    FPCA   DFPCA    Prefecture    FPCA   DFPCA    FPCA   DFPCA
Japan           0.969  0.981    0.965  0.974    Mie           0.849  0.949    0.778  0.893
Hokkaido        0.895  0.954    0.904  0.954    Shiga         0.876  0.962    0.771  0.898
Aomori          0.840  0.936    0.792  0.907    Kyoto         0.897  0.956    0.823  0.932
Iwate           0.714  0.858    0.735  0.837    Osaka         0.933  0.967    0.913  0.954
Miyagi          0.745  0.829    0.760  0.858    Hyogo         0.873  0.956    0.893  0.958
Akita           0.788  0.905    0.763  0.905    Nara          0.870  0.952    0.795  0.906
Yamagata        0.837  0.931    0.729  0.877    Wakayama      0.735  0.878    0.690  0.851
Fukushima       0.813  0.935    0.781  0.893    Tottori       0.770  0.906    0.699  0.859
Ibaraki         0.863  0.953    0.841  0.927    Shimane       0.805  0.927    0.714  0.873
Tochigi         0.874  0.952    0.786  0.905    Okayama       0.893  0.971    0.824  0.932
Gunma           0.872  0.950    0.817  0.932    Hiroshima     0.891  0.959    0.846  0.920
Saitama         0.915  0.963    0.887  0.945    Yamaguchi     0.815  0.938    0.755  0.889
Chiba           0.924  0.968    0.863  0.942    Tokushima     0.819  0.931    0.791  0.917
Tokyo           0.936  0.969    0.915  0.952    Kagawa        0.772  0.928    0.718  0.904
Kanagawa        0.915  0.962    0.885  0.945    Ehime         0.829  0.929    0.742  0.876
Niigata         0.883  0.955    0.865  0.954    Kochi         0.754  0.881    0.771  0.914
Toyama          0.841  0.951    0.737  0.884    Fukuoka       0.902  0.960    0.901  0.956
Ishikawa        0.820  0.938    0.764  0.901    Saga          0.851  0.950    0.750  0.883
Fukui           0.779  0.920    0.751  0.893    Nagasaki      0.848  0.950    0.827  0.935
Yamanashi       0.806  0.920    0.756  0.901    Kumamoto      0.871  0.946    0.846  0.950
Nagano          0.865  0.947    0.800  0.932    Oita          0.848  0.942    0.808  0.921
Gifu            0.869  0.962    0.795  0.911    Miyazaki      0.819  0.926    0.772  0.926
Shizuoka        0.891  0.961    0.858  0.933    Kagoshima     0.875  0.948    0.860  0.946
Aichi           0.938  0.976    0.897  0.949    Okinawa       0.834  0.925    0.762  0.890
Table 2: Proportion of total variation explained by the first dynamic and the first static functional principal component decompositions for fitting the smoothed log mortality rates. The static functional principal component analysis is abbreviated as FPCA, while the dynamic functional principal component analysis is abbreviated as DFPCA.

5.2 Point forecast evaluation

An expanding window approach for a time series model is commonly used to assess model and parameter stability over time, as well as prediction accuracy. The expanding window approach assesses the constancy of a model's parameters by computing parameter estimates and their resultant forecasts over an expanding window through the sample (for details, see Zivot and Wang, 2006, pp. 313-314). Using the first 12 observations from 1975 to 1986 of the Japanese age-specific mortality rates, we produce one- to 30-step-ahead point forecasts. Through the expanding window approach, we re-estimate the parameters in the time series forecasting models using the first 13 observations from 1975 to 1987. Forecasts from the estimated models are then produced for one to 30 steps ahead. We iterate this process by increasing the sample size by one year until reaching the end of the data period in 2016. This process produces 30 one-step-ahead forecasts, 29 two-step-ahead forecasts, …, and one 30-step-ahead forecast. We compare these forecasts with the holdout samples to determine the out-of-sample point forecast accuracy.
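The expanding-window scheme described above can be sketched as follows in Python; `fit_and_forecast` is a placeholder for the functional time series forecasting method and is not part of the paper.

```python
def expanding_window_errors(data, fit_and_forecast, max_h=30):
    """data: dict mapping year -> observed curve (array over ages).
    fit_and_forecast(train, h): placeholder returning the h-step-ahead
    forecast curve from the list of training curves.
    Returns forecast-error curves grouped by horizon."""
    errors = {h: [] for h in range(1, max_h + 1)}
    for last_fit_year in range(1986, 2016):        # fitting periods 1975-1986, ..., 1975-2015
        train = [data[y] for y in range(1975, last_fit_year + 1)]
        for h in range(1, max_h + 1):
            target_year = last_fit_year + h
            if target_year > 2016:                 # no holdout data beyond 2016
                break
            forecast = fit_and_forecast(train, h)  # h-step-ahead forecast curve
            errors[h].append(data[target_year] - forecast)
    return errors
```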

To evaluate the point forecast accuracy, we consider the mean absolute forecast error (MAFE) and the root mean squared forecast error (RMSFE). These criteria measure how close the forecasts are to the actual values of the raw mortality rates being forecast, regardless of the sign of the forecast errors. For each series $j$ and forecast horizon $h$, these error measures can be expressed as:
$$\text{MAFE}_j(h) = \frac{1}{q_h \times p}\sum_{\varsigma=1}^{q_h}\sum_{i=1}^{p}\left|m_j(u_i, \varsigma) - \widehat{m}_j(u_i, \varsigma)\right|,$$
$$\text{RMSFE}_j(h) = \sqrt{\frac{1}{q_h \times p}\sum_{\varsigma=1}^{q_h}\sum_{i=1}^{p}\left[m_j(u_i, \varsigma) - \widehat{m}_j(u_i, \varsigma)\right]^2},$$
where $q_h = 31 - h$ denotes the number of curves in the forecasting period at horizon $h$, $m_j(u_i, \varsigma)$ denotes the actual holdout value for the $i$th age and $\varsigma$th curve of the forecasting period in the $j$th series, and $\widehat{m}_j(u_i, \varsigma)$ denotes the corresponding point forecast.

By averaging MAFE and RMSFE across the number of series within each level of disaggregation, we obtain an overall assessment of the point forecast accuracy for each level within the collection of series, denoted by $\overline{\text{MAFE}}(h)$ and $\overline{\text{RMSFE}}(h)$. These error measures are defined as:
$$\overline{\text{MAFE}}(h) = \frac{1}{m_k}\sum_{j=1}^{m_k}\text{MAFE}_j(h), \qquad \overline{\text{RMSFE}}(h) = \frac{1}{m_k}\sum_{j=1}^{m_k}\text{RMSFE}_j(h),$$
where $m_k$ denotes the number of series at the $k$th level of disaggregation, for $k = 1, \dots, 6$. In the Japanese group structure, $m_1 = 1$, $m_2 = 2$, $m_3 = 8$, $m_4 = 16$, $m_5 = 47$, and $m_6 = 94$ (see Table 1).

For the 30 different forecast horizons, we consider two summary statistics to evaluate overall point forecast accuracy. The chosen summary statistics are the mean and the median because of their suitability for handling squared and absolute errors, respectively (Gneiting, 2011). These error measures are given by
$$\text{Mean (RMSFE)} = \frac{1}{30}\sum_{h=1}^{30}\overline{\text{RMSFE}}(h), \qquad \text{Median (MAFE)} = \text{median}\left\{\overline{\text{MAFE}}(1), \dots, \overline{\text{MAFE}}(30)\right\},$$
where the median is computed from the ordered $\overline{\text{MAFE}}(h)$ values.
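For completeness, a direct transcription of the MAFE and RMSFE computations for one series and one forecast horizon, assuming the holdout and forecast curves are stored as arrays of shape (number of holdout curves, number of ages):

```python
import numpy as np

def mafe_rmsfe(holdout, forecast):
    """holdout, forecast: arrays of shape (q_h, p) for one series and one
    horizon h. Returns (MAFE, RMSFE) as defined above."""
    err = holdout - forecast
    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))
```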

5.3 Comparison of point forecast accuracy

Averaging over all the series at each level of the group structure, Figures 11 and 12 present the one-step-ahead to 30-step-ahead MAFE and RMSFE of the independent forecasts obtained from the functional time series method with the static and dynamic functional principal component decompositions. Given that we evaluate the point forecast accuracy for 30 different forecast horizons, we consider two summary statistics: the median is ideal for absolute errors, while the mean is ideal for squared errors (Gneiting, 2011). From the MAFE and RMSFE measures, more accurate point forecasts are obtained by the functional time series method with the dynamic functional principal component decomposition at each level of the group structure. Its forecast accuracy is superior because it better captures the temporal dependence in each series.

Figure 11: MAFE comparison of the independent and reconciled forecasts obtained from the functional time series method with static and dynamic functional principal component decompositions.
Figure 12: RMSFE comparison of the independent and reconciled forecasts obtained from the functional time series method with static and dynamic functional principal component decompositions.

6 Results: interval forecasts

6.1 Interval forecast evaluation

To evaluate pointwise interval forecast accuracy, we use the interval score of Gneiting and Raftery (2007) (see also Gneiting and Katzfuss, 2014). For each year in the forecasting period, the $h$-step-ahead pointwise prediction intervals are computed at the $(1-\alpha) \times 100\%$ nominal coverage probability. We consider symmetric prediction intervals, with lower and upper bounds that are predictive quantiles at $\alpha/2$ and $1 - \alpha/2$, denoted by $\widehat{m}^{\text{l}}_{j}(u_i, \varsigma)$ and $\widehat{m}^{\text{u}}_{j}(u_i, \varsigma)$, respectively. As defined by Gneiting and Raftery (2007), a scoring rule for the interval forecast at age $u_i$ and curve $\varsigma$ for series $j$ is
$$S_{\alpha}\left[\widehat{m}^{\text{l}}_{j}(u_i,\varsigma), \widehat{m}^{\text{u}}_{j}(u_i,\varsigma); m_j(u_i,\varsigma)\right] = \left[\widehat{m}^{\text{u}}_{j}(u_i,\varsigma) - \widehat{m}^{\text{l}}_{j}(u_i,\varsigma)\right] + \frac{2}{\alpha}\left[\widehat{m}^{\text{l}}_{j}(u_i,\varsigma) - m_j(u_i,\varsigma)\right]\mathbb{1}\left\{m_j(u_i,\varsigma) < \widehat{m}^{\text{l}}_{j}(u_i,\varsigma)\right\} + \frac{2}{\alpha}\left[m_j(u_i,\varsigma) - \widehat{m}^{\text{u}}_{j}(u_i,\varsigma)\right]\mathbb{1}\left\{m_j(u_i,\varsigma) > \widehat{m}^{\text{u}}_{j}(u_i,\varsigma)\right\},$$
where $\mathbb{1}\{\cdot\}$ represents the binary indicator function, and $\alpha$ denotes the level of significance, customarily $\alpha = 0.2$.

Given that more holdout samples are needed to compute the discrepancy between the in-sample holdout curves and their corresponding forecast curves in (3), we consider 15 forecast horizons instead of the 30 used for evaluating point forecast accuracy. Averaging across ages and years in the forecasting period, we obtain the mean interval score, expressed as
$$\overline{S}_{\alpha,j}(h) = \frac{1}{q_h \times p}\sum_{\varsigma=1}^{q_h}\sum_{i=1}^{p}S_{\alpha}\left[\widehat{m}^{\text{l}}_{j}(u_i,\varsigma), \widehat{m}^{\text{u}}_{j}(u_i,\varsigma); m_j(u_i,\varsigma)\right],$$
where $q_h$ denotes the number of curves in the forecasting period at horizon $h$.

The mean interval score rewards a narrow prediction interval if, and only if, the actual observation lies within the prediction interval.

By averaging across the number of series within each level of disaggregation, we obtain an overall assessment of the interval forecast accuracy for each level within the collection of series, denoted by $\overline{S}_{\alpha}(h)$. This error measure is defined as
$$\overline{S}_{\alpha}(h) = \frac{1}{m_k}\sum_{j=1}^{m_k}\overline{S}_{\alpha,j}(h),$$
where $m_k$ denotes the number of series at the $k$th level of disaggregation, for $k = 1, \dots, 6$.

For the 15 different forecast horizons, we consider two summary statistics to evaluate overall interval forecast accuracy. The mean and median of $\overline{S}_{\alpha}(h)$ are given by
$$\text{Mean}\left(\overline{S}_{\alpha}\right) = \frac{1}{15}\sum_{h=1}^{15}\overline{S}_{\alpha}(h), \qquad \text{Median}\left(\overline{S}_{\alpha}\right) = \text{median}\left\{\overline{S}_{\alpha}(1), \dots, \overline{S}_{\alpha}(15)\right\}.$$
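A small Python sketch of the interval score and its mean over ages and years, following the definition above; the default alpha = 0.2 corresponds to the customary 80% nominal coverage.

```python
import numpy as np

def mean_interval_score(lower, upper, actual, alpha=0.2):
    """Mean interval score of Gneiting and Raftery (2007) over all ages
    and holdout years; lower, upper, actual share the same shape."""
    width = upper - lower
    penalty_below = (2.0 / alpha) * (lower - actual) * (actual < lower)
    penalty_above = (2.0 / alpha) * (actual - upper) * (actual > upper)
    return np.mean(width + penalty_below + penalty_above)
```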

6.2 Comparison of interval forecast accuracy

Averaging over all the series at each level of the group structure, Figure 13 presents the mean interval scores and their summary statistics for the independent forecasts obtained from the functional time series method with the static and dynamic functional principal component decompositions. As also found in Shang and Haberman (2017), the independent forecasting method produces the smallest mean interval scores at the top level, where the data are often of higher quality. Given that the grouped forecasting methods account for the correlation between series, they can potentially improve interval forecast accuracy, especially at the bottom level. Improvement in interval forecast accuracy at the subnational level is essential for decision making at the regional level. When grouped forecasting methods are considered, the forecast accuracy of the functional time series method with the DFPCA is generally superior, as it better captures the temporal dependence in each series, especially at short-term forecast horizons. Between the two grouped forecasting methods, the optimal combination method produces the smallest mean interval scores at the prefecture level, yet not at the regional and national levels.

With the independent forecasting method, we observe that the functional time series method with dynamic functional principal component decomposition does not always outperform that with static functional principal component decomposition. From (3) and (4), the difference in the in-sample errors between the static and dynamic functional principal component decompositions is absorbed when computing their corresponding interval forecasts.

Figure 13: Mean interval score comparison of the independent and reconciled forecasts obtained from the functional time series method with static and dynamic functional principal component decompositions.

7 Conclusion

Using national and subnational Japanese mortality data, we have evaluated and compared the point and interval forecast accuracies between the functional time series method with static and dynamic functional principal component analyses. Based on the forecast errors, the functional time series method with dynamic functional principal component decomposition generally outperforms that with static functional principal component decomposition. The superiority of the former is driven by the addition of auto-covariance in the estimated long-run covariance function.

We implemented a functional time series method to produce independent forecasts. We then considered the issue of forecast reconciliation by applying two grouped functional time series forecasting methods–namely, the bottom-up and optimal combination methods. The bottom-up method models and forecasts data series at the most disaggregated level, and then aggregates the forecasts using the summing matrix constructed by forecast exposure-to-risk. The optimal combination method combines the independent forecasts obtained from independent functional time series forecasting methods using linear regression. The optimal combination method generates a set of revised forecasts that are as close as possible to the independent forecasts, yet also aggregate consistently with the known grouping structure. Under some mild technical assumptions, the regression coefficient may be estimated by ordinary least squares.

Illustrated by the Japanese age-specific mortality data, we examined forecast accuracy. In particular, we compared one-step-ahead to 30-step-ahead point forecast accuracy, and one-step-ahead to 15-step-ahead interval forecast accuracy, between the independent forecasting method and the two grouped forecasting methods. In terms of point forecast accuracy, the grouped functional time series forecasting methods produce more accurate point forecasts than the independent forecasting method, averaged across all levels of the group structure. In terms of interval forecast accuracy, the grouped forecasting methods produce more accurate interval forecasts than the independent forecasting method at the prefecture level, yet not at the regional and national levels. The two grouped forecasting methods perform on par in producing point forecasts. However, the optimal combination method is recommended for producing interval forecasts, at least for the data we considered. The grouped forecasting methods produce forecasts that obey the original group structure, so that the forecast mortality rates at the subnational level add up to the forecast mortality rates at the national level.

References

  • Aue et al. (2015) Aue, A., Norinho, D. D. and Hörmann, S. (2015), ‘On the prediction of stationary functional time series’, Journal of the American Statistical Association: Theory and Methods 110(509), 378–392.
  • Booth (2006) Booth, H. (2006), ‘Demographic forecasting: 1980 to 2005 in review’, International Journal of Forecasting 22(3), 547–581.
  • Booth et al. (2006) Booth, H., Hyndman, R. J., Tickle, L. and De Jong, P. (2006), ‘Lee-Carter mortality forecasting: A multi-country comparison of variants and extensions’, Demographic Research 15, 289–310.
  • Booth et al. (2002) Booth, H., Maindonald, J. and Smith, L. (2002), ‘Applying Lee-Carter under conditions of variable mortality decline’, Population Studies 56(3), 325–336.
  • Booth and Tickle (2008) Booth, H. and Tickle, L. (2008), ‘Mortality modelling and forecasting: A review of methods’, Annals of Actuarial Science 3(1-2), 3–43.
  • Cairns et al. (2006) Cairns, A. J. G., Blake, D. and Dowd, K. (2006), ‘A two-factor model for stochastic mortality with parameter uncertainty: Theory and calibration’, The Journal of Risk and Insurance 73(4), 687–718.
  • Cairns et al. (2009) Cairns, A. J. G., Blake, D., Dowd, K., Coughlan, G. D., Epstein, D., Ong, A. and Balevich, I. (2009), ‘A quantitative comparison of stochastic mortality models using data from England and Wales and the United States’, North American Actuarial Journal 13(1), 1–35.
  • Currie et al. (2004) Currie, I. D., Durban, M. and Eilers, P. H. C. (2004), ‘Smoothing and forecasting mortality rates’, Statistical Modelling 4(4), 279–298.
  • Girosi and King (2008) Girosi, F. and King, G. (2008), Demographic Forecasting, Princeton University Press, Princeton.
  • Gneiting (2011) Gneiting, T. (2011), ‘Making and evaluating point forecasts’, Journal of the American Statistical Association: Review Article 106(494), 746–762.
  • Gneiting and Katzfuss (2014) Gneiting, T. and Katzfuss, M. (2014), ‘Probabilistic forecasting’, Annual Review of Statistics and Its Application 1, 125–151.
  • Gneiting and Raftery (2007) Gneiting, T. and Raftery, A. E. (2007), ‘Strictly proper scoring rules, prediction and estimation’, Journal of the American Statistical Association: Review Article 102(477), 359–378.
  • Hörmann and Kidziński (2015) Hörmann, S. and Kidziński, L. (2015), ‘A note on estimation in Hilbertian linear models’, Scandinavian Journal of Statistics 42, 43–62.
  • Hörmann et al. (2015) Hörmann, S., Kidziński, L. and Hallin, M. (2015), ‘Dynamic functional principal components’, Journal of the Royal Statistical Society: Series B 77(2), 319–348.
  • Horváth and Kokoszka (2012) Horváth, L. and Kokoszka, P. (2012), Inference for Functional Data with Applications, Springer, New York.
  • Hyndman et al. (2011) Hyndman, R. J., Ahmed, R. A., Athanasopoulos, G. and Shang, H. L. (2011), ‘Optimal combination forecasts for hierarchical time series’, Computational Statistics and Data Analysis 55, 2579–2589.
  • Hyndman et al. (2013) Hyndman, R. J., Booth, H. and Yasmeen, F. (2013), ‘Coherent mortality forecasting: the product-ratio method with functional time series models’, Demography 50(1), 261–283.
  • Hyndman and Khandakar (2008) Hyndman, R. J. and Khandakar, Y. (2008), ‘Automatic time series forecasting: the forecast package for R’, Journal of Statistical Software 27(3).
  • Hyndman et al. (2016) Hyndman, R. J., Lee, A. and Wang, E. (2016), ‘Fast computation of reconciled forecasts for hierarchical and grouped time series’, Computational Statistics and Data Analysis 97, 16–32.
  • Hyndman and Ullah (2007) Hyndman, R. J. and Ullah, M. S. (2007), ‘Robust forecasting of mortality and fertility rates: A functional data approach’, Computational Statistics and Data Analysis 51(10), 4942–4956.
  • Japanese Mortality Database (2019) Japanese Mortality Database (2019), National Institute of Population and Social Security Research. Available at http://www.ipss.go.jp/p-toukei/JMD/index-en.asp (data downloaded on 14/August/2018).
  • Lee and Carter (1992) Lee, R. D. and Carter, L. R. (1992), ‘Modeling and forecasting U.S. mortality’, Journal of the American Statistical Association: Applications & Case Studies 87(419), 659–671.
  • Ramsay and Silverman (2005) Ramsay, J. O. and Silverman, B. (2005), Functional Data Analysis, 2nd edn, Springer, New York.
  • Renshaw and Haberman (2003) Renshaw, A. E. and Haberman, S. (2003), ‘Lee-Carter mortality forecasting: A parallel generalized linear modelling approach for England and Wales mortality projections’, Journal of the Royal Statistical Society: Series C 52(1), 119–137.
  • Rice and Shang (2017) Rice, G. and Shang, H. L. (2017), ‘A plug-in bandwidth selection procedure for long-run covariance estimation with stationary functional time series’, Journal of Time Series Analysis 38(4), 591–609.
  • Shang (2019) Shang, H. L. (2019), ‘Dynamic principal component regression: Application to age-specific mortality forecasting’, ASTIN Bulletin: The Journal of the IAA, in press.
  • Shang and Haberman (2017) Shang, H. L. and Haberman, S. (2017), ‘Grouped multivariate and functional time series forecasting: An application to annuity pricing’, Insurance: Mathematics & Economics 75, 166–179.
  • Tickle and Booth (2014) Tickle, L. and Booth, H. (2014), ‘The longevity prospects of Australian seniors: An evaluation of forecast method and outcome’, Asia-Pacific Journal of Risk and Insurance 8(2), 259–292.
  • Wickramasuriya et al. (2019) Wickramasuriya, S. L., Athanasopoulos, G. and Hyndman, R. J. (2019), ‘Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization’, Journal of the American Statistical Association: Theory and Methods 114(526), 804–819.
  • Wiśniowski et al. (2015) Wiśniowski, A., Smith, P. W. F., Bijak, J., Raymer, J. and Forster, J. (2015), ‘Bayesian population forecasting: Extending the Lee-Carter method’, Demography 52(3), 1035–1059.
  • Zivot and Wang (2006) Zivot, E. and Wang, J. (2006), Modeling Financial Time Series with S-PLUS, Springer, New York.