Machine Learning vs Statistical Methods for Time Series Forecasting: Size Matters

09/29/2019 ∙ by Vítor Cerqueira, et al. ∙ Universidade do Porto ∙ Dalhousie University

Time series forecasting is one of the most active research topics. Machine learning methods have been increasingly adopted to solve these predictive tasks. However, in a recent work, these methods were shown to systematically present lower predictive performance relative to simple statistical methods. In this work, we counter these results: we show that they are only valid under an extremely low sample size. Using a learning curve method, our results suggest that machine learning methods improve their relative predictive performance as the sample size grows. The code to reproduce the experiments is available at




1 Introduction

Machine learning is a subset of the field of artificial intelligence devoted to developing algorithms that automatically learn from data [Michalski et al.2013]. This area has been at the centre of important advances in science and technology, including problems involving forecasting in domains such as energy [Voyant et al.2017], healthcare [Lee and Mark2010], management [Carbonneau et al.2008], and climate [Xingjian et al.2015].

Despite gaining increasing attention, machine learning methods are still not well established in the forecasting literature, especially in the case of univariate time series. The forecasting literature is dominated by statistical methods based on linear processes, such as ARIMA [Chatfield2000] or exponential smoothing [Gardner Jr1985].

This matter is noticeable in the recent work by [Makridakis et al.2018], where the authors present evidence that traditional statistical methods systematically outperform machine learning methods for univariate time series forecasting. This includes algorithms such as the multi-layer perceptron or Gaussian processes. Most of the machine learning methods tested by the authors fail to outperform a simple seasonal random walk model. Makridakis and his colleagues conclude the paper by pointing out the need to find the reasons behind the poor predictive performance shown by machine learning forecasting models relative to statistical methods. We address this question in this paper.

Our working hypothesis is that the study presented by [Makridakis et al.2018] is biased in one crucial aspect: sample size. The authors draw their conclusions from a large set of 1045 monthly time series used in the well-known M3 competition [Makridakis and Hibon2000]. However, each of the time series is extremely small. The average, minimum, and maximum number of observations is 118, 66, and 144, respectively. We hypothesize that these datasets are too small for machine learning models to generalize properly. Machine learning methods typically assume a functional form that is more flexible than that of statistical methods. Hence, they are more prone to overfitting. When the size of the data is small, the sample may not be representative of the process generating the underlying time series. In such cases, machine learning methods model the spurious behavior represented in the sample.

In this context, our goal in this paper is to compare statistical methods with machine learning methods for time series forecasting, controlling for sample size.

1.1 Our Contribution

To test our hypothesis, we present an empirical analysis of the impact of sample size on the relative performance of different forecasting methods. We split these methods into two categories: machine learning methods and statistical methods. Machine learning methods are often based on statistical techniques, so this split is somewhat artificial. However, in the interest of consistency with previous work on this topic [Makridakis et al.2018], we use the term statistical to refer to methods developed in the statistical and forecasting literature.

In our empirical analysis, we use 90 univariate time series from several domains of application. The results of our experiments show that the conclusions drawn by [Makridakis et al.2018] are only valid when the sample size is small. That is, with a small sample size, statistical methods show better predictive performance than machine learning models. However, as the sample size grows, machine learning methods outperform the former.

The paper is organized as follows. In the next section, we provide a background to this paper. We formalize the time series forecasting task from a univariate perspective, and outline some state of the art methods to solve this problem. In Section 3, we present the experiments, which are discussed in Section 4. Finally, we conclude the paper in Section 5.

2 Background

2.1 Time Series Forecasting


Let y_1, y_2, …, y_t denote a time series. Forecasting denotes the process of estimating the future values of the series, y_{t+1}, …, y_{t+h}, where h denotes the forecasting horizon.

Quantitative approaches to time series forecasting are split into two categories: univariate and multivariate. Univariate methods refer to approaches that model future observations of a time series according to its past observations. Multivariate approaches extend univariate ones by considering additional time series that are used as explanatory variables. We will focus on univariate approaches in this work.

The forecasting horizon is another aspect to take into account when addressing time series prediction problems. Forecasting methods usually focus on one-step-ahead forecasting, i.e., the prediction of the next value of a time series (h = 1). Sometimes one is interested in predicting many steps into the future (h > 1). These tasks are often referred to as multi-step forecasting [Taieb et al.2012]. Higher forecasting horizons typically lead to a more difficult predictive task due to the increased uncertainty [Weigend2018].

2.2 Time Series Models

Several models for time series analysis have been proposed in the literature. These are not only devised to forecast the future behaviour of time series but also to help understand the underlying structure of the data. In this section, we outline a few of the most commonly used forecasting methods.

The naive method, also known as the random walk forecast, predicts the future values of the time series according to the last known observation:

ŷ_{t+1} = y_t

There is empirical evidence that this method presents a reasonable fit for financial time series data [Kilian and Taylor2003]. The seasonal naive model works similarly to the naive method. The difference is that the seasonal naive approach uses the previously known value from the same season of the intended forecast:

ŷ_{t+1} = y_{t+1−m}

where m denotes the seasonal period.
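As an illustration, both baselines take only a few lines of code. The sketch below is in Python rather than the R used in the experiments, and the function names are ours:

```python
import numpy as np

def naive_forecast(y, h=1):
    """Random walk forecast: repeat the last known observation h times."""
    return np.repeat(y[-1], h)

def seasonal_naive_forecast(y, m, h=1):
    """Repeat the most recent full seasonal cycle of period m."""
    last_cycle = y[-m:]
    # Tile the final cycle and truncate it to the requested horizon
    return np.tile(last_cycle, int(np.ceil(h / m)))[:h]

y = np.array([10, 12, 14, 11, 13, 15, 12, 14, 16], dtype=float)
print(naive_forecast(y, h=3))                # [16. 16. 16.]
print(seasonal_naive_forecast(y, m=3, h=3))  # [12. 14. 16.]
```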

The ARMA (Auto-Regressive Moving Average) is one of the most commonly used methods to model univariate time series. ARMA(p,q) combines two components: AR(p), and MA(q).

According to the AR(p) model, the value of a given time series, y_t, can be estimated using a linear combination of the p past observations, together with an error term and a constant term [Box et al.2015]:

y_t = c + Σ_{i=1}^{p} φ_i y_{t−i} + ε_t

where c is the constant term, ε_t is the error term, φ_1, …, φ_p denote the model parameters, and p represents the order of the model.

The AR(p) model uses the past values of the time series as explanatory variables. Similarly, the MA(q) model uses past errors as explanatory variables:

y_t = μ + Σ_{j=1}^{q} θ_j ε_{t−j} + ε_t

where μ denotes the mean of the observations, θ_1, …, θ_q represent the parameters of the model, and q denotes the order of the model. Essentially, the MA(q) method models the time series according to the random errors that occurred in the past q lags [Chatfield2000].

Effectively, the ARMA(p,q) model can be constructed by combining the AR(p) model with the MA(q) model:

y_t = c + Σ_{i=1}^{p} φ_i y_{t−i} + Σ_{j=1}^{q} θ_j ε_{t−j} + ε_t

ARMA(p,q) is defined for stationary data. However, many interesting phenomena in the real world exhibit a non-stationary structure, e.g. time series with trend and seasonality. The ARIMA(p,d,q) model overcomes this limitation by including an integration parameter of order d. Essentially, ARIMA works by applying d differencing transformations to the time series (until it becomes stationary) before applying ARMA(p,q).
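To illustrate the role of the integration order d, the following Python sketch (for illustration only; the example series is ours) shows that a series with a quadratic trend becomes constant, and hence stationary in mean, after second-order differencing:

```python
import numpy as np

t = np.arange(10, dtype=float)
y = 1.0 + 2.0 * t + 3.0 * t ** 2   # quadratic trend: requires d = 2

d1 = np.diff(y, n=1)               # first differences: still trend linearly
d2 = np.diff(y, n=2)               # second differences: constant
print(d2)                          # all values equal 6.0
```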

The exponential smoothing model [Gardner Jr1985] is similar to the AR(p) model in the sense that it models the future values of the time series using a linear combination of its past observations. In this case, however, exponential smoothing methods produce weighted averages of the past values, where the weights decay exponentially as the observations get older [Hyndman and Athanasopoulos2018]. For example, in the simple exponential smoothing method, the prediction for y_{t+1} can be defined as follows:

ŷ_{t+1} = α y_t + α(1−α) y_{t−1} + α(1−α)^2 y_{t−2} + …

where α (0 ≤ α ≤ 1) is the smoothing parameter and the terms α(1−α)^i represent the weights of the past observations. There are several types of exponential smoothing methods. For a complete overview, we refer to the work by [Hyndman and Athanasopoulos2018].
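For concreteness, simple exponential smoothing is usually computed in its recursive form, where a level estimate is updated at each step and unrolls into the exponentially weighted average above. A Python sketch (function name and data are ours, for illustration only):

```python
def ses_forecast(y, alpha):
    """Simple exponential smoothing: the one-step forecast is the
    recursively updated level, an exponentially weighted average
    of past observations."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

y = [10.0, 12.0, 11.0, 13.0]
print(ses_forecast(y, alpha=0.5))  # 12.0
```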

2.3 More on the AR(p) Model

From a machine learning perspective, time series forecasting is usually formalized as an auto-regressive task, i.e., based on an AR(p) model. This type of procedure projects a time series into a Euclidean space according to Takens' theorem regarding time delay embedding [Takens1981].

Using common terminology in the machine learning literature, a set of observations (X, y) is constructed [Michalski et al.2013]. In each observation, the value of y_i is modelled based on the p past values before it, y_{i−1}, …, y_{i−p}, where y represents the vector of values we want to predict and X represents the corresponding feature vectors. The objective is to construct a model y_i = f(X_i), where f denotes the regression function. In other words, the principle behind this approach is to model the conditional distribution of the i-th value of the time series given its p past values: P(y_i | y_{i−1}, …, y_{i−p}). In essence, this approach leads to a multiple regression problem. The temporal dependency is modelled by having past observations as explanatory variables. Following this formulation, we can resort to any algorithm from the regression toolbox to solve the predictive task.
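A minimal Python sketch of this time delay embedding (the function name and example values are ours, not from the paper):

```python
import numpy as np

def time_delay_embedding(y, p):
    """Build (X, y) pairs: each target value is paired with the
    p observations that precede it."""
    y = np.asarray(y, dtype=float)
    X = np.array([y[i - p:i] for i in range(p, len(y))])
    target = y[p:]
    return X, target

X, target = time_delay_embedding([1, 2, 3, 4, 5, 6], p=3)
print(X)       # rows: [1 2 3], [2 3 4], [3 4 5]
print(target)  # [4. 5. 6.]
```

Any off-the-shelf regression algorithm can then be fit on the resulting (X, target) pairs.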

2.4 Related Work

Machine learning methods have been increasingly used to tackle univariate time series forecasting problems. However, there is a small amount of work comparing their predictive performance relative to traditional statistical methods. [Hill et al.1996] compared a multi-layer perceptron with statistical methods. The neural network is shown to perform significantly better than the statistical alternatives.

[Ahmed et al.2010] present an analysis of different machine learning methods for this task using time series from the M3 competition [Makridakis and Hibon2000]. Their results suggest that the multi-layer perceptron and Gaussian process methods show the best predictive performance. However, the authors do not compare these methods with state of the art approaches, such as ARIMA or exponential smoothing. In a different case study, [Cerqueira et al.2019] compare different forecasting models, including statistical methods and machine learning methods. In their analysis, the latter approaches present a better average rank (better predictive performance) relative to the former. Particularly, a rule-based regression model, which is a variant of the model tree by Quinlan, presents the best average rank across 62 time series. [Makridakis et al.2018] extend the study by [Ahmed et al.2010] by including several statistical methods in their experimental setup. Their results suggest that most of the statistical methods systematically outperform machine learning methods for univariate time series forecasting. This effect is noticeable for one-step and multi-step forecasting. The machine learning methods the authors analyze include different types of neural networks (e.g. a long short-term memory, multi-layer perceptron), the nearest neighbours method, a decision tree, support vector regression, and Gaussian processes. On the other hand, the statistical methods include ARIMA, naive, exponential smoothing, and theta, among others.

Despite the breadth of their comparative study, we hypothesize that the experimental setup designed by [Makridakis et al.2018] is biased in terms of sample size. They use a large set of 1045 time series from the M3 competition. However, each one of these time series is extremely small in size. The average number of observations is 116. Our working hypothesis is that, in these conditions, machine learning methods are unable to learn an adequate regression function for generalization. In the next section, we present a new study comparing traditional statistical methods with machine learning methods. Our objective is to test the hypothesis outlined above and check whether sample size has any effect on the relative predictive performance of different types of forecasting methods.

3 Empirical Experiments

Our goal in this paper is to address the following research question:

  • Is sample size important in the relative predictive performance of forecasting methods?

We are interested in comparing statistical methods with machine learning methods for univariate time series forecasting tasks. Within this predictive task we will analyse the impact of different horizons (one-step-ahead and multi-step-ahead forecasting).

3.1 Forecasting Methods

In this section, we outline the algorithms used for forecasting. We include five statistical methods and five machine learning algorithms.

3.1.1 Statistical Methods

The statistical methods used in the experiments are the following:

  • ARIMA: the Auto-Regressive Integrated Moving Average model. We use the auto.arima implementation provided in the forecast R package [Hyndman et al.2014], which controls for several time series components, including trend and seasonality;

  • Naive2: a seasonal random walk forecasting benchmark, implemented using the snaive function available in the forecast R package [Hyndman et al.2014];

  • Theta: the Theta method by [Assimakopoulos and Nikolopoulos2000], which is equivalent to simple exponential smoothing with drift;

  • ETS: the exponential smoothing state-space model typically used for forecasting [Gardner Jr1985];

  • Tbats: an exponential smoothing state-space model with Box-Cox transformation, ARMA errors, and trend and seasonal components [De Livera et al.2011].

To apply these models, we use the implementations available in the forecast R package [Hyndman et al.2014]. This package automatically tunes the methods ETS, Tbats, and ARIMA to an optimal parameter setting.

3.1.2 Machine Learning Methods

In turn, we applied the AR(p) model with the following five machine learning algorithms:

  • RBR: a rule-based regression model from the Cubist R package [Kuhn et al.2014], which is a variant of the Model Tree [Quinlan1993];

  • RF: the Random Forest method, an ensemble of decision trees [Breiman2001]. We use the implementation from the ranger R package [Wright2015];

  • GP: Gaussian Process regression. We use the implementation available in the kernlab R package [Karatzoglou et al.2004];

  • MARS: the multivariate adaptive regression splines method [Friedman and others1991], using the earth R package implementation [Milborrow2016];

  • GLM: generalized linear model regression [McCullagh2019] with a Gaussian distribution and varying penalty mixing. This model is implemented using the glmnet R package [Friedman et al.2010].

These learning algorithms have been shown to present a competitive predictive performance with state of the art forecasting models [Cerqueira et al.2019]. Other widely used machine learning methods could have been included, for example extreme gradient boosting or recurrent neural networks. The latter have been shown to be a good fit for sequential data, which is the case of time series. Finally, we optimized the parameters of these five models using a grid search, which was carried out using validation data. The list of parameter values tested is described in Table 1.

ID     Algorithm                  Parameter       Values
MARS   Multivar. A. R. Splines    Degree          {1, 2, 3}
                                  No. terms       {2, 5, 7, 15}
                                  Method          {Forward, Backward}
RF     Random forest              No. trees       {50, 100, 250, 500}
RBR    Rule-based regr.           No. iterations  {1, 5, 10, 25, 50}
GLM    Generalised Linear Regr.   Penalty mixing  {0, 0.25, 0.5, 0.75, 1}
GP     Gaussian Processes         Kernel          {Linear, RBF, Polynomial, Laplace}
                                  Tolerance       {0.001, 0.01}
Table 1: Summary of the learning algorithms

3.2 Datasets and Experimental Setup

We centre our study on univariate time series. We use a set of time series from the benchmark database tsdl [Hyndman and Yang2019]. From this database, we selected all the univariate time series with at least 1000 observations and no missing values. This query returned 55 time series. These show varying sampling frequencies (daily, monthly, etc.) and come from different domains of application (e.g. healthcare, physics, economics). For a complete description of these time series, we refer to the database source [Hyndman and Yang2019]. We also included 35 time series used in [Cerqueira et al.2019]. Essentially, from the set of 62 used by the authors, we selected those with at least 1000 observations that were not originally from the tsdl database (since these were already retrieved as described above). We refer to the work in [Cerqueira et al.2019] for a description of these time series. In summary, our analysis is based on 90 time series. We truncated the data at 1000 observations so that all the time series have the same size.

For the machine learning methods, we set the embedding size (p) to 10. Notwithstanding, this parameter can be optimized using, for example, the False Nearest Neighbours method [Kennel et al.1992]. Regarding the statistical methods, and where applicable, we set p according to the respective implementation in the forecast R package [Hyndman et al.2014].

Regarding time series pre-processing, we follow the procedure by [Makridakis et al.2018]. First, we apply the Box-Cox transformation to the data to stabilize the variance. The transformation parameter is optimized according to [Guerrero1993]. Second, we account for seasonality. We consider a time series to be seasonal according to the test by [Wang et al.2006]. If it is, we perform a multiplicative decomposition to remove seasonality. Similarly to [Makridakis et al.2018], this process is skipped for ARIMA and ETS, as they have their own automatic methods for coping with seasonality. Finally, we apply the Cox-Stuart test [Cox and Stuart1955] to determine whether the trend component should be removed using first differences. This process was applied to both types of methods.
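The variance-stabilization and differencing steps can be sketched as follows. This is an illustrative Python version (the experiments use R, where the Box-Cox parameter is optimized with Guerrero's method; here it is fixed by hand):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform for positive data; reduces to log when lam == 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([1.0, 2.0, 4.0, 8.0])   # exponential growth
stabilized = box_cox(y, lam=0)       # log transform
detrended = np.diff(stabilized)      # first differences remove the trend
print(detrended)                     # constant: log(2) at every step
```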

3.3 Methodology

In terms of estimation methodology, [Makridakis et al.2018] perform a simple holdout. The initial observations are used to fit the models. Then, the models are used to forecast the subsequent 18 observations.

We also set the forecasting horizon to 18 observations. However, since our goal is to control for sample size, we employ a different estimation procedure. Particularly, we use a prequential procedure to build a learning curve. A learning curve denotes a set of performance scores of a predictive model, in which the set is ordered as the sample size grows [Provost et al.1999]. Prequential denotes an evaluation procedure in which an observation is first used to test a predictive model [Dawid1984]. Then, this observation becomes part of the training set and is used to update the respective predictive model. Prequential is a commonly used evaluation methodology in data stream mining [Gama2010].

We apply prequential in a growing fashion, which means that the training set grows as observations become available after testing. An alternative to this setting is a sliding approach, where older observations are discarded as new ones become available. We start applying the prequential procedure at the 18th observation to match the forecasting horizon. In other words, the first iteration of prequential trains the models using the initial 18 observations of a time series. These models are then used to forecast future observations. We elaborate on this approach in the next subsections.
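The growing-window prequential loop can be sketched in Python as follows; the naive model and series here are hypothetical stand-ins for the actual forecasting methods:

```python
import numpy as np

def prequential_errors(y, fit_predict, start=18):
    """Growing-window prequential: at each step, fit on y[:i],
    forecast y[i], record the absolute error, then absorb y[i]
    into the training set."""
    errors = []
    for i in range(start, len(y)):
        pred = fit_predict(y[:i])
        errors.append(abs(pred - y[i]))
    return errors

naive = lambda train: train[-1]      # stand-in one-step model
y = np.arange(25, dtype=float)
errs = prequential_errors(y, naive, start=18)
print(errs)  # 7 errors, each equal to 1.0 on this linear series
```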

Following both [Hyndman and Koehler2006] and [Makridakis et al.2018], we use the mean absolute scaled error (MASE) as the evaluation metric. We also investigated the use of the symmetric mean absolute percentage error (SMAPE), but found a large number of division-by-zero problems. Notwithstanding, in our main analysis, we use ranks to compare the different models. A rank of 1 on a particular time series means that the method was the best performing model (lowest MASE) on that time series. We use ranks to compare the different forecasting approaches because they are non-parametric and hence robust to outliers.

3.4 Results for one-step-ahead forecasting

The first set of experiments was designed to evaluate the impact of growing the training set size on the ability of the models to forecast the next value of the time series. To emphasize how prequential was applied, we used the following iterative experimental procedure. In the first iteration, we learned each model using the first 18 observations of the time series. These models were then used to make a forecast for the 19th observation. The models were ranked according to the error of these predictions. These ranks were then averaged over the 90 time series. In the second iteration, we grew the training set to 19 observations and repeated the forecasting exercise, this time predicting the 20th value. The process was repeated until we covered all available observations (i.e., 1000 per series).

Figure 1 shows the average rank (over the 90 time series) of each model as we grow the training set according to the procedure we have just described. In particular, this plot represents a learning curve for each forecasting model, where the models are colored by type (machine learning or statistical). The x-axis denotes the training sample size, i.e., how much data is used to fit the forecasting models. The y-axis represents the average rank of each model across all 90 time series, computed at each testing observation of the prequential procedure. For visualization purposes, we smooth the average rank of each model using a moving average over 50 observations. The two bold lines represent the smoothed average rank across each type of method according to the LOESS method. Finally, the vertical black line at point 144 represents the maximum sample size used in the experiments by [Makridakis et al.2018].

Figure 1: Learning curve using the average rank of each forecasting method, smoothed using a moving average of 50 periods. Results obtained for one-step ahead forecasting.
Figure 2: Learning curve using the average rank of each forecasting method (Naive2 excluded), smoothed using a moving average of 50 periods. Results obtained for one-step ahead forecasting.
Figure 3: Learning curve using the MASE of each forecasting method, smoothed using a moving average of 50 periods. Results obtained for one-step ahead forecasting.

The results depicted in this figure show a clear tendency: when only a few observations are available, the statistical methods present better performance. However, as the sample size grows, machine learning methods outperform them.

Figure 2 presents a similar analysis to Figure 1. The difference is that we now exclude the statistical method Naive2, whose poor performance biased the results in favor of machine learning methods. Despite this, in the experiments reported by [Makridakis et al.2018], this method outperforms many machine learning methods for one-step-ahead forecasting.

Our results confirm the conclusions drawn from the experiments presented by [Makridakis et al.2018] (the vertical black line in our figures): statistical models outperform machine learning ones when we only consider training sets of up to 144 observations, as used in that paper.

Finally, Figure 3 presents a similar analysis as before using the actual MASE values. In relative terms between both types of methods, the results are consistent with the average rank analysis. This figure suggests that both types of methods improve their MASE score as the training sample size increases.

3.4.1 Results by Individual Model

The average rank graphics depicted in Figure 1 show a score for each model at each testing observation of the prequential procedure. We average this value across the testing observations to determine the average of the average rank of each individual model. The results of this analysis are presented in Table 2. The method RBR presents the best score.

5.11 5.26 5.39 5.39 5.39 5.45 5.47 5.54 5.73 6.28
Table 2: Average of the average rank of each model across each testing observation in the prequential procedure (one-step ahead forecasting).

3.5 Results for multi-step-ahead forecasting

In order to evaluate the multi-step ahead forecasting scenario, we used a similar approach to the one-step ahead setting. The difference is that, instead of predicting only the next value, we now predict the next 18 observations. To be precise, in the first iteration of the prequential procedure, each model is fit using the first 18 observations of the time series. These models were then used to make a forecast for the next 18 observations (from the 19-th to the 36-th). As before, the models were ranked according to the error of these predictions (quantified by MASE). On the second iteration, we grow the training set to be 19 observations, and the forecasting exercise is repeated.

For machine learning models, we focus on an iterated (also known as recursive) approach to multi-step forecasting [Taieb et al.2012]. Initially, a model is built for one-step-ahead forecasting. To forecast observations beyond one point (h > 1), the model uses its own predictions as input variables in an iterative way. We refer to the work by [Taieb et al.2012] for an overview and analysis of different approaches to multi-step forecasting.
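The iterated strategy can be sketched as follows; the one-step model here is a hypothetical stand-in (a window mean) rather than any of the learners used in the experiments:

```python
def recursive_forecast(one_step_model, last_window, h):
    """Iterated multi-step forecasting: feed each one-step prediction
    back into the feature window to produce the next step."""
    window = list(last_window)
    preds = []
    for _ in range(h):
        nxt = one_step_model(window)
        preds.append(nxt)
        window = window[1:] + [nxt]   # slide the window forward
    return preds

mean_model = lambda w: sum(w) / len(w)   # stand-in one-step learner
print(recursive_forecast(mean_model, [2.0, 4.0], h=3))  # [3.0, 3.5, 3.25]
```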

Figure 4: Learning curve using the smoothed average rank of each forecasting method for h = 18. Results for multi-step ahead forecasting.
Figure 5: Learning curve using the smoothed average rank of each forecasting method for h = 18, Naive2 excluded.
Figure 6: Learning curve using the MASE of each forecasting method for h = 18, smoothed using a moving average of 50 periods.

Figures 4 and 5 show the average (over the 90 time series) rank of each model as we grow the training set according to the procedure we have just described. The figures are similar, but the results of the Naive2 method were excluded from the second one using the same rationale as before.

Analyzing the results by model type, the main conclusion in this scenario is similar to the one obtained in the one-step ahead setting: according to the smoothed lines, statistical methods are only better than machine learning ones when the sample size is small. In this setting, however, the two types of methods seem to even out as the training sample size grows (provided we ignore Naive2). According to Figure 6, the MASE scores of the models in this scenario are considerably higher than those of the one-step ahead forecasting setting. This is expected given the increased uncertainty.

4.77 4.95 4.97 5.11 5.24 5.43 5.69 5.74 6.12 6.98
Table 3: Average of the average rank of each model across each testing observation in the prequential procedure (multi-step ahead forecasting).

Table 3 presents the results by individual model in a similar manner to Table 2 but for multi-step forecasting. In this setting, the model with best score is ARIMA.

3.6 Computational Complexity

In the interest of completeness, we also include an analysis of the computational complexity of each method. We evaluate this according to the computational time spent by a model, which we define as the time a model takes to complete the prequential procedure outlined in Section 3.3. Similarly to [Makridakis et al.2018], we define the computational complexity (CC) of a model m as follows:

CC_m = T_m / T_Naive2

where T_m denotes the computational time of model m. Essentially, we normalize the computational time of each method by the computational time of the Naive2 method.

The results are shown in Figure 7 as a bar plot. The bars in the graphic are log scaled. The original value before taking the logarithm is shown within each bar.

Figure 7: Computational complexity of each method relative to the Naive2 benchmark model (log scaled)

From the figure, the method with the worst CC is ARIMA, followed by Tbats. These results are driven by the fact that the implementations of the statistical methods (except Naive2 and Theta) include comprehensive automatic parameter optimization in the forecast R package. The optimization of the machine learning methods carried out in our experiments was not as exhaustive. Therefore, the CC of these methods is not as high as that of ARIMA and Tbats.

4 Discussion

In the previous section, we carried out a simple experiment to show that sample size matters when comparing machine learning methods with statistical methods for univariate time series forecasting. In this section, we discuss these results.

4.1 Experimental Setup

We applied the statistical methods using the widely used R package forecast [Hyndman et al.2014]. This package contains implementations that search for the best parameter settings of the models, and is regarded as a reference software package for automated time series forecasting [Taylor and Letham2018]. Regarding the machine learning models, we focused on building AR(p) models using these learning algorithms. Other approaches could be analyzed, for example coupling an AR(p) model with a recurrent architecture [Dietterich2002] or with summary statistics.

Our choice of machine learning algorithms was driven by a recent study [Cerqueira et al.2019], but other learning algorithms could be applied. For example, we did not apply any neural networks, which have been successfully applied to sequential data problems [Chung et al.2014]. Our goal was to show that machine learning in general is a valid approach to time series forecasting, even without an extensive model selection procedure or a parameter tuning process as extensive as the one used by the forecast package.

4.2 One-step vs Multi-step Forecasting

The results obtained from the multi-step forecasting scenario are different from those obtained from the one-step ahead scenario. Particularly, machine learning methods do not gain the upper hand in terms of predictive performance when we increase the sample size.

Some possible reasons are the following: since multi-step forecasting represents a task with an increased uncertainty [Weigend2018], machine learning methods may need more data to cope with this issue – in future work, we will continue the learning curve to assess this possibility; a different approach to multi-step forecasting may be better – we focused on an iterative approach, but other possibilities are available in the literature [Taieb et al.2012].

4.3 On Sample Size and the No Free Lunch Theorem

Our main claim in this work is that sample size matters when comparing different forecasting models. We backed this claim using 90 time series comprised of 1000 observations each. We do not claim that this number is the optimal sample size for fitting forecasting models. It is difficult to tell a priori the amount of data necessary to solve a predictive task; it depends on many factors, such as the complexity of the problem or the complexity of the learning algorithm.

We believe that the work by [Makridakis et al.2018] is biased towards small, low-frequency datasets. Naturally, the evidence that machine learning models are unable to generalize from small datasets can be regarded as a limitation relative to traditional statistical ones. However, machine learning can make an important impact on larger time series. Technological advances, such as the widespread adoption of sensors, have enabled the collection of large, high-frequency time series. This has led to new forecasting problems, where machine learning models have been shown to be useful [Voyant et al.2017, Xingjian et al.2015].

Finally, we also remark that, even with a large amount of data, it is not obvious that a machine learning method would outperform a statistical method. This reasoning is in accordance with the No Free Lunch theorem [Wolpert1996], which states that no learning algorithm is the most appropriate in all scenarios. The same rationale can be applied to small datasets.

5 Final Remarks

Makridakis et al. claim that machine learning practitioners “working on forecasting applications need to do something to improve the accuracy of their methods” [Makridakis et al.2018]. We claim that these practitioners should start by collecting as much data as possible. Moreover, it is also advisable for practitioners to include both types of forecasting methods in their studies to enrich the experimental setup.
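The recommendation to include both types of methods can be sketched as follows: evaluate a simple statistical baseline (here, a seasonal naive forecast) alongside a machine learning model on the same holdout, rather than reporting either in isolation. This is a hedged Python illustration; the season length, split point, and model are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
t = np.arange(600)
series = np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.1, size=600)

season, k = 12, 12
train, test = series[:500], series[500:]

# statistical baseline: repeat the last observed seasonal cycle
naive_preds = np.array([train[len(train) - season + (i % season)]
                        for i in range(len(test))])

# machine learning model on lagged inputs, one-step ahead on the test tail
X = np.array([series[i:i + k] for i in range(len(series) - k)])
y = series[k:]                                  # y[488] corresponds to series[500]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:488], y[:488])
ml_preds = model.predict(X[488:])

mae_naive = np.mean(np.abs(naive_preds - test))
mae_ml = np.mean(np.abs(ml_preds - test))
```

Reporting both error figures side by side, on several sample sizes, is the kind of enriched experimental setup we advocate.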

The code to reproduce the experiments carried out in this paper can be found at


The work of V. Cerqueira was financially supported by Fundação para a Ciência e a Tecnologia (FCT), the Portuguese funding agency that supports science, technology, and innovation, through the Ph.D. grant SFRH/BD/135705/2018. The work of L. Torgo was undertaken, in part, thanks to funding from the Canada Research Chairs program.


  • [Ahmed et al.2010] Nesreen K Ahmed, Amir F Atiya, Neamat El Gayar, and Hisham El-Shishiny. An empirical comparison of machine learning models for time series forecasting. Econometric Reviews, 29(5-6):594–621, 2010.
  • [Assimakopoulos and Nikolopoulos2000] Vassilis Assimakopoulos and Konstantinos Nikolopoulos. The theta model: a decomposition approach to forecasting. International journal of forecasting, 16(4):521–530, 2000.
  • [Box et al.2015] George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. Time series analysis: forecasting and control. John Wiley & Sons, 2015.
  • [Breiman2001] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
  • [Carbonneau et al.2008] Real Carbonneau, Kevin Laframboise, and Rustam Vahidov. Application of machine learning techniques for supply chain demand forecasting. European Journal of Operational Research, 184(3):1140–1154, 2008.
  • [Cerqueira et al.2019] Vitor Cerqueira, Luís Torgo, Fábio Pinto, and Carlos Soares. Arbitrage of forecasting experts. Machine Learning, 108(6):913–944, 2019.
  • [Chatfield2000] Chris Chatfield. Time-series forecasting. CRC Press, 2000.
  • [Chung et al.2014] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
  • [Cox and Stuart1955] David Roxbee Cox and Alan Stuart. Some quick sign tests for trend in location and dispersion. Biometrika, 42(1/2):80–95, 1955.
  • [Dawid1984] A Philip Dawid. Present position and potential developments: Some personal views statistical theory the prequential approach. Journal of the Royal Statistical Society: Series A (General), 147(2):278–290, 1984.
  • [De Livera et al.2011] Alysha M De Livera, Rob J Hyndman, and Ralph D Snyder. Forecasting time series with complex seasonal patterns using exponential smoothing. Journal of the American Statistical Association, 106(496):1513–1527, 2011.
  • [Dietterich2002] Thomas G Dietterich. Machine learning for sequential data: A review. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pages 15–30. Springer, 2002.
  • [Friedman and others1991] Jerome H Friedman et al. Multivariate adaptive regression splines. The annals of statistics, 19(1):1–67, 1991.
  • [Friedman et al.2010] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
  • [Gama2010] Joao Gama. Knowledge discovery from data streams. Chapman and Hall/CRC, 2010.
  • [Gardner Jr1985] Everette S Gardner Jr. Exponential smoothing: The state of the art. Journal of forecasting, 4(1):1–28, 1985.
  • [Guerrero1993] Víctor M Guerrero. Time-series analysis supported by power transformations. Journal of Forecasting, 12(1):37–48, 1993.
  • [Hill et al.1996] Tim Hill, Marcus O’Connor, and William Remus. Neural network models for time series forecasts. Management science, 42(7):1082–1092, 1996.
  • [Hyndman and Athanasopoulos2018] Rob J Hyndman and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2018.
  • [Hyndman and Koehler2006] Rob J Hyndman and Anne B Koehler. Another look at measures of forecast accuracy. International journal of forecasting, 22(4):679–688, 2006.
  • [Hyndman and Yang2019] Rob Hyndman and Yangzhuoran Yang. tsdl: Time Series Data Library, 2019.
  • [Hyndman et al.2014] Rob J Hyndman, with contributions from George Athanasopoulos, Slava Razbash, Drew Schmidt, Zhenyu Zhou, Yousaf Khan, Christoph Bergmeir, and Earo Wang. forecast: Forecasting functions for time series and linear models, 2014. R package version 5.6.
  • [Karatzoglou et al.2004] Alexandros Karatzoglou, Alex Smola, Kurt Hornik, and Achim Zeileis. kernlab – an S4 package for kernel methods in R. Journal of Statistical Software, 11(9):1–20, 2004.
  • [Kennel et al.1992] Matthew B Kennel, Reggie Brown, and Henry DI Abarbanel. Determining embedding dimension for phase-space reconstruction using a geometrical construction. Physical review A, 45(6):3403, 1992.
  • [Kilian and Taylor2003] Lutz Kilian and Mark P Taylor. Why is it so difficult to beat the random walk forecast of exchange rates? Journal of International Economics, 60(1):85–107, 2003.
  • [Kuhn et al.2014] Max Kuhn, Steve Weston, Chris Keefer, and Nathan Coulter. C code for Cubist by Ross Quinlan. Cubist: Rule- and Instance-Based Regression Modeling, 2014. R package version 0.0.18.
  • [Lee and Mark2010] Joon Lee and Roger G Mark. An investigation of patterns in hemodynamic data indicative of impending hypotension in intensive care. Biomedical engineering online, 9(1):62, 2010.
  • [Makridakis and Hibon2000] Spyros Makridakis and Michele Hibon. The m3-competition: results, conclusions and implications. International journal of forecasting, 16(4):451–476, 2000.
  • [Makridakis et al.2018] Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. Statistical and machine learning forecasting methods: Concerns and ways forward. PloS one, 13(3):e0194889, 2018.
  • [McCullagh2019] Peter McCullagh. Generalized linear models. Routledge, 2019.
  • [Michalski et al.2013] Ryszard S Michalski, Jaime G Carbonell, and Tom M Mitchell. Machine learning: An artificial intelligence approach. Springer Science & Business Media, 2013.
  • [Milborrow2016] S. Milborrow. earth: Multivariate Adaptive Regression Splines, 2016. R package version 4.4.4.
  • [Provost et al.1999] Foster Provost, David Jensen, and Tim Oates. Efficient progressive sampling. In Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 23–32. ACM, 1999.
  • [Quinlan1993] J Ross Quinlan. Combining instance-based and model-based learning. In Proceedings of the tenth international conference on machine learning, pages 236–243, 1993.
  • [Taieb et al.2012] Souhaib Ben Taieb, Gianluca Bontempi, Amir F Atiya, and Antti Sorjamaa. A review and comparison of strategies for multi-step ahead time series forecasting based on the nn5 forecasting competition. Expert systems with applications, 39(8):7067–7083, 2012.
  • [Takens1981] Floris Takens. Dynamical Systems and Turbulence, Warwick 1980: Proceedings of a Symposium Held at the University of Warwick 1979/80, chapter Detecting strange attractors in turbulence, pages 366–381. Springer Berlin Heidelberg, Berlin, Heidelberg, 1981.
  • [Taylor and Letham2018] Sean J Taylor and Benjamin Letham. Forecasting at scale. The American Statistician, 72(1):37–45, 2018.
  • [Voyant et al.2017] Cyril Voyant, Gilles Notton, Soteris Kalogirou, Marie-Laure Nivet, Christophe Paoli, Fabrice Motte, and Alexis Fouilloy. Machine learning methods for solar radiation forecasting: A review. Renewable Energy, 105:569–582, 2017.
  • [Wang et al.2006] Xiaozhe Wang, Kate Smith, and Rob Hyndman. Characteristic-based clustering for time series data. Data mining and knowledge Discovery, 13(3):335–364, 2006.
  • [Weigend2018] Andreas S Weigend. Time series prediction: forecasting the future and understanding the past. Routledge, 2018.
  • [Wolpert1996] David H Wolpert. The lack of a priori distinctions between learning algorithms. Neural computation, 8(7):1341–1390, 1996.
  • [Wright2015] Marvin N. Wright. ranger: A Fast Implementation of Random Forests, 2015. R package.
  • [Xingjian et al.2015] SHI Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems, pages 802–810, 2015.