NeuralProphet: Explainable Forecasting at Scale

11/29/2021
by Oskar Triebe, et al.

We introduce NeuralProphet, a successor to Facebook Prophet, which set an industry standard for explainable, scalable, and user-friendly forecasting frameworks. With the proliferation of time series data, explainable forecasting remains a challenging task for business and operational decision making. Hybrid solutions are needed to bridge the gap between interpretable classical methods and scalable deep learning models. We view Prophet as a precursor to such a solution. However, Prophet lacks local context, which is essential for forecasting the near-term future, and is challenging to extend due to its Stan backend. NeuralProphet is a hybrid forecasting framework based on PyTorch and trained with standard deep learning methods, making it easy for developers to extend the framework. Local context is introduced with auto-regression and covariate modules, which can be configured as classical linear regression or as Neural Networks. Otherwise, NeuralProphet retains the design philosophy of Prophet and provides the same basic model components. Our results demonstrate that NeuralProphet produces interpretable forecast components of equivalent or superior quality to Prophet on a set of generated time series. NeuralProphet outperforms Prophet on a diverse collection of real-world datasets. For short to medium-term forecasts, NeuralProphet improves forecast accuracy by 55 to 92 percent.


1 Introduction

1.1 Background

Time series data is prominent in most industrial sectors. Though extensively studied in theory and applications such as economics, practical forecasting in industrial applications has not received widespread attention until recently.

Since many companies have significantly improved their data collection to the extent where data availability now exceeds their data analytic capabilities, how to process and forecast industrial time series is quickly becoming an important topic. Often the person tasked with solving a particular issue is not a time series expert but rather a practitioner forecaster.

Accurate time series forecasting is essential for decision-making processes, especially in infrastructure planning, budget allocation and supply chain management. Over-forecasts as well as under-forecasts can both be costly. In applications where forecasts inform business or operational decisions with potentially fatal consequences, it is often a requirement for the model to be explainable. A common approach is to break the forecast down into components which are individually interpretable. This allows for the forecast to be reviewed by a domain expert who may make adjustments when appropriate, a procedure known as human-in-the-loop.

Classical Time Series Methods

The field of forecasting was traditionally dominated by statistical techniques. This was most notable on many of the early forecasting competitions such as NN3, NN5 and M3 (Makridakis and Hibon, 2000; Crone, 2008). Classical models such as Auto-Regressive Integrated Moving Average (ARIMA) and Exponential Smoothing (ETS) have been well studied and provide interpretable components. However, their restrictive assumptions and parametric nature limit their performance in real world applications. A skillful forecasting expert can transform data and combine algorithms to satisfy specific conditions for better performance. This requires deep domain knowledge in the application itself and in classical time series modelling.

Machine Learning Methods

Machine Learning (ML) based models consistently performed poorly in the early forecasting competitions. Neural Networks (NN) were once even labeled as not being competitive for forecasting (Hyndman, 2018). They were further criticized in the literature for their black-box nature, as in Makridakis et al. (2018). However, with the explosion in availability of large-scale time series, NN-based data-driven techniques regained their popularity in forecasting. The amount of data available is now sufficient for ML and Deep Learning (DL) techniques to train well. Nevertheless, explainability of these models remains mostly an open research problem in the field of forecasting. Further, they often require substantial engineering efforts to preprocess data and fine-tune hyperparameters.

Consequently, most non-expert forecasting practitioners involved in different industries do not use the most accurate state-of-the-art models for their particular task. Rather, they are more interested in finding a reasonably accurate model, subject to explainability, scalability and minimal tuning. These are commonly seen as requirements for forecasting applications in industry.

Practitioners tend to opt for traditional statistical techniques which are user-friendly and interpretable in functionality, despite their poor forecasting performance. Hence, new methods need to be developed which can bridge this gap between classical time series modelling and ML-based methods.

Hybrid Methods

A precursor to hybrid methods, Facebook Prophet (Taylor and Letham, 2017), provides an interpretable model which scales to many forecasting applications. The forecasting framework provides full automation to novices and fine-tuning capabilities to domain experts. Prophet remains one of the first forecasting packages which a data scientist will use, and also one of the few which they often continue to use as their skills grow. It has made classic time series forecasting approachable and useful to a wide demographic.

Yet, its limitations around key features, such as the lack of local context, and its limited extensibility have presented challenges for users. The lack of local context, which is essential for forecasting the near-term future, has restricted the usefulness of Prophet in industrial applications. Because Prophet was built on top of Stan (Carpenter et al., 2017), a probabilistic programming language, it is difficult to extend the original forecasting library.

Our Contribution

Inspired by Prophet’s impact on the forecasting community, it is our objective to continue the democratization of forecasting tooling, by making hybrid models as accessible as Prophet made classic models. As a step towards truly hybrid methods, we introduce NeuralProphet, a user-friendly and interpretable forecasting tool built on PyTorch (Paszke et al., 2019). NeuralProphet fuses the classic time series components introduced by the Prophet package with NN modules into a hybrid model, enabling it to fit non-linear dynamics. Two such NN components are the auto-regression and covariate modules. By using PyTorch as the backend, NeuralProphet can be updated with the latest innovations in deep learning. We provide a convenient tool for non-experts in forecasting by pre-selecting strong defaults and automating many of the modelling decisions. Advanced users may also find the package useful as they can incorporate domain knowledge with many customization options. Overall, NeuralProphet abstracts a lot of forecasting domain knowledge and incorporates ML best practices so users can focus on the task at hand.

In this paper

We introduce the NeuralProphet model and its components in detail and compare it to Prophet. The methods section describes the experimental setup and evaluation. The synthetic data results quantify the accuracy of the decomposed interpretable forecast components. The real-world benchmarks demonstrate the realistic out-of-the-box forecasting performance on a diverse set of univariate time series. Finally, we summarize the findings and provide suggestions on which model to use depending on the task at hand.

2 The NeuralProphet Model

2.1 Model Components

A core concept of the NeuralProphet model is its modular composability. The model is composed of modules which each contribute an additive component to the forecast. Most components can also be configured to be scaled by the trend for a multiplicative effect. Each module has its individual inputs and modelling processes. However, all modules must produce $h$ outputs, where $h$ defines the number of steps to be forecasted into the future at once. These are added up as the predicted values $\hat{y}_{t+1}, \dots, \hat{y}_{t+h}$ for the future values of the time series. If the model is only time-dependent, an arbitrary number of forecasts can be produced. In the following descriptions, that special case will be treated as mathematically equivalent to a one-step-ahead forecast with $h = 1$.

$\hat{y}_t = T(t) + S(t) + E(t) + F(t) + A(t) + L(t)$    (1)

where,

$T(t)$ = trend at time $t$
$S(t)$ = seasonal effects at time $t$
$E(t)$ = event and holiday effects at time $t$
$F(t)$ = regression effects at time $t$ for future-known exogenous variables
$A(t)$ = auto-regression effects at time $t$ based on past observations
$L(t)$ = regression effects at time $t$ for lagged observations of exogenous variables

All model component modules can be individually configured and combined to compose the model. If all modules are switched off, only a static offset parameter is fitted as the trend component. By default, only the trend and seasonality modules are activated. The full model is summarized in equation 1.
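The additive composition in equation 1 can be pictured as a set of independent modules whose outputs are summed. The following is a minimal sketch of that idea, not the actual NeuralProphet implementation; the module names, shapes, and stand-in layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the additive composition in equation 1 (illustrative only):
# each module maps its own inputs to h additive components, which are summed.
class AdditiveForecaster(nn.Module):
    def __init__(self, component_modules: dict):
        super().__init__()
        self.components = nn.ModuleDict(component_modules)

    def forward(self, inputs: dict) -> torch.Tensor:
        outputs = [m(inputs[name]) for name, m in self.components.items()]  # each (batch, h)
        return torch.stack(outputs, dim=0).sum(dim=0)

h = 3
model = AdditiveForecaster({
    "trend": nn.Linear(1, h),             # stand-in for the trend module (input: time)
    "ar": nn.Linear(24, h, bias=False),   # stand-in for the auto-regression module (24 lags)
})
y_hat = model({"trend": torch.randn(8, 1), "ar": torch.randn(8, 24)})  # shape (8, 3)
```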

In the following subsections we discuss each of the components in more detail.

2.1.1 Trend

A classic approach to modelling trend is to model it as the combination of an offset $m$ and a growth rate $k$. The trend effect at a time $t$ is given by multiplying the growth rate $k$ by the difference in time since the starting point $t_0$, on top of the offset $m$:

$T(t) = m + k \cdot (t - t_0)$    (2)

NeuralProphet models the trend component with this classic approach, but allows the growth rate to change at a number of locations. Thus, the trend is modelled as a continuous piece-wise linear series. This results in an interpretable, yet non-linear form of trend modelling. It is simple to interpret, as in a segment between two points, the trend effect is given by the steady growth rate multiplied by the difference in time. We can generalize the trend by defining a time-dependent growth rate $k(t)$ and a time-dependent offset $m(t)$.

The piece-wise linear trend only varies the growth rate at a finite number of changepoints. A set of $n_C$ changepoints is defined at different times $\{c_1, c_2, \dots, c_{n_C}\}$. Between changepoints, the trend growth rate is kept constant. The first segment’s growth rate and offset are given as $k_0$ and $m_0$ respectively. Rate adjustments at each changepoint can be defined as a vector $\boldsymbol{\delta} \in \mathbb{R}^{n_C}$, where $\delta_j$ is the rate change at the $j$-th changepoint. The growth rate at time $t$ is determined by adding the initial growth rate $k_0$ to the sum of the rate adjustments at all the changepoints up to time step $t$. Each growth rate change $\delta_j$ is a parameter to be fitted on the data. A corresponding vector of offset adjustments can be defined as $\boldsymbol{\rho} \in \mathbb{R}^{n_C}$. Similarly, the offset at time $t$ is given by the initial offset $m_0$ and the sum of the offset adjustments at each changepoint up to time $t$. However, the offset adjustment for a changepoint is not an independent parameter but defined as $\rho_j = -c_j \cdot \delta_j$. This particular offset definition makes the piece-wise linear series continuous. Further, we introduce a binary vector $\boldsymbol{\Gamma}(t) \in \{0, 1\}^{n_C}$ representing whether the time $t$ is past each changepoint. Assuming time is measured from the start of the series ($t_0 = 0$), we can define a vectorized equation for the trend at time $t$, which is given in equation 3:

$T(t) = \big(k_0 + \boldsymbol{\Gamma}(t)^{\top} \boldsymbol{\delta}\big) \cdot t + \big(m_0 + \boldsymbol{\Gamma}(t)^{\top} \boldsymbol{\rho}\big)$    (3)

where,

$k_0$, $m_0$ = growth rate and offset of the first trend segment,
$\boldsymbol{\delta}$ = vector of growth rate adjustments at the changepoints,
$\boldsymbol{\rho}$ = vector of offset adjustments, with $\rho_j = -c_j \cdot \delta_j$,
$\boldsymbol{\Gamma}(t)$ = binary vector with $\Gamma_j(t) = 1$ if $t \geq c_j$ and $0$ otherwise.

NeuralProphet provides a simple, semi-automatic mechanism for the selection of relevant changepoints. Given the number of desired changepoints, equidistant points along the series are selected as initial changepoints. Optionally, their growth change rate parameters can be regularized during model training. This is similar to a fully automatic changepoint selection, as only the most relevant changepoints will be selected, if any. The user can also opt to manually define the specific times of a custom number of changepoints. To avoid overfitting on a small number of final points, the final trend segment (after the last changepoint) is set to span a larger set of observations (default: the last 15 % of training data). When making predictions into the unobserved future, the final growth rate is used to linearly extrapolate the trend.
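The piece-wise linear trend of equation 3 can be evaluated directly from the changepoint times and the rate adjustments. Below is an illustrative sketch of this computation, not the NeuralProphet code; the parameter values are assumed for the example.

```python
import numpy as np

# Sketch of the piece-wise linear trend in equation 3 (assumed example values).
def trend(t, k0, m0, changepoints, delta):
    t = np.asarray(t, dtype=float)
    gamma = (t[:, None] >= changepoints[None, :]).astype(float)  # binary past-changepoint indicator
    rho = -changepoints * delta                                    # offsets that keep the series continuous
    growth = k0 + gamma @ delta
    offset = m0 + gamma @ rho
    return growth * t + offset

# Example: one changepoint at t=0.5 where the slope flips from +1 to -1.
t = np.linspace(0, 1, 5)
print(trend(t, k0=1.0, m0=0.0, changepoints=np.array([0.5]), delta=np.array([-2.0])))
```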

2.1.2 Seasonality

Seasonality in NeuralProphet is handled by using Fourier terms (Harvey and Shephard, 1993) as was originally done in Prophet (Taylor and Letham, 2017). In this technique, a number of Fourier terms are defined for each seasonality as in Equation 4, where $k$ refers to the number of Fourier terms defined for the seasonality with periodicity $p$. Fourier terms are defined as sine, cosine pairs and allow modelling multiple seasonalities as well as seasonalities with non-integer periodicities, such as yearly seasonality with daily data ($p = 365.25$) or with weekly data ($p = 52.18$). In a multiple seasonality scenario, different values of $k$ can be defined for each periodicity $p$.

$S_p(t) = \sum_{j=1}^{k} \left( a_j \cdot \cos\!\left(\frac{2 \pi j t}{p}\right) + b_j \cdot \sin\!\left(\frac{2 \pi j t}{p}\right) \right)$    (4)

Fourier terms are a great tool for modelling seasonality as they produce smooth functions which are simple to interpret and stable to fit to data. However, Fourier terms only model deterministic seasonal shapes which are assumed to be fixed through time. A higher number of Fourier terms allows the model to fit a more complex seasonal pattern. Too much flexibility may lead to overfitting or to random patterns between observations. Each Fourier term corresponds to a frequency proportional to $j / p$, modelled by a weighted combination of a sine and a cosine transform. Every seasonality with $k$ terms is thus associated with $2 \cdot k$ coefficients. For time step $t$, the effect of all the seasonalities considered in the model can be denoted by $S(t)$ as in Equation 5, where $\mathbb{P}$ refers to the set of all the periodicities.

$S(t) = \sum_{p \in \mathbb{P}} S_p(t)$    (5)

Both additive and multiplicative seasonal patterns are supported. Each seasonal periodicity can individually be configured to be multiplicative, in which case the component is multiplied by the trend.

The framework automatically activates daily, weekly, and/or yearly seasonality depending on data frequency and length. Each of these three types of seasonal periodicities is activated if the data frequency is of higher resolution than the respective periodicity, and if at least two full periods of data are available. As an example, if the data is of daily frequency, the model will enable yearly seasonality if the data spans two years or more. Weekly seasonality will also be added if two or more weeks of data are available. Daily seasonality will not be activated, as daily data is not of high enough resolution to allow for intra-day seasonality. Unless otherwise specified, default values for the number of Fourier terms $k$ are used for the yearly, weekly, and daily seasonalities.
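As a worked illustration of equation 4, the seasonal features reduce to a fixed design matrix of sine and cosine columns, with the $a_j$, $b_j$ coefficients learned during training. The following sketch is illustrative only and not the NeuralProphet code.

```python
import numpy as np

# Sketch of the Fourier seasonality features in equation 4; t is measured in days here.
def fourier_features(t, p, k):
    t = np.asarray(t, dtype=float)
    j = np.arange(1, k + 1)
    angles = 2.0 * np.pi * j[None, :] * t[:, None] / p
    return np.concatenate([np.cos(angles), np.sin(angles)], axis=1)  # shape (len(t), 2k)

# The seasonal effect is a learned linear combination of these 2k features:
t = np.arange(0, 14)
X = fourier_features(t, p=7.0, k=3)     # weekly seasonality with 3 Fourier terms
coeffs = np.random.randn(2 * 3)         # a_j and b_j, fitted during training
weekly_effect = X @ coeffs
```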

2.1.3 Auto-Regression

Auto-regression (AR) refers to the process of regressing a variable’s future values against its own past values, a critical part of many forecasting applications, as can be seen in Hyndman and Athanasopoulos (2014). The number of past values included is usually referred to as the order $p$ of the AR model. Hereby, a coefficient is fitted for each past value. Each coefficient controls the direction and magnitude of the effect of a particular past value on the forecast. In a classic AR process, an intercept and a white noise term are also included.

In many applications, we are interested in predicting multiple steps into the future. We refer to the number of forecasts made at once as the forecast horizon $h$. A traditional AR model will only produce one prediction, for a one-step-ahead forecast with $h = 1$. If a multi-step-ahead forecast of length $h$ is demanded, $h$ models have to be fitted, one for each forecast step. The AR module in NeuralProphet is based on a modified version of AR-Net (Triebe et al., 2019). AR-Net is able to produce all $h$ forecasts with one model, which can be of linear or non-linear nature. In any configuration, the last $p$ observations of the target variable $y$, also referred to as lags, are the inputs to the module. The outputs are $h$ values corresponding to the AR-effect for each forecast step. Hereby, $A_{t+3\,|\,t}$ denotes the predicted AR-effect for the forecast at time $t+3$, three steps into the future, predicted at forecast origin $t$ with observed data up to and including $y_t$.

$A_{t+1\,|\,t},\; A_{t+2\,|\,t},\; \dots,\; A_{t+h\,|\,t} = \mathrm{AR\text{-}Net}\big(y_{t-p+1}, \dots, y_{t-1}, y_t\big)$    (6)

It is important to note that each time we forecast at a specific origin, we obtain $h$ predictions. Thus, for a given point in time, we have up to $h$ different predictions, each originating from a different forecast made in the past. They differ from each other based on the data available to the model at the time of the forecast. As an example, when we use a model to forecast $h = 3$ steps into the future, at a given time $t$ we will have 3 predicted values of the AR-effect for $t$. These three effects, $A_{t\,|\,t-1}$, $A_{t\,|\,t-2}$, and $A_{t\,|\,t-3}$, are each an estimation of the true AR-effect at time $t$, with a different ’age’. The oldest estimate is 3 steps old, while the most recent estimate is one step old. With Auto-Regression, any estimate will be at least one step old, as we assume to never know the true value of the series at the current time but only the observed value at the last step and older.

AR order

The most important parameter for this module is the number of past values to be regressed over, also known as the order of the AR model. This parameter should be chosen based on the approximate length of relevant context in past observations. In practice, it is hard to determine accurately and is commonly set to twice the innermost periodicity or twice the forecast horizon. Alternatively, a conservatively large order can be chosen when used in combination with regularization to obtain a sparse AR-model.

Linear AR

The default AR-Net configuration does not contain hidden layers and is functionally identical to a classic AR model. In practice, it is a single-layer NN with $p$ inputs, $h$ outputs, no biases, and no activation function. The single layer weights each regress a particular lag onto a particular forecast step. Thus, each weight can be matched to a corresponding coefficient of a collection of $h$ classic AR($p$) models, making it simple to interpret the model.

We can denote the model as a vector-matrix multiplication for the predicted AR-effects $\boldsymbol{A}_t = (A_{t+1\,|\,t}, \dots, A_{t+h\,|\,t})$, with the lagged observations as input $\boldsymbol{y}^{lag}_t = (y_{t-p+1}, \dots, y_t)$, and with weight matrix $\mathbf{W} \in \mathbb{R}^{h \times p}$:

$\boldsymbol{A}_t = \mathbf{W} \cdot \boldsymbol{y}^{lag}_t$

where $W_{i,j}$ denotes the coefficient defining the linear impact of lag $j$ on the AR-effect of forecast step $i$.
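In PyTorch terms, the linear AR module amounts to a single bias-free linear layer whose weight matrix can be read directly as AR coefficients. The following is an illustrative sketch, not the NeuralProphet implementation; the lag count and horizon are assumed.

```python
import torch
import torch.nn as nn

# Sketch of the linear AR module: one linear layer without bias mapping
# p lagged observations to h AR-effects.
p, h = 24, 3
linear_ar = nn.Linear(in_features=p, out_features=h, bias=False)

lags = torch.randn(8, p)         # batch of 8 samples, each with the last p observations
ar_effects = linear_ar(lags)     # shape (8, h): one additive AR-effect per forecast step

# Each entry of the weight matrix maps one lag to one forecast step and can be
# interpreted as a classic AR coefficient.
print(linear_ar.weight.shape)    # torch.Size([3, 24])
```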

Deep AR

The AR-Net based AR module can model non-linear dynamics when hidden layers are configured. In this case, the module trains a fully connected Neural Network (NN) with the specified number of hidden layers and dimensions. The addition of hidden layers may lead to better forecasting accuracy; however, it comes at a partial trade-off in interpretability. Instead of being able to directly quantify the contribution of a particular past observation to a particular prediction, we can only observe the relative importance of a given past observation on all predictions. This can be approximated by comparing the sums of the absolute weights of the first layer for each input position.

The $p$ last observations of the time series are the inputs to the first layer. After each hidden layer, the logits are passed through an activation function, in our case a rectified linear unit (ReLU). The final layer outputs $h$ logits, which are not transformed by an activation function and have no bias. For $l$ hidden layers with a hidden layer dimension of size $d$, we have:

$\boldsymbol{z}^{(1)} = \mathrm{ReLU}\big(\mathbf{W}^{(1)} \boldsymbol{y}^{lag}_t + \boldsymbol{b}^{(1)}\big)$
$\boldsymbol{z}^{(i)} = \mathrm{ReLU}\big(\mathbf{W}^{(i)} \boldsymbol{z}^{(i-1)} + \boldsymbol{b}^{(i)}\big) \quad \text{for } i = 2, \dots, l$
$\boldsymbol{A}_t = \mathbf{W}^{(l+1)} \boldsymbol{z}^{(l)}$

where the layer biases $\boldsymbol{b}^{(i)}$ all have the same dimension $d$, while the layer weights $\mathbf{W}^{(i)}$ are of dimension $d \times d$, except for the first layer weights $\mathbf{W}^{(1)} \in \mathbb{R}^{d \times p}$ and the last layer weights $\mathbf{W}^{(l+1)} \in \mathbb{R}^{h \times d}$.
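A hedged sketch of such a configuration follows; it is illustrative only and not the NeuralProphet code. The chosen lag count, horizon, and layer sizes mirror one of the configurations later used in the benchmarks, but are otherwise assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the deep AR module: a fully connected NN with ReLU hidden layers;
# the final layer has no bias and no activation, so its outputs are the h AR-effects.
def make_deep_ar(p: int, h: int, num_hidden: int, d: int) -> nn.Sequential:
    layers, in_dim = [], p
    for _ in range(num_hidden):
        layers += [nn.Linear(in_dim, d), nn.ReLU()]
        in_dim = d
    layers += [nn.Linear(in_dim, h, bias=False)]
    return nn.Sequential(*layers)

deep_ar = make_deep_ar(p=30, h=3, num_hidden=2, d=24)   # e.g. a '2x24 NN' with 30 lags
ar_effects = deep_ar(torch.randn(8, 30))                # shape (8, 3)
```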

Sparse AR

AR-Net demonstrated that the correct order can be approximated by setting the order to a slightly larger than expected value when regularization is used to sparsify the model weights. This allows for a more convenient model configuration while retaining interpretability.

The regularization function proposed by the authors of AR-Net is given in equation 8. Though we make this regularization function available, by default our implementation uses a different regularization function, which we found to work better for a wider range of data: the regularization function introduced in equation 14, applied to the vector of weights corresponding to the AR model coefficients.

(7)

As an example, when forecasting the next 24 steps of an hourly time series, the forecaster could choose to set the AR order to 100 with some regularization. After fitting, the AR module may then exhibit sparse coefficients with the most significant weights located on a few positions only. In this example, only positions 1, 2, 3, 24, 48, and 72 could have non-zero weights. These positions can be interpreted as the locations with an auto-correlation, and further, their weights can be examined to study how different combinations of past observation values affect the forecast.
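The interpretation step in this example can be illustrated with a short sketch. The weight values below are assumed toy numbers, not NeuralProphet output; the point is only how sparse weights map back to relevant lag positions.

```python
import torch

# Illustrative sketch (assumed toy values): after fitting a regularized linear AR
# module of order p=100 with horizon h=1, most weights shrink towards zero and
# only a few lag positions remain relevant.
p = 100
weights = torch.zeros(1, p)
weights[0, [0, 1, 2, 23, 47, 71]] = torch.tensor([0.5, 0.2, 0.1, 0.4, 0.2, 0.1])

importance_per_lag = weights.abs().sum(dim=0)                       # aggregate over forecast steps
relevant_lags = (torch.nonzero(importance_per_lag > 0.05).flatten() + 1).tolist()
print(relevant_lags)  # [1, 2, 3, 24, 48, 72], matching the example in the text
```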

For reference, the original regularization function proposed in AR-Net is given by:

(8)

where the authors suggest default values for both of its parameters.

2.1.4 Lagged Regressors

Lagged regressors are used to correlate other observed variables with our target time series. They are often referred to as covariates. Unlike future regressors, the future of lagged regressors is unknown to us. At the time of forecasting, we only have access to their observed past values, up to and including the forecast origin.

(9)

Given a set of covariates $\boldsymbol{X}$, we create a separate lagged regressor module for each covariate $x$, with $p$ lags each. This allows us to individually attribute the effect of each covariate on the predictions. Each lagged regressor module is functionally identical to the AR module, with the only difference being the inputs. Here, the last $p$ observations of the covariate $x$ are the inputs to the module (instead of the series itself as in AR). The outputs are of identical form, each module producing $h$ additive components to the overall forecasts.

$L^{x}_{t+1\,|\,t},\; \dots,\; L^{x}_{t+h\,|\,t} = \mathrm{LagReg}_{x}\big(x_{t-p+1}, \dots, x_t\big)$    (10)

All considerations regarding the order, depth, sparsification, and interpretability of the AR-Net modules for each covariate are identical to those of the AR module, as discussed in section 2.1.3, if we substitute the lagged observations of the covariate $x$ for the lagged observations of the series $y$.

2.1.5 Future Regressors

To model future regressors, both past and future values of these regressors have to be known. Given the set of future regressors $\boldsymbol{F}$, where $n_F$ is the number of regressors, the effect of all future regressors at time step $t$ can be denoted by $F(t)$ as in Equation 11, where $d_f$ stands for the coefficient of the model for future regressor $f$. By default, future regressors have an additive effect, which can be configured to be multiplicative instead.

$F(t) = \sum_{f \in \boldsymbol{F}} d_f \cdot f_t$    (11)

where $f_t$ is the value of future regressor $f$ at time $t$.

2.1.6 Events and Holidays

Effects from special events or holidays may occur sporadically. Such events are modelled analogously to future regressors, with each event $e$ as a binary variable $e_t \in \{0, 1\}$, signalling whether the event occurs on the particular day or not. For a set of events $\boldsymbol{E}$ with $n_E$ events over the length of the series, the effect of all events at time step $t$ can be denoted by $E(t)$ as in Equation 12, where $z_e$ denotes the coefficient of the model corresponding to the event $e$.

$E(t) = \sum_{e \in \boldsymbol{E}} z_e \cdot e_t$    (12)

where $e_t \in \{0, 1\}$ indicates whether event $e$ occurs at time $t$.

NeuralProphet allows the modelling of two types of events: 1) user-defined events, and 2) country-specific holidays. Given a country name, its national holidays are automatically retrieved and added to the set of events $\boldsymbol{E}$. Similar to seasonal effects, events can also be specified as either additive or multiplicative.

Additionally, for a given event occurring at time $t$, a window of surrounding days can be configured to be considered as special events of their own. For example, configuring such a window for Christmas day will allow the day before Christmas to have its own effect on the forecast. Hereby, a new binary variable is created for each day within the window around event $e$ and added to the set of events $\boldsymbol{E}$.

2.2 Preprocessing

2.2.1 Missing Data

Missing data is less of an issue when working with non-lagged input variables, as the corresponding timesteps can simply be dropped. In doing so, one data sample is lost per missing entry. With auto-regression or lagged regressor modules in use, a single missing data point leads to multiple data samples being dropped, due to both missing forecast targets and missing lag values. For example, a single missing point leads to the loss of $p + h$ data samples for a model forecasting $h$ steps ahead with an AR($p$) module, e.g. 13 samples when $p + h = 13$. Thus, we implemented a data imputation mechanism to avoid excessive data loss when working with incomplete data. The imputation mechanism follows these heuristics:

Data Imputation

If not specified or missing, events are assumed not to occur. Missing events are filled in with zeros, indicating their absence. All other real-valued regressor variables, including the time series itself if auto-regression is enabled, are imputed in a three-step procedure.

First

The missing values are approximated by a bi-directional linear interpolation. Hereby, the last known value before and the first known value after the missing values are used as anchor points for the interpolation. This is done for up to 5 missing values in each direction. If there are more than 10 consecutive missing values, the remaining ones will stay NaN after this step. The number of missing values to interpolate is user-settable.

Second

The remaining missing values are imputed with a centred rolling average. The rolling average is computed over a window of 30, and fills at most 20 consecutive missing values. The amount of missing values to fill with a rolling average is user-settable.

Third

If there are more than 30 consecutive missing values, the imputation algorithm aborts and instead drops all the missing datapoints.
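A condensed sketch of these three heuristics using pandas is shown below. It is illustrative only, not the NeuralProphet implementation, and the window and limit parameters mirror the defaults described above; note that the `limit` on the fill step caps the total rather than strictly the consecutive number of filled values.

```python
import pandas as pd
import numpy as np

# Sketch of the imputation heuristic: limited bi-directional linear interpolation,
# then a centred rolling-average fill, and a final check on remaining gaps.
def impute(y: pd.Series, interp_limit=5, roll_window=30, roll_limit=20) -> pd.Series:
    # Step 1: linear interpolation, at most `interp_limit` values in each direction.
    y = y.interpolate(method="linear", limit=interp_limit, limit_direction="both")
    # Step 2: fill remaining gaps with a centred rolling average (up to `roll_limit` fills).
    rolling_mean = y.rolling(window=roll_window, center=True, min_periods=1).mean()
    y = y.fillna(rolling_mean, limit=roll_limit)
    # Step 3: if long gaps remain, the caller would drop the affected samples instead.
    return y

y = pd.Series([1.0, np.nan, np.nan, 4.0, 5.0, np.nan, 7.0])
print(impute(y))
```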

2.2.2 Data Normalization

The type of normalization to apply to the time series can be set by the user. The available options are described in table 1. If not specified, or set to ’auto’, the default option ’soft’ is used, unless the time series values are binary, in which case ’minmax’ is applied.

Name           Normalization Procedure
'auto'         'minmax' if binary, else 'soft'
'off'          bypasses data normalization
'minmax'       scales the minimum value to 0.0 and the maximum value to 1.0
'standardize'  zero-centers and divides by the standard deviation
'soft'         scales the minimum value to 0.0 and the 95th quantile to 1.0
'soft1'        scales the minimum value to 0.1 and the 90th quantile to 0.9
Table 1: The available data normalization options.
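As an illustration of the default 'soft' option from Table 1, the following sketch (not the NeuralProphet code) shows how the quantile-based scaling dampens the influence of extreme outliers while remaining invertible for postprocessing.

```python
import numpy as np

# Sketch of the 'soft' normalization: the minimum maps to 0.0 and the 95th quantile to 1.0.
def soft_normalize(y: np.ndarray):
    y_min, q95 = np.min(y), np.quantile(y, 0.95)
    scale = q95 - y_min
    return (y - y_min) / scale, (y_min, scale)

def soft_denormalize(y_norm: np.ndarray, params):
    y_min, scale = params
    return y_norm * scale + y_min

y = np.array([2.0, 3.0, 4.0, 5.0, 50.0])   # one large outlier
y_norm, params = soft_normalize(y)
assert np.allclose(soft_denormalize(y_norm, params), y)
```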

2.2.3 Tabularization

We tabularize the time series data to create a pseudo independent and identically distributed dataset, as is required for SGD based training. Hereby, a data sample is created for each available time-stamp of the target time series ’y’. A sample includes a normalized time-stamp, and all inputs required by each configured module.

The auto-regression and lagged covariate modules hereby extract the $p$ values before a given time-stamp from their respective time series and store them in a vector of size $p$, serving as input to the model. If forecasting multiple steps ahead with $h > 1$, the forecast targets are also stored in an $h$-sized vector for each time-stamp.

This is not a memory-efficient approach to preparing the dataset for training. However, we chose this procedure due to its simplicity and its compute-efficiency during training.

Analogously, the Fourier terms are prepared by computing the different sine and cosine transforms of the standardized time component for each of the configured seasonal periodicities.
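The sliding-window construction of lag inputs and forecast targets can be sketched as follows; this is an illustrative sketch of the tabularization idea, not the NeuralProphet implementation.

```python
import numpy as np

# Sketch of tabularization: every admissible time-stamp becomes one sample with
# p lag inputs and h forecast targets, yielding a pseudo-i.i.d. dataset for SGD.
def tabularize(y: np.ndarray, p: int, h: int):
    inputs, targets = [], []
    for t in range(p, len(y) - h + 1):
        inputs.append(y[t - p:t])      # the last p observations before the origin
        targets.append(y[t:t + h])     # the next h values to be forecasted
    return np.stack(inputs), np.stack(targets)

y = np.arange(10, dtype=float)
X, Y = tabularize(y, p=3, h=2)
print(X.shape, Y.shape)   # (6, 3) (6, 2)
```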

2.3 Training

One of the fundamental differences between Prophet and NeuralProphet is the fitting procedure. Prophet utilizes L-BFGS (Nocedal and Wright, 2006), implemented in Stan (Carpenter et al., 2017) for fitting model parameters to the data. NeuralProphet retools Prophet, from the bottom up, replacing Stan with PyTorch, which is both flexible and easy to use.

NeuralProphet relies on a modern version of mini-batch stochastic gradient descent (SGD). This fitting procedure is compatible with the vast majority of deep learning methods, can be scaled reliably to large datasets, and can accommodate more complex model components. Any model component which is trainable by SGD can be included as a module. This makes it easy for developers to extend the framework with new features, and to adopt new research.

2.3.1 Loss Function

The default loss function is the Huber loss, also known as smooth L1-loss, which can be seen in equation 13. Below a given threshold $\beta$, it is equivalent to the mean squared error (MSE). For errors larger than $\beta$, it is equivalent to the mean absolute error (MAE). Compared to a pure MSE loss, it is less sensitive to outliers and may help prevent exploding gradients (Girshick, 2015). A fixed default value is used for the threshold. Users can, however, alternatively opt for MSE or MAE loss, or any other available or self-implemented function matching the PyTorch loss function format as their loss function.

$\mathcal{L}(y, \hat{y}) = \begin{cases} \dfrac{1}{2\beta}\,(y - \hat{y})^2 & \text{for } |y - \hat{y}| < \beta \\[4pt] |y - \hat{y}| - \dfrac{\beta}{2} & \text{otherwise} \end{cases}$    (13)
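In practice, this loss is available as a standard PyTorch module. The snippet below is a usage illustration with PyTorch's built-in default threshold; the framework's own default threshold may differ.

```python
import torch
import torch.nn as nn

# Illustrative use of the smooth L1 / Huber loss via the standard PyTorch module.
criterion = nn.SmoothL1Loss()

y_hat = torch.tensor([2.5, 0.0, 1.0])
y = torch.tensor([3.0, -0.5, 5.0])
loss = criterion(y_hat, y)   # small errors are penalized quadratically, large ones linearly
print(loss)
```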

2.3.2 Regularization

We use a scaled and shifted log-transform of the absolute weight values as a general regularization function. For a module with model weights $\boldsymbol{\theta}$, it is parametrized as follows:

(14)

where one parameter sets the inverse of the offset and the other sets the scaling of the log-transform. Higher values of the former lead to a steeper curve for weights close to zero; it can be described as controlling the eagerness to sparsify weights. Higher values of the latter lead to a flatter curve for weights of larger magnitude; it can be interpreted as controlling the acceptance of large weights. Default values are suggested for both parameters, which we found suitable for most applications.

The regularization is applied with the per-module configured strength and added to the loss function, to be back-propagated. The regularization only commences after a specified percentage of training progress (set by default), and is thereafter linearly increased from zero to its full configured strength at the end of training.

2.3.3 Optimizer

Analogous to the loss function, any optimizer matching the signature of a PyTorch optimizer can be configured. As a reliable default, the AdamW optimizer (Loshchilov and Hutter, 2019) is used, unless otherwise specified. In our testing we observed AdamW to fit most reliably, with a slight tendency to overfit. We initialize AdamW with the configured learning rate and fixed default values for its momentum parameters, epsilon, and weight decay.

As a fallback option, we also provide a classic SGD optimizer with momentum set to 0.9 and a fixed weight decay. In our testing, we found SGD to lead to better validation performance, but at the expense of occasional divergence. If a user is willing to fine-tune training-related hyperparameters, SGD poses an attractive alternative to AdamW.

2.3.4 Learning Rate

As we do not want to require users to be experts in machine learning, we make the learning rate, an essential hyperparameter for any NN, optional. We achieve this by relying on a simple but reasonably effective approach to estimate a learning rate, introduced as a learning rate range test in Smith (2017).

A learning rate range test is executed over a fixed number of iterations with the configured batch size, starting at a very small learning rate and ending at a very large one. After each iteration, the learning rate is increased exponentially until the final learning rate is reached in the last iteration. Excluding the first 10 and last 5 iterations, the steepest learning rate is defined as the resulting learning rate. The steepest learning rate is found by selecting the learning rate at the position which maximizes the negative gradient of the losses.

In order to increase reliability, we perform the test three times and take the log10-mean of the three runs as the selected learning rate $\eta$:

$\eta = 10^{\,\frac{1}{3}\left(\log_{10} \eta_1 + \log_{10} \eta_2 + \log_{10} \eta_3\right)}$
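The log10-mean is simply the geometric middle of the three range-test results. A small sketch, with assumed example values for the three runs:

```python
import numpy as np

# Combining three range-test results via their log10-mean (illustrative values).
def log10_mean(rates):
    return 10 ** np.mean(np.log10(rates))

print(log10_mean([1e-3, 1e-2, 1e-1]))  # 0.01: the geometric middle of the three runs
```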

2.3.5 Batch Size

The batch size is an optional parameter. If not user-specified, a heuristic determines the batch size based on the length of the dataset.

2.3.6 Training Epochs

The number of training epochs is an optional parameter. If not user-specified, a heuristic determines the number of epochs.

2.3.7 Scheduler

As all training-related hyperparameters are automatically approximated, none of them are optimal, which may potentially lead to training issues. Nonetheless, a user expects the model to be reasonably fitted to the data without issues. In order to meet these expectations despite suboptimal hyperparameters, we rely on the training schedule known as the ’1cycle’ policy, which allows for ’superconvergence’ of NN training according to Smith and Topin (2018). Hereby, the learning rate is gradually increased from a small initial value up to the peak learning rate, reached early in training. Thereafter, the learning rate is annealed along a cosine curve down to a much smaller value at the end of training.
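PyTorch ships a scheduler implementing this policy. The sketch below shows its typical use; the peak learning rate, warm-up fraction, and step count are assumptions for illustration, not the framework's defaults.

```python
import torch

# Illustrative use of a 1cycle schedule via PyTorch's built-in scheduler.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=1e-2,            # peak learning rate, e.g. from the range test
    total_steps=1000,       # epochs * batches per epoch
    pct_start=0.3,          # fraction of training spent increasing the learning rate
    anneal_strategy="cos",  # cosine annealing after the peak
)

for step in range(1000):
    optimizer.step()        # after computing gradients in a real training loop
    scheduler.step()
```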

2.4 Postprocessing

At the end of training or predicting, the normalized values on which the model operates are transformed back into the initial distribution. This, along with the many heuristically or procedurally set optional hyperparameters, abstracts away much of the machine learning-specific and time series-specific decision making, making NeuralProphet approachable to forecasting practitioners.

2.4.1 Metrics

A few metrics are recorded by default: The configured loss function, RMSE, and MAE. Further, any user defined metrics will also be recorded over the training and validation runs.

$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2}$    (15)
$\mathrm{MAE} = \frac{1}{n} \sum_{t=1}^{n} |y_t - \hat{y}_t|$    (16)

where $y_t$ denotes the observed value, $\hat{y}_t$ the predicted value, and $n$ the number of evaluated predictions.

2.4.2 Forecast Presentation

When used to predict, the model returns a dataframe with a column for each forecast component. These are the additive (or multiplicative) contributions of each component. They are individually displayed for each forecasted step ahead. Each of the components refers to its row’s datestamp-located target and is ordered by the age of the forecast, e.g. ’yhat3’ refers to the predicted $y$-value for the current row's datestamp, based on data available three steps earlier. Similarly, ’ar2’ refers to the AR-effect predicted for the current datestamp, based on data available two steps earlier.

There are different plotting utilities available for visualizing the forecast itself, the forecast decomposition, the model parameters, and for specific forecast horizons of interest.

3 Experimental Setup

The experiments in this work serve to contrast the capabilities of NeuralProphet with Prophet. First, we compare the accuracy of the interpretable forecast components of Prophet and NeuralProphet on a set of synthetic datasets. Second, we compare the performance of the two frameworks on a number of real-world datasets. In the following, these experiments are described in detail.

3.1 Decomposition Demonstration on Synthetic Data

These experiments contrast the ability of Prophet and NeuralProphet to decompose a time series into individual components. Since we are interested in the decomposition accuracy of the different modelling components, we must know their ground truth. Due to a lack of suitable real-world datasets with known underlying components, we create synthetic datasets as the sum of a selection of generated time series with added noise. This allows us to compare the estimated decomposed components of both models to the original components.

3.1.1 Synthetic Dataset Composition

We generate the individual component time series to be identical to the type of components that Prophet and NeuralProphet claim to model. Thus, ideally, both models should be able to perfectly decompose the time series into its components, with some error due to additive noise included in the synthetic datasets.

Experiment   Trend   Seasonality   Events   Future reg.   AR   Lagged reg.
S-TS         x       x
S-EF                               x        x
S-TSEF       x       x             x        x
S-mTSEF      x       x (m)         x (m)    x (m)
S-AL                                                       x    x
S-TSAL       x       x                                     x    x
S-TSEFAL     x       x             x        x              x    x
Table 2: Synthetic components included in the experimental scenarios. Components marked with (m) are multiplied by the trend.

In Table 2 we display which components are included in the synthetic dataset of each experimental scenario. The comparison between the two models focuses on the accuracy of individual decomposed components. We also measure the overall accuracy and the compute time. Hereby, the emphasis is on the accuracy of the individual components as this quantifies the decomposition accuracy of the models.

As many real-world datasets exhibit auto-regressive properties or depend on lagged regressors, we also include three experiments with lagged components. However, only NeuralProphet claims to be able to model Auto-Regression and Lagged regressor components. In experiments including such lagged components, Prophet’s prediction for these components will be marked as zero.

Due to the freedom to choose offsets in their additive components, the models may find interpretable components which match the dynamics of the underlying components, but shifted by an offset. Thus, before computing any component-wise metrics, we zero-center all components.

3.1.2 Generated Component Time Series

Each experiment’s dataset consists of 5 independently generated time series of daily resolution with a length of 6000 samples each. Each time series is composed as the aggregate of multiple generated time series, each representing a particular component type. A component time series is scaled to a fixed value range before being aggregated with the other components. The aggregate time series is finally re-scaled to a fixed range. To denote the building blocks of the synthetic experiments, we use the following notation, where each component is represented by a letter:

T: Trend

A piece-wise linear trend with one random change-point. Trend increases linearly before the change-point, and decreases linearly after the change-point.

S: Seasonality

The generated seasonal component time series consists of Fourier terms combined with random weights. The components are generated based on equation 4, with the coefficients $a_j$ and $b_j$ drawn from a random uniform distribution. For the experiments we create two independent seasonal components: one monthly and one yearly. Both are added to the aggregate series and both are individually evaluated in their decomposition accuracy.

E: Events

The event component is a binary series with 25 occurrences of an event at random locations. Analogous to equation 12, the event variable is 1 if the event occurs at a given timestamp and 0 otherwise, and enters the aggregate with a fixed initial effect strength. This initial effect strength is effectively reduced when the main series is re-scaled.

F: Future Regressor

The future regressor component is generated as an AR(3) process with fixed coefficients and additive white noise. After scaling the series to a fixed range, the regressor is added to the aggregate series with a fixed weight, which is effectively reduced when the main series is re-scaled.

A: Auto-Regression

The auto-regressive component is drawn from an AR(2) process with fixed coefficients and additive white noise. This series is generated independently of the other components. This is not identical to how NeuralProphet models the auto-regressive behaviour: NeuralProphet does not decompose the inputs to the auto-regression module, but rather uses the raw time series values as inputs.

L: Lagged Regressor

First, an independent time series $x$ is generated with an AR(2) process, analogous to the future regressor. Next, we create the lagged regressor effect series $l$, where each effect depends on a weighted combination of the last three observations of series $x$:

$l_t = w_1 \cdot x_{t-1} + w_2 \cdot x_{t-2} + w_3 \cdot x_{t-3}$

where $w_1, w_2, w_3$ are weights drawn from a random uniform distribution. Finally, we utilize the series $l$ as the lagged regressor component.

m: multiplicative mode

In multiplicative mode, we multiply each component element-wise by the trend. For example, in S-mTSEF, the seasonality, event, and future regressor components are each multiplied by the trend component before aggregation.

3.2 Main Benchmark on Real-World Datasets

We evaluate the forecasting accuracy of Prophet and NeuralProphet on a diverse collection of datasets covering many univariate forecasting tasks. The datasets span data lengths from one hundred to half a million samples, while data record intervals span minutely to monthly frequencies. This wide spectrum of univariate time series is in stark contrast to most benchmarking efforts focused on NN-based methods. The results represent realistic out-of-the-box performance across applications such as energy demand, tourism, environmental factors, and retail sales among others.

In this section, we present the selected datasets and the evaluation procedure in more detail.

(a) Histogram of lengths.
(b) Counts of sampling frequencies.
(c) Counts of data types.
Figure 1: Histograms of different characteristics of the datasets.

3.2.1 Selected Datasets

The datasets are selected to represent a wide range of univariate datasets. Figure 1(a) shows that the lengths of the datasets range from one hundred to one million samples, with most datasets having around 10,000 samples. The data observation frequencies are distributed from monthly to minutely sampling frequencies. Figure 1(b) bins minutely to half-hourly frequencies together and displays the count of each type of frequency, with hourly frequencies being the most common.

Our collection of testing datasets covers a wide variety of subjects, including environmental, energy-related, social, and economic data, with environmental and energy-related datasets being the most common (Figure 1(c)).

(a) Boxplot of length for data frequencies.
(b) Boxplot of length for data types.
Figure 2: Distribution of data length for different data frequencies and data types.

Quantitative criteria for inclusion in our collection of testing datasets were: Univariate time series of minutely to daily sampling frequency, length of 1000 samples or more, and permissive license. We also include the four time series which Prophet uses as demonstration time series. Two of those datasets do not meet the inclusion criteria, as they are of monthly frequency and much less than 1000 samples long. They are included nonetheless, to make sure the datasets are not biased towards the proposed model NeuralProphet, but rather towards the comparison model Prophet. Further details on each dataset can be found in Table 8 (Appendix A).

Sources

Datasets are taken from various sources including the UCI Machine Learning repository (Dua and Graff, 2017), Monash time series repository (Godahewa et al., 2021), and Prophet Github repository (Taylor and Letham, 2017). Additionally, we collected and processed data from public energy and government webpages, including DOE (2015), ERCOT (2021), RTE (2018), and OpenEI (2015).

Preprocessing

Most of the selected datasets do not contain missing values, but for those that do, we impute the values using linear interpolation unless otherwise described. However, for datasets which are mostly zero or near-zero-valued, such as data covering solar, parking, wind, and air quality subjects, we fill missing values with zeros. Missing time-stamps are also regarded as missing data. Further, we manually preprocess each dataset and fix minor misalignments, such as time-stamps of 01:59 instead of 02:00 on data of hourly sampling frequency.

3.2.2 Benchmark Models

In this work, we compare the performance of time series forecasting packages which support full model automation, offer interpretability and scale to large datasets, without requiring domain knowledge in machine learning or time series. Prophet was the first such package, which we benchmark against its only fully featured successor, NeuralProphet.

Hyperparameters

Given that we aim to demonstrate the real world performance a non-expert forecasting practitioner can reasonably expect, we do not manually tune any hyperparameters. We only manually set hyperparameters in the synthetic benchmarking section if no automatic setting is available, or if needed to make models comparable.

We mention any manually set parameters explicitly when discussing results. Occurrences of manual model configuration, but not hyperparameter tuning, include: when studying the effect of adding auto-regression to the model, we set different numbers of lags $p$, which are all shown in the results. For multi-step-ahead forecasts, we further need to set the forecast horizon $h$. In a reverse ablation study, we further examine the impact of adding hidden layers to the model; hereby we also set the learning rate to the same value of 0.001 for all of the model configurations and all datasets.

These ablation studies of number of lags, number of forecasts and neural network configurations will also serve to quantify the sensitivity of NeuralProphet to parameter choices.

3.3 Forecast Evaluation

In the following, we describe the forecasting task and how the forecast performance is evaluated.

3.3.1 Forecast Horizon

In this study, we are interested in univariate time series forecasting covering typical forecasting tasks. Given a desired number of steps to forecast into the future, we predict the next few consecutive values of the series, starting from the present timestamp. The number of next consecutive values to be forecasted from the last observed value is also called forecast horizon. If auto-regression is configured, the model will regress over the last few observations of the series itself. We treat each given forecasting horizon as a separate forecasting task.

A forecasting task with a horizon of ten targets involves predicting the next ten targets starting at the current timestamp $t$, i.e. obtaining a series $\hat{y}_{t+1}, \dots, \hat{y}_{t+10}$ containing the predictions over the forecast horizon. In our empirical evaluation over real-world datasets, we evaluate forecast horizons of 1, 3, 15, and 60 steps. Hereby, an unbounded horizon refers to forecasting the entire test set at once, as is possible with models based on time-features only, such as Prophet and default NeuralProphet.

3.3.2 Expanding Origin Backtest

Evaluation procedures in time series forecasting are diverse in nature. Naming conventions have only been established for evaluations of classical time series models (Tashman, 2000). Conventions for evaluating machine learning based time series forecasting models are currently being established. We utilize a method based on a fusion of cross-validation and backtesting, which originate from machine learning and trading algorithm evaluation. This method, called expanding origin backtest, has recently been adopted by researchers and companies trying to evaluate machine learning based models on time series data (Uber, 2018, 2020).

The procedure consists of an outer and an inner loop. The term expanding origin backtest mostly refers to the outer loop, but is often used to describe the whole procedure. The inner loop is usually based on a variation of the classical evaluation procedures. In our case, the inner loop is a rolling origin forecast evaluation with non-constant holdout size (Svetunkov, 2021) with updating but without refitting of the model. In classical time series literature, refitting is referred to as reparametrization, while updating the model refers to updating the inputs to the model to produce the next forecast without refitting of model parameters. This means that a model will be re-fitted only $k$ times per evaluation procedure, once per fold.

A more verbose name for our forecast evaluation procedure could be: K-fold expanding origin backtest with an inner rolling origin multi-step forecast evaluation with updating but without refitting of the model.

Figure 3: Expanding Origin Backtest with 5 folds. Test sets contain 10% of the time series each.
Outer Loop: K-Fold Expanding Origin Backtest With Constant Holdout Size

Analogous to $k$-fold cross-validation, the overall dataset is replicated into $k$ folds, each containing training and testing data covering different parts of the overall dataset. The model to be evaluated is fitted separately on each fold’s training data and evaluated on the same fold’s testing data. This signifies that the model is fitted $k$ times in total, once per fold.

Unlike $k$-fold cross-validation, the folds are not randomized. Each fold has a training set with values strictly occurring before the same fold’s test set; a visualization can be seen in Fig. 3.

We perform a 5-fold expanding origin backtest with test sets of 10 percent each and a forward rolling of the origin by 5 percent per fold. This means that each test fold shares 50 percent of its data with the previous fold, and that each train fold grows by 5 percent, while the size of the test fold remains constant. Hereby, the first fold uses the first 70 percent of the data for training and the next 10 percent for testing. The fifth fold utilizes 90 percent of the data for training and the last 10 percent for testing. Thus, a model is tested five times on 10 percent of the data, covering the last 30 percent of data over all tests.
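A sketch of this splitting logic is given below. It is an illustrative reconstruction of the fold boundaries described above (70 percent initial training share, 10 percent test sets, 5 percent roll), not the actual benchmarking code.

```python
# Sketch of the 5-fold expanding origin splits described above.
def expanding_origin_folds(n, k=5, test_frac=0.10, step_frac=0.05):
    folds = []
    for i in range(k):
        train_end = int(n * (0.70 + i * step_frac))
        test_end = train_end + int(n * test_frac)
        folds.append((range(0, train_end), range(train_end, test_end)))
    return folds

for train_idx, test_idx in expanding_origin_folds(n=1000):
    print(len(train_idx), len(test_idx))   # the train set grows by 5%, the test set stays at 10%
```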

Inner Loop: Rolling Origin With Updating But Without Refitting

The inner loop is repeated for each train-test fold. First, the model is fitted on the train set. The test set evaluation is identical to a rolling origin forecast evaluation with a non-constant holdout size (Svetunkov, 2021) with updating but without refitting of the model.

Hereby, the forecast origin rolls forward by 1 step at a time, without re-training the model with the new data. This repeats through a single test fold shown in Figure 3 until the end of the test fold is reached. Auto-regressive models relying on lagged observations as inputs are updated with the new data point to produce the next forecast. If a model produces multiple forecasts per forecast origin, the average accuracy of all forecast steps is recorded. For models with unlimited forecast steps which do not depend on being updated with new observations, we forecast over the entire test set at once.

The single-forecast version of this procedure is also known as time series cross-validation in Hyndman and Athanasopoulos (2014), which we consider unfortunate naming, as it is not to be confused with the outer loop (Expanding Origin Backtest) which is more closely related to -fold cross-validation.

3.3.3 Metrics

We calculate the metrics on each step-ahead prediction versus the corresponding actual values, and then average this error over all forecast steps. For evaluation we use two metrics: the Mean Absolute Scaled Error (MASE) and the Root Mean Squared Scaled Error (RMSSE), as defined in Hyndman and Koehler (2006). The RMSSE is given as the RMSE of the evaluated method divided by the RMSE of the Naïve prediction, and is the squared analogue of the MASE. Hereby, Naïve refers to predicting the next observation to be identical to the previously observed value.

MASE and RMSSE allow evaluating the performance of a method relative to Naïve forecasting directly, as a value smaller than one signifies an improvement over Naïve. Further, as the metrics are scaled quantities, different models can be compared across different datasets.

$\mathrm{MASE} = \frac{\mathrm{MAE}}{\mathrm{MAE}_{\mathrm{Naive}}} = \frac{\frac{1}{n} \sum_{t=1}^{n} |y_t - \hat{y}_t|}{\frac{1}{n-1} \sum_{t=2}^{n} |y_t - y_{t-1}|}$    (17)

$\mathrm{RMSSE} = \frac{\mathrm{RMSE}}{\mathrm{RMSE}_{\mathrm{Naive}}} = \frac{\sqrt{\frac{1}{n} \sum_{t=1}^{n} (y_t - \hat{y}_t)^2}}{\sqrt{\frac{1}{n-1} \sum_{t=2}^{n} (y_t - y_{t-1})^2}}$    (18)

where $y_t$ denotes the observed value, $\hat{y}_t$ the predicted value, and $n$ the number of evaluated forecasts.
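The two scaled metrics, as described above, can be computed with a few lines of NumPy. The sketch below follows the paper's prose description (Naïve error computed on the same evaluation window) and uses assumed example values.

```python
import numpy as np

# Sketch of the scaled error metrics described in the text.
def mase(y, y_hat):
    naive_mae = np.mean(np.abs(y[1:] - y[:-1]))
    return np.mean(np.abs(y - y_hat)) / naive_mae

def rmsse(y, y_hat):
    naive_rmse = np.sqrt(np.mean((y[1:] - y[:-1]) ** 2))
    return np.sqrt(np.mean((y - y_hat) ** 2)) / naive_rmse

y = np.array([3.0, 4.0, 6.0, 5.0, 7.0])
y_hat = np.array([3.2, 4.5, 5.5, 5.1, 6.4])
print(mase(y, y_hat), rmsse(y, y_hat))  # values below 1 beat the Naive forecast
```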

3.4 Computational Resources

Unless otherwise mentioned, all models were trained and evaluated on a regular laptop with an 8-core CPU (2020 Macbook Pro with M1 chip).

4 Results

4.1 Demonstration of Interpretable Component Decomposition

The results are displayed separately for experiments S-TS, S-EF, S-TSEF, and S-mTSEF without lagged components and for experiments S-AL, S-TSAL, and S-TSEFAL including lagged components. In the former, we expect both models to perform identically, and in the latter, we expect NeuralProphet to perform better.

(a) Experiments without lagged components.
(b) Experiments including lagged components.
Figure 4: RMSSE error of the model fit on the synthetic time-series itself. The plotted bar displays mean value and the line displays standard deviation over 5 runs.

On data without lagged components, NeuralProphet performs identically or slightly better than Prophet, as can be seen in Figure 4(a). The overall goodness of fit on the ’y’ component (the sum of the components) is comparable or slightly better. The biggest difference is seen on S-mTSEF with multiplicative components, where NeuralProphet has a significantly lower error and standard deviation of error.

On experiments involving lagged components, NeuralProphet performs significantly better than Prophet, as seen in Figure 4(b). Most of the difference comes from a better modelling of lagged regressors.

The fit on the series itself is not as relevant as the component-wise decomposition accuracy, as the overall goodness of fit does not protect from overfitting. The component-wise accuracy is an adequate measure, as a model that can capture the true underlying dynamics will generalize well.

4.1.1 Component-Wise Decomposition Accuracy

(a) Experiments without lagged components.
(b) Experiments including lagged components.
Figure 5: Component-wise RMSE of predicted forecast component to underlying true generated component value. The plotted bar displays mean value and the line displays standard deviation over all synthetic datasets and over 5 runs. Note: Prophet does not support Auto-Regression and Lagged Regressor components. Prophet’s approximation of said components is implicitly assumed to be zero. Thus, the shown error for Prophet is equivalent to the component’s standard deviation.
Without Lagged Components

When compared on the component-wise accuracy, NeuralProphet performs identically or better than Prophet, as can be seen in Figure 5(a). Both models fit the monthly seasonality, events and future regressors near perfectly. Both models mostly struggle to find a good fit for the yearly seasonality. This is likely explained by the limited number of full annual periods in the data to fit the model on. A significant difference is observed in the goodness of fit for the trend component, where NeuralProphet exhibits a fraction of the error of Prophet.

Including Lagged Components

When modelling lagged components, NeuralProphet offers a significant improvement in decomposing lagged regressors. However, the AR component is not significantly more accurate than Prophet’s zero-prediction. This is due to the fact that NeuralProphet does not decompose the inputs to the AR module, but rather uses the raw time series values, while in this experiment we evaluate the performance based on an independently added AR process component. In real applications, such a distinction largely does not matter.

Zooming in on the performance for individual experimental setups in Fig. 7 (Appendix B), we find no notable new observations differing from what we already discussed regarding Figures 4 and 5.

4.1.2 Computational Time

Table 3 shows that training for NeuralProphet is on average 4.0 times slower compared to Prophet. However, both exhibit a large standard deviation, with Prophet having a long tail upwards and NeuralProphet downwards. Prediction time exhibits the inverse relationship, with NeuralProphet computing 13.5 times faster compared to Prophet. NeuralProphet has a tight spread of the prediction times with no upward outliers, while Prophet has a large standard deviation with a long tail upwards, as seen in figure 6.

Model           Training Time (seconds)   Prediction Time (seconds)
NeuralProphet
Prophet
Table 3: Computational time in seconds to train a model and to predict over the entire dataset, showing mean and standard deviation over all folds for experiments without lagged components.

These observations mean that NeuralProphet may take some extra resources to be fit, but will need significantly fewer resources in deployment. Further, as NeuralProphet is faster in computing predictions by an order of magnitude, it is possible to deploy it in time-sensitive applications where the next prediction needs to be reliably computed in a fraction of a second.

Figure 6: Training and prediction times across over all folds for experiments without lagged components.

Thanks to being fully implemented in PyTorch, the NeuralProphet model may be parallelized in the future, allowing it to be deployed on GPUs. We would expect this to close the gap in training time, and the gap in prediction time to widen.

4.2 Main Benchmark Results

The overall results are displayed in Table 4. We would like to highlight three observations: First, Prophet and NeuralProphet perform near identically in their default mode. However, both perform significantly worse than the Naïve one-step-ahead forecast. Second, NeuralProphet forecasts improve substantially when configuring any number of lags, consistently performing better than Naïve forecasting one step ahead. More lags generally lead to better performance. Third, when forecasting multiple steps ahead, even when we increase the horizon to 60 steps, NeuralProphet with any number of lags still performs better than Prophet or NeuralProphet in their default configuration.

We also observe that the specific number of lags chosen has a small impact on the forecast accuracy, suggesting that the model is not sensitive to optimal parameter choices.

MASE for different forecast horizons
Model 1 step 3 steps 15 steps 60 steps
Prophet 8.54
NeuralProphet 8.49
NeuralProphet (1 lags) 0.83 N/A N/A N/A
NeuralProphet (3 lags) 0.72 N/A N/A N/A
NeuralProphet (12 lags) 0.69 1.16 N/A N/A
NeuralProphet (30 lags) 0.62 0.99 2.07 N/A
NeuralProphet (120 lags) 0.62 0.94 1.97 3.77
Table 4: Forecast error (MASE) for each forecast horizon across all datasets. The standard deviation is shown in parentheses. The corresponding RMSSE values are given in Table 9 (Appendix C). Note: The models without lags can produce forecasts for an arbitrary number of steps. Note: Models with 120 lags were not evaluated on monthly datasets due to their short length.

All model parameters that were set for any of the models are explicitly displayed in Table 4: the number of forecast steps and the number of lags. No hyperparameters were manually set or tuned.
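For reference, a sketch of how these configurations map onto the NeuralProphet API; n_lags and n_forecasts are the constructor arguments controlling the auto-regression inputs and the forecast horizon, while the dataframe and its hourly frequency are placeholder assumptions.

    from neuralprophet import NeuralProphet

    # Default configuration (no auto-regression), comparable to Prophet:
    m_default = NeuralProphet()

    # Auto-regression enabled, e.g. the '30 lags' configuration forecasting
    # up to 15 steps ahead:
    m_ar = NeuralProphet(n_lags=30, n_forecasts=15)
    metrics = m_ar.fit(df, freq="H")   # df: dataframe with 'ds' and 'y' columns
    forecast = m_ar.predict(df)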

The choice of MASE or RMSSE as the error measure is perfectly adequate for the one-step-ahead forecasting task, but less so for multiple steps, as the scaling term only measures the naive model's one-step-ahead forecast error. When forecasting multiple steps ahead, an MASE value of 1 is no longer equal to a naive model's performance, as a naive model would likely have a higher error for multiple steps ahead. Nonetheless, MASE remains a useful quantity for comparing the forecast performance of different models across different datasets, as it scales the error into a somewhat standardized space.
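For clarity, a minimal sketch of the two scaled error measures as used here, following Hyndman and Koehler (2006); the function names and the plain NumPy implementation are our own choices.

    import numpy as np

    def mase(y_true, y_pred, y_train):
        """Mean Absolute Scaled Error: forecast MAE scaled by the MAE of a
        one-step-ahead naive forecast computed on the training data."""
        scale = np.mean(np.abs(np.diff(y_train)))
        return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / scale

    def rmsse(y_true, y_pred, y_train):
        """Root Mean Squared Scaled Error: as MASE, but with squared errors."""
        scale = np.mean(np.diff(y_train) ** 2)
        return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2) / scale)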

4.2.1 Ablation Study: Depth of Neural Network

We further examine the impact of different Neural Network configuration options on forecast accuracy for different forecast horizons. Overall, most configurations perform within range of their linear counterparts, except for the longest forecast horizon of 60 steps, where all NN configurations significantly outperform their linear counterpart. The specific choices of NN parameters have no discernible impact on the forecast accuracy. Thus, the model is insensitive to non-optimal NN parameter choices and, for short-term forecasts, also to the absence of any hidden layers.

MASE for different forecast horizons
Model 1 step 3 steps 15 steps 60 steps
(30 lags) 0.62 0.99 2.07 N/A
(30 lags, 1x32 NN) 0.60 0.93 1.88 N/A
(30 lags, 2x24 NN) 0.59 0.91 1.89 N/A
(30 lags, 4x16 NN) 0.61 0.93 1.91 N/A
(120 lags) 0.62 0.94 1.97 3.77
(120 lags, 1x32 NN) 0.63 0.92 1.81 3.06
(120 lags, 2x24 NN) 0.69 0.90 1.81 3.07
(120 lags, 4x16 NN) 0.66 0.96 1.86 3.03
Table 5: NeuralProphet (with NN) forecast error (MASE) for each forecast horizon across all datasets. The corresponding RMSSE values are given in Table 10 (Appendix C). Monthly datasets were excluded due to insufficient sample sizes to fit the parameters of a NN. The standard deviation is shown in parentheses. The model definition gives the number of input lags and the number of hidden layers with their dimensions, e.g. '4x16 NN' signifies that the model was set to use 4 hidden layers of dimension 16. Note: For all models with a NN, the learning rate was manually set, but not tuned, to a guesstimate of 0.001.

For this ablation study and all other configurations involving a NN, the learning rate was manually set to 0.001 for all model variations and datasets. This value is a guesstimate which was not tuned; it was set to avoid potential sources of variation from the learning rate range test. All other model parameters which were set for any of the models are explicitly displayed in Table 5: number of forecast steps, number of lags, number of hidden layers, and hidden layer dimensions.
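As a sketch, the '120 lags, 4x16 NN' configuration for a 60-step horizon could be expressed as follows. The parameter names num_hidden_layers and d_hidden follow the NeuralProphet release available around the time of writing; later releases may expose the hidden layers differently (e.g. as a list of layer sizes), so treat these names as assumptions.

    from neuralprophet import NeuralProphet

    m_nn = NeuralProphet(
        n_lags=120,            # auto-regression inputs
        n_forecasts=60,        # forecast horizon in steps
        num_hidden_layers=4,   # 4 hidden layers in the AR module
        d_hidden=16,           # 16 units per hidden layer
        learning_rate=0.001,   # fixed, untuned value used for all NN runs
    )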

4.2.2 Results Based on Data Subject Type

Model    human    economic    environment    energy
-step MASE
Prophet 1.11 4.25 9.27 11.13
NeuralProphet 1.13 2.39 9.48 11.13
1-step MASE
NeuralProphet (1 lags) 0.76 1.01 0.81 0.83
NeuralProphet (3 lags) 0.75 0.98 0.76 0.63
NeuralProphet (12 lags) 0.66 0.97 0.75 0.60
NeuralProphet (30 lags) 0.62 0.82 0.74 0.48
NeuralProphet (120 lags) 0.67 0.75 0.75 0.46
NeuralProphet (120 lags, 4x16 NN) 0.75 0.77 0.83 0.48
3-step MASE
NeuralProphet (12 lags) 0.77 1.81 1.25 1.10
NeuralProphet (30 lags) 0.73 1.08 1.21 0.88
NeuralProphet (120 lags) 0.73 0.98 1.19 0.79
NeuralProphet (120 lags, 4x16 NN) 0.87 0.83 1.27 0.76
15-step MASE
NeuralProphet (30 lags) 0.89 1.58 2.52 2.17
NeuralProphet (120 lags) 0.93 1.30 2.45 2.04
NeuralProphet (120 lags, 4x16 NN) 0.88 1.18 2.40 1.85
60-step MASE
NeuralProphet (120 lags) 1.38 1.70 5.14 3.78
NeuralProphet (120 lags, 4x16 NN) 1.05 1.52 3.75 3.34
Table 6: Forecast error (MASE) for each dataset subject type at different forecast horizons. The standard deviation is shown in parentheses. Note: Models with 120 lags were not evaluated on monthly datasets due to their short length.

In general, models with more lags tend to perform better. This is particularly evident for datasets of the energy subject type, where the 120-lag NeuralProphet model substantially reduces the forecast error on both 1-step-ahead and 60-step-ahead horizons. Further, we observe that the model configured with a 4-layer, 16-dimension NN is outperformed by linear models on 1-step and partially on 3-step-ahead forecasts, while performing best on 15-step and 60-step forecasts for all data types.

Overall, configuring any number of lags significantly improves the forecasting performance of NeuralProphet for any forecast horizon, except for 60-step horizons on datasets of the human subject type, where only a NN-based model outperforms Prophet and default NeuralProphet. Measurements of human activity tend to have strong seasonality and a quickly fading impact of local context; when forecasting into the distant future, statistical patterns tend to dominate human activity data.

4.3 Task-dependent Strength of Prophet and NeuralProphet

In Table 7 we present our subjective advice on which model to use depending on the task at hand. These are qualitative suggestions based on the empirical evidence we found when evaluating the two models against each other.

Task Prophet NeuralProphet
Small dataset ()
Medium to large dataset ()
Long range forecast ()
Short to medium range forecast ()
Specific forecast horizon (e.g. )
Auto-correlation (dependence on previous observations)
Lagged regressors (dependence on observed covariates)
Non-linear dynamics
Global modelling of panel dataset
Frequent retraining on small datasets with
  constrained computational resources
Fast prediction (computational inference time)
Table 7: Comparison of Prophet’s and NeuralProphet’s task-dependent strengths. An ✕ marks which of the two models we suggest using for the given task.

5 Conclusion

NeuralProphet is the first hybrid forecasting framework that meets the industry standards for explainability and ease of use set by Facebook Prophet. As NeuralProphet is based on PyTorch and trained with standard deep learning methods, any algorithm trainable by mini-batch stochastic gradient descent can be included as a module, which makes it easy for developers to extend the framework with new features and to adopt new research.

NeuralProphet can model local context with auto-regression and covariate regression, making it a suitable tool for time series where the near-term future depends on the current state of the system. This is evidenced by our empirical results, where NeuralProphet with auto-regression enabled significantly outperforms Prophet on all forecasting horizons, while both models perform comparably when configured identically. On short to medium-term forecast horizons, most configurations of NeuralProphet reduce the forecast error by 55 to 92 percent. Fusing the model with a Neural Network further improves forecast accuracy for medium to long range horizons.

In our ablation studies, we observe that NeuralProphet’s performance is not sensitive to the specific number of lags or the Neural Network configuration.

NeuralProphet is accessible to forecasting beginners thanks to solid defaults and automatic hyperparameter selection, while advanced users can incorporate domain knowledge with optional model customization. Upgrading from Prophet to NeuralProphet empowers forecasting practitioners with an explainable model in a convenient and scalable framework, covering a wide range of forecasting applications.

Acknowledgements

We are thankful for the invaluable advice from our academic and industrial advisers at Meta, Facebook AI, Netflix, Uber, Monash and Skoltech; in particular, Italo Lima, Caner Komurlu, Alessandro Panella, Evgeny Burnaev, Sean Taylor, and Benjamin Letham. We gratefully acknowledge Total Energies for their continued support; in particular, Lluvia Ochoa, Gonzague Henri, and Michel Lutz. We would also like to acknowledge the hard work of our lovely open-source contributors; in particular, Mateus De Castro Ribeiro, Riley De Haan, and Bernhard Hausleitner.

The work presented herein was funded in part by a research agreement between Stanford University and Total Energies. The views and opinions of authors expressed herein do not necessarily state or reflect those of the funding source.

References

  • B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell (2017) Stan: a probabilistic programming language. Journal of Statistical Software 76.
  • S. F. Crone (2008) NN5 competition.
  • N. DOE (2015) San Francisco hourly solar ground illumination data in W per m2 from the National Solar Radiation Database.
  • D. Dua and C. Graff (2017) UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences.
  • ERCOT (2021) ERCOT control area hourly load from January 2004 to September 2021, hourly load data archives.
  • R. Girshick (2015) Fast R-CNN. arXiv:1504.08083.
  • R. Godahewa, C. Bergmeir, G. I. Webb, R. J. Hyndman, and P. Montero-Manso (2021) Monash time series forecasting archive. arXiv:2105.06643.
  • A. C. Harvey and N. Shephard (1993) Structural time series models. In Handbook of Statistics, Vol. 11: Econometrics, North Holland, pp. 261–302.
  • R. J. Hyndman and G. Athanasopoulos (2014) Forecasting: principles and practice. OTexts. ISBN 9780987507105.
  • R. J. Hyndman and A. B. Koehler (2006) Another look at measures of forecast accuracy. International Journal of Forecasting 22 (4), pp. 679–688.
  • R. Hyndman (2018) A brief history of time series forecasting competitions.
  • I. Loshchilov and F. Hutter (2019) Decoupled weight decay regularization. arXiv:1711.05101.
  • S. Makridakis and M. Hibon (2000) The M3-competition: results, conclusions and implications. International Journal of Forecasting 16 (4), pp. 451–476.
  • S. Makridakis, E. Spiliotis, and V. Assimakopoulos (2018) Statistical and machine learning forecasting methods: concerns and ways forward. PLOS ONE 13 (3), e0194889.
  • J. Nocedal and S. Wright (2006) Numerical optimization. Springer Science and Business Media.
  • OpenEI (2015) San Francisco hospital hourly power consumption data of facility in kW.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035.
  • RTE (2018) Record of hourly power consumption from French utility RTE from January 2017 to December 2018.
  • L. N. Smith and N. Topin (2018) Super-convergence: very fast training of neural networks using large learning rates. arXiv:1708.07120.
  • L. N. Smith (2017) Cyclical learning rates for training neural networks. arXiv:1506.01186.
  • I. Svetunkov (2021) Forecasting and analytics with ADAM. OpenForecast.
  • L. J. Tashman (2000) Out-of-sample tests of forecasting accuracy: an analysis and review. International Journal of Forecasting 16 (4), pp. 437–450.
  • S. J. Taylor and B. Letham (2017) Forecasting at scale. PeerJ.
  • O. Triebe, N. Laptev, and R. Rajagopal (2019) AR-Net: a simple auto-regressive neural network for time-series. arXiv:1911.12436.
  • Uber (2018) Omphalos, Uber's parallel and language-extensible time series backtesting tool.
  • Uber (2020) Building a backtesting service to measure model performance at Uber-scale.

6 Appendices

6.1 Appendix A: Details on datasets

Name Type Subtype Freq T log10(T)
air_passengers human tourism M 144 2.2
retail_sales economic sales M 293 2.5
peyton_manning human clicks D 2905 3.5
birth human population D 7305 3.9
load_hospital energy load H 8760 3.9
solar_sf environment sun H 8760 3.9
load_rte energy load H 17518 4.2
load_victoria energy load 30min 17520 4.2
yosemite_temps environment weather 5min 18721 4.3
river environment water D 23741 4.4
price_ercot economic energy price H 25537 4.4
air_beijing environment air quality H 43800 4.6
sunspot environment sun D 73924 4.9
load_ercot energy load H 154872 5.2
power_wind energy wind 1min 493144 5.7
power_solar energy solar 1min 493149 5.7
Table 8: Summary of dataset characteristics. T refers to the dataset length.

6.2 Appendix B: Decomposition RMSE per component per experiment

(a) Experiments without lagged components.
(b) Experiments including lagged components.
Figure 7: Decomposition performance for each experiment and component. The metric is the RMSE of the predicted forecast component against the underlying true generated component value. Each bar displays the mean value and the line displays the standard deviation over 5 runs. Note: Prophet does not support auto-regression and lagged regressor components; Prophet's approximation of these components is implicitly assumed to be zero. Thus, the error shown for Prophet is equivalent to the component's standard deviation.

6.3 Appendix C: RMSSE per forecast horizon

RMSSE for different forecast horizons
Model 1 step 3 steps 15 steps 60 steps
Prophet 4.37
NeuralProphet 4.41
NeuralProphet (1 lags) 0.71 N/A N/A N/A
NeuralProphet (3 lags) 0.61 N/A N/A N/A
NeuralProphet (12 lags) 0.59 0.92 N/A N/A
NeuralProphet (30 lags) 0.57 0.82 1.51 N/A
NeuralProphet (120 lags) 0.51 0.76 1.45 2.46
Table 9: Forecast error (RMSSE) for each forecast horizon across all datasets. The standard deviation is shown in parentheses. Note: The models without lags can produce forecasts for an arbitrary number of steps. Note: The model with 120 lags was not evaluated on monthly datasets due to their short length.
RMSSE for different forecast horizons
Model 1 step 3 steps 15 steps 60 steps
(30 lags, 1x32 NN) 0.51 0.76 1.42 N/A
(30 lags, 2x24 NN) 0.50 0.75 1.43 N/A
(30 lags, 4x16 NN) 0.51 0.76 1.44 N/A
(120 lags, 1x32 NN) 0.52 0.75 1.36 2.07
(120 lags, 2x24 NN) 0.55 0.74 1.37 2.09
(120 lags, 4x16 NN) 0.55 0.78 1.41 2.09
Table 10: NeuralProphet (with NN) forecast error (RMSSE) for each forecast horizon across all datasets, except monthly, due to their insufficient length to fit a NN. The standard deviation is shown in parentheses. The model definition gives the number of input lags and the number of hidden layers with their dimensions, e.g. 4x16 signifies 4 hidden layers of dimension 16.