1 Introduction
The recent rise of E-commerce has created a need for operational product-level demand forecasts. Indeed, modern standards in logistics require just-in-time resupply, and more accurate forecasts can lead to substantial cost savings.
However, the business environment of E-commerce makes this prediction complex because of the volatility of sales. For instance, sales are affected by holiday effects, competitor behaviour and pricing changes, among others. Demand data also carry challenges of their own, such as non-stationary historical data, short time series and cannibalisation effects.
There are natural groupings of products, where items of the same type, sub-type, brand or price segment fall into the same group. Within such a group, the key properties of the products are close to each other. For instance, products of the category 'Toys' share the same seasonal behaviour, and their sales all increase at Christmas.
Existing methods generally treat the different series separately. This may work well in physical retail, but the rapid rotation of products and the volatility of demand in online retail call for models that share information between time series [Yelland, 2010, Chapados, 2014, Trapero et al., 2015, Bandara et al., 2019].
In this study, we propose a framework for a real-world demand forecasting problem in E-commerce. Our goal is to exploit the correlation between series to improve the accuracy of predictions. In particular, we want to tackle the problem of time series with short histories.
In Section 2, we formally define the problem and briefly review previous work in the field. We present the preprocessing of the data in Section 3. In Section 4, we present the boosting model that gives us the best performance. Finally, we present the setup and results of our experiments on a real-world dataset in Section 5.
2 Context
2.1 Formulation
We have a set P of products, divided into K different categories P_1, …, P_K such that P is the disjoint union of the P_k. We also have count time series y_{i,t}, where y_{i,t} represents the number of sales of product i during week t. These series are observed over T weeks.
The support of these series, i.e. the number of weeks with non-zero sales, is relatively small compared to T for each series. This means that we do not observe much history for each product individually. We suppose that the series follow a seasonality of period P, mostly annual (P = 52 weeks), although an entire period is seldom observed.
Some external features are important. Three types of covariates may be used:

Temporal features: covariates that depend on the date only. They are common to all products. For instance, special events (Christmas, Black Friday) and weather-related covariates fall into this category.

Longitudinal features: covariates that depend on the product only, for instance the type of product or its brand. Longitudinal features allow us to build a hierarchy of products.

Mixed features: covariates that depend on both. For instance, the price of a product may vary every week.
Our objective is to forecast the values of these series at a horizon h. More formally, we wish to develop a prediction model f_θ such that, if we consider the past sales y_{i,1}, …, y_{i,t} of a product i and the covariates x_{i,t} available at week t, the value
ŷ_{i,t+h} = f_θ(y_{i,1}, …, y_{i,t}, x_{i,t})
is an estimator of
E[y_{i,t+h} | y_{i,1}, …, y_{i,t}, x_{i,t}]
for a set of learnable parameters θ.
2.2 Related Work
A large amount of work has been published on demand forecasting methods for different applications (facilities, physical and online retail, …). The most widely used methods are classical time series models such as ARIMA [Ediger and Akar, 2007] and exponential smoothing variants [Taylor, 2003]. However, forecasting in the E-commerce space commonly needs to address challenges such as irregular sales trends and highly bursty, sparse sales data. Some of these limitations can be overcome through modified likelihood functions and extended linear models [Seeger et al., 2016], but these methods fail to achieve good performance when the series are short.
Other regression methods have been used, such as generalized additive models [Pierrot and Goude, 2011], support vector machines [Chen et al., 2004] and convolutional networks [Borovykh et al., 2017]. All these methods perform only univariate forecasting and therefore run into the same problems.
Recently, neural networks [Bandara et al., 2019] have been proposed to use cross-series information for the specific purpose of E-commerce. They adapt a Long Short-Term Memory (LSTM) architecture to treat all the series at the same time. They also separate the effects of longitudinal and temporal features and appear to achieve good performance. This suggests that non-linearity is important for modelling such data.
Bayesian hierarchical models are another promising approach [Yelland, 2010, Chapados, 2014]. This framework fits a simple model for each time series, with constraints on the learnable parameters of the models. These constraints are based on prior assumptions about the distribution of the learnable parameters across the different products. For instance, we can impose a prior distribution on the effect of some covariates. This allows information to be shared between time series and the effects of each covariate to be separated. Moreover, it gives confidence bounds on the predictions.
3 Data Preprocessing
3.1 Preprocessing of sales features
There are two types of issues with sales data in E-commerce. The first is the presence of abnormally low values, or 'fake zeros'. These low values can be due to stock shortages, network issues or modifications of the search engine on the website. As we want to predict demand, we have to identify and replace these values. 'Fake zeros' can be identified through different methods, using stock information and various thresholds. We replace them using a standard univariate prediction algorithm based on classical time series methods applied to each series.
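As a minimal sketch of this repair step: the paper replaces 'fake zeros' with the output of a classical univariate forecasting model; here we use a much simpler stand-in, the mean of nearby unaffected weeks, and the stock-out flags and window size are illustrative assumptions.

```python
import numpy as np

def replace_fake_zeros(y, out_of_stock, window=4):
    """Replace 'fake zeros' (low sales caused by stock-outs rather than
    low demand) with a naive local estimate: the mean of nearby weeks
    that were not affected. The paper uses a classical univariate time
    series model instead; this window-mean rule is only an illustration."""
    y = np.asarray(y, dtype=float)
    fixed = y.copy()
    for t in np.where(out_of_stock)[0]:
        lo, hi = max(0, t - window), min(len(y), t + window + 1)
        neighbours = [y[s] for s in range(lo, hi) if not out_of_stock[s]]
        if neighbours:
            fixed[t] = np.mean(neighbours)  # fill with local demand level
    return fixed
```

Any univariate forecaster (exponential smoothing, seasonal naive) can be substituted for the window mean without changing the surrounding pipeline.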
The second issue is the presence of abnormally high values. These values are informative, because they provide information on the effect of promotional sales. However, they are problematic when used as lag features, because they may suggest a higher level of sales than expected, or misinform about trend and seasonality. Therefore, we construct 'smoothed sales' ỹ_{i,t} by eliminating the values that exceed the mean by more than λ times the standard deviation. More precisely, writing m_i and σ_i for the mean and standard deviation of series i:

If y_{i,t} > m_i + λσ_i, then ỹ_{i,t} = m_i + λσ_i.

Otherwise, ỹ_{i,t} = y_{i,t}.
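The clipping rule above can be sketched in a few lines; the value of the multiplier λ is an assumption here, not the paper's setting.

```python
import numpy as np

def smooth_sales(y, lam=3.0):
    """Construct 'smoothed sales': clip abnormally high weekly values at
    mean + lam * std of the series. The multiplier lam is an illustrative
    assumption; the paper does not state its chosen value."""
    y = np.asarray(y, dtype=float)
    cap = y.mean() + lam * y.std()
    return np.minimum(y, cap)  # values below the cap are kept as-is
```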
3.2 Trend and seasonality
We want to enrich the features with information about trend and seasonality. The goal is to produce features that can be compared between products, which allows them to be used as global features for all products.
First, we generate normalized trend features using the smoothed values. We use both an annual trend, computed by regression over the previous year's data (when available), and a local trend.
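A minimal sketch of such normalized trend features: a slope fitted over the last year of data and a slope over a short recent window, each divided by the mean level so that products of different sizes are comparable. The window lengths are illustrative assumptions.

```python
import numpy as np

def trend_features(y, local_window=8, year_len=52):
    """Normalized annual and local trend features (a sketch). Slopes are
    fitted by least squares and divided by the mean level of the series;
    local_window and year_len are illustrative assumptions."""
    y = np.asarray(y, dtype=float)
    level = y.mean() if y.mean() > 0 else 1.0
    last_year = y[-year_len:]
    annual = np.polyfit(np.arange(len(last_year)), last_year, 1)[0] / level
    recent = y[-local_window:]
    local = np.polyfit(np.arange(len(recent)), recent, 1)[0] / level
    return annual, local
```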
The treatment of seasonality is more complex, because of the short nature of these series. We use a variant of the procedure described in [Kumar et al., 2002] to produce a seasonality factor for each product. Let us sketch this procedure.
First, we normalize the sales numbers for each year. We want to ensure that each product has the same mean level. For each product i, writing n_i for the number of weeks during the year in which the product was actually on sale, we set, for each such date t of that year:
z_{i,t} = y_{i,t} / ((1/n_i) Σ_{t'} y_{i,t'}),
where the sum runs over the n_i on-sale weeks.
Second, we compute the mean of the standardized values z_{i,t} within each category. We therefore obtain a standardized seasonality profile for each product category. The core idea is to suppose that there is a common multiplicative seasonality for all products of a category. Therefore, if the dates at which the products were placed on the market are uniformly distributed, the calculated mean is directly proportional to the seasonality.
However, due to the erratic nature of E-commerce sales data, the seasonality profiles calculated at this step are often not informative enough and contain noise. That is why we use a time series clustering algorithm to cluster the seasonality profiles of the different categories. This clustering is based on the Euclidean distance between seasonality patterns, but also takes into account the variance of the seasonality within each category.
We finally produce a small number of seasonality patterns. These patterns allow us to produce a seasonality feature for each product according to its category, even when we have no information on its previous sales.
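The first two steps of this procedure can be sketched as follows: normalize each product by its mean over on-sale weeks, then average the normalized series within each category. The final clustering of category profiles is a further step, omitted here; data layout and names are illustrative assumptions.

```python
import numpy as np

def category_seasonality(sales, category_of):
    """Per-category seasonal profile, a sketch of the Kumar et al.-style
    procedure used in the paper. `sales` maps a product id to an array of
    weekly sales over one year (NaN for weeks the product was not on
    sale); `category_of` maps a product id to its category. Clustering
    the resulting category profiles is omitted."""
    buckets = {}
    for pid, y in sales.items():
        y = np.asarray(y, dtype=float)
        mean_on_sale = np.nanmean(y)          # mean over on-sale weeks only
        z = y / mean_on_sale if mean_on_sale > 0 else y
        buckets.setdefault(category_of[pid], []).append(z)
    # average the normalized series within each category
    return {cat: np.nanmean(np.vstack(zs), axis=0) for cat, zs in buckets.items()}
```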
3.3 Treatment of other features
Encoding of categorical features
Longitudinal features are often categorical, so we need to encode them to use them with numerical algorithms. However, due to the high number of categories, standard one-hot encoding creates a large number of features and forces the use of very simple regression models.
Two possibilities remain for more complex models. First, ordinal encoding is simple and easy to implement, but it introduces an order on the feature values that does not really make sense.
Second, it is also possible to hash the features in order to obtain a small number of columns. This partially avoids the artificial ordering of the values. However, it becomes harder to make the importance of each value explicit.
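A minimal sketch of the hashing approach: a stable hash maps each category string to one of a small number of integer buckets, trading the width of one-hot encoding for possible collisions. The bucket count is an illustrative choice.

```python
import hashlib

def hash_encode(value, n_buckets=32):
    """Feature hashing for a high-cardinality categorical value (a sketch).
    md5 is used instead of Python's built-in hash() so the encoding is
    stable across processes; n_buckets is an illustrative assumption."""
    digest = hashlib.md5(str(value).encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets  # bucket index in [0, n_buckets)
```

As the text notes, the price of this compactness is interpretability: several unrelated category values may share a bucket.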
Unpredictable features
Some mixed or temporal features, like weather or prices, cannot be used directly for prediction, because they cannot be known at the horizon for which we want to predict. However, these features can be used to train the model on past data, in order to explain abnormally low (or high) values in the past. We can then perform prediction using a guess of the future values. For instance, we can take the seasonal value of weather features, or the mean observed price over past data. This scheme has a weakness: the fact that we use exact past values during training leads machine learning algorithms to give too much importance to these features.
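The two substitution rules mentioned above can be sketched as follows; the data shapes and names are illustrative assumptions.

```python
import numpy as np

def proxy_future_covariates(past_prices, seasonal_weather, week_of_year):
    """Guess unknown future covariates at prediction time (a sketch):
    the future price is proxied by the mean observed past price, and the
    future weather by its seasonal (historical) value for the same week
    of the year. Shapes and names are illustrative assumptions."""
    price_guess = float(np.mean(past_prices))
    # seasonal_weather: array of historical averages indexed by week of year
    weather_guess = float(seasonal_weather[week_of_year % len(seasonal_weather)])
    return price_guess, weather_guess
```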
4 Model
4.1 Learning schemes
We consider our problem of multiple time series forecasting as a regression problem. Our target is the sales value, corrected for 'fake zeros', at horizon h. We use the past values of the smoothed sales ỹ as features, as described in Section 3.1. We therefore have a prediction
ŷ_{i,t+h} = f_θ(ỹ_{i,1}, …, ỹ_{i,t}, x_{i,t}).
The hypothesis is that ỹ represents the 'normal' level of sales: it is supposed to remove punctual effects, like special offers. The other features then give information about the difference y − ỹ. Therefore, we prefer using smoothed lagged values ỹ as features instead of the raw lagged values y.
However, we cannot completely separate the estimation of the normal level ỹ from that of the difference y − ỹ, because the size of this difference strongly depends on the level of ỹ. It is also hard to distinguish the features that affect only ỹ from those that affect y − ỹ.
We use as learning set the feature/target tuples of all products before a given date. Hyperparameters are selected using a simple validation period.
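The pooling of all products into one regression table can be sketched as follows; the number of lags is an illustrative assumption, and for simplicity the same smoothed series serves as both feature source and target here, whereas the paper targets the fake-zero-corrected sales.

```python
import numpy as np

def build_learning_set(smoothed, horizon=6, n_lags=4):
    """Build one global regression table (a sketch): each row corresponds
    to a (product, week) pair, with n_lags smoothed lagged sales as
    features and the value `horizon` weeks ahead as the target. n_lags is
    an illustrative assumption; real rows would also carry the temporal,
    longitudinal and mixed covariates."""
    X, y = [], []
    for pid, s in smoothed.items():
        s = np.asarray(s, dtype=float)
        for t in range(n_lags, len(s) - horizon):
            X.append(s[t - n_lags:t])   # lagged smoothed sales
            y.append(s[t + horizon])    # target h weeks ahead
    return np.array(X), np.array(y)
```

Rows from all products are stacked into the same matrix, which is what lets the regression model share information across series.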
We sum up the whole scheme in Figure 1.
4.2 Loss
We distinguish the evaluation metrics, used to assess our predictions, from the learning metric, used inside our models to evaluate the dispersion of the time series.
We use the standard Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) as evaluation metrics, weighting each product by its price.
These metrics are commonly used in supply chain forecasting. However, the RMSE is sensitive to extreme values, and both metrics tend to favour underestimated predictions.
Therefore, we propose to use a Poisson loss as the learning metric. This metric has already been used in [Borovykh et al., 2017]. We suppose that the sales y_{i,t} follow a Poisson distribution of parameter λ_{i,t}. The criterion we want to optimize is then the negative log-likelihood of the observed value, or Poisson loss:
L(ŷ, y) = ŷ − y log(ŷ),
up to an additive term that does not depend on the prediction ŷ.
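The Poisson loss can be sketched directly from this expression; the term log(y!) of the full negative log-likelihood is dropped since it does not depend on the prediction.

```python
import numpy as np

def poisson_loss(y_true, y_pred):
    """Poisson negative log-likelihood used as the learning metric
    (a sketch): mean of y_pred - y_true * log(y_pred), dropping the
    log(y!) term that does not depend on the prediction."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-9, None)  # avoid log(0)
    return float(np.mean(y_pred - y_true * np.log(y_pred)))
```

For a fixed observation, this loss is minimized when the prediction equals the observed value, and it penalizes large deviations less harshly than a squared error would.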
It is a natural choice for three reasons.
First, we observed that the sales time series are strongly heteroscedastic, and that the local variance of each series is strongly correlated with its local mean.
Second, it allows us to limit the effect of outliers in our data: high values are more likely under a Poisson model than under, for instance, a Gaussian white noise model.
Third, positive integer values are naturally modelled by a counting process. We can suppose that, for each week t and each product i, clients arrive following a Poisson process, and that the parameter of this process changes each week.
4.3 Algorithms
The choice of the machine learning algorithm is crucial. On the one hand, it must be flexible enough to use different kinds of features and to select the most useful ones; in particular, it should be robust to redundant or correlated features. On the other hand, it must be consistent enough to avoid overfitting. Finally, due to the large number of series and features, it must be fast enough to handle large data.
We tested different models. For each, we tried to perform variable selection through simple validation. We also tried normalizing the features in each case.
Linear models
It is possible to use various linear models with one-hot encoded categorical data. With Lasso or Elastic Net penalization, they perform good variable selection. However, they model neither thresholds nor other non-linear effects, and they do not exploit cross-feature effects.
Generalized Additive Models
A standard generalization of linear models is the Generalized Additive Model (GAM), often used in time series prediction [Hastie, 2017]. It consists in regressing the target on a spline basis made of simple functions of the covariates. It can handle non-linear effects, but cross-feature effects have to be imposed manually. Here, we were not able to find a GAM configuration that offered good performance.
Random Forests Regression
Random Forests are a bagging algorithm: different regression trees are built on bootstrap samples, and the prediction is an aggregate of the predictions of the individual trees. They take threshold and cross-feature effects into account, and they can be parallelized, which allows fast training.
Random Forests are well suited to this estimation problem, and they obtain good performance on the dataset.
Boosting Tree Algorithm
Contrary to tree bagging methods, tree boosting methods implement a sequential pooling of the predictions of different trees. They have recently received a lot of attention due to their performance on real-world cases. Here we mostly use XGBoost [Chen and Guestrin, 2016], a fast gradient boosting implementation.
It keeps the advantages of Random Forests but achieves better performance. The price is a generally higher training time, because the training cannot be parallelized across trees. XGBoost hyperparameters are selected via validation; the ranges explored are presented in Table 1. We use early stopping to reduce the training time.
Table 1: Search ranges for the XGBoost hyperparameters.

| Parameter | Min value | Max value |
| --- | --- | --- |
| learning rate | 0.01 | 0.3 |
| min split loss | 0.01 | 0.2 |
| max depth | 5 | 8 |
| round evaluation | 1000 | 5000 |
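A random search over the Table 1 ranges can be sketched as follows; the uniform sampling strategy and the parameter key names are illustrative assumptions.

```python
import random

# Search ranges taken from Table 1; key names follow XGBoost conventions
# but the sampling strategy (plain uniform) is an illustrative assumption.
SEARCH_SPACE = {
    "learning_rate": (0.01, 0.3),
    "min_split_loss": (0.01, 0.2),
    "max_depth": (5, 8),
    "num_boost_round": (1000, 5000),
}

def sample_config(rng=random):
    """Draw one candidate XGBoost configuration from the Table 1 ranges."""
    return {
        "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
        "min_split_loss": rng.uniform(*SEARCH_SPACE["min_split_loss"]),
        "max_depth": rng.randint(*SEARCH_SPACE["max_depth"]),
        "num_boost_round": rng.randint(*SEARCH_SPACE["num_boost_round"]),
    }
```

With xgboost installed, such a configuration would typically be passed to `xgb.train` together with `objective="count:poisson"` (XGBoost's built-in Poisson objective) and an `early_stopping_rounds` setting, matching the loss and early stopping described above.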
We also tried LightGBM [Ke et al., 2017], which runs faster but obtains slightly worse performance.
5 Experiments
5.1 Dataset
We apply our forecasting framework to a dataset collected from Cdiscount.com. It gathers the sales of the products, grouped in categories, during approximately 4 years. Figure 2 shows the distribution of the sales history lengths for the products of the dataset: a large proportion of them are sold only during a short period.
We set our forecasting horizon to 6 weeks. We train the model on the first 170 weeks, then use the next 10 weeks for validation of the hyperparameters. To evaluate the model, we use the last 19 weeks. These weeks correspond to the last weeks of 2018 and the beginning of 2019, and therefore contain a lot of variability (Black Friday, Christmas, winter sales).
The products are split into 3 sets, named A, B and C. The first one groups the products that sell the most, the last one those that sell the least.
5.2 Benchmark and XgBoost variants
We compare our algorithm to both in-house and state-of-the-art benchmarks. First, we use a state-of-the-art industry algorithm, called Benchmark below. This algorithm produces a prediction for each series using a classification of the time series and business knowledge.
We also use a simple exponential smoothing algorithm as a benchmark (ES).
We also report the performance obtained when using machine learning algorithms other than XGBoost within our framework, for instance Random Forests (RF).
Another advantage of the global prediction method is that it allows cold-start prediction, i.e. prediction on new series without history. In order to make the evaluation fair with respect to the benchmark, we remove the first 6 weeks of life of each product, where our algorithm is able to make predictions but the benchmark algorithms are not.
5.3 Results
Table 4 shows the performance of the predictions for the two evaluation metrics, for the total set of products and for the different sets A, B and C. The RMSE values are expressed in k€. We present different versions of our algorithm, depending on the encoding of categorical features (ordinal or hashing) and the use of the seasonality features (described in Section 3.2).
We can see that XGBoost outperforms the Benchmark for all categories. It reduces the MAE by approximately 5% and the RMSE by 10% on the whole dataset. Globally, the relative gain is larger in RMSE than in MAE, which shows that it reduces the largest errors more than it improves the average prediction.
Ordinal encoding, surprisingly, seems better than hashing. And the models with seasonality are better on the best-selling products of group A, which have the highest business impact.
| ML Algo. | Configuration | Encoding | All RMSE | All MAE | A RMSE | A MAE | B RMSE | B MAE | C RMSE | C MAE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ES | – | – | 3.83 | 1.09 | 5.68 | 1.03 | 1.12 | 1.06 | 1.28 | 1.31 |
| RF | with seas. | Ordinal | 3.09 | 0.831 | 5.27 | 0.796 | 1.66 | 0.892 | 1.32 | 0.92 |
| XgBoost | Poisson / with seas. | Ordinal | 2.67 | 0.725 | 4.59 | 0.674 | 1.41 | 0.801 | 1.20 | 0.874 |
| XgBoost | Poisson / without seas. | Ordinal | 2.76 | 0.730 | 4.72 | 0.681 | 1.39 | 0.800 | 1.21 | 0.872 |
| XgBoost | Poisson / with seas. | Hashing | 2.78 | 0.728 | 4.75 | 0.685 | 1.41 | 0.816 | 1.21 | 0.893 |
| XgBoost | Poisson / without seas. | Hashing | 2.79 | 0.740 | 4.77 | 0.689 | 1.40 | 0.817 | 1.22 | 0.893 |
| Benchmark | – | – | 3.01 | 0.758 | 4.97 | 0.688 | 1.77 | 0.907 | 1.56 | 0.982 |
Looking more closely at the performance, we see that our algorithm performs particularly well during the first weeks of the product cycle. We present these results in Table 3 for the Benchmark and for our framework with the seasonality and ordinal features. The high variability of the RMSE is due to the small number of products concerned and the high variability of the studied period. Nevertheless, we can observe that, at the beginning of the product cycle, our framework strongly outperforms the benchmark: for instance, for products with 10 weeks of historical data, both error metrics are markedly lower. This difference decreases with time, as the benchmark gains sufficient history for its predictions.
| Product cycle length (weeks) | Framework RMSE | Framework MAE | Benchmark RMSE | Benchmark MAE |
| --- | --- | --- | --- | --- |
| 8 | 3.71 | 1.04 | 6.04 | 1.77 |
| 9 | 3.28 | 0.879 | 6.43 | 1.36 |
| 10 | 3.67 | 0.920 | 6.39 | 1.21 |
| 11 | 11.48 | 1.15 | 11.97 | 1.31 |
| 12 | 5.33 | 0.867 | 7.19 | 0.928 |
6 Conclusion
Improving demand forecasting in E-commerce is possible through global methods, which share information between time series. In this paper, we proposed a gradient boosting method to do so. It allows us to exploit the cross-feature and non-linear effects that exist in E-commerce data. Moreover, it also allows us to perform cold-start prediction, with very little history on our products.
We also proposed several tricks to tackle the difficulties inherent to E-commerce data. In particular, we proposed a way to compute the seasonality of a product from the behaviour of the rest of the products.
Finally, we evaluated our methodology on a real-world dataset with a realistic number of products, and we outperform state-of-the-art solutions for demand forecasting.
References
Bandara, K., Shi, P., Bergmeir, C., Hewamalage, H., Tran, Q., and Seaman, B. (2019). Sales demand forecast in e-commerce using a long short-term memory neural network methodology. arXiv preprint arXiv:1901.04028.
Borovykh, A., Bohte, S., and Oosterlee, C. W. (2017). Conditional time series forecasting with convolutional neural networks. stat, 1050:16.
Chapados, N. (2014). Effective Bayesian modeling of groups of related count time series. In International Conference on Machine Learning, pages 1395–1403.
Chen, B.-J., Chang, M.-W., et al. (2004). Load forecasting using support vector machines: a study on EUNITE competition 2001. IEEE Transactions on Power Systems, 19(4):1821–1830.
Chen, T. and Guestrin, C. (2016). XGBoost: a scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM.
Ediger, V. Ş. and Akar, S. (2007). ARIMA forecasting of primary energy demand by fuel in Turkey. Energy Policy, 35(3):1701–1708.
Hastie, T. J. (2017). Generalized additive models. In Statistical Models in S, pages 249–307. Routledge.
Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: a highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, pages 3146–3154.
Kumar, M., Patel, N. R., and Woo, J. (2002). Clustering seasonality patterns in the presence of errors. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 557–563. ACM.
Pierrot, A. and Goude, Y. (2011). Short-term electricity load forecasting with generalized additive models. Proceedings of ISAP Power, 2011.
Seeger, M. W., Salinas, D., and Flunkert, V. (2016). Bayesian intermittent demand forecasting for large inventories. In Advances in Neural Information Processing Systems, pages 4646–4654.
Taylor, J. W. (2003). Short-term electricity demand forecasting using double seasonal exponential smoothing. Journal of the Operational Research Society, 54(8):799–805.
Trapero, J. R., Kourentzes, N., and Fildes, R. (2015). On the identification of sales forecasting models in the presence of promotions. Journal of the Operational Research Society, 66(2):299–307.
Yelland, P. M. (2010). Bayesian forecasting of parts demand. International Journal of Forecasting, 26(2):374–396.