1 Introduction
Time-series forecasting is an important task in both academia [4] and industry [16], and can be applied to many real forecasting problems, including stock, water-supply, and sales predictions. In this paper, we study forecasting retailers' future sales at Tmall.com (https://en.wikipedia.org/wiki/Tmall), one of the world's leading online business-to-customer (B2C) platforms, operated by Alibaba Group. The problem is practically important because accurate estimation of each retailer's future sales can help evaluate and assess the potential value of small businesses, and help discover opportunities for further investment.
However, accurately estimating each retailer's future sales on Tmall is challenging for several reasons. First, different goods are naturally sold with strong seasonal patterns. For example, most fans are sold in summer, while most heaters are sold in winter; that is, we can observe strong seasonal properties on different groups of retailers. Second, the distribution of sales among all retailers follows a long-tailed power-law distribution, i.e., the retailers' sales spread over a very wide range. Naively ignoring these issues can make the forecasting performance much worse.
In this paper, we analyze and summarize the characteristics of the sales data at Tmall.com, and propose two mechanisms to improve forecasting performance. On one hand, we propose to extract seasonalities from groups of retailers. Specifically, we characterize the seasonal evolution of sales by first clustering retailers into groups and then studying each group's seasonal series as a decomposition over a series of Fourier basis functions. The results of this approach can be used as features and simply added to the feature space of any established machine learning toolkit, e.g., linear regression, neural networks, and tree-based models. On the other hand, we apply a distribution transformation to the original retailers' sales. Specifically, we observe that the distribution over retailers' sales follows a Tweedie distribution after a logarithmic transformation. Thus, we propose to optimize a Tweedie loss for regression on the log-transformed sales instead of other losses on the original sales. Empirically, we show that the two proposed mechanisms can significantly improve the performance of predicting retailers' future sales when applied to both neural networks and tree-based ensemble models.
We summarize our main contributions as follows:

By analyzing the Tmall data, we obtain two observations: sales seasonality emerges after we group different categories of retailers, and a Tweedie distribution emerges after we log-transform the original sales.

Based on our observations, we design two general mechanisms, i.e., seasonality extraction and distribution transform, for sales forecasting. Both mechanisms can be applied to most existing regression models.

We apply the two proposed mechanisms to two popular regression models, i.e., the neural network (NN) and the Gradient Boosting Decision Tree (GBDT), and the experimental results on the Tmall dataset demonstrate that both mechanisms significantly improve the forecasting results.
2 Data Analysis and Problem Definition
In this section, we first describe the sales data and features on Tmall. We then analyze the seasonality and distribution of sales data. Finally, we give the sales forecasting problem a formal definition.
2.1 Sales Data and Feature Description
Tmall.com is nowadays one of the largest business-to-customer (B2C) E-commerce platforms, with more than 180,000 retailers. These range from giant retailers, such as Apple and Prada, to small businesses. The platform sells hundreds of thousands of products in diverse categories, e.g., ‘furniture’, ‘snack’, and ‘entertainment’.
Besides category information, the other features of retailers on Tmall can be mainly divided into three types. (1) Basic features that reflect the marketing and selling capability of each retailer, e.g., the amount of advertisement investment, the number of buyers, and the ratings/reviews given by the buyers. (2) High-level features generated from historical sales and basic feature data. Suppose a retailer generates a series of sales s_1, s_2, …, s_t, we are currently at time t, and we want to forecast s_{t+1}. Then s_t can be taken as a feature that indicates the sale amount of the previous period, s_t − s_{t−1} is a feature that indicates the increasing speed of the sales, and (s_t − s_{t−1}) − (s_{t−1} − s_{t−2}) is a feature that denotes the accelerated speed of the sales. Similarly, we can generate other high-level features, e.g., from the number of buyers, using the available basic features. (3) Seasonality features generated by other machine learning techniques, which aim to capture the seasonal properties of different retailers. We present how to generate these features in Section 3.1.
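As a sketch, the high-level features of type (2) can be derived from a monthly sales series as below; the function and feature names are ours, for illustration only, not Tmall's actual feature names:

```python
def highlevel_features(sales):
    """Derive high-level features from a monthly sales series s_1..s_t
    (at least three months of history assumed)."""
    s_t, s_t1, s_t2 = sales[-1], sales[-2], sales[-3]
    return {
        "prev_sales": s_t,                              # sale amount of the previous period
        "velocity": s_t - s_t1,                         # increasing speed of the sales
        "acceleration": (s_t - s_t1) - (s_t1 - s_t2),   # accelerated speed of the sales
    }
```

The same differencing pattern applies to any other basic feature series, e.g., the number of buyers per month.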
2.2 Seasonality Analysis
Retailers’ sales tend to have different seasonalities due to the seasonal items they sell. Take vegetables for example: tomatoes and cucumbers are usually sold more in summer, while celery cabbage is sold more in winter. Although the overall gross merchandise volume (GMV) demonstrates seasonal properties, as shown in Figure 1, platform-level seasonality analysis is of little use for predicting each individual retailer’s sales. Conversely, seasonality analysis on each single retailer does not generalize well into the future. Instead, we investigate the seasonalities of different groups of retailers.
By analyzing the sales data on Tmall, we observe different seasonal patterns for different categories of retailers. Figure 2 shows the sales of four different categories of retailers, i.e., ‘Women’s Wearing’, ‘Men’s Wearing’, ‘Snack’, and ‘Meat’, using two years of sales data from January 2015 to December 2016. We observe that ‘Women’s Wearing’ and ‘Men’s Wearing’ show quite similar seasonal patterns: both reach their peak in summer (July or August) and decline to their nadir in winter (January). In contrast, ‘Snack’ and ‘Meat’ show different seasonal patterns. In summary, the seasonalities of different categories can differ quite a lot. Thus, if we can partition the retailers into appropriate groups, the shared seasonality among retailers in a group can be statistically useful for characterizing each retailer in that group. Given a group of retailers, how to characterize the group’s seasonality remains to be solved; we discuss our approach in Section 3.1.
2.3 Sale Amount Analysis
Modeling the sales of each retailer over time is challenging. To illustrate this, we show the histogram over sales in Figure 3 (left): sales are highly diverse across retailers. In practice, this is hard to formalize as a trivial regression problem, because the errors on the sales of giant retailers would dominate a loss such as least squares.
Instead, after a logarithmic transformation on the sales of each retailer, we find that the histogram clearly follows a Tweedie distribution, shown in Figure 3 (right) and described in detail in Section 3.2. As a result, this transformation on the dependent variable makes forecasting much easier. Note that some retailers’ sales are always around zero. This is because, from time to time, some shops on Tmall close or are forced to close by Tmall, while other shops are newly opened or reopened.
2.4 Problem Definition
Denote any retailer on Tmall.com as i and the current month as t. We formalize the sales forecasting problem as a regression problem: given the features x_i^{t′} ∈ R^d of each retailer i, where d is the feature dimensionality and t′ ≤ t denotes the months up to t, and the known sales y_i^{t′} of each retailer, we want to learn a function f: x_i^t → y_i^{t+1}, where x_i^t denotes the features of retailer i at month t and y_i^{t+1} denotes its sales at month t+1. That is, given the features of any retailer at month t, we want to predict its sales at month t+1.
3 Model Design and Implementation
In this section, we present the two mechanisms we designed, i.e., seasonality extraction and distribution transform, for sales forecasting.
3.1 Seasonality Over Groups of Retailers
As we reported in Section 2.2, the seasonalities of different categories can differ quite a lot; the remaining problem is therefore how to partition the retailers into appropriate groups. Instead of manually partitioning retailers, we adopt clustering methods for time-series data [14]. Specifically, we group the retailers by the basic and high-level features described in Section 2.1, so that retailers with similar features are grouped together.
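As an illustration of the grouping step, a minimal k-means over retailer feature vectors might look like the sketch below; this is a simplified stand-in for the time-series clustering methods of [14], not the production algorithm:

```python
import numpy as np

def cluster_retailers(X, k, iters=50):
    """Group retailers by their basic/high-level feature vectors with a
    minimal k-means. X: (n_retailers, n_features) array."""
    X = np.asarray(X, dtype=float)
    centers = X[:k].copy()  # naive deterministic initialization, for brevity
    for _ in range(iters):
        # assign each retailer to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # move each center to the mean of its assigned retailers
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

In practice one would normalize the features first and choose k by a validation criterion; the seasonality extraction below is then applied per group.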
After we partition retailers into groups, we adopt the discrete Fourier transform to automatically extract the seasonality for retailers in each group. Formally, assume a group of sellers whose expected amount of sales at time t is denoted g(t), which yields a series of expected sales observations g(1), g(2), …, g(T). Each periodic function can be expanded by its Fourier series, a linear combination of (infinitely many) sines and cosines:

g(t) = a_0 + Σ_{n=1}^{∞} [ a_n cos(2πnt/P) + b_n sin(2πnt/P) ],   (1)

where P is the period and a_0, a_n, b_n are parameters to be optimized; in practice the summation is truncated to a finite number of terms. As a result, the function g(t) can be represented by a Fourier basis.
We now show the extracted seasonalities of different groups of retailers. We randomly select two groups of retailers and show their seasonalities and sales estimates in Figure 4; the two groups mainly sell purses and accessories, respectively. In Figure 4, we use the first 15 months of data to learn the parameters in Eq. (1), and then use them to predict the seasonality over all months. Our extracted seasonality is clearly very close to the real one in both groups. Similarly, we can extract seasonality for other features, e.g., the number of buyers and the amount of advertisement investment, by the same method.
In practice, we use two types of features extracted from such seasonal patterns: (1) the seasonal values of the target we want to extract, e.g., sales and the number of buyers, in a window of 12 months centered around the month to be forecast; and (2) the variation, i.e., the differences among those seasonal values. Such seasonality or trend measures for each group of sellers can be fed into any regression model, so as to characterize the seasonal patterns of each seller. We empirically study the effectiveness of these seasonality features in our experiments.
3.2 Tweedie Loss For Regression
As described in Section 2.3, we observe that the sales on Tmall clearly obey a Tweedie distribution after a logarithmic transformation. From Figure 3 (right), we see that the log-transformed sales form a combination of a Poisson distribution and a Gamma distribution, which is a special case of the Tweedie distribution, i.e., a compound Poisson-Gamma distribution. That is, we assume that (1) the statuses of retailers, i.e., closed or not, are independent and identically distributed and obey a Poisson distribution; and (2) the sales of retailers are also independent and identically distributed and obey a Gamma distribution. The Tweedie distribution was first proposed in
[22], and then formally named by Bent Jørgensen in [9]; it belongs to the class of exponential dispersion models and has been popularly used in insurance scenarios [23]. We now formally describe the Tweedie distribution in the sales forecasting scenario. Let N be a Poisson random variable, N ~ Pois(λ), and let G_1, G_2, … be independent identically distributed Gamma random variables, G_j ~ Gamma(α, γ), with mean αγ and variance αγ². We also assume that N is independent of the G_j. Define a random variable Y by

Y = Σ_{j=1}^{N} G_j,  with Y = 0 when N = 0.   (2)

We can see from Eq. (2) that Y is the Poisson sum of independent Gamma random variables, which is also called the compound Poisson-Gamma distribution. In sales forecasting scenarios, λ can be viewed as governing the total number of retailers, N as the number of opened retailers, and G_j as the sale amount of retailer j. Note that the distribution of Y has a probability mass at zero, i.e., P(Y = 0) = exp(−λ). Existing research has proven that if we reparametrize (λ, α, γ) in Eq. (2) by

λ = μ^(2−ρ) / (φ(2−ρ)),  α = (2−ρ)/(ρ−1),  γ = φ(ρ−1)μ^(ρ−1),

then Y takes the form of a Tweedie model Tw(μ, φ, ρ) with mean E(Y) = μ and variance Var(Y) = φμ^ρ, where 1 < ρ < 2. The boundary cases ρ → 1 and ρ → 2 correspond to the Poisson and the Gamma distributions, respectively, so the compound Poisson-Gamma distribution with 1 < ρ < 2 can be seen as a bridge between the Poisson and the Gamma distributions.
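The probability mass at zero can be checked by simulation: sampling Y from Eq. (2) and verifying that the empirical frequency of zeros matches exp(−λ). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def sample_compound_poisson_gamma(lam, alpha, scale, size, seed=0):
    """Draw Y = G_1 + ... + G_N as in Eq. (2): N ~ Poisson(lam),
    G_j ~ Gamma(alpha, scale) i.i.d., and Y = 0 whenever N = 0."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam, size=size)
    # a sum of n i.i.d. Gamma(alpha, scale) variables is Gamma(n * alpha, scale)
    y = rng.gamma(np.maximum(n, 1) * alpha, scale, size=size)
    return np.where(n > 0, y, 0.0)
```

The fraction of exactly-zero samples approaches exp(−λ) as the sample size grows, mirroring the retailers whose sales are zero because their shops are closed.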
The log-likelihood of this Tweedie model for the sale y of a retailer is

log p(y | μ, φ, ρ) = (1/φ) [ y · μ^(1−ρ)/(1−ρ) − μ^(2−ρ)/(2−ρ) ] + log a(y, φ, ρ),   (3)

where the normalizing function a(y, φ, ρ) can be written as an infinite summation and is an example of Wright’s generalized Bessel function [22].
After that, given the index parameter ρ of the Tweedie model, the other parameters can be efficiently estimated by the maximum likelihood approach [23]. The Tweedie model can be naturally combined with most existing regression models, e.g., NN and GBDT; that is, we can train a NN or GBDT model with the Tweedie loss instead of other losses, e.g., the square loss [23]. As shown in our experiments, the results of Tweedie-loss regression are much better than those with other losses, e.g., the square loss, because the Tweedie loss fits the real distribution of the log-transformed sales, as shown in Figure 3 (right).
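As a sketch, the part of the negative log-likelihood in Eq. (3) that depends on the prediction μ, together with its gradient under a log link, is the quantity a NN or GBDT implementation would optimize. Here φ is dropped as a constant scale and ρ = 1.5 is an illustrative choice:

```python
import numpy as np

def tweedie_loss(y, mu, rho=1.5):
    """Negative Tweedie log-likelihood (Eq. 3) up to the a(y, phi, rho)
    term, which does not depend on mu; requires 1 < rho < 2."""
    return -y * mu ** (1 - rho) / (1 - rho) + mu ** (2 - rho) / (2 - rho)

def tweedie_grad(y, f, rho=1.5):
    """Gradient of the loss w.r.t. the raw score f under the log link
    mu = exp(f) -- what a GBDT or NN would backpropagate."""
    return -y * np.exp((1 - rho) * f) + np.exp((2 - rho) * f)
```

The gradient vanishes at f = log y, so the loss is minimized when the predicted mean matches the observed (log-transformed) sale; this is the same objective shape exposed by gradient-boosting libraries that support a Tweedie objective.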
4 Empirical Study
In this section, we first describe the dataset and the experimental settings. We then report the experimental results, comparing with various state-of-the-art sales forecasting techniques. We finally analyze the effect of the Tweedie distribution parameter ρ on model performance.
4.1 Dataset
Features. As described in Section 2, the features of retailers fall into three types: the basic features, the high-level features generated from historical sales and basic feature data, and the seasonality features generated by other machine learning techniques. There are 189 features in total: 79 basic features, 102 high-level features, and 8 seasonality features as discussed in Section 3.1.
Samples. We choose the samples (retailers) between Jan. 2015 and Dec. 2016 on Tmall. Note that we only focus on forecasting the relatively small retailers whose monthly sale amount is below a certain threshold (300,000), because, in practice, the sales of big retailers are very stable and forecasting them is of little value. This leaves 783,340 samples. We use the samples in 2015 as training data, the samples from Jan. 2016 to June 2016 for validation, and the samples from July 2016 to Dec. 2016 as test data.
4.2 Experimental Settings
Evaluation metric. Most existing research uses error-based metrics, e.g., Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), to evaluate performance in time-series forecasting [6, 1]. However, these metrics are highly sensitive to the retailers with large sales, and as we can see in Figure 1, the sales on Tmall spread over a very wide range. In practice, the forecasting precision for retailers with small sales counts just as much as that for retailers with big sales. Therefore, we propose to use Relative Precision (RP) for sales forecasting on Tmall, which is defined as
RP@δ = (1/n) · Σ_{i=1}^{n} I( |ŷ_i − y_i| / y_i ≤ δ ),   (4)

where n is the total number of retailers, y_i is the real sale and ŷ_i is the forecasted sale of retailer i, δ ∈ (0, 1), and I(·) is the indicator function that equals 1 if the expression inside it is true and 0 otherwise.
As we can see from Eq. (4), RP is the percentage of retailers whose relative forecasting error lies within a certain range δ. Intuitively, the smaller δ is, the smaller RP will be, because a smaller δ places a stricter demand on the forecasting performance.
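Eq. (4) can be implemented directly; the function name below is ours, and the metric assumes positive real sales (the evaluated retailers all have nonzero sales):

```python
import numpy as np

def relative_precision(y_true, y_pred, delta):
    """RP@delta as in Eq. (4): the fraction of retailers whose relative
    forecasting error |y_hat - y| / y is at most delta (assumes y > 0)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / y_true <= delta))
```

For example, with true sales of 100 and forecasts of 105, 120, 95, and 200, only the 5% errors fall within δ = 0.1, giving RP@0.1 = 0.5.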
Comparison methods. Our proposed mechanisms, i.e., seasonality extraction and distribution transform, can generalize to most existing regression algorithms. To demonstrate this, we apply the mechanisms to two popular regression models, i.e., the Neural Network (NN) and the Gradient Boosting Decision Tree (GBDT). We summarize all the methods, including ours, as follows:

NNS additionally uses our proposed seasonal features from Section 3.1 on top of NN; its comparison with NN demonstrates the effectiveness of seasonality extraction.

NNT uses our proposed Tweedie loss from Section 3.2 for NN; its comparison with NN demonstrates the effectiveness of Tweedie-loss regression after the sale distribution transform.

NNST additionally uses our proposed seasonal features from Section 3.1 on top of NNT, i.e., the application of both proposed mechanisms to NN.

GBDT is developed for additive expansions based on any fitting criterion; it belongs to the general gradient-descent ‘boosting’ paradigm and suits regression tasks with many types of loss functions, e.g., least-squares loss, Huber loss, and Tweedie loss [7]. Specifically, we use the GBDT algorithm implemented on Kunpeng [26], a distributed learning system widely used in Alibaba and Ant Financial, with square loss.
GBDTS additionally uses the seasonal features on top of GBDT, similar to NNS.

GBDTT uses the Tweedie loss for GBDT, similar to NNT.

GBDTST additionally uses the seasonal features on top of GBDTT, similar to NNST.
Parameter setting. For NN, we use a three-layer network with Rectified Linear Unit (ReLU) activation functions, optimized with Adam [12] (learning rate 0.1). For GBDT, we set the number of trees to 120, the learning rate to 0.3, and the norm regularizer to 0.5. We study the effect of the parameter ρ of Tweedie regression in Section 4.4.

4.3 Comparison Results
Model    NN      NNS     NNT     NNST    GBDT    GBDTS   GBDTT   GBDTST
RP@0.1   0.1693  0.1723  0.3236  0.3338  0.1719  0.1859  0.3159  0.3263
RP@0.2   0.1933  0.1987  0.3484  0.3534  0.2095  0.2242  0.3394  0.3520
RP@0.3   0.2603  0.2657  0.3950  0.3956  0.2681  0.2816  0.3821  0.3966
We summarize the comparison results in Table 1, and have the following comments.
(1) The forecasting performances of NN and GBDT are close, with GBDT slightly better. This is because GBDT can naturally capture complicated relationships between features, e.g., feature crosses. (2) Our proposed seasonality extraction mechanism clearly improves the forecasting performance of both NN and GBDT. For example, GBDTS improves GBDT by 8.14% in terms of RP@0.1, and GBDTST further improves GBDTT by 3.29%. (3) Our proposed distribution transform mechanism significantly improves the forecasting performance of both NN and GBDT. For example, NNT improves NN by 91.14% in terms of RP@0.1, and NNST improves NNS by 93.73% in terms of RP@0.1. (4) In summary, our two proposed mechanisms consistently improve the forecasting performance of both NN and GBDT. Specifically, NNST improves NN by 97.14%, 82.82%, and 51.98% in terms of RP@0.1, RP@0.2, and RP@0.3, respectively, and GBDTST improves GBDT by 89.82%, 68.10%, and 47.93%, respectively. The results demonstrate not only the effectiveness of the proposed mechanisms but also their generalizability.
4.4 Effect of the Tweedie Distribution Parameter ρ
As described in Section 3.2, the Tweedie distribution parameter ρ bridges the Poisson and the Gamma distributions, with the boundary cases ρ → 1 and ρ → 2 corresponding to the Poisson and the Gamma distributions, respectively. The effect of ρ on GBDTST is shown in Figure 5, using the validation data. We find that GBDTST achieves its best performance at an intermediate value of ρ between 1 and 2, indicating that the real sales data on Tmall fit a Tweedie distribution at that ρ.
5 Related Works
In this section, we review the literature on time-series forecasting, which mainly falls into two types: linear models and non-linear models.
5.1 Linear Model
The most popular linear models for time-series forecasting are linear regression and the Autoregressive Integrated Moving Average model (ARIMA) [8]. Due to their efficiency and stability, they have been applied to many forecasting problems, e.g., wind speed [10], traffic [21], air pollution index [13], electricity price [3], and inflation [19]. However, since it is difficult for them to capture complicated relations between features, e.g., feature crosses, their performance is limited.
5.2 NonLinear Model
Non-linear models are also adopted for time-series forecasting. The most popular ones are Support Vector Machines (SVM), neural networks, and tree-based ensemble models. For example, SVMs have been applied to financial forecasting [11] and wind speed forecasting [15]. Neural networks have been used in financial market forecasting [2] and electric load forecasting [18]. Recently, Gradient Boosting Decision Trees (GBDT) have been adopted to forecast traffic flow [24]. Moreover, model ensembles are also popular for time-series forecasting: ARIMA and SVM were combined to forecast stock prices [17], and hybrid ARIMA and NN models have also been used for time-series forecasting [25, 5].
In this paper, we do not focus on the choice of regression model. Instead, based on our observations, we focus on extracting seasonality information and transforming the labels for better forecasting performance. Our proposed seasonality extraction and label distribution transform can be applied to most forecasting models, including NN and GBDT.
6 Conclusions
In this paper, we studied retailers’ sales forecasting on Tmall, the world’s leading online B2C platform. We first observed sales seasonality after grouping different categories of retailers, and a Tweedie distribution after log-transforming the sales. We then designed two mechanisms, i.e., seasonality extraction and distribution transform, for sales forecasting. For seasonality extraction, we first adopted a clustering method to group retailers so that each group has similar features, and then applied the Fourier transform to automatically extract the seasonality of retailers in each group. For the distribution transform mechanism, we used the Tweedie loss for regression instead of other losses that do not fit the real sales distribution. The two proposed mechanisms can be used as add-ons to classic regression models, and the experimental results showed that both significantly improve the forecasting results.
References
 [1] (2010) An empirical comparison of machine learning models for time series forecasting. Econometric Reviews 29 (56), pp. 594–621. Cited by: 1st item, §4.2.
 [2] (1994) Neural network time series forecasting of financial markets. John Wiley & Sons, Inc.. Cited by: §5.2.
 [3] (2009) Electricity consumption forecasting in italy using linear regression models. Energy 34 (9), pp. 1413–1421. Cited by: §5.1.
 [4] (2015) Time series analysis: forecasting and control. John Wiley & Sons. Cited by: §1.
 [5] (2010) Wind speed forecasting in three different regions of mexico, using a hybrid arima–ann model. Renewable Energy 35 (12), pp. 2732–2738. Cited by: §5.2.
 [6] (2008) Application of machine learning techniques for supply chain demand forecasting. European Journal of Operational Research 184 (3), pp. 1140–1154. Cited by: §4.2.
 [7] (2001) Greedy function approximation: a gradient boosting machine. Annals of statistics, pp. 1189–1232. Cited by: 5th item.
 [8] (2009) Multiple time series. Vol. 38, John Wiley & Sons. Cited by: §5.1.
 [9] (1987) Exponential dispersion models. Journal of the Royal Statistical Society. Series B (Methodological), pp. 127–162. Cited by: §3.2.
 [10] (2009) Dayahead wind speed forecasting using farima models. Renewable Energy 34 (5), pp. 1388–1393. Cited by: §5.1.
 [11] (2003) Financial time series forecasting using support vector machines. Neurocomputing 55 (12), pp. 307–319. Cited by: §5.2.
 [12] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.2.
 [13] (2012) Seasonal arima for forecasting air pollution index: a case study. American Journal of Applied Sciences 9 (4), pp. 570–578. Cited by: §5.1.
 [14] (2005) Clustering of time series data—a survey. Pattern recognition 38 (11), pp. 1857–1874. Cited by: §3.1.
 [15] (2014) Short-term wind speed forecasting using wavelet transform and support vector machines optimized by genetic algorithm. Renewable Energy 62, pp. 592–597. Cited by: §5.2.
 [16] (2000) The M3-competition: results, conclusions and implications. International Journal of Forecasting 16 (4), pp. 451–476. Cited by: §1.
 [17] (2005) A hybrid arima and support vector machines model in stock price forecasting. Omega 33 (6), pp. 497–505. Cited by: §5.2.
 [18] (1991) Electric load forecasting using an artificial neural network. IEEE transactions on Power Systems 6 (2), pp. 442–449. Cited by: §5.2.
 [19] (2006) Shortterm forecasting of inflation in croatia with seasonal arima processes. Technical report Cited by: §5.1.
 [20] (2008) Trend time–series modeling and forecasting with neural networks. IEEE Transactions on neural networks 19 (5), pp. 808–816. Cited by: 1st item.
 [21] (2003) Use of local linear regression model for shortterm traffic forecasting. Transportation Research Record: Journal of the Transportation Research Board (1836), pp. 143–150. Cited by: §5.1.
 [22] (1984) An index which distinguishes between some important exponential families. In Statistics: Applications and new directions: Proc. Indian statistical institute golden Jubilee International conference, pp. 579–604. Cited by: §3.2, §3.2.
 [23] (2017) Insurance premium prediction via gradient treeboosted tweedie compound poisson models. Journal of Business & Economic Statistics, pp. 1–15. Cited by: §3.2, §3.2.
 [24] (2017) Traffic flow forecasting method based on gradient boosting decision tree. Cited by: §5.2.
 [25] (2003) Time series forecasting using a hybrid arima and neural network model. Neurocomputing 50, pp. 159–175. Cited by: §5.2.
 [26] (2017) KunPeng: parameter server based distributed learning systems and its applications in alibaba and ant financial. In SIGKDD, pp. 1693–1702. Cited by: 5th item.