An interesting phenomenon in many resource allocation decisions in marketing is that, at the decision unit level, the data are still very sparse despite the size of the overall data. To leverage the rest of the information in the big data, hierarchical Bayes (HB) (Gelman et al., 2013; Rossi et al., 2005) provides a natural solution by statistically borrowing information through shrinkage-based estimation at the individual unit level. There are two challenges when applying an HB model. First, the hierarchy structure needs to be determined in advance, which can be difficult when the data do not possess a clear hierarchical affiliation relationship. Second, in practice, researchers tend to use only two or three levels for HB models because, for a fully Bayesian analysis, simulation-based approaches are necessary to obtain the joint posterior distribution. With too many levels, the model becomes computationally expensive, and convergence on real-world data becomes very sensitive to the distributional assumptions and priors.
In this paper, we develop a new model that dynamically determines the hierarchy based on the input data. Meanwhile, by adopting empirical Bayes (Casella, 1985), we present an empirical approach to obtain inferences through the hierarchical structure. We show a two-phase system in which flexible multi-level hierarchical models with deep hierarchies can be applied efficiently. Inspired by the loss concept in tree models (e.g., CART (Breiman et al., 1984)), we propose a Dynamic Hierarchical Empirical Bayesian (DHEB) method that is capable of dynamically constructing the hierarchy. Specifically, each subregion in a layer of the hierarchy is treated as a node. The challenge is to find a natural way to merge the idea of a loss function into the HB framework so that the estimates derived by the HB model are consistent with the optimal solutions of the loss function. To do so, we propose a loss function with a regularization term that incorporates the Bayesian concept of a prior (Rasmussen and Williams, 2006). More details can be found in section 4.2. Given the loss function, instead of a fully Bayesian analysis, we present a stepwise method that practices empirical Bayes and builds a hierarchy dynamically from top to bottom. The proposed methodology combines the advantages of (1) the hierarchical Bayesian model, which allows information borrowing from similar branches, and (2) a tree model, which helps define the structure using data.
The performance of the proposed method is evaluated on a set of simulated data and on real-world data from Adobe Advertising Cloud. We compare the proposed method with baseline models: weighted average, regularized linear regression, and fully Bayesian HB models with different numbers of levels. The comparisons favor the proposed method over all competitors in terms of both prediction accuracy and efficiency.
The rest of the paper is organized as follows. We first describe the process of sponsored search and the challenges faced when evaluating ad performance in section 2, followed by related work in section 3. We then introduce our proposed method in section 4. Section 5 and section 6 provide the simulation and experimental results. Finally, we conclude the paper in section 7.
2.1. Sponsored Search
Sponsored search advertising is a kind of auction-based keyword advertising in search engines (Lahaie et al., 2007). Search engines decide the winners of the auctions based on their expected revenue. Meanwhile, advertisers need to understand what keywords are more valuable using performance measurements, such as number of impressions, click-through rate (CTR), conversion rate (CVR), revenue per click (RPC), cost per click (CPC), etc., so that they can manage their bids efficiently and allocate their budgets accordingly. Here, revenue is defined by advertisers’ goals, which can be dollar revenue, number of orders, number of subscriptions, and so on. The winning ads are charged by user clicks, meaning that advertisers only pay when their ads are clicked by users.
Search engines provide platforms for advertisers to manage their bids and apply targeting and budgeting decisions. Figure 1 illustrates a typical hierarchical structure of bid management. Advertisers first create an account and construct several campaigns in the account. For each campaign, advertisers can group keywords and ads into ad groups for targeting and management purposes. Ads are often shared by keywords in a common ad group. For each keyword, advertisers can also determine the matching types used between keywords and search queries, such as “broad match,” “exact match,” and “other” match types. Advertisers can set targeting criteria using geographic and demographic information at the ad group or campaign level.
2.2. RPC Prediction
In this paper, we focus on RPC prediction from the advertisers’ perspective. First, we define “bid units” as the atomic units at which advertisers set their bids. Bid units are different from keywords because the same keywords can be targeted in multiple ad groups or campaigns and set with different bids. For example, in Figure 1, we consider “Keyword 1 + Match Type 1” under “Ad Group 1” as a bid unit and “Keyword 1 + Match Type 1” under “Ad Group 2” as another bid unit. The performance data we collect on the advertisers’ side contain daily impressions, clicks, conversions, and attributed revenue at the bid-unit level, and we remove the records with zero clicks because our goal is to predict the RPC for each bid unit. The problem is that, given the historical clicks and revenue data, we want to predict the next day’s RPC for each bid unit. The features we can utilize are the hierarchical structure information of the bid units, such as corresponding campaigns, ad groups, and keywords, as well as some upper-level variables. Here, upper-level variables refer to the information above the bid-unit level, such as geo targeting at the campaign level, which is shared by the bid units under each campaign.
A well-known challenge in the RPC prediction problem is that, at the bid-unit level, the data are very sparse. From the perspective of users’ behaviors, the sparsity challenge is twofold. First, for a large number of bid units, only a small number of days record non-zero clicks. We refer to this sparsity of clicks as x-sparsity. Second, among all the bid units that are clicked, the majority do not generate any revenue for the advertiser. This sparsity of revenue is denoted as y-sparsity. To further illustrate this phenomenon, we examine one month of data for a client of Adobe Advertising Cloud. The average x-sparsity and y-sparsity are about 90% and 98%, meaning only about 10% of the dates collect click data, and among the dates with click data, about 98% have zero revenue. Thus, if we build models at the bid-unit level by pushing down the upper-level variables, we tend to generate zero RPC predictions for most bid units. These sparse predictions are undesirable for online advertising for several reasons. First, the bid units have potential: previous records of value zero do not necessarily mean the following day still bears a zero, and this potential is fully ignored by sparse predictions, leading to an overfitting model. Second, sparse predictions do not help distinguish the bid units when limited resources need to be allocated among them.
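For concreteness, the two measures can be computed from a bid unit's daily log as follows (a minimal Python sketch; the helper names `x_sparsity` and `y_sparsity` are ours):

```python
import numpy as np

def x_sparsity(daily_clicks):
    """Fraction of days in the date range with zero clicks."""
    daily_clicks = np.asarray(daily_clicks)
    return np.mean(daily_clicks == 0)

def y_sparsity(daily_clicks, daily_revenue):
    """Among days with clicks, the fraction that generated no revenue."""
    daily_clicks = np.asarray(daily_clicks)
    daily_revenue = np.asarray(daily_revenue)
    clicked = daily_clicks > 0
    return np.mean(daily_revenue[clicked] == 0)

clicks  = [0, 0, 0, 0, 0, 0, 0, 0, 3, 5]    # 10 days, only 2 with clicks
revenue = [0, 0, 0, 0, 0, 0, 0, 0, 0, 12.5]
print(x_sparsity(clicks))           # 0.8
print(y_sparsity(clicks, revenue))  # 0.5
```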
3. Related Work
Although RPC is a vital metric in advertiser bidding decisions, the RPC-related literature is limited, partly because of the confidentiality of revenue data. Among the few existing studies, the work most related to ours is (Sodomka et al., 2013), which proposed a hierarchical model for predicting value per click, where the hierarchy is fixed a priori and defined by ad group, campaign, and account. A linear model is used at each layer, and the aggregated loss is minimized. On the other hand, extensive literature has studied CTR and CVR predictions and offered some attempts to utilize data hierarchies/clusters to address data sparsity. Among those attempts, (Agarwal et al., 2010) assumed a predefined advertiser-publisher pair hierarchy and built a Poisson log-linear model for each node. Using the same data hierarchy, (Agarwal et al., 2007) proposed a tree-structured Markov model. Other than linear regression, (Menon et al., 2011) modeled CTR from a collaborative filtering perspective. In addition to the preexisting advertiser hierarchy and publisher hierarchy, (chih Lee et al., 2012) also considered clustering user-level information by grouping data within a specified Euclidean distance.
To the best of our knowledge, all existing methods require a hierarchy predetermined a priori from the data structure and feature set, which becomes a challenge when more user-defined features are involved. Our study provides a methodology that determines the hierarchical structure using information in the data, so that the structure can be determined layer by layer during the model estimation process. Another contribution of our study concerns uncertainty: existing methods allow child nodes to borrow information from their parents, mostly by combining the mean values of the parents and the children while ignoring the uncertainty of those means. In this paper, we propose a new method that incorporates this uncertainty when combining values from parent and child nodes.
In this section, we present the proposed methodology in detail. For illustration, we first demonstrate how a two-level Bayesian regression model can be utilized in the RPC prediction problem in section 4.1. Then, we introduce the hierarchical shrinkage loss (HSL) for determining the hierarchy empirically in section 4.2. We complete the discussion of the proposed DHEB method in section 4.3.
4.1. Two-level Hierarchical Bayes
For each bid unit $i$, we denote its RPC $\beta_i$ as a random variable. Then we construct a linear regression model:

$$Y_i = \beta_i X_i + \epsilon_i,$$

where $X_i$ and $Y_i$ are the historical numbers of clicks and revenue, respectively, and $\epsilon_i \sim N(0, \sigma^2 I)$. Our goal is to estimate $\beta_i$ for each bid unit. Under the Bayesian framework, we assume a prior distribution for the parameter $\beta_i$, then combine the prior with the likelihood function to yield a posterior. Assume $\beta_i$ has a normal prior distribution:

$$\beta_i \sim N(\mu_0, \sigma_0^2),$$

where $\mu_0$ and $\sigma_0^2$ are pre-specified hyper-parameters. Given the likelihood $Y_i \mid \beta_i \sim N(\beta_i X_i, \sigma^2 I)$, where $I$ is an identity matrix, the posterior for $\beta_i$ is:

$$\beta_i \mid X_i, Y_i \sim N\left(\frac{\sigma_0^2 X_i^{\top} Y_i + \sigma^2 \mu_0}{\sigma_0^2 X_i^{\top} X_i + \sigma^2},\ \frac{\sigma^2 \sigma_0^2}{\sigma_0^2 X_i^{\top} X_i + \sigma^2}\right). \quad (1)$$
By applying the same prior distribution for all $\beta_i$ and using the posterior mean as the predicted RPC for each bid unit, we obtain non-sparse predictions that incorporate information borrowed through the prior distribution. This prior information can be obtained by empirical Bayes, leveraging the overall data.
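For illustration, this two-level posterior computation reduces to a few lines of code (a minimal sketch; function and parameter names are ours):

```python
import numpy as np

def posterior_rpc(clicks, revenue, mu0, tau0_sq, sigma_sq):
    """Posterior mean/variance of one bid unit's RPC under a normal
    prior N(mu0, tau0_sq) and likelihood y ~ N(beta * x, sigma_sq * I)."""
    clicks = np.asarray(clicks, dtype=float)
    revenue = np.asarray(revenue, dtype=float)
    # Precision-weighted combination of prior and data evidence.
    post_prec = 1.0 / tau0_sq + np.dot(clicks, clicks) / sigma_sq
    post_mean = (mu0 / tau0_sq + np.dot(clicks, revenue) / sigma_sq) / post_prec
    return post_mean, 1.0 / post_prec

# A bid unit with little data is shrunk toward the prior mean;
# more clicks pull the estimate toward its own OLS solution (5.0 here).
m_few, _ = posterior_rpc([1.0], [5.0], mu0=2.0, tau0_sq=1.0, sigma_sq=4.0)
m_many, _ = posterior_rpc([10.0] * 30, [50.0] * 30, mu0=2.0, tau0_sq=1.0, sigma_sq=4.0)
print(m_few)   # 2.6  (shrunk heavily toward the prior mean 2.0)
print(m_many)  # ~4.996 (dominated by the data)
```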
For data containing more features, a multi-level hierarchical Bayesian method is required to enable the propagation of information across the hierarchical structure and allow for information sharing among subgroups related in the hierarchy. For example, bid units in the same ad groups may intuitively perform more similarly; thus, it makes more sense for them to share the same prior distribution. We fix the bottom level of the hierarchy to be the bid-unit level in order to differentiate the various bid units. The question now is determining the appropriate intermediate levels as shown in the top row of Figure 2. In a conventional hierarchical Bayesian model, the hierarchy is predetermined by domain knowledge. In our application, although there is a hierarchical structure for bid management as we introduced in section 2.1, issues still exist when trying to set the hierarchy involving features without a natural hierarchy. For example, under each ad group, advertisers set multiple keywords to target, which indicates that we can create a hierarchy with “Keyword” under “Ad Group.” Nevertheless, a common keyword can also appear in different ad groups targeting different types of customers. In this case, it is reasonable to put “Ad Group” under “Keyword” as well. This situation then calls for a data-driven approach to determine the hierarchy structure for the HB model.
4.2. Hierarchical Shrinkage Loss
Intuitively, determining the hierarchy is similar to splitting on categorical variables in tree models, which grow a tree according to a certain predefined loss. In the interest of visualization and brevity, we use the terminology “node” as in tree models. The root node contains the whole population with all bid units. If we use “Keyword” as the first splitting feature and there are $K$ unique keywords in the data, then the root node will be split into $K$ child nodes, each containing the bid units that share the same keyword. For bid units in each child node, we estimate the same RPC, and we assume that child nodes under a common parent node share the same prior distribution; thus, we use the term “parent information” to represent the “prior information.” Based on the observation of the posterior mean (1), which is a weighted average of parent information and information of itself, we develop the hierarchical shrinkage loss (HSL):
$$HSL(p, f) = \sum_{c \in C_f(p)} g(n_c)\left[w_c\, L_1(\beta_c; X_c, Y_c) + v_c\, L_2(\beta_c, \beta_p)\right], \quad (3)$$

where $p$ denotes the parent node; $C_f(p)$ denotes the child nodes of $p$ when splitting by feature $f$; $\beta_c$ and $\beta_p$ represent the RPC predictions in child node $c$ and parent node $p$, respectively; $X_c$ and $Y_c$ are the data in child node $c$; $L_1$ and $L_2$ are functions measuring the within-node loss and the loss to the parent node; $w_c$ and $v_c$ represent the importance of the two losses; and $g(x)$ is a scalar function that transforms $x$ to the order of interest.
There are two terms in HSL: the first measures the weighted information loss within each child node, and the second considers the discrepancy between the estimators of the child node and the parent node. The estimator of each child node thus considers not only the data within itself, but also the information of its parent, which in turn inherits from its own parent according to the hierarchy. This additional hierarchy information leads to a more stable model, as information from a larger subgroup is used.
4.3. Dynamic Hierarchical Empirical Bayes
In this section, we illustrate how DHEB builds a hierarchy using HSL. In the multi-level hierarchical Bayesian method, it is assumed that the parameters of the child nodes under the same parent node are from a common prior distribution and the prior information flows through the hierarchy. In a fully Bayesian analysis, a complete joint posterior distribution is generated according to the predetermined hierarchy, and simulations are usually applied to get inferences. This process can be computationally expensive. Instead of a fully Bayesian analysis, we employ empirical Bayes to grow the hierarchy from top to bottom. The proposed method not only provides a method for determining the hierarchy, but also presents an efficient way to get inferences.
We illustrate how to construct a loss function to choose the splitting features for the intermediate levels using an example in the bottom row of Figure 2. Suppose we are in the node “Keyword 1” and want to decide which feature to use for the further subdivision. Assume we use “Geo” as the splitting feature and split the data for each “Geo” as a child node $j$. Here, we use $j$ to differentiate from bid unit $i$ in section 4.1. Similar to section 4.1, we assume all the RPCs $\beta_j$ under “Keyword 1” across different “Geo’s” are related and generated from a common prior distribution, which is $N(\mu_0, \sigma_0^2)$. Then the posterior distribution of $\beta_j$ for each “Geo” node is $N(\mu_j, \sigma_j^2)$, where

$$\mu_j = \frac{\sigma_0^2 X_j^{\top} Y_j + \sigma^2 \mu_0}{\sigma_0^2 X_j^{\top} X_j + \sigma^2}, \quad (4)$$

$$\sigma_j^2 = \frac{\sigma^2 \sigma_0^2}{\sigma_0^2 X_j^{\top} X_j + \sigma^2}. \quad (5)$$
Using the posterior mean $\mu_j$ as an estimate for $\beta_j$ in each child node, we can construct a loss function by degenerating (3) to the current layer as follows:

$$\text{Loss}(f) = \sum_{j} n_j \left[ w_j \left(\beta_j - \hat{\beta}_j\right)^2 + v_j \left(\beta_j - \mu_0\right)^2 \right], \quad (6)$$

where the generic functions in (3) are $L_1(\beta_j; X_j, Y_j) = (\beta_j - \hat{\beta}_j)^2$, $L_2(\beta_j, \beta_p) = (\beta_j - \mu_0)^2$, $w_j = 1/\mathrm{Var}(\hat{\beta}_j)$, $v_j = 1/\sigma_0^2$, and $g(n_j) = n_j$, with node $c_j$ denoted as $j$ for short. Here, $\hat{\beta}_j$ is the OLS estimator for child node $j$.

The optimal solution of (6) is $\beta_j = \mu_j$, the posterior mean in (4). Function $L_1$ represents the difference between the parameters of the child nodes and the OLS estimates based on the sample data. Function $L_2$ measures the difference between the parameters of the child nodes and the parent node, which is represented by the prior mean. The weights of the two losses, $w_j$ and $v_j$, are inversely proportional to the variance of the OLS estimator and the prior variance, respectively. The basic idea is intuitive: if the prior variance is larger, it provides noisier information regarding the estimates and, hence, its contribution is smaller than when the prior variance is smaller. Similarly, if the sample data are divergent and noisy, they get less weight. Finally, $g(n_j) = n_j$, where $n_j$ is the number of observations in the node. We multiply the loss for each child node by the number of observations in the node because $L_1$ and $L_2$ shrink the loss to the single-node level. To make the losses for different splitting features comparable, we calculate the loss at the individual-observation level and treat the loss at the one-node level as a representation for all the observations in this node.
Once we have the loss function, we can decide which feature to use for the partition as

$$f^* = \operatorname*{arg\,min}_{f}\ \text{Loss}(f).$$
Suppose we choose “Geo” for the second level and need to decide the splitting variable for the third level. We take the posterior distribution of $\beta_j$ as the prior distribution of the parameters under “Geo $j$” and apply the same method recursively (Figure 2, bottom row right).
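A sketch of this layer-by-layer feature selection follows. We use squared-error style losses, instantiating the within-node term with the data misfit of the shrunken estimate so that candidate splits are also compared on fit; this is an illustrative reading rather than the paper's exact instances, and all helper names are ours:

```python
import numpy as np
from collections import defaultdict

def split_loss(records, feature, mu_p, sigma_sq, sigma_p_sq):
    """Shrinkage loss for splitting `records` by `feature`.
    Each record is (feature_dict, clicks, revenue)."""
    nodes = defaultdict(lambda: ([], []))
    for feats, x, y in records:
        nodes[feats[feature]][0].append(x)
        nodes[feats[feature]][1].append(y)
    total = 0.0
    for xs, ys in nodes.values():
        x, y = np.array(xs, float), np.array(ys, float)
        xtx = np.dot(x, x)
        beta_ols = np.dot(x, y) / xtx
        w, v = xtx / sigma_sq, 1.0 / sigma_p_sq
        beta = (w * beta_ols + v * mu_p) / (w + v)   # shrunken node estimate
        resid = y - beta * x
        # within-node data misfit plus penalty toward the parent mean
        total += np.dot(resid, resid) / sigma_sq + v * (beta - mu_p) ** 2
    return total

def choose_feature(records, features, mu_p, sigma_sq, sigma_p_sq):
    """Pick the splitting feature with the smallest loss."""
    return min(features,
               key=lambda f: split_loss(records, f, mu_p, sigma_sq, sigma_p_sq))

# "geo" cleanly separates two RPC regimes (5.1 vs 0.9); "match" mixes them.
records = [({"geo": "US", "match": "broad"}, 10.0, 50.0),
           ({"geo": "US", "match": "exact"}, 10.0, 52.0),
           ({"geo": "UK", "match": "broad"}, 10.0, 8.0),
           ({"geo": "UK", "match": "exact"}, 10.0, 10.0)]
best = choose_feature(records, ["geo", "match"], mu_p=3.0,
                      sigma_sq=1.0, sigma_p_sq=4.0)
print(best)  # geo
```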
To compute loss (6), both the prior distribution of $\beta_j$ and the regression variance $\sigma^2$ are assumed known; therefore, sample data should be used to estimate them. For the prior distribution, only the parameters in the root node are necessary because the posterior of each parent node is used as the prior for its child nodes. Empirical Bayes can be applied when prior knowledge is lacking. Here, we give an example: use the sample mean over the total historical data for all bid units as the prior mean, and the weighted sample variance as the prior variance. The regression variance $\sigma^2$ needs to be estimated in each node, which can be done with the residual variance of the OLS fit, $\hat{\sigma}^2 = \lVert Y - \hat{\beta} X \rVert^2 / (n - 1)$, where $\hat{\beta}$ is the OLS estimator and $n$ is the number of observations for the node.
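One possible instantiation of these empirical-Bayes estimates (the exact formulas below are illustrative choices of ours, not necessarily the paper's):

```python
import numpy as np

def empirical_priors(units):
    """Root-level empirical-Bayes priors from all bid units' history.
    Prior mean: click-weighted sample RPC; prior variance: click-weighted
    variance of the per-unit sample RPCs."""
    clicks = np.array([np.sum(x) for x, _ in units], float)
    revenue = np.array([np.sum(y) for _, y in units], float)
    rpc = revenue / clicks                      # per-unit sample RPC
    mu0 = np.average(rpc, weights=clicks)
    sigma0_sq = np.average((rpc - mu0) ** 2, weights=clicks)
    return mu0, sigma0_sq

def node_variance(x, y):
    """Residual variance of the within-node OLS fit, as an estimate of sigma^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    beta_hat = np.dot(x, y) / np.dot(x, x)
    resid = y - beta_hat * x
    return float(np.dot(resid, resid) / max(len(x) - 1, 1))

# Two bid units with sample RPCs 5.1 and 0.9 and equal click weights.
mu0, s0 = empirical_priors([([10, 10], [50, 52]), ([10, 10], [8, 10])])
print(mu0, s0)  # 3.0 4.41
```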
Another problem is deciding when to stop splitting. Here, we propose a stopping criterion:

$$\sum_{c \in C_{f^*}(p)} SSE_c \ \ge\ (1 - \gamma)\, SSE_p,$$

where $SSE_p$ and $SSE_c$ denote the sums of squared errors for the parent node $p$ and the child nodes $c$, respectively. This means a node stops growing when the total sum of squared errors does not decrease by at least a ratio $\gamma$.
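The criterion itself is a one-liner (the threshold name `gamma` is ours):

```python
def should_stop(sse_parent, sse_children, gamma=0.05):
    """Stop splitting when the children's total SSE fails to reduce the
    parent's SSE by at least a fraction gamma."""
    return sum(sse_children) >= (1.0 - gamma) * sse_parent

print(should_stop(100.0, [60.0, 30.0]))  # False: 10% reduction, keep splitting
print(should_stop(100.0, [70.0, 28.0]))  # True: only 2% reduction, stop
```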
The final step would be attaching the bid-unit level to the bottom of the chosen hierarchy. The procedure loops the leaf nodes of the hierarchy and subdivides them into child nodes, with each node containing the data for a specific bid unit.
The proposed DHEB also provides an approach to obtain inferences through a hierarchy: given a fixed hierarchy, we can apply equations (4) and (5) to compute stepwise posterior distributions from the root to the bottom level and then obtain inferences.
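A sketch of this stepwise inference over a fixed hierarchy (the nested-dictionary layout and names are ours; the update follows the posterior recursion of equations (4) and (5), with each node's posterior serving as its children's prior):

```python
import numpy as np

def infer(node, mu_p, sigma_p_sq, sigma_sq):
    """Propagate posteriors from root to leaves through a fixed hierarchy.
    `node` is {"x": clicks, "y": revenue, "children": {...}}; a leaf has
    no "children" key. Leaf posterior means are the RPC predictions."""
    x, y = np.asarray(node["x"], float), np.asarray(node["y"], float)
    xtx, xty = np.dot(x, x), np.dot(x, y)
    mu = (sigma_p_sq * xty + sigma_sq * mu_p) / (sigma_p_sq * xtx + sigma_sq)
    var = sigma_sq * sigma_p_sq / (sigma_p_sq * xtx + sigma_sq)
    if not node.get("children"):
        return {"rpc": mu}
    # This node's posterior becomes the prior for each child.
    return {k: infer(c, mu, var, sigma_sq) for k, c in node["children"].items()}

tree = {"x": [1, 2, 4], "y": [3, 8, 2],
        "children": {"unit1": {"x": [1, 2], "y": [3, 8]},
                     "unit2": {"x": [4], "y": [2]}}}
preds = infer(tree, mu_p=2.0, sigma_p_sq=1.0, sigma_sq=1.0)
```

Each leaf's prediction lands between its own OLS estimate and the (already shrunken) parent mean, which is the information-borrowing behavior described above.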
5. Simulation Results
In this section, we evaluate the proposed method on several simulated datasets. On each dataset, we conduct an analysis using 6 models:
Weighted average (WA): The predicted RPC of each bid unit is the weighted average of historical RPC using the number of clicks as weights.
Regularized linear regression (RLR): This fits a regularized linear regression by pushing down all the upper level features.
Two-level HB (2HB): This model was discussed in section 4.1. The hierarchy is “Root - Bid Unit.”
Three-level HB (3HB): This model first predefines a three-level fixed hierarchy. We then use RStan (Gelman et al., 2018) to do the posterior sampling and treat the posterior means at the bottom level as predictions. We limit the hierarchy to three levels because the more levels we have, the more computationally expensive the model is, and with real-world data many assumptions may not be satisfied, which makes the sampler difficult to converge.
Multi-level fixed hierarchical empirical Bayes with true hierarchy (FHEB): We fix the hierarchy as the true hierarchy used during the simulation and apply the same inference approach as the proposed method.
DHEB: This is our proposed method.
We apply these six models on a set of simulated data. Data are generated by the following procedure:
Create 100 bid units and 4 upper level features (named A, B, C, and D); each feature has 10 to 20 categories. Set the date range to 6 months (i.e., from “2017-01-01” to “2017-06-30”).
Assume the implicit hierarchy is A - B - C - D - Bid Unit.
Set the top prior mean $\mu_0$ and variance $\sigma_0^2$.
For nodes in the intermediate levels, generate the mean for each child node from $N(\mu_p, \sigma_p^2)$, the parent node’s distribution. The variance of the child nodes is predetermined; here, we simply reuse $\sigma_0^2$.
For the bottom bid-unit level, we apply (4) to generate $\beta_i$ and set the RPC for this bid unit as $\beta_i$. We generate a list of clicks $X_i$ with length $n_i$, then revenue $Y_i = \beta_i X_i + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma^2)$ and $\sigma^2$ is predetermined.
The x-sparsity is determined by $n_i$, the number of observations we generated in (5); the x-sparsity is higher as $n_i$ gets smaller because we fix the date range. The y-sparsity is denoted by $s$: we randomly set a fraction $s$ of the revenue values to zero.
We apply 9 combinations of $n_i$ and $s$ and generate 10 datasets for each combination.
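The generating procedure above can be sketched as follows (a simplified version with a single intermediate level; parameter names and defaults are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_units=100, n_days=60, mu0=3.0, var0=1.0, sigma=0.5,
             n_obs=20, y_sparsity=0.5):
    """Each unit's RPC is drawn down a prior chain; clicks and revenue
    are then generated with the two kinds of sparsity."""
    data = []
    for _ in range(n_units):
        group_mean = rng.normal(mu0, np.sqrt(var0))        # intermediate level
        rpc = rng.normal(group_mean, np.sqrt(var0))        # bid-unit level
        days = rng.choice(n_days, size=n_obs, replace=False)  # x-sparsity via n_obs
        clicks = rng.integers(1, 20, size=n_obs)
        revenue = rpc * clicks + rng.normal(0, sigma, size=n_obs)
        zero = rng.random(n_obs) < y_sparsity              # y-sparsity
        revenue[zero] = 0.0
        data.append((days, clicks, revenue))
    return data

data = simulate()
```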
We use two months of data to predict the RPCs for the next day and test in a rolling way for 30 days. The performance metric is:

$$\text{AVG-MSE} = \frac{1}{T} \sum_{t=1}^{T} \frac{1}{m_t} \sum_{i=1}^{m_t} \left( r_{it} - \hat{r}_{it} \right)^2,$$

where $T$ is the number of testing days; $m_t$ is the number of bid units on day $t$; $r_{it}$ is the realized RPC for bid unit $i$ on day $t$; and $\hat{r}_{it}$ is the predicted RPC. For 3HB, we apply the hierarchy “Root - A - Bid Unit”; for FHEB, we deploy the true hierarchy. Because WA is a baseline method, we calculate the improvement of the other 5 models relative to WA. The improvement is represented by the percentage reduction in AVG-MSE and is negative if the AVG-MSE of WA is smaller. The top row of Figure 3 shows the comparison of the 5 models for different $n_i$ and $s$. As we can see, for different combinations of $n_i$ and $s$, FHEB outperforms all the other models, with DHEB ranking second. As both x-sparsity and y-sparsity increase, the benefits obtained from FHEB and DHEB become greater. For time complexity, we plot the ratio between the running time of the other 5 models and that of WA, as shown in the bottom row of Figure 3. 3HB takes a much longer time to do the sampling, and the model does not perform best due to its limited number of levels. In practice, if we are confident about the true hierarchy, FHEB provides an approach to obtain predictions without worrying about time complexity. If we are not sure how to build the hierarchy, DHEB can determine the hierarchy empirically and give desirable predictions.
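Assuming AVG-MSE averages the per-day MSE across testing days, as the definitions above suggest, it can be computed as:

```python
import numpy as np

def avg_mse(actual_by_day, predicted_by_day):
    """Mean over testing days of the per-day MSE across bid units.
    Inputs are lists with one pair of equal-length arrays per day."""
    daily = [np.mean((np.asarray(a) - np.asarray(p)) ** 2)
             for a, p in zip(actual_by_day, predicted_by_day)]
    return float(np.mean(daily))

# Two testing days with different numbers of bid units.
print(avg_mse([[1.0, 3.0], [2.0]], [[1.0, 1.0], [4.0]]))  # (2.0 + 4.0) / 2 = 3.0
```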
6. Experimental Results
6.1. Model Comparison
The data come from multiple online advertising campaigns owned by a common advertiser. At the bid-unit level, the performance data contain the number of clicks and the collected revenue, ranging from “2017-09-01” to “2017-12-23”. In addition, the data record the structural features of the keywords as shown in Table 1. Here, in addition to the regular hierarchical features, we introduce “Day of Week,” which provides an additional grouping of the daily data by indicating whether the day is Monday, Tuesday, etc. “Geo” represents geo targeting for the campaigns; it has only one unique category in this dataset. When a hierarchy is established in a hierarchical model, some features have a natural relationship with each other, such as “Search Engine,” “Account,” “Campaign,” and “Ad Group.” However, for “Geo,” “Keyword,” “Match Type,” and “Day of Week,” it is hard to determine their positions and order. In addition, it may not be necessary to include all structural features in the hierarchy.
[Table 1: structural features of the keywords with their abbreviations and numbers of categories; e.g., “Day of Week” (DOW) has 7 categories.]
We compare the 6 models using the same evaluation metric as in section 5. For 3HB, we apply the hierarchy “Root - Campaign - Bid Unit”; for FHEB, we choose a hierarchy by domain knowledge: “Root - Search Engine - Account - Campaign - Ad Group - Keyword - Match Type - Day of Week - Bid Unit.” We drop “Geo” because it has only one category. In Figure 4, the left plot shows the AVG-MSE improvement compared with WA; DHEB outperforms the other methods. The middle plot shows the time complexity compared with WA; 3HB takes much longer due to the sampling process.
6.2. Two-phase System
Figure 5 shows the hierarchies trained on several testing days. As we can see, the hierarchy does not change frequently over time, which makes sense because there is only one day of difference between the training data for two consecutive days. The system has three modules: (1) data collection: obtaining bid-unit features and the historical daily numbers of clicks and revenue; (2) model training: training the DHEB model and building a hierarchy; (3) prediction serving: giving RPC predictions based on the hierarchy determined in model training. Module (2) is the most time-consuming part. We separate these three modules into offline and online phases as shown in Figure 6: in the offline phase, we do model training, and in the online phase, we do prediction serving based on the trained hierarchy. Given that the hierarchy determined by DHEB does not change much over a short period, we schedule the offline phase at a low frequency and run the online phase in real time. We introduce a parameter $d$ as the period of the offline phase: $d = 1$ means we run the offline phase every day, $d = 2$ means every other day, and so on.
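The offline/online split can be sketched as a simple loop (the callable names and parameter `d` here are ours):

```python
def run_system(n_days, d, train, predict):
    """Two-phase loop: retrain the hierarchy (offline) every d days and
    serve predictions (online) every day from the latest hierarchy."""
    hierarchy = None
    outputs = []
    for day in range(n_days):
        if day % d == 0:                  # offline phase, low frequency
            hierarchy = train(day)
        outputs.append(predict(hierarchy, day))  # online phase, every day
    return outputs

# With d = 4, ten serving days trigger only three offline retrainings.
trained = []
out = run_system(10, 4,
                 train=lambda day: trained.append(day) or day,
                 predict=lambda h, day: (h, day))
print(trained)  # [0, 4, 8]
```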
The right plot in Figure 4 shows the AVG-MSE improvement compared with WA for different values of $d$. An appropriate choice would be $d = 4$, which reduces the time complexity without sacrificing much model accuracy.
In this paper, we propose a Dynamic Hierarchical Empirical Bayesian (DHEB) method to build a multi-level hierarchical model to overcome the sparsity challenge in online advertising data. The proposed method provides a way to choose hierarchical levels by incorporating a loss function, such as the function used in tree models. The method is also equipped with an empirical Bayesian approach to get inferences through a hierarchy. It is applicable in many practical problems where data are sparse and hierarchical structure can be leveraged to obtain shrinkage-based estimations. In addition, the proposed regularized loss function can be applied in traditional tree models as well as other tree-based methods, as an approach to borrow information from the parent node in order to deal with data sparseness. We also present a two-phase system which can serve prediction in real time.
- Agarwal et al. (2010) Deepak Agarwal, Rahul Agrawal, Rajiv Khanna, and Nagaraj Kota. 2010. Estimating rates of rare events with multiple hierarchies through scalable log-linear models. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, Washington, DC, USA, 213–222.
- Agarwal et al. (2007) Deepak Agarwal, Andrei Zary Broder, Deepayan Chakrabarti, Dejan Diklic, Vanja Josifovski, and Mayssam Sayyadian. 2007. Estimating rates of rare events at multiple resolutions. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, San Jose, CA, USA, 16–25.
- Breiman et al. (1984) Leo Breiman, Jerome Friedman, Charles J. Stone, and R. A. Olshen. 1984. Classification and Regression Trees. Taylor & Francis.
- Casella (1985) George Casella. 1985. An introduction to empirical Bayes data analysis. The American Statistician 39, 2 (1985), 83–87. https://doi.org/10.2307/2682801
- chih Lee et al. (2012) Kuang-chih Lee, Burkay Orten, Ali Dasdan, and Wentong Li. 2012. Estimating conversion rate in display advertising from past performance data. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, Beijing, China, 768–776.
- Gelman et al. (2013) Andrew Gelman, John B. Carlin, Hal S. Stern, David B. Dunson, Aki Vehtari, and Donald B. Rubin. 2013. Bayesian Data Analysis. CRC.
- Gelman et al. (2018) Andrew Gelman, Robert L. Grant, Bob Carpenter, et al. 2018. RStan: the R interface to Stan. Retrieved 2018 from http://mc-stan.org/users/interfaces/rstan
- Lahaie et al. (2007) Sebastien Lahaie, David M. Pennock, Amin Saberi, and Rakesh V. Vohra. 2007. Sponsored Search Auctions. Cambridge University Press, 699–716. https://doi.org/10.1017/CBO9780511800481.030
- Menon et al. (2011) Aditya Krishna Menon, Krishna-Prasad Chitrapura, Sachin Garg, Deepak Agarwal, and Nagaraj Kota. 2011. Response prediction using collaborative filtering with hierarchies and side-information. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, San Diego, CA, USA, 141–149.
- Rasmussen and Williams (2006) Carl Edward Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. The MIT Press.
- Rossi et al. (2005) Peter Rossi, Greg Allenby, and Robert McCulloch. 2005. Bayesian Statistics and Marketing. John Wiley & Sons Ltd.
- Sodomka et al. (2013) Eric Sodomka, Sebastien Lahaie, and Dustin Hillard. 2013. A predictive model for advertiser value-per-click in sponsored search. In Proceedings of the 22nd International Conference on World Wide Web. ACM, Rio de Janeiro, Brazil, 1179–1190.