Bidding optimization is one of the most critical problems in online advertising. Sponsored search (SS) auction, due to the randomness of user query behavior and platform nature, usually adopts keyword-level bidding strategies. In contrast, the display advertising (DA), as a relatively simpler scenario for auction, has taken advantage of real-time bidding (RTB) to boost the performance for advertisers. In this paper, we consider the RTB problem in sponsored search auction, named SS-RTB. SS-RTB has a much more complex dynamic environment, due to stochastic user query behavior and more complex bidding policies based on multiple keywords of an ad. Most previous methods for DA cannot be applied. We propose a reinforcement learning (RL) solution for handling the complex dynamic environment. Although some RL methods have been proposed for online advertising, they all fail to address the "environment changing" problem: the state transition probabilities vary between two days. Motivated by the observation that auction sequences of two days share similar transition patterns at a proper aggregation level, we formulate a robust MDP model at hour-aggregation level of the auction data and propose a control-by-model framework for SS-RTB. Rather than generating bid prices directly, we decide a bidding model for impressions of each hour and perform real-time bidding accordingly. We also extend the method to handle the multi-agent problem. We deployed the SS-RTB system in the e-commerce search auction platform of Alibaba. Empirical experiments of offline evaluation and online A/B test demonstrate the effectiveness of our method.
Bidding optimization is one of the most critical problems for maximizing advertisers’ profit in online advertising. In the sponsored search (SS) scenario, the problem is typically formulated as an optimization of the advertisers’ objectives (KPIs) via seeking the best settings of keyword bids (Borgs et al., 2007; Feldman et al., 2007). The keyword bids are usually assumed to be fixed during the online auction process. However, the sequence of user queries (incurring impressions and auctions for online advertising) creates a complicated dynamic environment where a real-time bidding strategy could significantly boost advertisers’ profit. This is more important on e-commerce auction platforms since impressions more readily turn into purchases, compared to traditional web search. In this work, we consider the problem, Sponsored Search Real-Time Bidding (SS-RTB), which aims to generate proper bids at the impression level in the context of SS. To the best of our knowledge, there is no publicly available solution to SS-RTB.
The RTB problem has been studied in the context of display advertising (DA). Nevertheless, SS-RTB is intrinsically different from RTB in DA. In DA, the impressions for bidding are concerned with ad placements in publishers' web pages, while in SS the targets are ranking lists for dynamic user queries. The key differences are: (1) for a DA impression, only the winning ad can be presented to the user (i.e. a 0-1 problem), while in the SS context multiple highly ranked ads can be exhibited to the query user; (2) in SS, we need to adjust bid prices on multiple keywords of an ad to achieve optimal performance, while an ad in DA does not need to consider such a keyword set. These differences render popular methods for RTB in DA, such as predicting the winning market price (Wu et al., 2015) or the winning rate (Zhang et al., 2014), inapplicable to SS-RTB. Moreover, compared to ad placements in web pages, user query sequences in SS are stochastic and highly dynamic in nature. This calls for a complex model for SS-RTB, rather than the shallow models often used in RTB for DA (Zhang et al., 2014).
One straightforward solution for SS-RTB is to establish an optimization problem that outputs the optimal bid for each impression independently. However, the impression bids of an ad are strategically correlated through several factors, including the ad's budget and overall profit, and the dynamics of the underlying environment. The above greedy strategy often does not lead to a good overall profit (Cai et al., 2017). Thus, it is better to model the bidding strategy as a sequential decision over the sequence of impressions in order to optimize the overall profit for an ad while taking the factors above into account. This is exactly what reinforcement learning (RL) (Mnih et al., 2013; Poole and Mackworth, 2010) does. With RL, we can model the bidding strategy as a dynamic interactive control process in a complex environment rather than an independent prediction or optimization process. The budget of an ad can be dynamically allocated across the sequence of impressions, so that both the immediate auction gain and long-term future rewards are considered.
Researchers have explored using RL in online advertising. Amin et al. constructed a Markov Decision Process (MDP) for budget optimization in SS (Amin et al., 2012). Their method deals with impressions/auctions in a batch mode and hence cannot be used for RTB. Moreover, the underlying environment for the MDP is "static" in that all states share the same set of transition probabilities, and they do not consider impression-specific features. Such an MDP cannot well capture the complex dynamics of auction sequences in SS, which are important for SS-RTB. Cai et al. developed an RL method for RTB in DA (Cai et al., 2017). The method combines an optimized reward for the current impression (based on impression-level features) and an estimate of future rewards by an MDP for guiding the bidding process. The MDP is still a static one as in
(Amin et al., 2012). Recently, a deep reinforcement learning (DRL) method was proposed in (Wang et al., 2017) for RTB in DA. Different from the previous two works, their MDP tries to fully model the dynamics of auction sequences. However, they also identified an "environment changing" issue: the underlying dynamics of the auction sequences from two days could be very different. For example, the auction number and the users' visits could deviate heavily between days. A toy illustration is presented in Figure 1. Compared to a game environment, the auction environment in SS is itself stochastic due to the stochastic behaviors of users. Hence, the model learned from Day 1 cannot well handle the data from Day 2. Although (Wang et al., 2017) proposed a sampling mechanism for this issue, it is still difficult to guarantee the same environment for different days. Another challenge that existing methods fail to address is the multi-agent problem. That is, there are usually many ads competing with one another in SS auctions. Therefore, it is important to consider them jointly to achieve better global bidding performance.

Motivated by the challenges above, this paper proposes a new DRL method for the SS-RTB problem. In our work, we capture various discriminative information in impressions, such as market price and conversion rate (CVR), and also try to fully capture the dynamics of the underlying environment. The core novelty of our proposed method lies in how the environment changing problem is handled. We solve this problem by observing that statistical features of impressions at a proper aggregation level (e.g. by hour) have strong periodic state transition patterns, in contrast to impression-level data. Inspired by this, we design a robust MDP at the hour-aggregation level to represent the sequential decision process in the SS auction environment.
At each state of the MDP, rather than generating bid prices directly, we decide a bidding model for the impressions of that hour and perform real-time bidding accordingly. In other words, the robust MDP aims to learn the optimal parameter policy to control the real-time bidding model. Different from the traditional "control-by-action" paradigm of RL, we call this scheme "control-by-model". With this control-by-model learning scheme, our system can do real-time bidding by capturing impression-level features, while also taking advantage of RL to periodically control the bidding model according to real feedback from the environment. Besides, considering that there are usually a considerable number of ads, we also design a massive-agent learning algorithm that combines a competitive reward and a cooperative reward.
The contribution of this work is summarized as follows: (1) We propose a novel research problem, Sponsored Search Real-Time Bidding (SS-RTB), and properly motivate it. (2) A novel deep reinforcement learning (DRL) method is developed for SS-RTB which can well handle the environment changing problem. It is worth noting that the robust MDP we propose is a general idea that can also be applied to other applications. We also design an algorithm for handling the massive-agent scenario. (3) We deploy the DRL model in the Alibaba search auction platform, one of the largest e-commerce search auction platforms in China, to carry out evaluation. The offline evaluation and standard online A/B test demonstrate the superiority of our model.
In this section, we briefly review two fields related to our work: reinforcement learning and bidding optimization.
In reinforcement learning (RL) theory, a control system is formulated as a Markov Decision Process (MDP). An MDP can be mathematically represented as a tuple ⟨S, A, P, R⟩, where S and A represent the state and action space respectively, P denotes the transition probability function, and R denotes the feedback reward function. The transition probability from state s to s′ by taking action a is P(s′ | s, a). The reward received after taking action a in state s is R(s, a). The goal of the model is to learn an optimal policy π (a sequence of decisions mapping state s to action a), so as to maximize the expected accumulated long-term reward.
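As a toy illustration of these components (the states, actions, and numbers below are invented for this sketch, not taken from the paper), the tuple ⟨S, A, P, R⟩ can be encoded directly and an optimal policy extracted by value iteration:

```python
# Toy MDP <S, A, P, R>; all states, actions and numbers are illustrative.
GAMMA = 0.9

# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    "low_traffic": {
        "bid_high": [(0.7, "high_traffic", 2.0), (0.3, "low_traffic", 0.5)],
        "bid_low":  [(1.0, "low_traffic", 1.0)],
    },
    "high_traffic": {
        "bid_high": [(1.0, "high_traffic", 3.0)],
        "bid_low":  [(0.6, "low_traffic", 1.5), (0.4, "high_traffic", 2.0)],
    },
}

def value_iteration(P, gamma=GAMMA, iters=200):
    """Iterate the Bellman optimality backup to convergence."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values()
            )
            for s in P
        }
    return V

V = value_iteration(P)
# The optimal policy is greedy with respect to the converged values.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
print(policy)
```

Here the policy simply picks, per state, the action whose expected one-step reward plus discounted next-state value is largest.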
Remarkably, deep neural networks coupled with RL have achieved notable success in diverse challenging tasks: learning policies to play games (Silver et al., 2016; Mnih et al., 2013; Mnih et al., 2015), continuous control of robots and autonomous vehicles (Levine and Abbeel, 2014; Gu et al., 2016; Hafner and Riedmiller, 2011), and recently, online advertising (Cai et al., 2017; Wang et al., 2017; Amin et al., 2012). While the majority of RL research assumes a consistent environment, applying RL to online advertising is not a trivial task, since we have to deal with the environment changing problem mentioned previously. The core novelty of our work is that we propose a solution which can well handle this problem.

Moreover, when two or more agents share an environment, the performance of RL is less understood. Theoretical proofs or guarantees for multi-agent RL are scarce and restricted to specific types of small tasks (Schwartz, 2014; Busoniu et al., 2008; Tampuu et al., 2017). The authors in (Tampuu et al., 2017) investigated how two agents controlled by independent Deep Q-Networks (DQN) interact with each other in the game of Pong. They used the environment as the sole source of interaction between agents. In their study, by changing the reward schema from competition to cooperation, the agents learned to behave accordingly. In this paper, we adopt a similar idea to solve the multi-agent problem. However, the scenario in online advertising is quite different. Not only is the number of agents much larger, but the market environment is also much more complex. No previous work has explored using cooperative rewards to address the multi-agent problem in such scenarios.
In sponsored search (SS) auctions, bidding optimization has been well studied. However, most previous works focus on the keyword-level auction paradigm, which is concerned with (but not limited to) budget allocation (Borgs et al., 2005; Feldman et al., 2007; Muthukrishnan et al., 2007), bid generation for advanced match (Even Dar et al., 2009; Broder et al., 2011; Fuxman et al., 2008), and keywords' utility estimation (Borgs et al., 2007; Kitts and Leblanc, 2004).
Unlike in SS, RTB has been a leading research topic in display advertising (DA) (Yuan et al., 2013; Wang and Yuan, 2015). Different strategies of RTB have been proposed by researchers (Zhang et al., 2014; Wu et al., 2015; Cai et al., 2017; Chen et al., 2011; Lee et al., 2013). In (Zhang et al., 2014), the authors proposed a functional optimization framework to learn the optimal bidding strategy. However, their model is based on an assumption that the auction winning function has a consistent concave shape form. Wu et al. proposed a fixed model with censored data to predict the market price in real-time (Wu et al., 2015). Although these works have shown significant advantage of RTB in DA, they are not applicable to the SS context due to the differences between SS and DA discussed in Section 1.
In addition to these prior studies, a number of recent research efforts have applied RL to bidding optimization (Amin et al., 2012; Wang et al., 2017; Cai et al., 2017). In (Amin et al., 2012), Amin et al. combined the MDP formulation with the Kaplan-Meier estimator to learn the optimal bidding policy, where decisions were made at the keyword level. In (Cai et al., 2017), Cai et al. formulated the bidding decision process as a similar MDP problem, but went one step further and proposed an RTB method for DA by employing the MDP as a long-term reward estimator. However, both works considered the transition probabilities as static and failed to capture impression-level features in their MDPs. More recently, the authors in (Wang et al., 2017) proposed an end-to-end DRL method with impression-level features formulated in the states and tried to capture the underlying dynamics of the auction environment. They used random sampling to address the environment changing problem. Nevertheless, random sampling still cannot guarantee an invariant underlying environment. Our proposed method is similar to that of (Wang et al., 2017) in that we use a similar DQN model and also exploit impression-level features. However, the fundamental difference is that we propose a robust MDP based on hour-aggregation of impressions and a novel control-by-model learning scheme. Besides, we also try to address the multi-agent problem in the context of online advertising, which has not been done before.
In this section, we will mathematically formulate the problem of real-time bidding optimization in sponsored search auction platforms (SS-RTB).
In a simple and general scenario, an ad has a set of keyword tuples {k_1, k_2, …, k_n}, where each tuple can be defined as ⟨keyword, bid⟩. Typically, the bid here is preset by the advertiser. The process of an auction can then be depicted as follows: every time a user visits and types a query, the platform retrieves a list of relevant keyword tuples from the ad repository for auction (typically, an ad can have only one keyword tuple, i.e. the most relevant one, in the list). Each involved ad is then assigned a ranking score according to its retrieved keyword tuple as score = bid × q. Here, the quality score q is obtained from factors such as relevance and personalization, etc. Finally, the top ads will be presented to the user.
For SS-RTB, the key problem is to find a new real-time bid price, rather than the preset keyword bid, for the matched keyword tuples during each auction, so as to maximize an ad's overall profit. Since we carry out the research on an e-commerce search auction platform, in the following we will use concepts related to the e-commerce search scenario. Nevertheless, the method is general and can be adapted to other search scenarios. We define an ad's goal as maximizing the purchase amount PUR(d) as income in a day d, while minimizing the cost COST(d) as expense in d, with the constraint that the return on investment ROI(d) should not be smaller than the advertiser's expected value ROI_e. We can formulate the problem as:
(1)
    max PUR(d), min COST(d)
    s.t. ROI(d) = PUR(d) / COST(d) ≥ ROI_e
Observing that COST(d) has a highly positive correlation with PUR(d), we can change it to:

(2)
    max PUR(d)
    s.t. ROI(d) ≥ ROI_e
Eq. (2) is equivalent to Eq. (1) when COST(d) is positively correlated with PUR(d) (as is the usual case). We omit the proof due to the space limitation. The problem we study in this paper is to decide bid prices in real time for an ad in terms of objective (2).
Based on the problem we defined in section 3, we now formulate it into a sketch model of RL:
State s: We design a general representation for states as s = ⟨b, t, f⟩, where b denotes the budget left for the ad, t denotes the step number of the decision sequence, and f is the auction (impression) related feature vector that we can get from the advertising environment. It is worth noting that, for generalization purposes, the b here is not the budget preset by the advertiser. Instead, it refers to the cost that the ad expects to expend in the remaining steps of auctions.

Action a: The decision of generating the real-time bid price for each auction.
Reward r: The income (in terms of PUR) gained by taking a specific action a under state s.
Episode : In this paper, we always treat one day as an episode.
Finally, our goal is to find a policy π which maps each state s to an action a, so as to obtain the maximum expected accumulated reward E[Σ_t γ_t r_t], where {γ_t} is the set of discount coefficients used in a standard RL model (Sutton and Barto, 1998).
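The accumulated reward being maximized can be sketched in a few lines; the per-step rewards below are illustrative, assuming the common geometric discounting γ_t = γ^t:

```python
# Discounted accumulated reward: G = sum_t gamma^t * r_t.
def discounted_return(rewards, gamma=0.95):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

hourly_pur = [0.0, 1.2, 3.5, 2.0]  # illustrative PUR gained per step
print(discounted_return(hourly_pur))
```

Early rewards are weighted more heavily than late ones, which is what lets the budget be traded off between immediate auction gain and future opportunities.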
Due to the randomness of user query behavior, one might never see two auctions with exactly the same feature vector. Hence, in previous work of online advertising, there is a fundamental assumption for MDP models: two auctions with similar features can be viewed as the same (Amin et al., 2012; Wang et al., 2017; Cai et al., 2017). Here we provide a mathematical form of this assumption as follows.
Two auctions x_i and x_j can be substituted for each other, as if they were the same, if and only if their feature vectors meet the following condition: f(x_i) = f(x_j). This kind of substitution will not affect the performance of an MDP-based control system.
However, the above sketch model is defined at the auction level. This cannot handle the environment changing problem discussed previously (Figure 1). In other words, given day 1 and day 2, the model trained on day 1 cannot be applied to day 2, since the underlying environment changes. Next, we present our solution to this problem.
Our solution is inspired by a series of regular patterns observed in real data. We found that, by viewing the sequences of auctions at an aggregation level, the underlying environments of two different days share very similar dynamic patterns. We illustrate a typical example in Figure 2, which depicts the number of clicks of an ad at different levels of aggregation (from second level to hour level) on Jan. 28th, 2018 and Jan. 29th, 2018 respectively. It can be observed that the second-level curves do not exhibit a similar pattern (Figure 2(a) and (b)), while at both the minute level and the hour level we can observe a similar wave shape. In addition, the hour-level curves are more similar than the minute-level ones. We have similar observations on other aggregated measures.
The evolving patterns of the same measure are very similar between two days at the hour level, indicating that the underlying rules of this auction game are the same. Inspired by these regular patterns, we take advantage of hour-aggregated features rather than auction-level features to formulate the MDP model. Intuitively, if we treat each day as an auction game of 24 steps, an episode of any day would always go through the same general experience. For example, it will meet an auction valley between 3:00 AM and 7:00 AM, a heavily competitive auction peak at around 9:00 AM, and a huge amount of user purchases at around 8:00 PM. We override the sketch MDP model as follows.
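Such hour-aggregation of an auction log can be sketched as follows; the log fields and numbers here are illustrative, not the platform's actual schema:

```python
# Aggregate an auction log into hour-level statistical features.
from collections import defaultdict

def hourly_features(auctions):
    """auctions: iterable of dicts with 'hour', 'impressions', 'clicks', 'cost'."""
    buckets = defaultdict(lambda: {"impressions": 0, "clicks": 0, "cost": 0.0})
    for a in auctions:
        b = buckets[a["hour"]]
        b["impressions"] += a["impressions"]
        b["clicks"] += a["clicks"]
        b["cost"] += a["cost"]
    features = {}
    for hour, b in buckets.items():
        ctr = b["clicks"] / b["impressions"] if b["impressions"] else 0.0
        ppc = b["cost"] / b["clicks"] if b["clicks"] else 0.0
        features[hour] = {**b, "ctr": ctr, "ppc": ppc}
    return features

log = [
    {"hour": 9, "impressions": 100, "clicks": 5, "cost": 4.0},
    {"hour": 9, "impressions": 300, "clicks": 15, "cost": 12.0},
    {"hour": 10, "impressions": 200, "clicks": 4, "cost": 2.0},
]
print(hourly_features(log)[9])
```

Each hourly bucket then supplies the aggregated feature vector f of the corresponding MDP state.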
State Transition. The auctions of an episode are grouped into T (T = 24 in our case) groups according to the timestamp. Each group contains a batch of auctions in the corresponding period. A state is re-defined as s = ⟨b, t, f⟩, where b is the budget left, t is the specific time period, and f denotes the feature vector containing aggregated statistical features of the auctions in time period t, e.g. number of clicks, number of impressions, cost, click-through rate (CTR), conversion rate (CVR), pay per click (PPC), etc. In this way, we obtain an episode with a fixed number of steps T. In the following, we show that the state transition probabilities are consistent between two days.
Suppose that, given state s_t = ⟨b_t, t, f_t⟩ and action a_t, we observe the next state s_{t+1} = ⟨b_{t+1}, t+1, f_{t+1}⟩. We rewrite the state transition probability function as

(3)
    P(s_{t+1} | s_t, a_t) = P(B_{t+1} = b_{t+1}, F_{t+1} = f_{t+1} | B_t = b_t, F_t = f_t, A_t = a_t),

where upper case letters represent random variables and c_t is the cost of the step corresponding to the action a_t. Since B_{t+1} = B_t − c_t is only affected by c_t, which only depends on the action a_t, Eq. (3) can be rewritten as:

(4)
    P(s_{t+1} | s_t, a_t) = P(F_{t+1} = f_{t+1} | B_t = b_t, F_t = f_t, A_t = a_t)
Because the cost is also designed as a feature in f_t, Eq. (4) then becomes:

(5)
    P(s_{t+1} | s_t, a_t) = ∏_{i=1}^{m} P(F^i_{t+1} = f^i_{t+1} | F_t = f_t, A_t = a_t),

where each F^i represents an aggregated statistical feature and we use the property that the features are independent of one another. By inspecting auctions from adjacent days, we have the following empirical observation.
Let F^i_{t+1} be the aggregated value of feature i at step t+1. When s_t and the action a_t are fixed, F^i_{t+1} will meet:

    |F^i_{t+1} − μ^i_{t+1}| ≤ ε,

where μ^i_{t+1} is the sample mean of F^i_{t+1} when s_t and a_t are fixed, and ε is a small value that meets ε ≪ μ^i_{t+1}.
This indicates that the aggregated features change with very similar underlying dynamics across days. When s_t and a_t are fixed, for any two possible values v_1 and v_2 of F^i_{t+1}, we have:

(6)
    |v_1 − v_2| ≤ 2ε
According to Assumption 1, we can deem any possible value of F^i_{t+1} to be the same, which means P(F^i_{t+1} = μ^i_{t+1} | F_t = f_t, A_t = a_t) ≈ 1. According to (5), we finally get:

(7)
    P(s_{t+1} | s_t, a_t) ≈ 1, where s_{t+1} = ⟨b_t − c_t, t+1, μ_{t+1}⟩
This means the state transition is consistent among different days, leading to a robust MDP.
Action Space. With the state and transition established, we now need to formulate the decision action. Most previous works used reinforcement learning to control the bid directly, so the action is to set bid prices (costs). However, applying this idea to our model would result in setting one batch cost for all the auctions in a period. It would be hard to derive impression-level bid prices, and more importantly, this cannot achieve real-time bidding.
Instead of generating bid prices, we take a control-by-model scheme: we deploy a linear approximator as the real-time bidding model to fit the optimal bid prices, and we utilize reinforcement learning to learn an optimal policy to control the real-time bidding model. Hence, the action here is the parameter control of the linear approximator function, rather than the bid decision itself.
Previous studies have shown that the optimal bid price has a linear relationship with the impression-level evaluation (e.g. CTR) (Perlich et al., 2012; Lee et al., 2012). In this paper, we adopt the predicted conversion rate (PCVR) as the independent variable in the linear approximator function for real-time bidding, which is defined as:

(8)
    bid(x) = θ × pcvr(x),

where x is an impression, pcvr(x) is its predicted conversion rate, and θ is the parameter controlled by the RL policy.
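The control-by-model scheme can be sketched as follows: the RL action sets the parameter θ of the linear model once per hour, and bids are then computed per impression. The function form follows Eq. (8); the numbers are purely illustrative.

```python
# Control-by-model sketch: the hourly policy chooses theta; the resulting
# bidder prices each impression in real time from its PCVR.
def make_bidder(theta):
    def bid(pcvr):
        return theta * pcvr
    return bid

bidder = make_bidder(theta=500.0)  # theta as chosen by the hourly policy
print(bidder(0.02))                # bid price for an impression with PCVR 0.02
```

This separation is what allows real-time, impression-level bidding while the RL agent only makes one control decision per aggregation step.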
To sum up, the robust MDP we propose is modeled as follows:

| component | definition |
| --- | --- |
| state | ⟨b, t, f⟩ |
| action | set θ of the bidding model |
| reward | the PUR gained in one step |
| episode | a single day |
In this work, we take a value-based approach as our solution. The goal is to find an optimal policy π*, which can be mathematically written as:

(9)
    π* = argmax_π Q_π(s, a),

where

(10)
    Q_π(s_t, a_t) = E[G_t | s_t, a_t],

(11)
    G_t = Σ_{k=0}^{T−t} γ^k r_{t+k}.

G_t in Eq. (11) is the accumulated long-term reward that needs to be maximized. Q_π in Eq. (10) is the standard action value function (Sutton and Barto, 1998; Poole and Mackworth, 2010), which captures the expected value of G_t given s_t and a_t. By finding the optimal Q function for each state iteratively, the agent can derive an optimal sequential decision.
By the Bellman equation (Sutton and Barto, 1998; Poole and Mackworth, 2010), we get:

(12)
    Q*(s_t, a_t) = E[r_t + γ max_{a_{t+1}} Q*(s_{t+1}, a_{t+1}) | s_t, a_t].

Eq. (12) reveals the relation of Q values between step t and step t+1, where Q*(s_t, a_t) denotes the optimal value given s_t and a_t. For small problems, the optimal action value function can be solved exactly by applying Eq. (12) iteratively. However, due to the exponential complexity of our model, we adopt a DQN algorithm similar to (Mnih et al., 2013; Mnih et al., 2015), which employs a deep neural network (DNN) with weights w to approximate Q*. Besides, we map the action space into 100 discrete values for decision making. Thus, the deep neural network can be trained by iteratively minimizing the following loss function:
(13)
    L(w) = E[(y_t − Q(s_t, a_t; w))²],

where

    y_t = r_t + γ max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; w⁻)

is the target value computed with the target network weights w⁻. The core idea of minimizing the loss in Eq. (13) is to find a DNN that closely approximates the real optimal Q function, using Eq. (12) as an iterative solver. Similar to (Mnih et al., 2013), the target network is utilized for stable convergence; it is updated from the train network every C steps.
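The target and loss computation of Eq. (13) can be sketched numerically. Below, small lookup tables stand in for the train and target networks; the table sizes, batch contents, and random seed are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

GAMMA = 0.95

# Tiny stand-ins for the train and target networks: Q-tables over
# 3 states x 4 discretized actions (illustrative sizes).
rng = np.random.default_rng(0)
q_train = rng.normal(size=(3, 4))
q_target = q_train.copy()  # the target network starts as a copy

def td_targets(rewards, next_states, done):
    """y_t = r_t + gamma * max_a' Q_target(s_{t+1}, a'); no bootstrap at episode end."""
    bootstrap = q_target[next_states].max(axis=1) * (~done)
    return rewards + GAMMA * bootstrap

def dqn_loss(states, actions, rewards, next_states, done):
    """Mean squared TD error of the train network against the targets, as in Eq. (13)."""
    y = td_targets(rewards, next_states, done)
    q_sa = q_train[states, actions]
    return np.mean((y - q_sa) ** 2)

# One illustrative mini-batch of transitions.
states = np.array([0, 1, 2])
actions = np.array([1, 0, 3])
rewards = np.array([1.0, 0.5, 2.0])
next_states = np.array([1, 2, 0])
done = np.array([False, False, True])
print(dqn_loss(states, actions, rewards, next_states, done))
```

In a full DQN, gradient descent on this loss would update `q_train`'s weights, while `q_target` is only refreshed from it every C steps.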
The details of the algorithm are presented in Algorithm 1. In addition to the train network and target network, we also introduce an episode network for better convergence. Algorithm 1 works in 3 modes to find the optimal strategy: 1) Listening. The agent records each transition ⟨s_t, a_t, r_t, s_{t+1}⟩ into a replay memory buffer; 2) Training. The agent grabs a mini-batch from the replay memory and performs gradient descent to minimize the loss in Eq. (13); 3) Prediction. The agent generates an action for the next step greedily by the Q-network. By iteratively performing these 3 modes, an optimal policy can be found.
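The three modes can be sketched with a tabular Q stand-in for the network; the class, update rule, and hyper-parameters below are simplified placeholders, not the paper's Algorithm 1 verbatim.

```python
import random
from collections import deque

N_ACTIONS = 100  # the action space is discretized into 100 values

class Agent:
    def __init__(self, lr=0.1, gamma=0.95, eps=0.1, memory_size=10000):
        self.q = {}                        # tabular stand-in for the Q-network
        self.memory = deque(maxlen=memory_size)
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def q_values(self, s):
        return self.q.setdefault(s, [0.0] * N_ACTIONS)

    def listen(self, s, a, r, s_next):
        """Listening mode: record <s, a, r, s'> into the replay memory."""
        self.memory.append((s, a, r, s_next))

    def train(self, batch_size=32):
        """Training mode: sample a mini-batch and descend on the TD error."""
        batch = random.sample(list(self.memory), min(batch_size, len(self.memory)))
        for s, a, r, s_next in batch:
            target = r + self.gamma * max(self.q_values(s_next))
            self.q_values(s)[a] += self.lr * (target - self.q_values(s)[a])

    def predict(self, s):
        """Prediction mode: epsilon-greedy action for the next step."""
        if random.random() < self.eps:
            return random.randrange(N_ACTIONS)
        qs = self.q_values(s)
        return qs.index(max(qs))

agent = Agent(eps=0.0)
agent.listen("hour_9", 5, 1.0, "hour_10")
agent.train()
print(agent.predict("hour_9"))
```

Cycling through listen → train → predict mirrors the interaction loop between the agent and the (simulated) auction environment.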
The model in Section 4.3 works well when there are only a few agents. However, in the scenario of thousands or millions of agents, the global performance would decrease due to competition. Hence, we propose an approach for handling the massive-agent problem.
The core idea is to combine the private competitive objective with a public cooperative objective. We design a cooperative framework: for each ad, we deploy an independent agent to learn the optimal policy according to its own states. The learning algorithm for each agent is very similar to Algorithm 1. The difference is that, after all agents have made their decisions, each agent receives a competitive feedback representing its own reward and a cooperative feedback representing the global reward of all the agents. The learning framework is illustrated in Figure 3.
In this subsection, we provide an introduction to the architecture of our system, depicted in Figure 4. There are three main parts: the Data Processor, the Distributed TensorFlow Cluster, and the Search Auction Engine.
Data Processor. The core component of this part is the simulator, which is in charge of RL exploration. In real search auction platforms, it is usually difficult to obtain precise exploration data. On one hand, we cannot afford to perform many random bidding predictions in the online system; on the other hand, it is also hard to generate precise model-based data by pure prediction. For this reason, we build a simulator component for trial and error, which utilizes both model-free data such as real auction logs and model-based data such as predicted conversion rates to generate simulated statistical features for learning. The advantage of the simulator is that auctions with different bid decisions can be simulated rather than predicted, since we have the complete auction records for all ads. In particular, we do not need to predict the market price, which is quite hard to predict in SS auctions. Besides, for the affected user behaviors that cannot be simply simulated (purchases, etc.), we use a mixed method of simulation and PCVR prediction. With this method, the simulator can generate various effects, e.g. rankings, clicks, costs, etc.
Distributed TensorFlow Cluster. This is a distributed cluster deployed on TensorFlow. The DRL model is trained here in a distributed manner, with parameter servers coordinating the network weights. Since DRL usually needs huge amounts of samples and episodes for exploration, and in our scenario thousands of agents need to be trained in parallel, we deployed our model on 1000 CPUs and 40 GPUs, with the capability of processing 200 billion sample instances within 2 hours.
Search Auction Engine. The auction engine is the master component. It sends requests and impression-level features to the bidding model and gets bid prices back in real time. The bidding model, in turn, periodically sends statistical online features to the decision generator and gets back the optimal policies output by the trained Q-network.
Our methods are tested both by offline evaluation (Section 6) and via online evaluation (Section 7) on a large e-commerce search auction platform of Alibaba Corporation with real advertisers and auctions. In this section, we introduce the dataset, compared methods and parameter setting.
We randomly select 1000 big ads on Alibaba's search auction platform, which on average cover 100 million auctions per day on the platform, for offline/online evaluation. The offline benchmark dataset is extracted from the search auction log for two days of late December, 2017. Each auction instance contains (but is not limited to) the bids, the clicks, the auction ranking list, and the corresponding predicted features, such as PCVR, that stand for the predicted utility of a specific impression. For evaluation, we use one day's collection as the training data and the other day's collection for test. Note that we cannot use the test collection directly for testing, since the bidding actions have already been made therein. Hence, we perform evaluation by simulation based on the test collection. Both collections contain over 100 million auctions. For online evaluation, a standard A/B test is conducted online. Over 100 million auction instances of the 1000 ads are collected one day in advance for training, and the trained models are used to generate real bidding decisions online.
In order to better evaluate the effectiveness of our single-agent model, we also conduct separate experiments on 10 selected ads with disjoint keyword sets, for both offline and online evaluation. Besides, we use the data processor in our system to generate trial-and-error training data for RL methods. In particular, 200 billion simulated auctions are generated by the simulator (described in Section 4.5) for training in offline/online evaluation. The simulated datasets are necessary for boosting the performance of RL methods.
The compared bidding methods in our experiments include:
Keyword-level bidding (KB): KB bids at the keyword level. It is a simple and efficient online approach adopted by search auction platforms such as Alibaba's. We treat this algorithm as the fundamental baseline of the experiments.
RL with auction-level MDP (AMDP): AMDP optimizes sequence bidding decisions by auction-level DRL algorithm (Wang et al., 2017). As in (Wang et al., 2017), this algorithm samples an auction in every 100 auctions interval as the next state.
RL with robust MDP (RMDP): This is the algorithm we proposed in this paper. RMDP is single-agent oriented, without considering the competition between ads.
Massive-agent RL with robust MDP (M-RMDP): This is the algorithm extended from RMDP for handling the massive-agent problem.
To evaluate the performance of the algorithms, we use PUR under the same cost constraint as the evaluation metric. It should be noted that, due to Alibaba's business policy, we temporarily cannot expose the absolute performance values of the KB algorithm. Hence, we report relative improvement values with respect to KB instead. This does not affect the performance comparison. We are currently applying for an official data exposure agreement.

To facilitate the implementation of our method, we provide the settings of some key hyper-parameters in Table 1. It is worth noting that we use the same hyper-parameter setting and DNN network structure for all agents.
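The relative-improvement metric reported against the KB baseline amounts to a simple ratio; the numbers below are illustrative, since the absolute values are not disclosed.

```python
# Relative improvement of a method over the KB baseline under the same cost.
def relative_improvement(value_method, value_baseline):
    return (value_method - value_baseline) / value_baseline

print(f"{relative_improvement(115.0, 100.0):+.1%}")
```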
The purpose of this experiment is to answer the following questions: (i) How does the DRL algorithm work on search auction data? (ii) Does RMDP outperform AMDP under a changing environment? (iii) Is the multi-agent problem well handled by M-RMDP?
The performance comparison is presented in Table 2, where all the algorithms are compared under the same cost constraint. The numbers in Table 2 thus depict the capability of each bidding algorithm to obtain more value under the same cost, relative to KB. On the test data, RMDP outperforms both KB and AMDP. On the training data, however, AMDP performs best, since it models decision control at the auction level. Nevertheless, AMDP performs poorly on the test data, indicating serious overfitting to the training data. This result demonstrates that auction sequences do not exhibit a consistent transition probability distribution across different days. In contrast, RMDP shows stable performance between training and test, which suggests that hour-aggregation of auctions indeed captures transition patterns that remain consistent as the environment changes.
Moreover, Table 2 shows that for each of the 10 ads RMDP consistently learns a much better policy than KB. Since we use the same hyper-parameter setting and network structure for all ads, this indicates that the performance of our method is not very sensitive to hyper-parameter settings.
Furthermore, the results in Table 2 suggest a general lesson about reinforcement learning in online advertising. The power of reinforcement learning comes from sequential decision making: normally, the more frequently the model receives feedback and adjusts its control policy, the better the performance (as AMDP shows on the training data). However, in a progressively changing environment, frequent feedback may contain too much stochastic noise. A promising remedy for robust training is to collect statistics at a proper aggregation level, at the cost of a lower adjustment frequency. In general, a good DRL approach for online advertising results from a good trade-off between feedback frequency and data trustworthiness (higher aggregation levels exhibit more consistent transition patterns and are thus more trustworthy).
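The frequency/trustworthiness trade-off can be seen numerically: averaging noisy per-auction feedback over larger windows yields a far more stable signal, at the cost of fewer decision points. A toy sketch on synthetic data (not the paper's data; the window sizes are arbitrary):

```python
import random
import statistics

def windowed_means(values, window):
    """Mean of each consecutive non-overlapping window of feedback.

    Larger windows give fewer, but less noisy, aggregate observations.
    """
    return [statistics.mean(values[i:i + window])
            for i in range(0, len(values) - window + 1, window)]

random.seed(0)
# Noisy per-auction feedback fluctuating around a common mean of 1.0.
feedback = [1.0 + random.gauss(0.0, 0.5) for _ in range(2400)]

# Variance of the signal the agent observes at two aggregation levels.
fine = statistics.pvariance(windowed_means(feedback, 10))     # frequent, noisy
coarse = statistics.pvariance(windowed_means(feedback, 100))  # rare, stable
```

With this setup the variance of the coarse (window-100) signal is roughly an order of magnitude below that of the fine (window-10) signal, mirroring why hour-level aggregation produces more trustworthy transitions than auction-level states.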
ad_id | AMDP(train) | AMDP(test) | RMDP(train) | RMDP(test) |
---|---|---|---|---|
740053750 | 334.28% | 5.56% | 158.81% | 136.86% |
75694893 | 297.27% | -4.62% | 95.80% | 62.18% |
749798178 | 68.89% | 8.04% | 38.54% | 34.14% |
781346990 | 227.91% | -20.08% | 79.52% | 57.99% |
781625444 | 144.93% | -72.46% | 53.62% | 38.79% |
783136763 | 489.09% | 38.18% | 327.88% | 295.76% |
750569395 | 195.42% | -15.09% | 130.46% | 114.29% |
787215770 | 253.64% | -41.06% | 175.50% | 145.70% |
802779226 | 158.50% | -44.67% | 79.07% | 72.71% |
805113454 | 510.13% | -8.86% | 236.08% | 195.25% |
Avg. | 250.03% | -10.06% | 120.01% | 98.74% |
To investigate how DRL works when many agents compete with each other, we run KB, RMDP and M-RMDP on all 1000 ads of the offline dataset. In this experiment and the online experiments that follow, we exclude AMDP, since it has already been shown to perform worse than KB even in the single-agent case.
The averaged results for the 1000 ads are shown in Figure 5. First, the costs of all the algorithms are similar (note that the y-axis of Figure 5(a) measures the ratio to KB’s cost), while the returns differ. RMDP still outperforms KB, but the relative improvement is not as high as in the single-agent case. Compared to RMDP, M-RMDP shows a prominent further improvement. This supports our conjecture that equipping each agent with a cooperative objective enhances performance.
This section presents the results of online evaluation in the real-world auction environment of the Alibaba search auction platform, with a standard A/B testing configuration. As all the results are collected online, in addition to the key performance metric we also report conversion rate (CVR), return on investment (ROI) and cost per click (PPC). These metrics, though different, are related to our objective.
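The paper does not spell out the exact formulas behind these auxiliary metrics, so as a reference we sketch the definitions commonly used in online advertising (these formulas are our assumption, not the paper's):

```python
def online_metrics(clicks, conversions, revenue, cost):
    """Commonly used online-advertising metrics (assumed definitions).

    CVR: conversions per click.
    ROI: revenue per unit of cost.
    PPC: cost per click (lower is better for the advertiser).
    """
    cvr = conversions / clicks
    roi = revenue / cost
    ppc = cost / clicks
    return cvr, roi, ppc

# Example: 200 clicks, 10 conversions, 500 revenue at a cost of 100.
cvr, roi, ppc = online_metrics(200, 10, 500.0, 100.0)
```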
We continue to use the same 10 ads for evaluation. The detailed performance of RMDP for each ad is listed in Table 3. The last line shows that RMDP outperforms KB with an average improvement of 35.04%. This is expected, as the results are similar to those of the offline experiment. Although the improvement percentages differ between the online and offline cases, this is intuitive, since the offline evaluation uses simulated results based on the test collection. It suggests that our RMDP model is indeed robust when deployed to a real-world auction platform. We also examine the other related metrics: there is an average improvement of 23.7% in CVR, 21.38% in ROI, and a slight average improvement of 5.16% in PPC (the lower, the better). This means our model can also indirectly improve other correlated performance metrics that are commonly considered in online advertising. The slight improvement in PPC means we help advertisers save a small amount of cost per click, although the effect is not prominent.
ad_id | objective | CVR | ROI | PPC |
---|---|---|---|---|
740053750 | 65.19% | 60.78% | 19.01% | -2.67% |
75694893 | 23.59% | 8.83% | 12.75% | -11.94% |
749798178 | 4.15% | -7.98% | -0.63% | -11.66% |
781346990 | 41.6% | 49.12% | 43.86% | 5.33% |
781625444 | -9.79% | 30.95% | 7.73% | 14.28% |
783136763 | 55.853% | 27.03% | 52.87% | -18.49% |
750569395 | 2.854% | 1.65% | 19.60% | 8.81% |
787215770 | 21.52% | 32.97% | 46.94% | -8.61% |
802779226 | 31.44% | 46.93% | 19.97% | 11.78% |
805113454 | 57.08% | 78.64% | 68.73% | 13.74% |
Avg. | 35.04% | 23.11% | 21.38% | -5.16% |
A standard online A/B test on the 1000 ads was carried out on Feb. 5th, 2018. The average relative improvements of RMDP and M-RMDP over KB are reported in Table 4. Similar to the offline experiment, M-RMDP outperforms the online KB algorithm and RMDP in several aspects: (i) a higher objective value, which is what our model optimizes; (ii) higher ROI and CVR, which are related key utilities that advertisers care about. PPC is again slightly improved, meaning that our model can slightly help advertisers save cost per click. The performance improvement of RMDP is lower than that in the online single-agent case (Table 3). The reason could be that competition among the ads affects its performance. In comparison, M-RMDP handles the multi-agent problem well.
Algorithm | objective | ROI | CVR | PPC |
---|---|---|---|---|
RMDP | 6.29% | 26.51% | 3.12% | -3.36% |
M-RMDP | 13.01% | 39.12% | 12.62% | -0.74% |
We provide a convergence analysis of the RMDP model using two example ads in Figure 6. Figures 6(a) and 6(c) show the loss (Eq. (13)) curves against the number of learning batches processed, and Figures 6(b) and 6(d) present the corresponding curves of our optimization objective (Eq. (2)). We observe that in Figures 6(a) and 6(c) the loss starts at, or quickly increases to, a large value and then slowly converges to a much smaller one, while over the same batch range the objective improves persistently and becomes stable (Figures 6(b) and 6(d)). This is good evidence that our DQN algorithm has a solid capability to adjust from a random policy to an optimal solution. In our system, the random exploration probability in Algorithm 1 is initialized to a large value and decays to 0.0 during the learning process. The curves in Figure 6 demonstrate the good convergence behavior of RL.
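The decaying exploration probability can be sketched as follows. The paper only states that the probability decays to 0.0 over the course of learning; the starting value of 1.0 and the linear schedule here are illustrative assumptions:

```python
def epsilon(step, total_steps, eps_start=1.0, eps_end=0.0):
    """Linearly decayed exploration probability for epsilon-greedy DQN.

    Starts at `eps_start` (assumed value) and reaches `eps_end` at
    `total_steps`; with probability epsilon the agent takes a random
    action instead of the greedy one.
    """
    frac = min(step / total_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

# Example schedule over 100 learning steps.
start, mid, end = epsilon(0, 100), epsilon(50, 100), epsilon(100, 100)
```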
We also observe that the loss converges to a relatively small value only after about 150 million sample batches have been processed, which suggests that the DQN algorithm is data-hungry: it needs large amounts of data and many episodes to find an optimal policy.