Learning Best Response Strategies for Agents in Ad Exchanges

02/10/2019 ∙ by Stavros Gerakaris, et al.

Ad exchanges are widely used in platforms for online display advertising. Autonomous agents operating in these exchanges must learn policies for interacting profitably with a diverse, continually changing, but unknown market. We consider this problem from the perspective of a publisher, strategically interacting with an advertiser through a posted price mechanism. The learning problem for this agent is made difficult by the fact that information is censored, i.e., the publisher knows if an impression is sold but no other quantitative information. We address this problem using the Harsanyi-Bellman Ad Hoc Coordination (HBA) algorithm, which conceptualises this interaction in terms of a Stochastic Bayesian Game and arrives at optimal actions by best responding with respect to probabilistic beliefs maintained over a candidate set of opponent behaviour profiles. We adapt and apply HBA to the censored information setting of ad exchanges. Also, addressing the case of stochastic opponents, we devise a strategy based on a Kaplan-Meier estimator for opponent modelling. We evaluate the proposed method using simulations wherein we show that HBA-KM achieves a substantially better competitive ratio and lower variance of return than baselines, including a Q-learning agent and a UCB-based online learning agent, and is comparable to the offline optimal algorithm.


1 Introduction

Real-time ad exchanges (AdX) have become a common marketplace for online display advertising. These automated transactions take place numerous times a day, whenever a user visits a web page whose advertising inventory is managed by an AdX. The web page, acting as the publisher, communicates to the ad exchange a reserve price for the impression, which consists of a description of the web page, the user and other relevant context. The ad exchange offers the impression to the bidding agents, or advertisers, who compete for it in a second price auction with reserve price, managed by the AdX.

These automated transactions play an important role in the economics of the web, which has meant that advertisers routinely use automated methods to target impressions based on the user profiles and characteristics they are shown. The corresponding situation for publishers appears to be different. As argued in a report from Google [10], who run one such large exchange, publishers are lagging behind in being able to automate the setting of auction parameters such as the reserve price. Furthermore, a nontrivial fraction of ad exchange auctions involve a single advertiser [5]. When only one advertiser is involved, the ad exchange auction mechanism reduces to a posted price auction between the publisher and the advertiser.

In this work, we model this interaction and propose the application of novel learning algorithms to address the problem of adapting behaviour within the interaction. We examine the continuous interaction, over a number of rounds, between the advertiser and the publisher through the posted price auction mechanism. There are two key attributes associated with this posted price mechanism; (a) since there are only two agents involved, the observations the publisher makes from the advertiser’s bids are doubly censored (i.e., the publisher only learns if a bid is successful and does not gain further quantitative knowledge of the advertiser’s utilities) and (b) the publisher is facing an adaptive player with a number of possible strategies at his disposal.

Conceptually, the problem faced by the publisher is that of interacting in an ad hoc manner, with limited prior knowledge of the opponent. Learning in such situations is made difficult by the fact that the open-ended nature of the hypothesis space results in unacceptable complexity of learning. We propose that this problem may be addressed by drawing on recent developments in machine learning, which allow tractable learning despite the incompleteness of models. In particular, we use the Harsanyi-Bellman Ad Hoc Coordination (HBA) algorithm [1, 3], which conceptualises the interaction in terms of a space of ‘types’ (or opponent policies), over which the procedure maintains beliefs and uses these beliefs to guide optimal action selection. The attraction of this algorithm is that it can be shown to be optimal even when the hypothesised type space is not exactly correct but only approximates the possible behaviours of the opponent. In this paper, we adapt HBA to the situation where observations are censored, and demonstrate its usefulness in the AdX domain. In addition, addressing the case when opponents play essentially randomly (a situation where HBA's belief update process would be inadequate), we propose the use of a Kaplan-Meier estimator to approximate the opponent's stochastic behaviour and to choose actions based on that estimate.

We model the interactions between the two agents as a Stochastic Bayesian Game of censored observations. The publisher's goal is to maximise his expected revenue over the rounds of the game. In order to do so, he needs to infer the bidding strategy of the advertiser. We define a space of behaviours for the advertiser, including various distributions and adaptive procedures, such as Q-learning and learn-then-bid strategies [12]. A publisher using HBA maintains a belief about the advertiser's behaviour, defined as a probability distribution over this space of types, and plays a best response to it. It is worth noting that the offline optimal algorithm for the publisher, which serves as an upper bound on the expected revenue of our method, is the strategy that has prior knowledge of the buyer's strategy (something that is unrealistic in practice, but illustrative for algorithm analysis) and plays optimally against it from the first turn of the game.

There is a substantial body of work in the AdX literature that focuses on finding a publisher's reserve price policy to optimise his revenue in second price auctions with reserve price [8, 15]. However, these works, which mainly focus on the theory, often restrict attention to settings where the advertiser simply bids from an unknown random distribution (hence is not adaptive) and where the publisher has access to uncensored samples. From a practitioner's perspective, it is interesting to ask whether we can go beyond some of these assumptions and devise learning algorithms for the publisher that use only the censored observations available online (hence making them robust with respect to model mismatch), and that allow for more generality in the behaviour of the advertiser, specifically allowing that agent to adapt (which is very realistic in the scenarios we mentioned earlier). In our experiments, we show that the proposed procedure is able to adapt better than baselines such as Q-learning or a UCB-based online learning procedure, and that it approaches the offline optimal benchmark in many situations. In order to understand the behaviour of this algorithm under model mismatch, we also present experiments with an adaptive adversary which is a neural network, looking both at the transient behaviour of HBA when the adversary is actively learning and is non-stationary, and at the case where the adversary is a mixture that is different from any individual element in the type space over which HBA maintains beliefs.

2 Related Work

Much has been written about ad display and sponsored search auctions, for each participating agent of the auction, either as publisher or advertiser. A key paper in the area of ad exchanges is that of Muthukrishnan [16], who laid out several research directions in this domain, informed by exposure to industry practice.

The most closely related work viewing the problem from the perspective of the publisher is the following. Mohri and Medina [15] discuss selecting the reserve price to optimise revenue in a second price auction with reserve price. They consider a supervised learning setting and assume access to a training sample of uncensored historical data. A similar formulation of revenue is seen in the work of Cesa-Bianchi et al. [8], where the authors assume no historical data, but obtain direct observations based on the assumption that every bidder in this market draws his valuation from the same fixed and unknown distribution. They then present a regret minimisation algorithm achieving sublinear regret. In other recent work, Amin et al. [5] define the notion of strategic regret and present no-regret algorithms with respect to that notion. Finally, Huang et al. [13] study the problem of setting a price for a potential buyer with a valuation drawn from an unknown distribution and prove tight sample complexity bounds for regular distributions, bounded-support distributions, and a wide class of irregular distributions. This work is preceded by Cole and Roughgarden [9], who also analyse the sample complexity of revenue maximisation, this time as a function of the number of bidders in an auction.

From the advertiser's perspective, Amin et al. [4] study budget optimisation for sponsored search auctions. The authors cast the problem of budget optimisation as a Markov Decision Process (MDP) with censored observations and propose a learning algorithm based on Kaplan-Meier estimators. They also perform a large scale empirical demonstration on auction data from Microsoft to show that their algorithm is effective in practice. Another take on the advertiser's optimisation problem is that of Ghosh et al. [12], who study the design of a bidding agent in a marketplace for displaying ads. They provide algorithms and performance guarantees for both of the settings they consider, while also experimentally evaluating their performance on a log-normal distribution fitted to data observed from the Right Media Exchange.

Another closely relevant body of literature pertains to learning to interact in multiagent domains, with limited or no prior coordination. Related work includes [1, 7, 19]. A key concept arising from this literature is that of modelling the opponent in terms of a hypothesis space of policies that, in a certain sense, approximates the space from which that agent actually chooses her true policy.

3 Model for the Publisher in an Ad Exchange

We start by presenting our model of how we conceptualise this interaction with the (model of an) advertiser, agent j, as a Stochastic Bayesian Game. The advertiser is characterised by a discrete state space S, defined by his own budget B and the auction round t, t ∈ {1, …, T}. He has a set of actions A_j, which are the possible prices he can bid for an impression (his valuation vector V), and his strategy π_j is chosen from a well-defined type space Θ_j. A payoff function u_j maps his state, type and actions to a specific payoff, and a strategy π_j(H^t, a_j, θ_j) defines a probability vector over his possible actions. The history vector H^t contains the sequence of states and joint actions up to round t, H^t = (s^0, a^0, …, s^t).

We realise the interaction between the advertiser and the publisher as a Stochastic Bayesian Game of censored information between two players. The game consists of:

  • An advertiser j and a publisher i

  • State space S, action spaces A_i and A_j, type space Θ_j

  • Transition function T : S × A_i × A_j → Δ(S)

  • The game starts at time t = 0 in state s^0. At each time step t (a minimal simulation sketch of this loop is given after the list):

    1. An impression arrives

    2. The advertiser chooses his action (bid) a_j^t ∈ A_j with probability π_j(H^t, a_j^t, θ_j)

    3. The publisher chooses his action (reserve price) a_i^t ∈ A_i with probability π_i(H^t, a_i^t)

    4. The game transitions into state s^{t+1} with probability T(s^{t+1} | s^t, a_i^t, a_j^t)

    5. If a_j^t ≥ a_i^t, the impression is sold at price a_i^t; otherwise the impression doesn't get sold

    6. The immediate payoff of the advertiser is his value for the impression minus the price a_i^t paid, if the impression is sold, and 0 otherwise

    7. The immediate payoff of the publisher is a_i^t if the impression is sold, and 0 otherwise

  • The process is repeated until a terminal state is reached
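To make this interaction loop concrete, the following is a minimal simulation sketch of a posted-price round sequence under the model above. The strategy stubs, the value grid and the budget are illustrative assumptions, not the settings used in our experiments.

```python
import random

VALUES = [0.1 * k for k in range(1, 11)]  # hypothetical discrete value vector V

def advertiser_bid(budget, rng):
    """Placeholder advertiser strategy: bid i.i.d. uniformly over V (a 'Random' type)."""
    return rng.choice(VALUES)

def publisher_reserve(history, rng):
    """Placeholder publisher strategy: post a random reserve price from V."""
    return rng.choice(VALUES)

def run_game(total_budget=5.0, max_rounds=100, seed=0):
    rng = random.Random(seed)
    budget, history, revenue = total_budget, [], 0.0
    for t in range(max_rounds):
        if budget <= 0:                            # terminal state: budget exhausted
            break
        bid = advertiser_bid(budget, rng)          # advertiser's action a_j
        reserve = publisher_reserve(history, rng)  # publisher's action a_i
        sold = bid >= reserve                      # posted-price rule
        if sold:
            budget -= reserve                      # state transition: budget decreases
            revenue += reserve                     # publisher's immediate payoff
        history.append((reserve, sold))            # the publisher's censored observation
    return revenue

print(run_game())
```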

In this problem setting, the publisher does not have knowledge of the advertiser's individual strategy π_j, only of his type space Θ_j, which is the set of all of his possible strategies. He needs to infer that strategy from the censored observations he makes at each auction round in order to play his best response against it. As mentioned earlier, we utilise the Harsanyi-Bellman Ad Hoc Coordination algorithm (HBA) [1, 3] and adapt it to the setting of AdX. The main steps of this procedure are described in Algorithm 1. A key adaptation from the formulation in [1, 3] is the incorporation of an estimate of the opponent's actions from censored observations, and the use of the KM estimator (explained in more detail in the next section) for the case of a randomised adversary.

We make the following assumptions for our setting:

Assumption 1

We control the publisher i and choose his strategy π_i. The publisher has a single type, which is known to us.

Assumption 2

Given a Stochastic Bayesian Game Γ, we assume that all the elements of Γ, except the type of the opponent, are common knowledge.

Assumption 3

We only have partial observability of states and actions.

1: Input: SBG Γ, player i, user-defined type space Θ*_j, history H^t, discount factor γ
2: Output: Action probabilities π_i(H^t, a_i)
3: for all θ*_j ∈ Θ*_j do
4:     Compute the posterior Pr(θ*_j | H^t)
5: end for
6: for all a_i ∈ A_i do
7:     Compute expected payoff as follows: E_{a_i}(H^t) = Σ_{θ*_j ∈ Θ*_j} Pr(θ*_j | H^t) · Q_{a_i}(H^t, θ*_j)
8: end for
9: if the most probable type θ*_j is a stochastic (randomised) type then
10:     π_i(H^t) ← Random KM(θ*_j), where Random KM is given in Algorithm 2
11: else
12:     Distribute π_i(H^t, ·) uniformly over arg max_{a_i} E_{a_i}(H^t)
13: end if
Algorithm 1 HBA Censored

In the HBA algorithm, which maintains a posterior probability of the opponent being a specific type based on the observed history of actions, the action is selected by determining a best response within the game Γ, which implicitly uses, in its value calculations, Q-values based on the Bellman optimality principle.

The posterior probability of an agent being a specific type is calculated with the use of the sum posterior, defined in [2] as:

$$\Pr(\theta_j^* \mid H^t) = \frac{1}{\eta}\, P_j(\theta_j^*) \sum_{\tau=0}^{t-1} \pi_j(H^\tau, a_j^\tau, \theta_j^*),$$

where P_j(θ*_j) is the prior probability of type θ*_j and η is a normalisation constant.

By the term censored observations we refer to the type of information perceived by the publisher. As is the case in posted price auctions, the publisher only gets to observe the outcome of each round of a sequential auction, i.e. whether he sold the impression or not, but he does not get to observe the bid that actually won; he only knows that this bid is greater than or equal to the reserve price he specified (a_j^t ≥ a_i^t). Otherwise, he knows that this bid was strictly less than his reserve price (a_j^t < a_i^t).
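To illustrate how such censored outcomes enter the belief update, the following sketch maintains a sum posterior over a small set of hypothesised stochastic types; the type definitions, the uniform prior and all variable names are assumptions made for the example.

```python
import numpy as np

VALUES = np.linspace(0.1, 1.0, 10)          # hypothetical discrete value vector V

# Hypothesised stochastic types: each is a probability vector over V.
TYPES = {
    "uniform": np.ones(len(VALUES)) / len(VALUES),
    "high":    np.linspace(1, 10, len(VALUES)) / np.linspace(1, 10, len(VALUES)).sum(),
    "low":     np.linspace(10, 1, len(VALUES)) / np.linspace(10, 1, len(VALUES)).sum(),
}

def censored_likelihood(pmf, reserve, sold):
    """Likelihood of a censored observation under a type's bid distribution:
    Pr(bid >= reserve) if the impression was sold, Pr(bid < reserve) otherwise."""
    tail = pmf[VALUES >= reserve].sum()
    return tail if sold else 1.0 - tail

def sum_posterior(history):
    """Sum posterior over types: prior times the sum of per-step likelihoods,
    normalised at the end; history is a list of (reserve, sold) observations."""
    prior = {name: 1.0 / len(TYPES) for name in TYPES}
    scores = {name: prior[name] * sum(censored_likelihood(pmf, r, s) for r, s in history)
              for name, pmf in TYPES.items()}
    z = sum(scores.values())
    return {name: score / z for name, score in scores.items()}

history = [(0.5, True), (0.7, True), (0.9, False)]   # censored observations
print(sum_posterior(history))
```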

3.1 HBA Types (Advertiser’s Strategies)

In this section, we define the hypothesised type space of the advertiser. The first two strategies do not involve an adaptive component, so they are in a sense naive. However, there is work [17] suggesting that this kind of bidding can often be found in real world auctions. The rest of the strategies we specify for the advertiser are well-studied learning models that involve distinct learning and strategy components. This set is chosen to capture the diversity of potential types of behaviour of the unknown adversary.

We also present the best response strategy of the publisher to each of the respective strategies of the advertiser, under the assumption that all the private information of the advertiser is known by the publisher. These best response strategies constitute a set of offline optimal benchmarks that will serve as an upper bound on the revenue of our method, which assumes no private knowledge other than the type space of the advertiser.

Greedy Strategy

The advertiser's greedy policy, given his action space A_j, is to always bid his maximum value for the impression.

One can see that the publisher's best response policy is to simply match this maximum value and offer it as the reserve price in every turn of the game.

Random Strategy

In the second strategy, the advertiser places bids drawn i.i.d. from a fixed distribution over his value vector. We use several distributions, such as the uniform, the normal, the logistic, the log-normal and the exponential.

The best response strategy against a random advertiser, with the distribution over V known by the publisher, is the reserve price that maximises the publisher's expected revenue,

$$r^* = \arg\max_{v \in V} \; v \cdot \bar{F}(v),$$

where \bar{F}(v) = Pr(b ≥ v) denotes the tail probability of the distribution for the value v.
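For a known bid distribution, this best-response reserve can be found by a direct search over the value vector, as in the short sketch below; the discretisation and the example normal distribution are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

VALUES = np.linspace(0.1, 1.0, 10)   # hypothetical discrete value vector V

def best_reserve(tail):
    """Return argmax_v v * Fbar(v) and the corresponding expected revenue."""
    expected_revenue = VALUES * tail(VALUES)
    return VALUES[np.argmax(expected_revenue)], float(expected_revenue.max())

# Example: advertiser bids ~ Normal(0.5, 0.2); sf() gives the tail probability.
reserve, revenue = best_reserve(lambda v: norm(loc=0.5, scale=0.2).sf(v))
print(reserve, revenue)
```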

Learn Then Bid Strategy

The next adaptive bidding strategy of the advertiser is given in the work of Ghosh et al. [12]. The advertiser chooses to opt out for a specified number of initial rounds in order to observe the reserve prices and then, based on his observations, decides between the price that guarantees, in expectation, the target fraction of impressions he has set, and the price satisfying the maximum amount he wants to spend.

The best response strategy against an advertiser playing the Learn Then Bid strategy, with the parameters of the Learn Then Bid algorithm known by the publisher, is the following.

  1. Find the maximum value of the price that satisfies the target of his campaign reach, times the probability of him playing that price.

  2. Find the maximum value of the price that satisfies the advertiser's target spend.

Both prices are computed with respect to the distribution of the market estimated by the advertiser after the learning phase. The optimal policy for the publisher is to exhaust the advertiser's budget by deterministically selecting the maximum of those two prices.

Multi Armed Bandits Strategy

Another strategy we use for the advertiser is the well-known UCB algorithm [6]. We implement it using an ε-greedy action selection policy. The publisher's optimal policy is to offer the maximum value as the reserve price in every turn of the game.

It is not hard to see that any traditional no-regret strategy is easily manipulable, and therefore inadequate for this interactive problem setting, something also highlighted in [5].

Q-learning Strategy

The last algorithm in the type space of the advertiser is the well known Q-learning algorithm [20]. We implement it using a soft-max action selection policy. The states for the advertiser are the different levels of his remaining budget and the action space is defined by his value vector.

The publisher's optimal policy, similar to when he faces a random distribution, is to find the reserve price that maximises his expected revenue with respect to the Boltzmann distribution produced by the Q-values, which dictates the soft-max action selection:

$$r^* = \arg\max_{v \in V} \; v \cdot \Pr(b \ge v),$$

where

$$\Pr(b = v) = \frac{\exp(Q(s, v)/\tau)}{\sum_{v' \in V} \exp(Q(s, v')/\tau)}$$

and τ is the temperature of the soft-max selection policy.
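Assuming the advertiser's Q-values and temperature are known (the offline-optimal setting), the induced Boltzmann bid distribution and the corresponding best reserve can be computed as in the sketch below; the example Q-values and the temperature are placeholders.

```python
import numpy as np

VALUES = np.linspace(0.1, 1.0, 10)              # hypothetical discrete value vector V

def boltzmann(q_values, temperature):
    """Soft-max (Boltzmann) bid distribution induced by the advertiser's Q-values."""
    z = np.exp((q_values - q_values.max()) / temperature)   # shift by max for stability
    return z / z.sum()

def best_reserve_vs_qlearner(q_values, temperature):
    pmf = boltzmann(q_values, temperature)
    tail = pmf[::-1].cumsum()[::-1]             # Pr(bid >= v) for each v in VALUES
    return VALUES[np.argmax(VALUES * tail)]

q = np.linspace(0.0, 1.0, len(VALUES))          # placeholder Q-values for one state
print(best_reserve_vs_qlearner(q, temperature=0.3))
```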

3.2 HBA Beliefs and Best Responses

Over the types introduced above, HBA maintains and updates beliefs that determine action selection at each step. In step 4 of Algorithm 1, HBA keeps a posterior belief over types by keeping track of the sequence of actions of the opposing player and calculating the probability that these actions come from a specific type. Afterwards, in step 7, it computes the Q-values of every possible action at this state and, following this, it calculates the expected revenue of every action based on the posterior over types it maintains and on the Q-values it has just computed. In the final step 9, depending on whether it recognises a stochastic opponent or a deterministic one, HBA decides between calling a procedure designed specifically for random opponents (discussed in Section 3.4) and playing a single price as a best response.

By alternating between these two procedures, the posterior belief calculation and the Q-value computation followed by a single expectation maximisation step, HBA succeeds in modelling the opposing agent in a dynamic fashion, while also playing optimally, in expectation, against her at each step.

3.3 HBA Censored

As mentioned earlier, accommodating censored observations requires estimating quantities that can be used for belief updating. There are two specific values that are needed for such an estimate. The first one concerns the probability of the last observation, conditional on the opponent being of a specific type. This probability is used to calculate the posterior over the opposing agent's type, according to Bayes' rule. In our setting, where the observations are censored, we estimate this value by using structural characteristics of the distribution he plays. For instance, let P_θ be the bid distribution associated with the Q-values of a Q-learning advertiser. If the publisher sells the impression at price a_i^t, he does not observe the bid, but he can update the probability of the observation conditional on his opponent's type:

$$\Pr(a_j^t \ge a_i^t \mid \theta_j^*) = \sum_{v \ge a_i^t} P_\theta(v).$$

The second one concerns the computation of HBA's own Q-values in the expectation maximisation step. Recall from Algorithm 1 that we compute the Q-values by evaluating every possible outcome in expectation,

$$Q_{a_i}(H^t, \theta_j^*) = \sum_{a_j \in A_j} \pi_j(H^t, a_j, \theta_j^*) \Big[ u_i(s^t, a_i, a_j) + \gamma \sum_{s' \in S} T(s' \mid s^t, a_i, a_j) \max_{a_i'} Q_{a_i'}(H^{t+1}, \theta_j^*) \Big],$$

where u_i denotes the publisher's immediate utility at that step:

$$u_i(s^t, a_i, a_j) = \begin{cases} a_i & \text{if } a_j \ge a_i, \\ 0 & \text{otherwise.} \end{cases}$$

Again, here we utilise the tail probability of the distribution in order to estimate the required Q-values from the censored observations.
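Putting the two pieces together, a myopic version of steps 6–12 of Algorithm 1 (ignoring the discounted continuation term) weights each type's tail probability by the current posterior and best-responds to the mixture; the placeholder types and posterior below are assumptions for the example and reuse the conventions of the earlier sketches.

```python
import numpy as np

VALUES = np.linspace(0.1, 1.0, 10)                 # hypothetical discrete value vector V

def expected_payoffs(posterior, type_tails):
    """Myopic E(a_i): sum over types of Pr(type | H) * a_i * Pr(bid >= a_i | type)."""
    exp_rev = np.zeros_like(VALUES)
    for name, prob in posterior.items():
        exp_rev += prob * VALUES * type_tails[name](VALUES)
    return exp_rev

def best_response(posterior, type_tails):
    return VALUES[np.argmax(expected_payoffs(posterior, type_tails))]

# Placeholder types, described only through their tail functions Pr(bid >= v).
type_tails = {
    "uniform": lambda v: 1.0 - v,                   # bids ~ U(0, 1)
    "greedy":  lambda v: np.ones_like(v),           # always bids the maximum value
}
posterior = {"uniform": 0.3, "greedy": 0.7}         # current belief over types
print(best_response(posterior, type_tails))
```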

With this, we achieve performance close to the offline optimal benchmark against advertising strategies that play a single price or that converge to a price, i.e. Greedy, Learn Then Bid or UCB.

Within the defined type space there are also cases where the output of the opponent's algorithm is randomised, either according to a fixed random distribution or, in the case of the Q-learning algorithm, a dynamic one. It is known from the theory underpinning HBA [2] that this case requires different treatment.

3.4 KM Estimator for Stochastic Opponents

We now present an approach based on the Kaplan-Meier estimator [14] for estimating distributions from censored samples. When HBA recognises a randomised opponent, we let this algorithm decide both the query values and the optimal reserve price. The KM estimator is a powerful tool for approximating distributions from censored samples and has found use in both e-commerce [4] and financial applications [11].

The Random KM algorithm uses a few simple ideas from random sampling and the Kaplan-Meier estimator. We start by scanning the support of the distribution for potential candidate values and, for every candidate, we query it a sufficient number of times in order to obtain a good estimate.

1: Input: Distribution F to make CDF queries on
2: Output: Optimal reserve price r*
3: procedure Random KM(F)
4:     for each random querying step do
5:         Set the reserve price r uniformly at random over the support V.
6:         if the impression is sold then
7:             record a right-censored observation for every value v ≤ r
8:         else
9:             record a left-censored observation for every value v ≥ r
10:        end if
11:    end for
12:
13:    Compute the loose tail estimate of each value v as the fraction of right-censored observations over the sum of right- and left-censored observations at v
14:    Create the list C of candidate values with the highest estimated expected revenue v · \bar{F}(v)
15:    for all v ∈ C do
16:        Set the reserve price to v for the number of steps dictated by the Kaplan-Meier estimator.
17:        Keep a counter of the impressions sold
18:        Update: the KM estimate \bar{F}(v) of the tail probability at v
19:    end for
20:    return r* = arg max_{v ∈ C} v · \bar{F}(v)
21: end procedure
Algorithm 2 Random Querying - KM

Essentially, the Random KM algorithm works in two steps. In the first step, it queries over the whole support of the opponent's possible values and makes a loose estimate of each value's tail probability by calculating the fraction of right-censored observations (i.e. the times the impression gets sold and the value is less than or equal to the reserve price) over the sum of right- and left-censored observations.

In the second step, it isolates the candidate values that are most likely to generate the most revenue and queries each of them for a number of steps, which the Kaplan-Meier estimator dictates, in order to get a precise approximation of their tail probabilities. Using the estimated tail probabilities, it calculates the price that maximises the expected payoff and returns it.
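The sketch below shows a simplified version of this two-step procedure against a simulated stochastic bidder. The query budgets per phase, the candidate-set size and the use of a plain empirical tail estimate in place of the full Kaplan-Meier product-limit computation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
VALUES = np.linspace(0.1, 1.0, 10)                 # hypothetical discrete value vector V

def sell(reserve):
    """Simulated censored feedback: True iff the (unobserved) bid meets the reserve."""
    bid = rng.normal(0.6, 0.15)
    return bid >= reserve

def random_km(n_random=300, n_candidates=3, n_refine=200):
    # Step 1: random querying -> loose tail estimates for every value v.
    sold_ge = np.zeros(len(VALUES))    # right-censored evidence: sold with reserve >= v
    unsold_le = np.zeros(len(VALUES))  # left-censored evidence: unsold with reserve <= v
    for _ in range(n_random):
        r = rng.choice(VALUES)
        if sell(r):
            sold_ge[VALUES <= r] += 1
        else:
            unsold_le[VALUES >= r] += 1
    loose_tail = sold_ge / np.maximum(sold_ge + unsold_le, 1)

    # Step 2: refine the most promising candidates with repeated direct queries.
    candidates = VALUES[np.argsort(VALUES * loose_tail)[-n_candidates:]]
    best_price, best_revenue = None, -1.0
    for v in candidates:
        sold = sum(sell(v) for _ in range(n_refine))
        revenue = v * sold / n_refine              # v * estimated Pr(bid >= v)
        if revenue > best_revenue:
            best_price, best_revenue = v, revenue
    return best_price

print(random_km())
```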

In Figure 1 we can see how these two steps work in practice. The green area denotes the estimate of the revenue function (shown in blue) obtained during the first step (random querying). Similarly, the red area denotes the approximation of the expected revenue after applying the KM estimator in the second step, for a selected number of candidate values. The precise approximation that KM provides allows for action selection that is optimal, in expectation, against stochastic opponents.

(a) Uniform
(b) Normal
(c) Exponential
(d) Log-normal
Figure 1: Random KM estimation of the empirical payoff functions of various random distributions. The loose estimation is the random querying step (in green), followed by the precise KM step (in red).

4 Experimental Results

4.1 Agents in the type space

The AdX domain is of significant commercial importance. However, this also means that obtaining real world data from live auctions is difficult. We therefore conduct empirical studies using a simulated domain that captures many of its aspects: the Trading Agents Competition (TAC) AdX '15 game [18], which simulates an ad exchange. We use our own implementation of these specifications. In particular, we have a setting of multiple days, with a fixed number of impression opportunities each day, a specified daily Budget and Campaign Reach for the advertiser, and a defined advertiser type space: [Random, Greedy, LTB, UCB, Q-learn].

We use three basic benchmarks for the evaluation of our HBA-KM algorithm.

  1. The Offline Optimal algorithm that knows the true type of the advertiser a priori and decides the optimal policy as his best response.

  2. The Q-learning algorithm, a well known reinforcement learning technique.

  3. The UCB algorithm, a MAB technique.

The Offline Optimal algorithm realises the best response strategies discussed in Section 3.1 and serves as an upper bound for our method. We choose a Q-learning agent and a UCB-based online learning agent as our baselines since, given the stochastic game formulation of the problem, one may hope to solve it using techniques from reinforcement learning. Q-learning with soft-max and UCB with ε-greedy action selection are two of the simplest algorithms for reinforcement learning, giving good results in a wide spectrum of applications.

We use different parameters for each of the strategies in the advertiser's type space. For the comparative evaluation of our algorithm against the specified benchmarks, we consider metrics such as the revenue of our algorithm and the standard deviation, which quantifies our agent's payoff variation across consecutive games.

The parameter settings for the opposing agents were chosen to demonstrate that our results hold for every way one could distribute the probability mass over the value vector for the impressions, whether according to a specific random distribution or an adaptive strategy. The parameter choices used for all the simulations follow.

  1. For the randomised strategies: a uniform, a normal, a log-normal and an exponential distribution over the value vector, each instantiated with several settings of its parameters (support, location and scale, or rate).

  2. For the deterministic and adaptive strategies: for the Greedy strategy, the bid is fixed to the maximum value. For the Learn-Then-Bid strategy, we set the exploration length and the target fraction. For the UCB strategy, we set the exploration steps and the exploitation probability of the ε-greedy selection. For the Q-learn strategy, we set the learning rate, the discount factor and the temperature of the soft-max selection policy.
For every opposing strategy, we consider the Cartesian product of the parameters we specified, and the results that follow are averaged over every possible run using these parameters, across 100 simulations for each individual opponent.

In Figure 2, we can see HBA-KM's performance against the deterministic and adaptive strategies of the advertiser. Its performance is close to the offline optimal benchmark and outperforms the other two baselines.

(a) Greedy strategy
(b) Learn Then Bid strategy
(c) UCB strategy
(d) Q-learn strategy
Figure 2: Revenue comparison and one standard deviation against the adaptive strategies.

In Figure 3, we can see HBA-KM's performance against the randomised strategies of the advertiser. The performance of HBA-KM approximates the offline optimal benchmark in every case, based on the distribution approximation performed by the Random KM subroutine.

(a) Uniform distribution
(b) Normal distribution
(c) Log-normal distribution
(d) Exponential distribution
Figure 3: Revenue comparison and standard deviation against random strategies.

As another metric we consider the average competitive ratio of each algorithm when compared to the online optimal one. The online optimal algorithm is the one with the best case revenue; imagine that the publisher knows a priori the sequence of bids the advertiser is going to play. Then he attains the online optimal policy by greedily selecting to sell the impressions at the highest possible prices, up until the budget of the advertiser is exhausted. So for bids b_t and budget B, we have:

$$\text{OPT} = \max_{S \subseteq \{1, \dots, T\}} \sum_{t \in S} b_t \quad \text{subject to} \quad \sum_{t \in S} b_t \le B.$$

The competitive ratio of an algorithm is defined as its revenue divided by OPT, and Table 1 summarises the results over all the opposing strategies, adaptive and randomised respectively. The significant drop in the competitive ratio of every benchmark when we move from facing adaptive strategies to randomised ones is expected, since the online optimal algorithm has knowledge of the exact sequence of bids, which is especially powerful against truly random opponents.
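Under this reading, the online optimal benchmark sells at the bid prices themselves, highest first, until the advertiser's budget runs out; the short sketch below (with placeholder bids and a placeholder algorithm revenue) computes this benchmark and the resulting competitive ratio.

```python
def online_optimal(bids, budget):
    """Best-case revenue: sell at the bid price itself, highest bids first,
    stopping once the advertiser's remaining budget cannot cover the next bid."""
    revenue = 0.0
    for b in sorted(bids, reverse=True):
        if revenue + b > budget:
            break
        revenue += b
    return revenue

def competitive_ratio(algorithm_revenue, bids, budget):
    return algorithm_revenue / online_optimal(bids, budget)

bids = [0.4, 0.9, 0.7, 0.3, 0.8]                  # placeholder bid sequence
print(competitive_ratio(1.5, bids, budget=2.0))   # e.g. an algorithm that earned 1.5
```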

                  Against Adaptive             Against Randomised
Algorithm Name    Competitive Ratio   Std      Competitive Ratio   Std
Offline-Opt       0.9721              0.0064   0.7657              0.1009
HBA-KM            0.9245              0.2073   0.7434              0.1585
Q-Learn           0.7976              0.1650   0.6165              0.2673
UCB               0.7004              0.2334   0.6218              0.1751
Table 1: Average Competitive Ratio, across all strategies and simulations

4.2 Neural Network agent

So far, our experiments have only included opposing agents that are in the hypothesised type space of our own algorithm. Unfortunately, this is not always the case in real-life scenarios, as an agent in this marketplace should be able to face opposing strategies he cannot anticipate, or model explicitly, in real time. This is the question we seek to answer in the second part of our experiments: what happens when such an agent enters the market, and is our algorithm still competitive against him?

We choose a Neural Network (NN) agent as our unknown opponent for two reasons. The first reason is that a NN does not belong to the hypothesised type space of our own agent, so our type space must be descriptive enough to adequately model such unknown agents on the fly. The second reason, relevant to the second part of these experiments where we use a Neural Network trained on a mixture of the opposing publishers, is that we want to capture the inherent heterogeneity an agent faces in this market, where his opponents are trained against a variety of pricing algorithms. Here we simulate the Neural Network with up to 4 hidden layers and train it at each arriving impression.

The exact parameters for the simulation are as follows: two inputs, the bid of the advertiser and the reserve price of the publisher; 1 up to 4 hidden layers; one output, the bidder's immediate payoff. Each node is fully connected to the nodes of the next layer and we use a standard sigmoidal threshold function. We train online on every instance of the first day of simulations and on 100 impressions for each subsequent day. The network is trained until convergence at the end of each simulation day. The optimisation step uses a hill climber approach.
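A minimal sketch of such a payoff-predicting network, trained with a simple hill-climbing step rather than gradient descent, is given below; the layer width, perturbation scale and synthetic training data are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_net(hidden=8):
    """Two inputs (bid, reserve), one hidden layer, one output (predicted payoff)."""
    return [rng.normal(0, 0.5, (2, hidden)), rng.normal(0, 0.5, (hidden, 1))]

def forward(weights, x):
    hidden = sigmoid(x @ weights[0])
    return (hidden @ weights[1]).ravel()

def loss(weights, x, y):
    return float(np.mean((forward(weights, x) - y) ** 2))

def hill_climb(weights, x, y, steps=2000, scale=0.05):
    """Hill climber: keep a random weight perturbation only if it lowers the loss."""
    best = loss(weights, x, y)
    for _ in range(steps):
        candidate = [w + rng.normal(0, scale, w.shape) for w in weights]
        candidate_loss = loss(candidate, x, y)
        if candidate_loss < best:
            weights, best = candidate, candidate_loss
    return weights

# Synthetic training data: advertiser payoff = bid - reserve if sold, else 0.
bids = rng.uniform(0, 1, 500)
reserves = rng.uniform(0, 1, 500)
payoffs = np.where(bids >= reserves, bids - reserves, 0.0)
features = np.column_stack([bids, reserves])

net = hill_climb(init_net(), features, payoffs)
print(loss(net, features, payoffs))
```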

4.2.1 NN trained on a single opposing agent

In the first set of experiments, we train the neural network using samples from its current opponent. The two inputs of the network are the bid and the reserve price for each impression that arrives, and the single output is the revenue of the advertiser. We run the simulations using a neural network with 1 up to 4 hidden layers. Table 2 summarises the competitive ratio of each algorithm against this agent.

Algorithm Name Competitive Ratio Std
HBA-KM 0.9423 0.2697
Q-Learn 0.8699 0.3470
UCB 0.8469 0.4119
Table 2: Average Competitive Ratio against a NN agent, across all simulations

4.2.2 NN trained on a mixture of opposing agents

In the second set of experiments, we train the neural network on a mixture of the opposing publisher agents. Specifically, for subsequent chunks of 100 impressions, the NN agent is trained with samples from HBA-KM, UCB and Q-Learn respectively, throughout the first day of the simulations (1000 impressions). The reasoning behind this type of training is that in real world auctions we should expect to face algorithms trained in a variety of scenarios and, as such, we will not be able to model these agents explicitly. Table 3 shows that, against this opposing network, our algorithm stays highly competitive, even when compared to the online optimal benchmark via the competitive ratio.

Algorithm Name Competitive Ratio Std
HBA-KM 0.9501 0.2222
Q-Learn 0.7957 0.4336
UCB 0.8630 0.3897
Table 3: Average Competitive Ratio against a mixed Neural Network agent, across all simulations

5 Discussion and Conclusions

In this paper, we address the learning problem faced by the publisher in an ad exchange, an interaction that is both practically significant and scientifically challenging. We propose the use of a novel methodology for learning in multiagent interactions, showing how this enables us to achieve substantial empirical improvements in simulations involving the TAC AdX domain.

Although we have not performed a theoretical analysis of these extensions, we conjecture that HBA-KM best response actions will always converge to an approximately optimal policy against either stochastic or deterministic opponents within this posted price auction mechanism. This is based on the observation that the challenge is twofold, with each individual piece having known properties: the Random KM estimator can successfully approximate any given distribution, or a specific family of random distributions, while HBA converges to the correct beliefs over its hypothesised type space.

A useful future direction would be to extend this to the case where there are multiple advertisers and a publisher interacting with each other in the ad exchange market. The main question here becomes whether there is a way (a) to model explicitly every single one of your opponents, or (b) to model the market price, i.e. the price that an agent faces in each step of the auction, which is derived from the joint actions of every other agent in the auction. An algorithm that successfully answers either of those questions would go a long way towards understanding the implicit interactions between different agent types, and would probably have implications for other situations where modelling your opponent, or teammate, on the fly is the core of the problem, such as the ad hoc teams challenge [19].

References

  • [1] Albrecht, S.V., Ramamoorthy, S.: A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems. In: Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems. pp. 1155–1156. IFAAMAS (2013)
  • [2] Albrecht, S.V., Ramamoorthy, S.: On convergence and optimality of best-response learning with policy types in multiagent systems. In: Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence. pp. 12–21 (2014)

  • [3] Albrecht, S., Crandall, J., Ramamoorthy, S.: Belief and truth in hypothesised behaviours. Artificial Intelligence 235, 63–94 (2016), DOI: 10.1016/j.artint.2016.02.004
  • [4] Amin, K., Kearns, M., Key, P., Schwaighofer, A.: Budget optimization for sponsored search: Censored learning in mdps. In: Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence. pp. 54–63. AUAI Press (2012)
  • [5] Amin, K., Rostamizadeh, A., Syed, U.: Learning prices for repeated auctions with strategic buyers. In: Advances in Neural Information Processing Systems. pp. 1169–1177 (2013)
  • [6] Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2-3), 235–256 (2002)
  • [7] Barrett, S., Stone, P., Kraus, S.: Empirical evaluation of ad hoc teamwork in the pursuit domain. In: Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems. pp. 567–574. IFAAMAS (2011)
  • [8] Cesa-Bianchi, N., Gentile, C., Mansour, Y.: Regret minimization for reserve prices in second-price auctions. In: Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 1190–1204. SIAM (2013)
  • [9] Cole, R., Roughgarden, T.: The sample complexity of revenue maximization. In: Proceedings of the 46th Annual ACM Symposium on Theory of Computing. pp. 243–252. ACM (2014)

  • [10] Insights from buyers and sellers on the RTB opportunity. Forrester Consulting, commissioned by Google, White Paper (2011)
  • [11] Ganchev, K., Nevmyvaka, Y., Kearns, M., Vaughan, J.W.: Censored exploration and the dark pool problem. In: Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence. pp. 185–194 (2009)
  • [12] Ghosh, A., Rubinstein, B.I., Vassilvitskii, S., Zinkevich, M.: Adaptive bidding for display advertising. In: Proceedings of the 18th International Conference on World Wide Web. pp. 251–260. ACM (2009)
  • [13] Huang, Z., Mansour, Y., Roughgarden, T.: Making the most of your samples. In: Proceedings of the 16th ACM conference on Economics and Computation. pp. 45–60. ACM (2015)
  • [14] Kaplan, E.L., Meier, P.: Nonparametric estimation from incomplete observations. Journal of the American Statistical Association pp. 457–481 (1958)
  • [15] Mohri, M., Medina, A.M.: Learning theory and algorithms for revenue optimization in second-price auctions with reserve. In: Proceedings of the 31st International Conference on Machine Learning. pp. 262–270 (2014)
  • [16] Muthukrishnan, S.: Ad exchanges: Research issues. In: Internet and network economics, pp. 1–12. Springer (2009)
  • [17] Pin, F., Key, P.: Stochastic variability in sponsored search auctions: observations and models. In: Proceedings of the 12th ACM conference on Electronic Commerce. pp. 61–70. ACM (2011)
  • [18] Schain, M., Mansour, Y.: Ad exchange–proposal for a new trading agent competition game. In: Agent-Mediated Electronic Commerce. Designing Trading Strategies and Mechanisms for Electronic Markets, pp. 133–145. Springer (2013)
  • [19] Stone, P., Kaminka, G.A., Kraus, S., Rosenschein, J.S., et al.: Ad hoc autonomous agent teams: Collaboration without pre-coordination. In: AAAI (2010)
  • [20] Watkins, C.J., Dayan, P.: Q-learning. Machine Learning 8(3-4), 279–292 (1992)