Optimization of a SSP's Header Bidding Strategy using Thompson Sampling

07/09/2018 · by Grégoire Jauvion, et al. · AlephD · Université de Toulouse

Over the last decade, digital media (web or app publishers) generalized the use of real-time ad auctions to sell their ad spaces. Multiple auction platforms, also called Supply-Side Platforms (SSPs), were created. Because of this multiplicity, publishers started to create competition between SSPs. In this setting, there are two successive auctions: a second-price auction in each SSP and a secondary, first-price auction, called the header bidding auction, between SSPs. In this paper, we consider an SSP competing with other SSPs for ad spaces. The SSP acts as an intermediary between an advertiser wanting to buy ad spaces and a web publisher wanting to sell its ad spaces, and needs to define a bidding strategy to be able to deliver to the advertisers as many ads as possible while spending as little as possible. The revenue optimization of this SSP can be written as a contextual bandit problem, where the context consists of the information available about the ad opportunity, such as properties of the internet user or of the ad placement. Using classical multi-armed bandit strategies (such as the original versions of UCB and Exp3) is inefficient in this setting and yields a low convergence speed, as the arms are very correlated. In this paper we design and experiment with a version of the Thompson sampling algorithm that easily takes this correlation into account. We combine this Bayesian algorithm with a particle filter, which makes it possible to handle non-stationarity by sequentially estimating the distribution of the highest bid to beat in order to win an auction. We apply this methodology on two real auction datasets, and show that it significantly outperforms more classical approaches. The strategy defined in this paper is being developed to be deployed on thousands of publishers worldwide.

1. Introduction

Real-Time Bidding (RTB) is a mechanism widely used by web publishers to sell their ad inventory through auctions happening in real time. Generally, a publisher sells its inventory through different Supply-Side Platforms (SSPs), which are intermediaries who enable advertisers to bid for ad spaces. A SSP generally runs its own auction between advertisers, and submits the result of the auction to the publisher.

There are several ways for the publisher to interact with multiple SSPs. In the typical ad selling mechanism without header bidding, called the waterfall mechanism, the SSPs sit at different priorities and are configured at different floor prices (typically the higher the priority, the higher the floor price). The ad space is sold to the SSP with the highest priority who bids a price greater than its floor price.

With header bidding, all the SSPs are called simultaneously thanks to a piece of code running in the header of the web page. Then, they compete in a first-price auction which is called the header bidding auction thereafter. In this mechanism, a SSP with a lower priority can purchase the ad if it pays more than a SSP with a higher priority. Consequently, a RTB market with header bidding is more efficient than the waterfall mechanism for the publisher.

In this paper, we take the viewpoint of a single SSP buying inventory in a RTB market with header bidding. Based on the result of the auction it runs internally, it submits a bid in the header bidding auction and competes with the other SSPs. This ad-selling process is summarized in Figure 1. When it wins the header bidding auction, the SSP is paid by the advertiser displaying its ad, and pays the publisher the closing price of the header bidding auction. We study the problem of sequentially optimizing the SSP's bids in order to maximize its revenue. Quite importantly, we consider a censored setting where the SSP only observes whether it has won or lost once the header bidding auction has occurred. The bids of the other SSPs are not observed.

Figure 1. Ad-selling process

Typically, some digital content will call a Supply-Side Platform (SSP) when loading, which will itself call many advertisers for bids in a real-time auction. Some publishers have been calling several SSPs to introduce competition among them. This setting is called header bidding because the competition among SSPs has typically been happening on the client side, in the header of HTML pages. Header bidding is, in practice, a two-stage process, with several second-price auctions happening in various SSPs, the responses of which are aggregated in a final first-price auction. An SSP may want to adjust the bid it returns in order to adapt to the first-price auction context. This adjustment can be seen as an adaptive fee.

This optimization problem is formalized as a stochastic contextual bandit problem. The context is formed by the information available before the header bidding auction happens, including the result of the SSP's internal auction. In each context, the highest bid among the other SSPs in the header bidding auction is modeled with a random variable and is updated in a Bayesian fashion using a particle filter. Therefore, the reward (i.e., the revenue of the SSP) is stochastic. We design and experiment with a version of the Thompson sampling algorithm in order to optimize bids in a sequence of auctions.

The paper is organized as follows. We discuss earlier works in Section 2. In Section 3 we formalize the optimization problem as a stochastic contextual bandit problem. We then describe our version of the Thompson sampling algorithm in Section 4. Finally, in Section 5, we present our experimental results on two real RTB auction datasets, and show that our method outperforms two more traditional bandit algorithms.

2. Related work

(Vidakovic, 2017) provides a very clear introduction to the header bidding technology, and how it modifies the ad selling process in a RTB market.

Bid optimization has been much studied in the advertising literature. A lot of papers study the problem of optimizing an advertiser's bidding strategy in a RTB market, where the advertiser wants to maximize the number of ads it buys as well as a set of performance goals while keeping its spend below a certain budget (see (Wang et al., 2017) and references therein). (Fernandez-Tapia et al., 2015) and (Lee et al., 2013) state the problem as a control problem and derive methods to optimize the bid online. In (Zhang et al., 2014), the authors define a functional form for the bid (as a function of the impression characteristics) and write the problem as a constrained optimization problem.

The setting of an intermediary buying a good in an auction and selling it to other buyers (which is what the SSP does in our setting) has been widely studied in the auction theory literature. In (Myerson and Satterthwaite, 1983), the authors use the tools developed in (Myerson, 1981) (one of the most well-known papers in auction theory) to derive an optimal auction design in this setting, and (Feldman et al., 2010) and (Deng et al., 2014) analyze how the intermediary should behave to maximize its revenue.

(Gomes and Mirrokni, 2014) studies the optimal mechanism a SSP should employ for selling ads and analyzes optimal selling strategies. (Qin et al., 2017) analyzes the optimal behaviour of a SSP in a market with header bidding, and validates the approach on randomly generated auctions.

From an algorithmic perspective, the Thompson sampling algorithm was introduced in (Thompson, 1933). The papers (Kaufmann et al., 2012; Korda et al., 2013) studied its theoretical guarantees in parametric environments, while (Leike et al., 2016) studied it in non-parametric environments. Besides, a very clear overview of the particle filtering approach to update the posterior distribution is given in (Doucet et al., 2001; Murphy, 2012).

Bandit algorithms were already designed and studied for repeated auctions, including RTB auctions. For instance, in repeated second-price auctions, (Weed et al., 2016) construct a bandit algorithm to optimize a given bidder’s revenue, while (Cesa-Bianchi et al., 2015) design a bandit algorithm to optimize the seller’s reserve price.

In a setting very similar to ours, (Heidari et al., 2016) study the situation where a given SSP competes with other SSPs in order to buy an ad space. They design an algorithm that provably enables the SSP to win most of the auctions while only paying a little more than the expected highest price of the other SSPs. Though the problem seems similar, our objective is different: we want the SSP to maximize its revenue, and not necessarily to win most auctions with a small extra-payment. In particular we cannot neglect the closing price of the SSP’s internal auction in the optimization process.

We finally mention the work of (Kleinberg and Leighton, 2003) for the online posted-price auction: for each good in a sequence of identical goods, a seller chooses and announces a price to a new buyer, who buys the good provided the price does not exceed their private valuation (see also (Mohri and Medina, 2014, 2015) when the seller faces strategic buyers). Though their problem is different, the shape of their reward function is very similar to ours. The authors show that the classical UCB1 and Exp3 bandit algorithms applied to discretized prices are worst-case optimal under several assumptions on the sequence of the buyers' valuations. In our paper we do not tackle the worst case and instead use prior knowledge on the ad auction datasets (i.e., an empirically-validated parametric model) to better optimize the SSP's revenue.

3. Problem statement

3.1. RTB market

We represent the RTB market as an infinite sequence of time-ordered impressions happening at times $t = 1, 2, \dots$. We denote by $\mathcal{I}_t$ the sequence of impressions happening before time $t$ (including $t$).

The impression happening at time $t$ is characterized by a context $x_t$, which summarizes all the information relative to impression $t$ that is available before the header bidding auction starts. It may contain the ad placement (where it is located on the web page), some properties of the internet user (for example their operating system), or the time of the day. An important variable of the context, specific to our setting, is the closing price of the SSP's internal auction, which is known before the header bidding auction happens.

We assume that the context is categorical with a finite number $C$ of categories. A continuous variable can be discretized to meet this assumption. Without loss of generality, we assume that the categories are $\{1, \dots, C\}$. We denote by $\mathcal{I}_t^c$ the subsequence of $\mathcal{I}_t$ containing all impressions $s$ such that $x_s = c$.

3.2. Ad selling process with header bidding

We assume that $K$ SSPs compete in the header bidding auctions (possibly bidding $0$ if they are not interested in purchasing the ad). We denote by $b_t^k$ the bid of the $k$-th SSP in the header bidding auction for impression $t$. As the header bidding auction is a first-price auction, its closing price is $\max_{1 \le k \le K} b_t^k$.

From now on, we consider the problem from the standpoint of one of these SSPs, say the first one, referred to simply as "the SSP" below. We denote by $b_t = b_t^1$ the bid it submits in the header bidding auction, which is the variable to optimize. We also write $m_t = \max_{2 \le k \le K} b_t^k$ for the highest bid among the other SSPs.

For each impression $t$, we assume that the SSP runs an internal auction between advertisers, whose closing price is denoted $p_t$. This is the amount paid by the advertiser winning the internal auction to the SSP, should the SSP win the header bidding auction. Note that we do not need to know the detailed internal auction mechanism, but only its closing price.

Before header bidding, an SSP would run a second-price auction (for instance with a highest bid of $10, closing at the second-highest bid) and would then respond to the publisher with this closing price minus its fee. In that setting, the advertiser pays the closing price, the publisher receives the SSP's response, and the SSP keeps the difference as its fee. In a header bidding context, the SSP competes with other SSPs in a first-price auction: it may lose the opportunity by taking too large a fee, or pay too much when it is sure to win and takes too small a fee.
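As a purely hypothetical illustration of these pre-header-bidding mechanics (the numbers below are made up for illustration; the paper does not specify them):

```python
# Hypothetical illustration of an SSP's fee in a plain second-price auction.
# All numbers are illustrative placeholders, not values from the paper.
advertiser_bids = [10.0, 8.0, 5.0]            # bids collected in the SSP's internal auction
closing_price = sorted(advertiser_bids)[-2]   # second-price rule: the winner pays 8.0
fee_rate = 0.15                               # hypothetical SSP fee rate
response_to_publisher = closing_price * (1 - fee_rate)

print(f"advertiser pays : {closing_price:.2f}")                          # 8.00
print(f"publisher gets  : {response_to_publisher:.2f}")                  # 6.80
print(f"SSP fee         : {closing_price - response_to_publisher:.2f}")  # 1.20
```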

3.3. Revenue function for the SSP

The revenue function of the SSP at impression $t$ can be written as $r_t(b_t) = (p_t - b_t)\,\mathbb{1}\{b_t \ge m_t\}$. Indeed:

  • When $b_t \ge m_t$, the SSP wins the header bidding auction. It is paid $p_t$ by the advertiser winning the internal auction, and it pays $b_t$ to the publisher.

  • Otherwise, the SSP does not display any ad and gets no revenue in the auction.
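A minimal sketch of this revenue function, with names matching the reconstruction above ($p_t$ the internal closing price, $b_t$ the bid, $m_t$ the highest competing bid; the tie-breaking rule $b_t \ge m_t$ is an assumption):

```python
def ssp_revenue(p_t: float, b_t: float, m_t: float) -> float:
    """Revenue of the SSP for one impression: it earns (p_t - b_t) when its bid
    wins the first-price header bidding auction, and 0 otherwise."""
    return (p_t - b_t) if b_t >= m_t else 0.0

# The revenue is positive only when m_t <= b_t < p_t.
assert ssp_revenue(p_t=6.0, b_t=4.0, m_t=3.5) == 2.0   # auction won, positive margin
assert ssp_revenue(p_t=6.0, b_t=3.0, m_t=3.5) == 0.0   # auction lost
```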

In Figure 2 we plot the SSP's revenue as a function of its bid $b_t$, for two sets of values of the closing price $p_t$ of the internal auction and of the highest bid $m_t$ among the other SSPs.

Figure 2. The SSP's revenue as a function of its bid $b_t$

Note that in the setting described here, we ignore some factors having an impact on the SSP's revenue. Indeed, the SSP may charge a fee to the advertiser in addition to the closing price of the internal auction it runs. Also, the cost of running the internal auction may lower the SSP's revenue. These factors would impact the revenue function, but the strategy described in this paper would remain applicable.

3.4. Optimization problem statement

Before the header bidding auction for impression $t$ happens, the value $m_t$ of the highest bid among the other SSPs is unknown and is modeled by a random variable $M_t$. Thus, the revenue optimization problem over $T$ impressions can be expressed as follows:

(1)   $\max \; \mathbb{E}\left[\sum_{t=1}^{T} (p_t - b_t)\,\mathbb{1}\{b_t \ge M_t\}\right],$

where the maximum is over the bids $b_t$ that the SSP can choose as a function of the past observations.

It would be tempting to model the variables $M_t$ as independent and identically distributed within any context $c$, with an unknown distribution $\nu_c$. Under this assumption, the task presented here boils down to a contextual stochastic bandit problem. A closer look at the data, however, shows that there are significant non-stationarities in time. We explain below that our final model does address this issue, by the use of a particle filter within the bandit algorithm.

We emphasize that, after the header bidding auction for impression $t$ has occurred, the SSP does not observe the value of the bid $m_t$, but only observes whether it has won or lost the header bidding auction, i.e., the indicator $y_t = \mathbb{1}\{b_t \ge m_t\}$. This censorship issue must be tackled in the optimization methodology.

4. Revenue optimization using Thompson sampling

In this section we present a method to sequentially optimize the bids $b_t$. It combines the Thompson sampling algorithm with a parametric model for the distributions $\nu_c$ (recall that $\nu_c$ is the distribution of the other SSPs' highest bids within context $c$). Note that all the contexts are modeled independently.

4.1. Parametric estimation of $\nu_c$

We introduce a family of distributions $(\nu_\theta)_{\theta \in \Theta}$ parametrized by $\theta$, and we denote by $F_\theta$ the corresponding cumulative distribution functions. For each context $c$, we assume that the distribution $\nu_c$ of the other SSPs' highest bids belongs to this family; let $\theta_c \in \Theta$ be such that $\nu_c = \nu_{\theta_c}$.

Following the Thompson sampling method, we fix a prior distribution $\pi_{0,c}$ over $\theta_c$. Then, for all $t$, we consider the posterior distribution $\pi_{t,c}$ given all the observations available at the end of the $t$-th auction, i.e., the censored observations $y_s = \mathbb{1}\{b_s \ge m_s\}$ for the past impressions $s \le t$ with context $x_s = c$.

Bayes' rule yields the following expression for $\pi_{t,c}$:

(2)   $\pi_{t,c}(\theta) \;\propto\; \pi_{0,c}(\theta) \prod_{s \le t \,:\, x_s = c} F_\theta(b_s)^{\,y_s} \bigl(1 - F_\theta(b_s)\bigr)^{\,1 - y_s}.$
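As an illustration, here is a sketch of the (unnormalized) log-density corresponding to Equation (2), written for the lognormal family used later in Section 4.4; the function and variable names are ours:

```python
import numpy as np
from scipy.stats import lognorm

def log_posterior(theta, bids, wins, log_prior):
    """Unnormalized log-posterior of Eq. (2) for one context.

    theta     : (mu, sigma), parameters of the lognormal model of the highest competing bid
    bids      : array of past bids b_s submitted in this context
    wins      : array of censored observations y_s = 1{b_s >= m_s}
    log_prior : callable returning log pi_0(theta)
    """
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf
    # F_theta(b) = P(M <= b) for a lognormal with log-mean mu and log-sd sigma
    F = lognorm.cdf(bids, s=sigma, scale=np.exp(mu))
    F = np.clip(F, 1e-12, 1.0 - 1e-12)  # numerical safety near 0 and 1
    return log_prior(theta) + np.sum(wins * np.log(F) + (1 - wins) * np.log(1 - F))
```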

4.2. Overview of the methodology

In our model the Thompson sampling algorithm unfolds as follows: before any impression $t$, with context $c = x_t$,

  • Sample a value $\tilde\theta$ from the posterior distribution $\pi_{t-1,c}$;

  • Compute the bid $b_t$ that would maximize the SSP's expected revenue if $\theta_c = \tilde\theta$ (see below);

  • Observe the auction outcome $y_t = \mathbb{1}\{b_t \ge m_t\}$ and update the posterior to $\pi_{t,c}$.

As the particle filter provides a discrete approximation of the posterior distribution, the sampling step is straightforward. The optimization of the bid is a one-dimensional optimization problem: when $\theta_c = \tilde\theta$, the maximal SSP expected revenue is

$\max_{b \ge 0} \; (p_t - b)\, F_{\tilde\theta}(b).$

There is no closed-form solution in general, but this problem can be solved numerically, for example by using Newton's method.
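A minimal sketch of this one-dimensional optimization for the lognormal model, using a bounded scalar optimizer rather than Newton's method (the helper name is ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import lognorm

def optimal_bid(p_t: float, mu: float, sigma: float) -> float:
    """Bid maximizing the expected revenue (p_t - b) * F_theta(b) when the
    highest competing bid follows a lognormal with parameters (mu, sigma)."""
    def neg_expected_revenue(b):
        return -(p_t - b) * lognorm.cdf(b, s=sigma, scale=np.exp(mu))
    # The expected revenue is non-positive for b >= p_t, so we search in [0, p_t].
    result = minimize_scalar(neg_expected_revenue, bounds=(0.0, p_t), method="bounded")
    return float(result.x)
```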

The difficult step of the algorithm is the update of the posterior, which is explained in the next section.

4.3. Updating the posterior distribution

It would be difficult to sample directly from the posterior distribution $\pi_{t,c}$, which does not have a simple or tractable form. Even the use of MCMC methods like Metropolis-Hastings would be hazardous, since computing the density of the posterior distribution has a cost linear in the number of past observations, which is huge in advertising. (Indeed, the profile of the payoff function induces a posterior distribution that cannot be simplified. Hence, computing the posterior density exactly cannot be done better than by computing the product of all Bayesian updates, which in practice is intractable and rules out MCMC sampling.)

To overcome these difficulties, we approximate the posterior distribution with a particle filter, a powerful sequential Monte Carlo method for Hidden Markov Models (HMM). For an introduction to HMMs and particle filtering, we refer to (Cappé et al., 2005). The basic idea of a particle filter is to approximate the sequence of posterior distributions by a sequence of discrete probability distributions which are derived from one another by an evolution procedure (which may include a selection step). The posterior distribution $\pi_{t,c}$ is estimated by a discrete distribution on $N$ points called particles. The particles are denoted by $\theta_{t,c}^{(1)}, \dots, \theta_{t,c}^{(N)}$ and their respective weights by $w_{t,c}^{(1)}, \dots, w_{t,c}^{(N)}$. The evolution procedure and the selection step we use are described below.

A very important strength of the particle filter approach is that it allows us to handle non-stationarity: the HMM model encompasses the possibility that the hidden variable (here, the unknown parameter $\theta_c$) evolves in time according to a Markovian dynamic with transition kernel $K$, thus forming an unobserved sequence $(\theta_{t,c})_{t \ge 1}$. We use this possibility by assuming that the parameter $\theta_{t,c}$ is equal to $\theta_{t-1,c}$ plus a small step in an unknown direction: this makes it possible to handle parameter drift directly inside the model.

The theory of particle filters for general state space HMM (Crisan and Doucet, 2000; Doucet et al., 2001; Crisan and Doucet, 2002; Douc et al., 2011) suggests that, in cases such as ours, the particle approximations converge to the true posterior distributions of the parameter when the number of particles tends to infinity.

4.3.1. Evolution: updating the distribution

Recall that we run $C$ independent instances of Thompson sampling, one for each context. Next we focus on one context $c$ and recall how to update the particle distribution in the particle filter. To simplify the notation, we write $t-1$ and $t$ for the times of two consecutive impressions within context $c$, even if other contexts appeared in between, and drop the index $c$ from the particles and weights.

The update consists of two steps. First, new particles $\theta_t^{(i)}$ are sampled from a proposal distribution $q(\cdot \mid \theta_{t-1}^{(i)})$. We then compute new unnormalized weights by importance sampling:

(3)   $\tilde w_t^{(i)} = w_{t-1}^{(i)} \, \dfrac{p\bigl(y_t \mid \theta_t^{(i)}\bigr)\, K\bigl(\theta_{t-1}^{(i)}, \theta_t^{(i)}\bigr)}{q\bigl(\theta_t^{(i)} \mid \theta_{t-1}^{(i)}\bigr)},$

where $K$ is the transition kernel of the hidden process and $p(y_t \mid \theta) = F_\theta(b_t)^{y_t}\bigl(1 - F_\theta(b_t)\bigr)^{1 - y_t}$ is the likelihood of the censored observation. Here, we may simply take the proposal distribution $q$ to be equal to the transition kernel $K$, which yields:

(4)   $\tilde w_t^{(i)} = w_{t-1}^{(i)} \, p\bigl(y_t \mid \theta_t^{(i)}\bigr).$

The normalized weights can then be computed as $w_t^{(i)} = \tilde w_t^{(i)} \big/ \sum_{j=1}^{N} \tilde w_t^{(j)}$.

4.3.2. Selection: resampling step

The basic update described previously generally fails after a few steps because of a well-known and general problem: weight degeneracy. Indeed, most of the particles soon get a negligible probability, and the discrete approximation becomes very poor. A standard strategy used to tackle this issue is the use of a resampling step when the degree of degeneracy is considered to be too high. We use the following methodology given in (Murphy, 2012) for resampling:

  • Compute the effective sample size $N_{\mathrm{eff}} = 1 \big/ \sum_{i=1}^{N} \bigl(w_t^{(i)}\bigr)^2$ to quantify the degree of degeneracy of the particle filter.

  • If $N_{\mathrm{eff}} < N_{\min}$ ($N_{\min}$ is a hyperparameter of the particle filter), resample all the particles by sampling $N$ times with replacement from the current set of weighted particles $\bigl(\theta_t^{(i)}, w_t^{(i)}\bigr)_{1 \le i \le N}$. The result is an unweighted sample of $N$ particles, so we set the new weights to $1/N$.

There exist some alternative resampling schemes that could be used: see (Douc et al., 2011) for a presentation of some of them, and for a discussion on their convergence properties and computation cost.
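Here is a sketch of one evolution-and-selection step for a single context, combining the weight update of Equation (4) with the effective-sample-size resampling rule above; the Gaussian random-walk kernel and all names anticipate the modelling choices of Section 4.4 and are our own notation:

```python
import numpy as np
from scipy.stats import lognorm

def particle_filter_step(particles, weights, b_t, win, drift_std, n_min, rng):
    """One evolution-and-selection update of the particle approximation.

    particles : (N, 2) array of (mu, sigma) parameters
    weights   : (N,) normalized weights
    b_t       : bid submitted in this auction
    win       : 1 if b_t won the header bidding auction, 0 otherwise
    drift_std : standard deviation of the Gaussian random-walk kernel
    n_min     : resampling threshold on the effective sample size
    """
    N = len(particles)

    # Evolution: propagate each particle with the transition kernel (proposal = kernel, Eq. 4)
    particles = particles + rng.normal(0.0, drift_std, size=particles.shape)
    particles[:, 1] = np.maximum(particles[:, 1], 1e-3)  # keep sigma positive

    # Importance weights: likelihood of the censored observation
    F = lognorm.cdf(b_t, s=particles[:, 1], scale=np.exp(particles[:, 0]))
    F = np.clip(F, 1e-12, 1.0 - 1e-12)
    weights = weights * (F if win else 1.0 - F)
    weights = weights / weights.sum()

    # Selection: resample with replacement when the effective sample size is too low
    ess = 1.0 / np.sum(weights ** 2)
    if ess < n_min:
        idx = rng.choice(N, size=N, replace=True, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights
```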


Time and space complexities. Recall that $N$ is the number of particles and that $C$ is the number of contexts. After each new impression $t$, since $t$ falls into only one context $c$, the evolution and selection steps described above need only be carried out for this particular $c$. This thus requires only $O(N)$ elementary operations per impression (including calls to the cumulative distribution function $F_\theta$). As for space complexity, a direct upper bound is $O(NC)$, since we need to store the particle and weight vectors for each context.

4.4. Implementation of Thompson sampling

We may now detail our modelling and algorithmic choices for the particle filter within the Thompson sampling algorithm.

Distribution of the highest bids $m_t$.

We model the highest bids among the other SSPs with a lognormal distribution, a standard choice in econometrics and finance. Lognormal distributions are parametrized by $\theta = (\mu, \sigma)$, where $\mu \in \mathbb{R}$ and $\sigma > 0$. Here, $\mu$ and $\sigma$ are respectively the location and scale parameters of the normally distributed logarithm $\log m_t$.
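For reference, the cumulative distribution function $F_\theta$ under this parametrization can be evaluated as follows (only the mapping to SciPy's shape/scale convention is shown; the helper name is ours):

```python
import numpy as np
from scipy.stats import lognorm

def highest_bid_cdf(b, mu, sigma):
    """F_theta(b) = P(M <= b) when log(M) ~ Normal(mu, sigma^2).

    SciPy's lognorm uses the shape/scale convention s = sigma, scale = exp(mu)."""
    return lognorm.cdf(b, s=sigma, scale=np.exp(mu))
```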

Particle filter.

We write $\theta_c = (\mu_c, \sigma_c)$ for the parameters of the lognormal distribution associated with context $c$. The particle filter for the posterior distributions works as follows:

  1. In order to handle non-stationarity, we model the parameters $\theta_{t,c} = (\mu_{t,c}, \sigma_{t,c})$ by Markov chains such that $\mu_{t+1,c} = \mu_{t,c} + \eta_t$ and $\sigma_{t+1,c} = \sigma_{t,c} + \eta'_t$, where $\eta_t$ and $\eta'_t$ are independent Gaussian variables with mean $0$ and standard deviation $\sigma_\epsilon$.

  2. At each time $t$, for each context $c$, we use $N$ particles $\theta_{t,c}^{(1)}, \dots, \theta_{t,c}^{(N)}$.

  3. As explained above, the particles evolve at step $t$ according to the same dynamic as the unobserved parameters $\theta_{t,c}$: each component of each particle is perturbed by an independent Gaussian variable with mean $0$ and standard deviation $\sigma_\epsilon$.

  4. We use a uniform distribution (over a bounded box) as prior $\pi_{0,c}$ for the parameter $\theta_c$, and thus uniformly generate the components of the initial particles. Because of the high number of auctions in each context, the choice of the prior distribution has little impact on the result, as long as its support contains the parameter $\theta_c$.

  5. Finally, we choose a fixed threshold $N_{\min}$ for the resampling criterion $N_{\mathrm{eff}} < N_{\min}$.
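Putting the pieces together, here is a minimal sketch of the per-impression Thompson sampling loop for a single context. It reuses the hypothetical helpers `optimal_bid` and `particle_filter_step` sketched earlier; the number of particles, prior box, drift intensity and resampling threshold below are placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                 # number of particles (placeholder)
drift_std = 0.01         # sigma_epsilon, intensity of the parameter drift (placeholder)
n_min = N / 2            # resampling threshold on the effective sample size (placeholder)

# Uniform prior: components of the initial particles drawn uniformly in a box
particles = np.column_stack([rng.uniform(-2.0, 2.0, N),    # mu
                             rng.uniform(0.1, 2.0, N)])    # sigma
weights = np.full(N, 1.0 / N)

def thompson_step(p_t, observe_win):
    """One auction in this context: sample, bid, observe, update."""
    global particles, weights
    # 1. Sample a parameter from the discrete posterior approximation
    i = rng.choice(N, p=weights)
    mu, sigma = particles[i]
    # 2. Bid the price maximizing the expected revenue under the sampled parameter
    b_t = optimal_bid(p_t, mu, sigma)
    # 3. Observe the censored outcome (1 if b_t >= m_t, else 0) and update the posterior
    win = observe_win(b_t)
    particles, weights = particle_filter_step(particles, weights, b_t, win,
                                              drift_std, n_min, rng)
    return b_t, win
```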

5. Experiments on RTB auction datasets

5.1. Construction of the datasets

In practice, the SSPs generally do not share their bids with one another, and we do not have a dataset with the bids from all SSPs in header bidding auctions. The datasets we have used in these experiments give, for two web publishers, the bids as well as the names of the advertisers in RTB auctions run by a particular SSP over one week, in a setting without header bidding.

For these two web publishers, a dataset giving both the bids in the internal auction and the bids from the other SSPs in the header bidding auction has been artificially built in the following way:

  • All the advertisers competing in the RTB auctions (typically a few dozen) have been randomly assigned to one of two groups of advertisers, named A and B.

  • In each auction, the bids coming from advertisers in group A are taken to be the bids of the internal auction run by the SSP, and the bids coming from advertisers in group B are taken to be the bids coming from the other SSPs in the header bidding auction.

  • Hence, in a given auction $t$, the closing price $p_t$ of the internal auction is given by the second-highest bid from advertisers in group A, and the highest bid $m_t$ among the other SSPs is given by the highest bid from advertisers in group B.

  • The auctions with fewer than two bids from advertisers in group A, or with no bid from advertisers in group B, have been removed from the dataset.

These two datasets are named P1 and P2 hereafter. A brief description is given in Table 1. We give the share of auctions where $m_t < p_t$, which is the share of auctions where the SSP could have won the header bidding auction while generating a positive revenue, by choosing a bid $b_t$ in $[m_t, p_t)$.

                                         P1           P2
Number of auctions                       1,496,294    410,840
Number of users                          875,634      269,272
Number of ad placements                  3,526        31
Share of auctions where $m_t < p_t$      …            …
Table 1. Some properties of the datasets P1 and P2.

The experiments have been performed in the two following configurations:

  • Stationary environment: the data is shuffled. This configuration is used to evaluate the strategy in a stationary environment

  • Non-stationary environment: the data is sorted in chronological order. In this case, the data is non-stationary, as the bids highly depend on the time of the day. This configuration is used to evaluate the strategy in a non-stationary environment

Note that all the bids have been multiplied by a constant.

5.2. Definition of the contexts

In the experiments, we define the context of auction $t$ by the closing price $p_t$ of the internal auction. The closing price is transformed into a categorical context by discretizing it into $C$ disjoint bins.

The $i$-th bin contains all auctions $t$ such that $\hat q\bigl((i-1)/C\bigr) \le p_t < \hat q\bigl(i/C\bigr)$, where $\hat q$ is the empirical quantile function of the closing prices estimated on the data. Consequently, each one of the $C$ contexts contains approximately the same number of auctions.
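A sketch of this quantile-based discretization of the internal closing price into $C$ contexts (the helper names are ours):

```python
import numpy as np

def make_context_bins(closing_prices, n_contexts):
    """Bin edges at the empirical quantiles i/C, i = 1, ..., C-1, so that each
    context contains roughly the same number of auctions."""
    quantile_levels = np.arange(1, n_contexts) / n_contexts
    return np.quantile(closing_prices, quantile_levels)

def context_of(p_t, bin_edges):
    """Index in {0, ..., C-1} of the context of an auction with closing price p_t."""
    return int(np.digitize(p_t, bin_edges))

# Usage: edges = make_context_bins(training_prices, n_contexts); c = context_of(p_t, edges)
```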

The number of contexts $C$ should be chosen carefully. A high number of contexts makes it possible to model more precisely the distribution of the highest bid among the other SSPs, which is modeled independently on each context, at the price of a slower convergence. In the experiments, we have chosen a value of $C$ that yields good performance on both datasets.

5.3. Baseline strategies

We define in this section the baseline strategies used to assess the quality of the Thompson sampling strategy. They correspond to the use of classical multi-armed bandit (MAB) models (Cesa-Bianchi and Lugosi, 2006). Each arm $k$ corresponds to a coefficient $\alpha_k$ applied to the closing price of the internal auction to obtain the bid of the SSP, $b_t = \alpha_k\, p_t$. Note that this strategy implies that $\alpha_k \le 1$, as the revenue of the SSP cannot be positive when $b_t > p_t$.

In each auction $t$, the SSP chooses an arm $k_t$ and bids $b_t = \alpha_{k_t}\, p_t$. Then, it receives a reward equal to $r_t(b_t) = (p_t - b_t)\,\mathbb{1}\{b_t \ge m_t\}$, and the rewards associated with the other arms remain unknown.

The goal of the SSP is to maximize its expected cumulative reward. In the MAB literature, this reward maximization is typically defined via the minimization of the equivalent measure of cumulative regret. The regret is the difference between the cumulative reward of the SSP and the one that could be acquired by a policy assumed to be optimal. In our case, the optimal policy (or oracle strategy) consists in playing, for each auction $t$, the price that exactly matches the highest competing bid $m_t$ whenever $m_t < p_t$ (and losing the auction otherwise).

We consider two baseline strategies, corresponding to two distinct state-of-the-art policies:

  • the Upper Confidence Bound (UCB) policy (Auer et al., 2002a; Bubeck and Cesa-Bianchi, 2012). Under the assumption that the rewards of each arm are independent, identically distributed, and bounded, the UCB policy achieves an order-optimal upper bound on the cumulative regret;

  • the Exponential-weight algorithm for Exploration and Exploitation (Exp3) policy (Auer et al., 2002b; Bubeck and Cesa-Bianchi, 2012). Without any assumption on the possibly non-stationary sequence of rewards (except for boundedness), the Exp3 policy achieves a worst-case order-optimal upper bound on the cumulative regret.

The number of arms has a high impact on the performance of these baseline strategies. A high number of arms makes the discretization of the coefficient applied to the bid very precise, but slows down the convergence, as the average reward of each arm is learnt independently. We have used a fixed number of arms in the experiments.
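For concreteness, here is a sketch of the UCB baseline with multiplicative arms, where arm $k$ bids $b_t = \alpha_k\, p_t$ and only the reward of the pulled arm is observed; the coefficient grid and the exploration bonus are placeholder choices (UCB1 also assumes rewards rescaled to $[0, 1]$):

```python
import numpy as np

class UCBCoefficientBidder:
    """UCB1-style baseline: each arm is a coefficient alpha_k <= 1 applied to
    the internal closing price p_t to form the bid b_t = alpha_k * p_t."""

    def __init__(self, coefficients):
        self.alphas = np.asarray(coefficients)
        self.counts = np.zeros(len(self.alphas))
        self.sums = np.zeros(len(self.alphas))
        self.t = 0
        self.k = 0

    def choose_bid(self, p_t):
        self.t += 1
        if np.any(self.counts == 0):
            self.k = int(np.argmin(self.counts))          # pull each arm once first
        else:
            means = self.sums / self.counts
            bonus = np.sqrt(2.0 * np.log(self.t) / self.counts)
            self.k = int(np.argmax(means + bonus))
        return self.alphas[self.k] * p_t

    def update(self, reward):
        """Reward of the pulled arm: (p_t - b_t) if the bid won, 0 otherwise."""
        self.counts[self.k] += 1
        self.sums[self.k] += reward

# e.g. bidder = UCBCoefficientBidder(np.linspace(0.1, 1.0, 10))
```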

5.4. Evaluation of the Thompson sampling strategy

This section compares the performance of the Thompson sampling strategy (TS) defined in Sections 4 and 4.4 with the performance of the two baseline strategies (UCB and Exp3) introduced in Section 5.3, on the datasets P1 and P2.

The performance of a strategy after $T$ auctions is measured with the average reward $\bar R_T = \frac{1}{T} \sum_{t=1}^{T} r_t(b_t)$.

Figures 3 and 4 plot the average reward as a function of $t$ on dataset P1, in a stationary environment (i.e., on the shuffled dataset) and in a non-stationary environment (i.e., on the chronologically ordered dataset). Figures 5 and 6 plot the same results on dataset P2.

The TS strategy clearly outperforms the baseline strategies Exp3 and UCB in both stationary and non-stationary environments. Moreover, one can observe that the convergence of the TS strategy is faster than that of the Exp3 and UCB strategies. This convergence speed is expressed in terms of the smallest number of auctions needed by the strategy to reach the overall average reward on the whole dataset.

On dataset P1, the average reward of the TS strategy is … in the stationary case and … in the non-stationary case. The corresponding success rates (i.e., the share of auctions won) are … and … respectively.

On dataset P2, the average reward of the TS strategy is … in the stationary case and … in the non-stationary case. The corresponding success rates are … and … respectively.

Figure 3. Evolution of the average rewards of TS, UCB, and Exp3 for dataset P1 (stationary environment).
Figure 4. Evolution of the average rewards of TS, UCB, and Exp3 for dataset P1 (non-stationary environment).
Figure 5. Evolution of the average rewards of TS, UCB, and Exp3 for dataset P2 (stationary environment).
Figure 6. Evolution of the average rewards of TS, UCB, and Exp3 for dataset P2 (non-stationary environment)

5.5. Advantages of the Thompson sampling strategy

The main advantage of the Thompson sampling strategy introduced in this paper is that it relies on a probabilistic model of the highest bid $m_t$ among the other SSPs, which is the unknown variable. The revenue function of the problem is then used explicitly to determine an optimal bid in each auction. In the Exp3 and UCB strategies, the rewards corresponding to the different arms are learnt independently, whereas they are highly correlated because they derive from a common revenue function.

In addition, as argued above, the use of a particle filter within Thompson sampling makes it possible to handle parameter drift elegantly, a problem which is still under investigation for classical bandit algorithms. We ran experiments using the non-stationary bandit algorithms of (Garivier and Moulines, 2011), but the results were not better than with plain UCB strategies. In contrast, the algorithm proposed above significantly outperforms these classical approaches.

The price for this improvement is an increased computational cost (proportional to the number of particles), and the presence of an additional parameter, the drift standard deviation $\sigma_\epsilon$, which controls the intensity of the drift. It must be chosen so as to reach a good tradeoff between the accuracy of the discrete approximation and the adaptation to the parameter drift. Experiments show, however, that even a very rough choice of $\sigma_\epsilon$ leads to good performance, and that over-estimating the drift intensity has little impact.

5.6. Discussion on the parameters of the Thompson sampling strategy

5.6.1. Choice of the contexts

As explained in Section 5.2, the number of contexts $C$ has a high impact on the performance of the strategy and should be chosen carefully.

In the experiments presented in this paper, we have defined the context as the closing price of the internal auction run by the SSP. This definition of the context is intuitively a good choice, as the result of the internal auction measures the value of the ad space being sold according to the advertisers bidding in this auction. This value is probably highly correlated with the bids of the other SSPs for this ad space.

The definition of the contexts could be improved by using characteristics of the ad placement or of the internet user. Experiments show that defining the context as the ad placement does not improve the results.

5.6.2. Choice of the parametric distribution

We chose the lognormal distribution to model the highest bid among the other SSPs both because it is frequently used in practice for online auctions and because it fitted our datasets reasonably well. However, when the number of other SSPs is sufficiently large, using generalized extreme value distributions or generalized Pareto distributions (Embrechts et al., 1997; Coles, 2001) might be more relevant.

Some preliminary studies we conducted show that Fréchet distributions fit well the sample maxima of the bids within each context. The reason is that such probability distributions are stable and relevant to model and to track the extreme values (sample maxima or peaks over threshold) of independent and identically distributed random variables, whatever the behavior of their tail distributions. In such situations, the associated Thompson sampling strategy could yield even higher cumulative revenues.

5.6.3. Computation time

We have measured the running time (the CPU response time) of the TS strategy on a standard computer. Updating the full distribution model and estimating the optimal price for an auction takes a few milliseconds, which is below the time limit within which the optimal price must be decided.

Note that the running time is strongly related to the parametric probability distribution modeling the highest bid among other SSPs and to the number of particles used to approximate the corresponding posterior distributions.

6. Conclusion and future work

We have formalized the problem of optimizing the sequence of bids of a given SSP as a contextual stochastic bandit problem. This problem is tackled using the Thompson sampling algorithm, which relies on a Bayesian parametric estimation of the distribution of the highest bid among the other SSPs. The posterior distribution of the model parameters is approximated with a particle filtering approach, which provides a very efficient way to sequentially update the distribution and to sample from it, as required by the Thompson sampling algorithm.

The results obtained on datasets artificially built from real RTB auctions show that the Thompson sampling strategy outperforms other bandit approaches for this problem. Also, the estimation of the optimal bid for each impression is fast enough and the strategy can be used in real conditions where a bid prediction must be performed in a few milliseconds. This strategy is currently being developed to be deployed on thousands of web publishers worldwide.

The particle filter naturally models the non-stationarity of the bid distributions through the random-walk hypothesis on the parameters $\theta_{t,c}$. This hypothesis should be linked to the non-stationarity of the distributions, as the standard deviation $\sigma_\epsilon$ of the random walk controls how fast past observations are forgotten.

In the approach described here, the contexts are modeled independently. The learning speed of the algorithm could be increased by taking into account the correlations between the contexts. In particular, these correlations may be very high when the context is defined by a continuous variable. This point may lead to improvements in the strategy.

Finally, we are planning to explore further how the performance of the strategy depends on the parametric distribution used to model the highest bid among the other SSPs.


Acknowledgment: This work was partially funded by the French Government under the grant ANR-13-CORD-0020 (ALICIA Project).

References