 # A Survey of Adwords Problem With Small Bids In a Primal-dual Setting: Greedy Algorithm, Ranking Algorithm and Primal-dual Training-based Algorithm

The Adwords problem has always been an interesting internet advertising problem. There are many ways to solve the Adwords problem with the adversarial order model, including the Greedy Algorithm, the Balance Algorithm, and the Scale-bid Algorithm, which is also known as MSVV. In this survey, I will review the Adwords problem with different input models under a primal-dual setting, and with the small-bid assumption. In the first section, I will focus on Adwords with adversarial order model, and use duality to prove the efficiency of the Greedy Algorithm and the MSVV algorithm. Next, I will look at a primal-dual training-based algorithm for the Adwords problem with the IID model.


## 1 Introduction

Nowadays, almost all technology companies, such as Google and Yahoo, face a combinatorial problem of assigning user queries to advertisers to maximize total revenue. Specifically, a search engine might receive offers from different companies: each month, company $u$ wants to pay at most $B_u$ dollars in advertising, and company $u$ is willing to pay $w_{uv}$ if its ad appears when query $v$ is searched. The goal of the search engine is to maximize its profit, which equals the total amount of money spent by the advertisers.

### 1.1 Combinatorial Formulation

Mathematically, we are given a set of nodes $U$ (the advertisers), with each $u \in U$ having budget $B_u$. At each time $t$: a node $v \in V$ arrives, bids $w_{uv}$ are displayed, and we must decide either to match $v$ to some $u$ and earn $\min(w_{uv}, R_u)$, or not to match $v$ to any node (i.e., no ad is shown when $v$ is searched). Here $R_u$ stands for the remaining budget of node $u$. We aim to maximize the total amount of money spent by the advertisers.

### 1.2 Duality Review

Before I dive into the IP formulation of the problem, I first give a quick review of primal-dual formulation and duality theorem.
For every LP in the form

$$\max\ c^T x \quad \text{s.t.}\quad Ax \leqslant b,\ \ x \geqslant 0,$$

called the primal problem, there always exists an associated dual problem of the following form:

$$\min\ b^T y \quad \text{s.t.}\quad A^T y \geqslant c,\ \ y \geqslant 0.$$

We have also seen in class that for any feasible solution $x$ of the primal and any feasible solution $y$ of the dual, $c^T x \leqslant b^T y$ (weak duality). In other words, the value of any feasible solution to the dual yields an upper bound on the value of any feasible solution to the primal. In addition, if $x^*$ is the optimal solution to the primal and $y^*$ is the optimal solution to the dual, we have $c^T x^* = b^T y^*$ (the objective values coincide; strong duality).

Now, what remains crucially important in this paper is complementary slackness. Let $s = b - Ax \geqslant 0$ and $t = A^T y - c \geqslant 0$ be the nonnegative primal and dual slacks. Then $x$ and $y$ are both optimal iff $x_j t_j = 0$ for all $j$ and $y_i s_i = 0$ for all $i$. In other words, if $x_j > 0$ for some $j$, then the corresponding dual slack is 0 (the dual constraint is tight), i.e., $(A^T y)_j = c_j$; and if the primal slack is nonzero (the constraint is not tight), i.e., $(Ax)_i < b_i$, then the corresponding dual variable $y_i = 0$. Similar logic follows for the dual variables and dual slacks.
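As a concrete illustration (a toy instance of my own, not taken from the survey), the following snippet checks weak/strong duality and both complementary slackness conditions on a small LP whose optima can be found by inspection:

```python
# Hypothetical toy LP, solvable by hand:
# Primal:  max 3*x1 + 2*x2   s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0
# Dual:    min 4*y1 + 2*y2   s.t.  y1 + y2 >= 3,  y1 >= 2,  y >= 0
x = (2.0, 2.0)  # primal optimum, found by inspection
y = (2.0, 1.0)  # dual optimum, found by inspection

primal_val = 3 * x[0] + 2 * x[1]
dual_val = 4 * y[0] + 2 * y[1]
assert primal_val <= dual_val            # weak duality
assert primal_val == dual_val == 10.0    # strong duality at the optima

# Complementary slackness: x_j > 0  =>  j-th dual constraint is tight.
assert x[0] > 0 and y[0] + y[1] == 3
assert x[1] > 0 and y[0] == 2
# y_i > 0  =>  i-th primal constraint is tight.
assert y[0] > 0 and x[0] + x[1] == 4
assert y[1] > 0 and x[0] == 2
print("duality and complementary slackness verified")
```

Every slackness pair here is tight because both optima are strictly positive; in a degenerate LP some of these checks would hold vacuously.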

### 1.3 Primal-dual Formulation

Buchbinder et al. studied the Adwords problem with the small-bids assumption using the primal-dual approach. For now, we assume the Adwords problem is offline, which means that the complete bipartite graph is available to us and we can compute the optimal solution. We first look at the primal problem. The formulation is as follows:

$$\begin{aligned}
\max\quad & \textstyle\sum_{u,v} x_{uv}\,w_{uv} \\
\text{s.t.}\quad & \textstyle\sum_{v} x_{uv}\,w_{uv} \leqslant B_u \quad \forall u \\
& \textstyle\sum_{u} x_{uv} \leqslant 1 \quad \forall v \\
& x_{uv} \geqslant 0 \quad \forall u, v
\end{aligned}$$

Note $x_{uv} = 1$ if we match $v$ to $u$ and earn $w_{uv}$, and $x_{uv} = 0$ otherwise. The first constraint states that the total amount company $u$ pays cannot exceed its budget, and the second constraint says that each query can be shown at most one ad. The solution could be fractional, but rounding loses very little in the objective value.
We can construct the dual problem:

$$\begin{aligned}
\min\quad & \textstyle\sum_u B_u\,\alpha_u + \sum_v \beta_v \\
\text{s.t.}\quad & w_{uv}\,\alpha_u + \beta_v \geqslant w_{uv} \quad \forall u, v \\
& \alpha_u \geqslant 0 \quad \forall u, \qquad \beta_v \geqslant 0 \quad \forall v
\end{aligned}$$

Remark: by complementary slackness, we know that (1) if $x_{uv} > 0$ for some $u, v$, then the corresponding dual constraint is satisfied with equality, i.e., $w_{uv}\alpha_u + \beta_v = w_{uv}$; (2) if $\alpha_u > 0$ for some $u$, then the corresponding primal constraint is satisfied with equality, i.e., $\sum_v x_{uv} w_{uv} = B_u$; (3) if $\beta_v > 0$, then $\sum_u x_{uv} = 1$. Therefore the optimal primal and dual solutions must satisfy the following conditions:

$$\begin{aligned}
x_{uv} > 0 \;&\Rightarrow\; w_{uv}\,(1-\alpha_u) = \beta_v & (1)\\
\alpha_u > 0 \;&\Rightarrow\; \textstyle\sum_v x_{uv}\,w_{uv} = B_u & (2)\\
\beta_v > 0 \;&\Rightarrow\; \textstyle\sum_u x_{uv} = 1 & (3)
\end{aligned}$$

Since the dual constraint gives us $\beta_v \geqslant w_{uv}(1-\alpha_u)$ for every $u$, comparing to equation (1) we can conclude:
Rule (1): $v$ is allocated (i.e., $x_{uv} = 1$) to the bidder $u$ who maximizes the scaled bid $w_{uv}(1-\alpha_u)$.
In other words, given only the optimal dual variables $\alpha_u$, one can reconstruct the optimal primal solution by assigning each $v$ to the bidder who maximizes $w_{uv}(1-\alpha_u)$. Equations (2) and (3) tell us:
Rule (2): $\alpha_u$ is positive only for bidders who have exhausted their budget, and remains 0 if the budget is not exhausted.

In the online primal-dual problem, one can utilize these two rules, i.e., maintain a best estimate for the optimal dual variables $\alpha_u$ and use the first rule when deciding which $u$ to assign each arriving $v$ to.

## 2 Online Greedy Algorithm and MSVV Algorithm

### 2.1 Greedy Algorithm

I first provide the Greedy Algorithm for the online Adwords problem:

    Initialize: α_u = 0 ∀u,  β_v = 0 ∀v,  x_uv = 0 ∀u, v.
    When the next vertex v ∈ V arrives:
        If v has no available neighbors, continue.
        Match v to the available neighbor u* which maximizes bid_uv = w_uv.
        Update:
            α_u* = 1 if u* has finished its budget, 0 if not yet
            β_v  = bid_u*v  (⩾ bid_uv for every available u)
            x_u*v = 1

Note that because of the small-bids assumption, when each $v$ arrives we can assume that $w_{uv} \leqslant R_u$, i.e., taking $w_{uv}$ will never exceed the remaining budget of $u$. Also note that since every available vertex $u$ has $\alpha_u = 0$ (it hasn't finished its budget), this is the same as saying "match $v$ to the available neighbor $u$ which maximizes $w_{uv}(1-\alpha_u)$."
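A minimal Python sketch of the greedy rule above (the function name and toy numbers are my own; ties are broken by arrival order, and "available" is taken to mean the bid still fits the remaining budget):

```python
def greedy_adwords(budgets, bids):
    """Greedy for AdWords with small bids.

    budgets: dict advertiser -> total budget B_u
    bids: list over arriving queries v; each entry maps advertiser u -> bid w_uv
    Returns the total revenue (money spent by advertisers).
    """
    spent = {u: 0.0 for u in budgets}
    revenue = 0.0
    for query_bids in bids:
        # Pick the available neighbor with the largest bid.
        best_u, best_bid = None, 0.0
        for u, w in query_bids.items():
            if spent[u] + w <= budgets[u] and w > best_bid:
                best_u, best_bid = u, w
        if best_u is None:
            continue  # no available neighbor: show no ad for this query
        spent[best_u] += best_bid
        revenue += best_bid
    return revenue
```

For instance, with budgets of 2 for advertisers `a` and `b` and the query stream `[{a:1, b:0.9}, {a:1, b:0.9}, {a:1}, {a:1}]`, greedy spends `a`'s budget on the first two queries and earns 2, while the offline optimum earns 3.8 — close to the worst-case ratio of 1/2.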

###### Theorem 1.

Greedy achieves a ratio of $1/2$ for the Adwords problem with the small-bids assumption.

I prove this theorem by showing that at the end of the algorithm, the solution is feasible for both the primal and the dual problem, and the primal objective value is at least one half of the optimal value. I start by proving a proposition:

###### Proposition 2.

For the Adwords problem with the small-bids assumption, if at the end of the algorithm the primal objective is at least $c$ times the dual objective, then the algorithm achieves a ratio of $c$ of the optimal solution.

###### Proof.

Define $P$ as the primal objective value, $D$ as the dual objective value, $D^*$ as the optimal dual value, and $P^*$ as the optimal primal objective value. We have the following:

$$P \geqslant c\,D \geqslant c\,D^* = c\,P^*.$$

The first inequality is given. Since the dual problem is a minimization, we must have $D \geqslant D^*$ for $D^*$ to be optimal. By strong duality, we also have $D^* = P^*$. ∎

We now prove Theorem 1:

###### Proof.

(1) Feasibility: I first prove that the algorithm outputs a feasible solution by demonstrating that the primal and the dual solutions are both feasible. First, the primal solution is feasible since we only allocate $v$ to a bidder who has not yet finished its budget, and we allocate $v$ to only one bidder. For the dual constraint $w_{uv}\alpha_u + \beta_v \geqslant w_{uv}$ at the end of the algorithm: if $u$ has finished its budget, we know $\alpha_u = 1$ and the constraint clearly holds. If $u$ still has budget remaining ($\alpha_u = 0$), we need to show $\beta_v \geqslant w_{uv}$. However, at each iteration, as we assign $v$, we pick the $u^*$ that maximizes the bid and update $\beta_v = w_{u^*v} \geqslant w_{uv}$ for every $u$ that was still available. Therefore, $w_{uv}\alpha_u + \beta_v \geqslant w_{uv}$ for all $u, v$.
(2) Ratio: By Proposition 2, we need to show that the primal objective is at least $1/2$ of the dual objective. At each iteration, when $v$ shows up, we allocate it to the optimal $u^*$, and two cases might occur: either $u^*$ finishes its budget by spending $w_{u^*v}$, or $u^*$ still has budget available. In the latter case, the primal objective increases by $w_{u^*v}$, and the dual objective also increases by $\beta_v = w_{u^*v}$. In the former case, both objectives increase by $w_{u^*v}$, but in the dual objective $\alpha_{u^*}$ also increases from 0 to 1, adding $B_{u^*} = \sum_{v' \in S_{u^*}} w_{u^*v'}$, where $S_{u^*}$ is the set of all $v'$ assigned to $u^*$. Thus we count all of the $w_{u^*v'}$ a second time for the queries previously allocated to $u^*$. The worst case is that every $u$ finishes its budget, in which case the dual objective value is twice the primal one. In other words, the primal objective is at least $1/2$ times the dual objective. ∎

### 2.2 Online MSVV Algorithm

In the greedy algorithm, we set the dual variable $\alpha_u$ to 0 or 1. Similarly, in the MSVV algorithm we can find the best online dual variables as a function of the fraction of budget spent. For simplicity we take all budgets $B_u = 1$. Define

$$x_u = \text{fraction of the budget of } u \text{ that has been spent so far},$$

and let $f(x) = \frac{e^x - 1}{e - 1}$, $k = \frac{e}{e-1}$, and $\Delta_{uv}(x_u) = f(x_u + w_{uv}) - f(x_u)$, the increase in $f$ if the bid $w_{uv}$ is charged to $u$.

I now provide the online MSVV algorithm:

    Initialize: α_u = 0 ∀u,  β_v = 0 ∀v.
    When the next vertex v ∈ V arrives:
        If v has no available neighbors, continue.
        Match v to the available neighbor u* which maximizes k·w_uv − Δ_uv(x_u).
        Update:
            α_u* = α_u* + Δ_u*v(x_u*)
            β_v  = k·w_u*v − Δ_u*v(x_u*)

Remark: Note that while deciding which $u$ we assign $v$ to, under the small-bids assumption we have

$$k\,w_{uv} - \Delta_{uv}(x_u) \approx w_{uv}\left(k - \frac{e^{x_u}}{e-1}\right) = k\,w_{uv}\left(1 - e^{-(1-x_u)}\right),$$

where $1 - x_u$ stands for the remaining budget of $u$. Therefore it is equivalent to assign the incoming node $v$ to a neighbor that maximizes $w_{uv}\left(1 - e^{-(1-x_u)}\right)$, which is exactly the MSVV trade-off rule we have seen in class.
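Translating the trade-off rule into Python (a sketch with names and toy numbers of my own; the small-bids assumption is only loosely satisfied by the example in the test, which is meant for illustration):

```python
import math

def msvv_adwords(budgets, bids):
    """MSVV for AdWords with small bids: give query v to the advertiser u
    maximizing w_uv * psi(x_u), where x_u is the fraction of u's budget
    already spent and psi(x) = 1 - exp(-(1 - x))."""
    spent = {u: 0.0 for u in budgets}
    revenue = 0.0
    for query_bids in bids:
        best_u, best_score = None, 0.0
        for u, w in query_bids.items():
            if spent[u] + w > budgets[u]:
                continue  # u is no longer available
            x = spent[u] / budgets[u]  # fraction of budget spent so far
            score = w * (1.0 - math.exp(-(1.0 - x)))
            if score > best_score:
                best_u, best_score = u, score
        if best_u is None:
            continue  # no available neighbor
        spent[best_u] += query_bids[best_u]
        revenue += query_bids[best_u]
    return revenue
```

On an instance that traps greedy at revenue 2 (budgets of 2 for `a` and `b`; queries `{a:1, b:0.9}` twice, then `{a:1}` twice), MSVV's discounting of the half-spent `a` shifts the second query to `b` and earns 2.9.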

###### Theorem 3.

The online MSVV algorithm achieves a ratio of $1 - 1/e$ for the Adwords problem with the small-bids assumption.

Again I prove the theorem by showing feasibility and the primal-dual ratio. I first prove another proposition:

###### Proposition 4.

At any point in time, $\alpha_u = f(x_u)$.

###### Proof.

WLOG, fix $u$ and suppose that up to a given time, $v_1, \dots, v_r$ were allocated to $u$, in that order, and let $x_u^{(i)}$ be the fraction of budget spent by $u$ up until the time when $v_i$ arrives. Note that the cumulative spending of $u$ as $v_i$ arrives equals the sum of all previous bids: $x_u^{(i)} = \sum_{l<i} w_{uv_l}$ (recall $B_u = 1$). According to our algorithm, at the time $v_i$ is assigned to $u$, $\alpha_u$ increases by $\Delta_{uv_i}(x_u^{(i)}) = f(x_u^{(i)} + w_{uv_i}) - f(x_u^{(i)})$. Thus we have

$$\alpha_u = \sum_{i=1}^{r} \Delta_{uv_i}\!\left(x_u^{(i)}\right) \approx \int_0^{x_u} f'(t)\,dt = f(x_u) - f(0) = f(x_u).$$

We have the middle (approximate) equality because, under the small-bids assumption, every bid is infinitesimally small relative to the budget, and thus we can replace the sum by an integral. ∎
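Where do $f$ and $k$ come from? A short reconstruction (a sketch of my own, consistent with the constants used in this section): require that each step's dual increase be exactly $k$ times the primal increase while the dual constraint is kept tight.

```latex
% Per step, the primal increases by w_{uv} and the dual by
% \Delta\alpha_u + \beta_v. Requiring a ratio of exactly k with the
% dual constraint tight, \beta_v = w_{uv}(1 - f(x)), and the small-bids
% approximation \Delta\alpha_u \approx w_{uv} f'(x), gives an ODE for f:
w_{uv}\,f'(x) + w_{uv}\,\bigl(1 - f(x)\bigr) = k\,w_{uv}
\;\Longrightarrow\; f'(x) - f(x) = k - 1.
% With f(0) = 0 (the dual starts at 0 before any budget is spent):
f(x) = (k-1)\,(e^{x} - 1).
% Requiring f(1) = 1 (\alpha_u reaches 1 once the budget is exhausted):
(k-1)(e-1) = 1 \;\Longrightarrow\; k = \tfrac{e}{e-1},
\qquad f(x) = \tfrac{e^{x}-1}{e-1},
\qquad \tfrac{1}{k} = 1 - \tfrac{1}{e}.
```

The competitive ratio $1/k = 1 - 1/e$ then falls out of the primal-dual ratio argument in the proof of Theorem 3.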

###### Proof.

(1) Feasibility: First, the primal solution is feasible since we only assign $v$ to a $u$ that has not finished its budget. Now I show feasibility for the dual problem, i.e., that at the end of the algorithm $w_{uv}\alpha_u + \beta_v \geqslant w_{uv}$, or equivalently $\beta_v \geqslant w_{uv}(1-\alpha_u)$, for all $u, v$. When an arbitrary $v$ arrives, suppose we assign it to $u^*$, and let $x_{u^*}$ be the fraction of budget spent by $u^*$. Let $u$ be some other bidder that is not optimal and let $x_u$ be the fraction of budget spent by $u$. Note $k\,w_{u^*v} - \Delta_{u^*v}(x_{u^*}) \geqslant k\,w_{uv} - \Delta_{uv}(x_u)$, since in each iteration we pick the $u^*$ that maximizes the value of this expression. Let $X_u \geqslant x_u$ be the fraction of budget spent by $u$ at the end of the algorithm. We have

$$\begin{aligned}
\beta_v &= k\,w_{u^*v} - \Delta_{u^*v}(x_{u^*}) && \text{(by definition of } \beta_v\text{)} \\
&\geqslant k\,w_{uv} - \Delta_{uv}(x_u) \\
&= w_{uv}\left(k - k\,e^{x_u - 1}\right) = w_{uv}\,(1 - f(x_u)) \\
&\geqslant w_{uv}\,(1 - f(X_u)) && \text{(by monotonicity of } f\text{)} \\
&= w_{uv}\,(1 - \alpha_u).
\end{aligned}$$

(2) Primal-Dual Ratio: In each iteration, the primal objective increases by $w_{u^*v}$, and the dual objective increases by $\Delta_{u^*v}(x_{u^*}) + \beta_v = k\,w_{u^*v}$. Therefore, at the end of the algorithm, the primal objective is at least $1/k = 1 - 1/e$ times the dual objective; by Proposition 2, the proof is complete. ∎

## 3 Primal-dual Training-based Algorithm

In this section, I first introduce the general class of packing linear programs (PLP), and then introduce the primal-dual training-based algorithm and prove that it achieves a $(1-O(\epsilon))$ approximation for the online stochastic PLP problem with high probability, under mild assumptions. Here "$(1-O(\epsilon))$ approximation/competitive" means that, with high probability under the randomness in the stochastic model, the algorithm achieves at least a $(1-O(\epsilon))$ fraction of the objective value of the offline optimal solution (OPT) for the actual instance.

### 3.1 The General Class of PLP

Let $J$ be a set of resources, where each resource $j \in J$ has a capacity $c_j$. The set of resources and their capacities are known in advance. Let $I$ be a set of agents that arrive one by one online, each with a set of options $O_i$. Each option $o \in O_i$ of agent $i$ has an associated value $w_{io}$ and requires $a_{ioj}$ units of each resource $j$. The set of options, the values, and the requirements arrive together with agent $i$. When an agent arrives, the algorithm has to immediately decide whether to assign the agent and, if so, which option to choose. The goal is to find a maximum-value allocation that does not allocate more of any resource than is available.
Remark 1: Comparing to the AdWords problem, we can see that $J$ stands for the set of companies/advertisers, each of which has a budget available. $I$ is the set of nodes $v$ that arrive. As $v$ arrives, it has a set of options (which advertiser to be matched to), and each option has an associated value $w_{uv}$.
Remark 2: PLP is a generalization of the AdWords problem, the key difference being that the value $w_{io}$ and the consumption $a_{ioj}$ are unrelated in PLP (in AdWords, the budget consumed equals the value earned). Hence this algorithm also applies to the AdWords problem, for which the proof is even simpler.
We only need to adjust the formulation in Section 1.3 a little to adapt it to PLP. For convenience, we normalize the first constraint of the primal: we divide both sides by $c_j$, so the right-hand side becomes 1 and the left-hand side becomes $\sum_{i,o} x_{io}\,a_{ioj}/c_j$. Below we simply write $a_{ioj}$ for $a_{ioj}/c_j$. The primal and dual are as follows:

$$\begin{aligned}
\text{(Primal)}\quad \max\ & \textstyle\sum_i \sum_{o\in O_i} x_{io}\,w_{io} &\qquad \text{(Dual)}\quad \min\ & \textstyle\sum_j \alpha_j + \sum_i \beta_i \\
\text{s.t.}\ \forall j:\ & \textstyle\sum_{i,o} x_{io}\,a_{ioj} \leqslant 1 &\qquad \text{s.t.}\ \forall i, o:\ & \textstyle\sum_j a_{ioj}\,\alpha_j + \beta_i \geqslant w_{io} \\
\forall i:\ & \textstyle\sum_{o\in O_i} x_{io} \leqslant 1 & & \alpha_j, \beta_i \geqslant 0 \\
& x_{io} \geqslant 0
\end{aligned}$$

### 3.2 Training-Based Primal-Dual Algorithm (PTAS)

We define $n$ to be the total number of agents, $m$ the number of constraints (resources), and $K$ the maximum number of options for any agent. Finally, define the gain of agent $i$ from option $o$ as $w_{io} - \sum_j a_{ioj}\,\alpha_j$. Recall that in the MSVV algorithm, we match $v$ to the $u$ which maximizes $k\,w_{uv} - \Delta_{uv}(x_u)$; in other words, this is the maximum gain we obtain as each $v$ comes. Also recall that $k\,w_{uv} - \Delta_{uv}(x_u) = w_{uv}(1 - f(x_u)) = w_{uv}(1-\alpha_u)$. Therefore the gain in PLP, if applied to the Adwords problem, is $w_{uv} - w_{uv}\alpha_u = w_{uv}(1-\alpha_u)$ (we have already normalized the budgets, so the option consumes $w_{uv}$ units of $u$'s budget). We can see the nice analogy between the gain, the dual prices $\alpha_j$, and what we have seen in class.

The main idea behind the algorithm is to solve the LP on a sample of the queries. Note that we cannot expect to get a representative sample of all types of agents from a small sample; in other words, we cannot expect to estimate the distribution of the values $w_{io}$ and requirements $a_{ioj}$. Instead, we use only the dual variables $\alpha_j^*$ from the sampled LP to guide us in solving the rest of the query stream.
Algorithm:
1. Let $S$ denote the first $\epsilon n$ agents in the sequence. We do not select options for these agents (for the purpose of analysis; in practice one may assign them according to some online algorithm). Let Dual-LP($S$) denote the sampled version of the dual program on the agents in $S$, with the following change: the capacity of every constraint is reduced from 1 to an $\epsilon(1-\epsilon)$ fraction (i.e., the right-hand side of the first primal constraint is scaled down accordingly).
2. Solve Dual-LP($S$), and let $\alpha_j^*$ denote the value of the dual variable for constraint $j$ in this optimal solution.
3. For each subsequent agent $i$, select the option $o^*$ that provides the maximum gain $w_{io} - \sum_j a_{ioj}\,\alpha_j^*$, provided the gain is nonnegative and the option still fits, and set $x_{io^*} = 1$. Note that $\alpha_j^*$ serves as a cost/price per unit of resource $j$ for the remaining agents.
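Step 3 is the only online part. A Python sketch of it (the function name and toy numbers are mine, and the sampled dual prices $\alpha^*$ are taken as given rather than solved for, since step 2 requires an LP solver):

```python
def allocate_with_prices(agents, capacities, alpha):
    """Online step of the training-based algorithm: given dual prices alpha[j]
    learned on the sample, pick for each arriving agent the option with the
    largest gain  w_io - sum_j a_ioj * alpha[j], skipping options with
    negative gain or options that no longer fit the remaining capacity."""
    used = {j: 0.0 for j in capacities}
    total_value = 0.0
    for options in agents:  # options: list of (value, {resource: amount})
        best, best_gain = None, None
        for value, demand in options:
            gain = value - sum(a * alpha[j] for j, a in demand.items())
            fits = all(used[j] + a <= capacities[j] for j, a in demand.items())
            if gain >= 0.0 and fits and (best_gain is None or gain > best_gain):
                best, best_gain = (value, demand), gain
        if best is None:
            continue  # agent stays unassigned
        value, demand = best
        for j, a in demand.items():
            used[j] += a
        total_value += value
    return total_value
```

With one resource priced at 0.5, an agent whose options are (value 1.0, demand 0.4) and (value 0.8, demand 0.1) is given the first option (gain 0.8 vs. 0.75), while an agent whose only option has value 0.3 and demand 0.8 is left unassigned (gain −0.1).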

### 3.3 Theorem and Proof

###### Theorem 5.

The Training-Based Primal-Dual algorithm is $(1-O(\epsilon))$-competitive for the online stochastic PLP problem with high probability, as long as the following conditions hold: (1) $\max_{i,o} w_{io}$ is at most a sufficiently small fraction of OPT, and (2) $\max_{i,o,j} a_{ioj}$ is at most a sufficiently small fraction of the (normalized) capacity, where "sufficiently small" means polynomial in $\epsilon$ and inverse-polylogarithmic in $n$, $m$, and $K$.

Remark: Note that the first condition means that no individual option provides too large a fraction of the total value, and the second condition is equivalent to saying that no individual option for any agent consumes too much of any resource.

The key idea behind the proof is that if $\alpha^*$ satisfies the complementary slackness conditions on the first $\epsilon n$ agents (being an optimal solution for our sample), then w.h.p. it approximately satisfies these conditions on the entire set. I begin the proof by introducing some key definitions.

###### Definition 1.

Let $I^+$ denote the set of agents with some option having nonnegative gain (with respect to $\alpha^*$), i.e., the set of agents that will be allocated to some resource (we do not assign any agent whose every option provides negative gain). Let $P$ denote the set of pairs $(i, o_i^*)$ for $i \in I^+$, where $o_i^*$ is the maximum-gain option of agent $i$, i.e., the set of best possible allocations between agents and resources. Let $P_S \subseteq P$ be the best possible allocations for the sampled queries. Consequently, $P \setminus P_S$ represents the allocation of options selected by our algorithm.

Remark: For the purpose of analysis, our algorithm does not select any options for agents in $S$. In our algorithm, given a vector $\alpha$, by selecting for each agent $i \in I^+$ the option that maximizes the gain, i.e., $o_i^* = \arg\max_{o \in O_i} \left(w_{io} - \sum_j a_{ioj}\,\alpha_j\right)$, and setting $\beta_i = \max\!\left(0,\, w_{io_i^*} - \sum_j a_{io_i^*j}\,\alpha_j\right)$, we obtain a feasible solution to the Dual-LP. Because of the analogy between $k\,w_{uv} - \Delta_{uv}(x_u)$ in MSVV and the gain, the proof is analogous to the feasibility proof of Theorem 3 and is omitted here.

###### Definition 2.

Let $W = \sum_{(i,o)\in P} w_{io}$ be the total weight of selected options, and $W_S = \sum_{(i,o)\in P_S} w_{io}$ the total weight of selected options of the sample. Let $C(j) = \sum_{(i,o)\in P} a_{ioj}$ be the total consumption of resource $j$ for $P$, and similarly $C_S(j)$ for $P_S$.

For any fixed vector $\alpha$, $W$ and each $C(j)$ are independent of the choice of the sample $S$. Thus the expected value of $W_S$ is $\epsilon W$, and that of $C_S(j)$ is $\epsilon C(j)$. Also note that, after running some online algorithm on the sampled queries, the realized values should be close to $\epsilon W$ and $\epsilon C(j)$.

###### Definition 3.

For a sample $S$ and a parameter $t > 0$, let $W_S$ and $C_S(j)$ be as in Definition 2, restricted to the sample. When referring to the sample used by the algorithm, we simply write $W_S$ and $C_S(j)$.

1. The sample $S$ is t-bad if the sampled value deviates from its expectation by more than a $t$ fraction, i.e., $|W_S - \epsilon W| > t\,\epsilon W$.

2. The sample $S$ is ε-bad for constraint $j$ if the sampled consumption $C_S(j)$ deviates from its expectation $\epsilon C(j)$ by more than a margin of order $\epsilon\,(\epsilon + \sqrt{C(j)})$ (the exact thresholds are those of Feldman et al.).
Now I prove Theorem 5, our central theorem. For the following parts, we assume the conditions of Theorem 5 hold. We prove Theorem 5 by proving the following two propositions:

###### Proposition 6.

If the sample $S$ is neither t-bad nor ε-bad for any constraint $j$, then the value of the options selected by the algorithm is at least $(1 - O(\epsilon + t))\,\mathrm{OPT}$.

###### Proposition 7.

For any fixed vector $\alpha$, the sample is good with high probability, i.e., it is neither t-bad nor ε-bad for any constraint $j$.

Therefore, the combination of the two propositions tells us that, with high probability, our algorithm returns a feasible solution with value at least $(1-O(\epsilon))\,\mathrm{OPT}$ (taking $t = O(\epsilon)$), completing the proof of Theorem 5.

I prove Proposition 6 by first proving a lemma.

###### Lemma.

Let $j$ be a constraint such that $\alpha_j^* > 0$ (so, by complementary slackness, constraint $j$ is tight in the sampled LP). If $S$ is not ε-bad, then under the conditions of Theorem 5 we have $1 - 2\epsilon \leqslant C(j) \leqslant 1 + 2\epsilon$.

###### Proof.

Given that $S$ is not ε-bad for $j$, the sampled consumption $C_S(j)$ is within its allowed deviation of the expectation $\epsilon C(j)$. Consequently, since constraint $j$ is tight in the sampled LP, $C_S(j)$ equals the reduced capacity. By our assumption in Theorem 5, no single option consumes more than a lower-order fraction of any resource, so the tight constraint overshoots its capacity only negligibly. Hence, by substituting and collecting terms, we have

$$1 - C(j) \leqslant \epsilon^2 + 2\epsilon\sqrt{C(j)}.$$

Then

$$C(j) + 2\epsilon\sqrt{C(j)} + \epsilon^2 \geqslant 1 \;\Longleftrightarrow\; \left(\sqrt{C(j)} + \epsilon\right)^{2} \geqslant 1 \;\Longleftrightarrow\; \sqrt{C(j)} + \epsilon \geqslant 1,$$

so $C(j) \geqslant (1-\epsilon)^2 = 1 + \epsilon^2 - 2\epsilon \geqslant 1 - 2\epsilon$.

The upper bound proof is similar. ∎

Now I prove Proposition 6:

###### Proof.

Let $\Phi = \sum_j \alpha_j^* + \sum_i \beta_i$ be the value of the feasible dual solution obtained by setting $\beta_i = \max\!\left(0, \max_{o\in O_i}\big(w_{io} - \sum_j a_{ioj}\,\alpha_j^*\big)\right)$ for each $i$. By weak duality, the dual objective value $\Phi$ serves as an upper bound on the optimal solution OPT, so showing that the value obtained by the algorithm is at least $(1-O(\epsilon+t))\,\Phi$ suffices to prove the proposition. Let $N$ denote the set of constraints $j$ such that $\alpha_j^* > 0$, and $Z$ the set of constraints such that $\alpha_j^* = 0$. Recall Rule 2 from complementary slackness, which guides all of the algorithms: if the capacity of some $j$ has been used up in the sampled LP, then $\alpha_j^*$ should be positive, i.e., $j \in N$. Therefore, for each constraint $j \in N$, complementary slackness and the previous lemma imply that if $S$ is not ε-bad, then $C(j) \geqslant 1 - 2\epsilon$. Also, recall that we allocate each agent $i$ to the option $o$ that maximizes the gain, i.e., $\beta_i = w_{io} - \sum_j a_{ioj}\,\alpha_j^*$; again by complementary slackness, the slack of the corresponding Dual-LP constraint equals 0. Now

$$W = \sum_i \beta_i + \sum_j C(j)\,\alpha_j^* \geqslant \sum_i \beta_i + (1-2\epsilon)\sum_{j\in N}\alpha_j^* \geqslant (1-2\epsilon)\,\Phi.$$

Since the options for agents in $S$ were not selected by our algorithm, the total value obtained by the algorithm is $W - W_S$. Since $S$ is not t-bad, we have by Definition 3 that $W_S \leqslant (1+t)\,\epsilon W$; the first assumption of Theorem 5 ensures that no single sampled option accounts for more than a lower-order fraction of this value. We then have $W - W_S \geqslant (1-(1+t)\epsilon)\,W$. Hence, by substitution,

$$W - W_S \geqslant (1-(1+t)\epsilon)(1-2\epsilon)\,\Phi \geqslant (1 - O(\epsilon + t))\,\mathrm{OPT}.$$

Therefore the value of the options selected by the algorithm is at least $(1-O(\epsilon+t))\,\mathrm{OPT}$. ∎

Now I prove Proposition 7 by first proving two more lemmas.

###### Lemma.

For any fixed vector $\alpha$, the probability that $S$ is ε-bad for a given constraint $j$, and the probability that $S$ is t-bad, are each exponentially small; both bounds follow from standard Chernoff–Hoeffding concentration arguments, using the two conditions of Theorem 5.

The proof is simple and thus omitted here; the details can be found in Feldman et al. This lemma implies that, for any fixed $\alpha$, the probability that a random sample of $\epsilon n$ agents is bad is exponentially small: $S$ is bad only if it is t-bad or ε-bad for some constraint $j$, so the claim follows from a union bound over these $m+1$ events.

###### Lemma.

There are fewer than $(nK)^m$ distinct solutions $\alpha^*$ that can be returned by step 2 of our algorithm.

###### Proof.

Recall that an optimal (vertex) solution to the Dual-LP on the reduced instance is determined purely by the $m$-dimensional vector $\alpha^*$. A vertex solution is obtained by picking $m$ constraints, setting them to equality, and solving the resulting linear system. For each of the $\epsilon n$ sampled agents there are at most $K$ such constraints, and thus at most $\epsilon n K \leqslant nK$ constraints in total. Consequently, we are choosing $m$ of them from at most $nK$, so there are at most $\binom{nK}{m} < (nK)^m$ possible combinations, and thus at most $(nK)^m$ vertices of the polytope that can arise as $\alpha^*$. ∎
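To get a feel for the counting bound (with hypothetical toy numbers of my own), the binomial count of possible vertex supports is indeed dominated by $(nK)^m$:

```python
from math import comb

n, K, m = 100, 5, 3           # hypothetical: 100 agents, 5 options each, 3 resources
num_constraints = n * K       # one dual constraint per (agent, option) pair
vertices_bound = comb(num_constraints, m)   # choose m constraints to set tight
assert vertices_bound < (n * K) ** m        # C(nK, m) < (nK)^m
```

The gap between the two bounds is a factor of roughly $m!$, which the high-probability union bound in Proposition 7 can easily absorb.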

I now prove Proposition 7:

###### Proof.

The first lemma implies that, for any fixed $\alpha$, the probability that a random sample of $\epsilon n$ agents is bad is exponentially small. The second lemma tells us that there are at most $(nK)^m$ distinct choices for $\alpha^*$. Therefore, by a union bound over these choices, the probability that the sample is bad for the returned $\alpha^*$ is still small, i.e., the sample is good for the relevant $\alpha$ with high probability. This finishes the proof of Proposition 7. ∎

We then complete the proof of our central theorem, Theorem 5.
Remark: As we can see from the proof, the accuracy of the Training-Based Primal-Dual algorithm is bounded by two components: the strictness of the two conditions in Theorem 5 (the smaller the right-hand sides of the two inequalities, the better the algorithm is likely to perform on the remaining agents), and the accuracy of the $\alpha^*$ solved for the sampled dual. As $\epsilon$ increases, the sample we study becomes larger and $\alpha^*$ becomes more accurate; however, the bounds in the two conditions become less strict, and the guaranteed performance decreases. In general, the former effect outweighs the latter as long as the sample is not too small, making the algorithm $(1-O(\epsilon))$-competitive. We need to balance the two effects and choose $\epsilon$ neither too large nor too small.

## References

•  N. Buchbinder, K. Jain, and J. Naor, "Online primal-dual algorithms for maximizing ad-auctions revenue," in ESA, pp. 253–264, 2007.
•  J. Feldman, M. Henzinger, N. Korula, V. S. Mirrokni, and C. Stein, "Online stochastic packing applied to display ad allocation," in ESA, 2010.
•  A. Mehta, "Online matching and ad allocation," Foundations and Trends in Theoretical Computer Science, pp. 265–368, 2012.