1 Introduction
In the past decade, the TCS community has made radical progress developing the theory of multidimensional mechanism design. In particular, it was previously well understood that optimal multi-item auctions are prohibitively complex, even with just two items, and even in fairly restricted instances [BCKW15, HN13, HR15, Tha04, Pav11, DDT17]. Yet, starting from the seminal work of Chawla, Hartline, and Kleinberg [CHK07], a large body of work now proves that simple auctions are in fact approximately optimal in quite general settings [CHMS10, CMS15, HN17, LY13, BILW14, Yao15, RW15, CM16, CZ17, EFF17b], helping to explain the prevalence of simple auctions in practice. Still, it would be a reach to claim that this agenda is convincingly resolved.
In particular, the thought of settling for a constant fraction (or even 99%) of the optimal achievable revenue may be a nonstarter for high-stakes auctions. Indeed, there are no hard constraints forcing the auctioneer to use a simple auction. Still, simplicity has its virtues: prior-independent auctions are desirable since they don't require the auctioneer to understand the population from which consumers are drawn. Deterministic and dominant strategy truthful auctions are desirable because consumers' strategic behavior is easier to predict. Computationally tractable auctions are desirable because they can be efficiently found. On the other hand, it is hard to imagine that auctioneers would hold a hard line on simplicity if additional market research or outsourced computation would increase revenues, even modestly.
The resource augmentation paradigm takes a different view: spend effort recruiting additional bidders rather than carefully designing a superior auction. We are therefore interested in answering the following question: for a given auction setting, how many additional bidders are necessary for a simple auction (with the additional bidders) to guarantee greater expected revenue than the optimal auction (without them)? Eden et al. term the answer to this question the competition complexity [EFF17a].
This question was first studied in seminal work by Bulow and Klemperer in the context of single-item auctions [BK96]. Remarkably, they show that just a single additional bidder suffices for the second-price auction to guarantee greater expected revenue than Myerson's optimal auction [Mye81] (without the additional bidder), subject to a technical condition on the population called regularity. For multi-item auctions, similar results have even more bite, as the optimal multi-item auction is considerably more complex than Myerson's (which is deterministic, dominant strategy truthful, and computationally tractable, but not prior-independent). Our main result gives optimal bounds on the competition complexity for the core setting of additive bidders with independent items. Specifically:
Main Result: The competition complexity of $n$ bidders with additive values over $m$ independent items is at most $n(\ln(1+m/n)+2)$, and also at most $9\sqrt{nm}$. When $n \le m$, the first bound is tight (up to constant factors). When $n \ge m$, the second bound is tight (up to constant factors) for any argument that starts from the benchmark introduced in [EFF17a].
1.1 Brief Technical Overview
Formally, we consider $n$ bidders drawn independently from a distribution $D$. We study the now-standard setting of additive bidders over independent items: each bidder's value $v_j$ for item $j$ is drawn independently from some single-variate distribution $D_j$, and her value for a set $S$ of items is $\sum_{j \in S} v_j$. The simple mechanism we study is to sell the items separately, either via the second-price auction in the case of regular distributions, or Myerson's optimal single-item auction in the general case.^1

^1 For irregular distributions, it is known that no guarantees are possible with prior-independence, even for a single item. The example to have in mind is a distribution with a point mass at $1/\varepsilon$ with probability $\varepsilon$, and $0$ otherwise: as $\varepsilon \to 0$, any auction that achieves revenue close to optimum must sell the item to the bidder with value $1/\varepsilon$ for price close to $1/\varepsilon$ whenever there is exactly one such bidder. It is impossible to have a single auction that does this for all $\varepsilon$.

Observe that, since the bidders are additive and values are independent, selling the items separately is really just $m$ separate single-item problems. We are interested in understanding the minimum $c$ such that selling separately to $n + c$ bidders drawn from $D$ yields greater expected revenue than the optimal mechanism with $n$ bidders drawn from $D$, for any $D$.

Our approach starts from the benchmark proposed in [EFF17a]. That is, Eden et al. propose an upper bound on the optimal achievable revenue with $n$ bidders drawn from $D$ via the duality framework of [CDW16].^2 We defer a definition of this benchmark to Section 2.2: it defines a Virtual Value $\Phi_j(\vec v)$ of a bidder with values $\vec v$ for item $j$, and upper bounds the optimal expected revenue with $\sum_j \mathbb{E}[\max_i \Phi_j(\vec v_i)]$. We defer most details to the technical sections, but briefly note that at this point, our analysis diverges from prior work. Eden et al. use an elegant coupling argument to connect this benchmark to the expected revenue of selling separately with additional bidders [EFF17a]. The high-level distinction in our approach is a significantly more in-depth analysis of this benchmark. In particular, our analysis makes more extensive use of Myersonian virtual value theory (Sections 3 and 4), which reduces the problem to questions purely regarding whether various methods of drawing correlated values from the uniform distribution on $[0,1]$ stochastically dominate one another (Section 5).

^2 We note that this upper bound can also be derived without duality, using techniques of [CMS15].
1.2 Connection to Related Works
The two works most directly related to ours are [EFF17a] and [FFR18]. The one-sentence distinction between our results and theirs is that we strictly improve their main results regarding selling separately to additive bidders with independent items, but do not address alternative settings. For example, this paper contains no results beyond additive bidders (considered in [EFF17a]), nor results for mechanisms aside from selling separately (considered in [FFR18]).
"Little $n$" Regime: For $n$ additive bidders with $m$ independent items, Eden et al. [EFF17a] prove a competition complexity bound of $n + 2m - 2$. Feldman et al. [FFR18] prove that, for any constant $\varepsilon > 0$, selling separately to $O_\varepsilon(n \log(m/n))$ additional buyers exceeds a $(1-\varepsilon)$ fraction of the optimal revenue (without the additional buyers). Our main result essentially achieves the greatly improved bound of [FFR18] (and improves it further) without losing any revenue: we prove a competition complexity bound of $n(\ln(1+m/n)+2)$. This guarantee is tight up to constant factors (and remains tight even if one is willing to lose an $\varepsilon$ fraction of the revenue), due to a lower bound of [FFR18].
"Big $n$" Regime: For $n$ additive bidders with $m$ independent items, Eden et al. [EFF17a] prove a competition complexity bound of $n + 2m - 2$. Feldman et al. [FFR18] prove that for any $\varepsilon > 0$, there exists a constant $c$ such that if $n \ge c \cdot m$, selling separately (without any additional bidders) achieves a $(1-\varepsilon)$ fraction of the optimal revenue. Our main result improves the guarantee of [EFF17a] to $9\sqrt{nm}$, and also implies the result of [FFR18] (with a different constant $c$). Note in particular that any sublinear (in $n$) competition complexity bound implies the [FFR18] result for a different $c$, but that linear bounds do not. So our improvement from linear to sublinear is significant in this regard. Moreover, we show in Section A that this is tight (up to constant factors) for any approach starting from the benchmark proposed in [EFF17a]. We further show (also in Section A) that there does not exist any function only of $m$ upper bounding the competition complexity: as $n \to \infty$, the competition complexity approaches $\infty$ as well (at a rate we quantify in Section A).
Other works that study the competition complexity of auctions include the seminal work of Bulow and Klemperer, who study the single-item case, work of Liu and Psomas (who study the competition complexity of dynamic auctions), and Roughgarden et al. (who study the unit-demand setting) [BK96, LP18, RTCY12]. These works are thematically related, but both the results and the techniques have little overlap with ours.
Some of the aforementioned works which prove approximation guarantees for simple mechanisms use similar techniques to derive a tractable benchmark that upper bounds the achievable revenue [CHK07, CHMS10, CMS15, HN17, LY13, BILW14, Yao15, RW15, CM16, CZ17, EFF17b]. However, it is worth noting that all of these works proceed by immediately splitting the benchmark into multiple simpler terms, and finding approximately optimal mechanisms to cover each term separately. The best of those mechanisms then guarantees a constant-factor approximation to the optimal revenue. This greatly simplifies the analysis, at the cost of an additional constant factor. Because competition complexity results target the full original revenue, losing this initial constant factor can make further analysis impossible. As a result, while benchmarks may be shared by these lines of work, the analysis of the benchmarks is often quite different.
Finally, it is worth noting that recent work follows two approaches to derive revenue upper bounds. Some works (including this paper) use virtual value theory [CHK07, CHMS10, RTCY12, CMS15, CDW16, CZ17, EFF17a, EFF17b, LP18, FLLT18]. Others use a more direct probabilistic approach [HN17, LY13, BILW14, Yao15, RW15, CM16, BGN17, FFR18]. For the most part, similar approximation guarantees are achievable through both approaches. With respect to these lines of work, our results (which yield exact competition complexity bounds), in comparison to those of [FFR18] (which lose an arbitrarily small $\varepsilon$), suggest that the virtual value approach may be preferable when one cares about small losses.
1.3 Roadmap
Our main result tightly characterizes the competition complexity in the little $n$ regime, and tightly characterizes the competition complexity in the big $n$ regime among proofs which use the same benchmark as [EFF17a].
In Section 2, we provide the necessary preliminaries surrounding the benchmark of [EFF17a] and virtual value theory. In Section 3, we provide a near-complete proof of our results for the case $n = 1$ as a warm-up. In Section 4, we analyze the benchmark and reduce the analysis to proving stochastic dominance of certain correlated random variables drawn from $U[0,1]$. In Section 5, we prove the required claims regarding stochastic dominance (which at this point are purely mathematical claims and no longer reference auctions). In Appendix A we: (a) recap the lower bound of [FFR18] witnessing that our results are tight in the little $n$ regime, (b) provide a lower bound witnessing that our results are tight in the big $n$ regime (among proofs which use the same benchmark as [EFF17a]), and (c) prove that the competition complexity of $n$ bidders with additive valuations over independent items approaches $\infty$ as $n \to \infty$.
2 Notation and Preliminaries
We consider a setting with $n$ i.i.d. bidders with additive valuations over $m$ independent items. That is, there are single-variate distributions $D_j$ for all $j \in [m]$, and bidder $i$'s value $v_{ij}$ for item $j$ is drawn independently from $D_j$. Bidder $i$'s value for the bundle $S$ is just $\sum_{j \in S} v_{ij}$. We will use the following notation:
- $\text{Rev}(D, n)$: the revenue of the optimal (possibly randomized) Bayesian Incentive Compatible^3 mechanism when played by $n$ bidders whose values for the $m$ items are drawn from $D$. In our setting, we will always have $D = \times_{j=1}^m D_j$ for some single-variate distributions $D_j$.

^3 A mechanism is Bayesian Incentive Compatible if it is in every bidder's interest to bid truthfully, conditioned on all other bidders bidding truthfully as well. That is, assuming that all other bidders submit bids drawn from $D$, bidder $i$ best responds by bidding their true values.
- $\text{VCG}(D, n)$: the revenue achieved by the VCG mechanism when played by $n$ bidders whose values for the $m$ items are drawn from $D$. In our setting, the VCG mechanism simply runs a second-price auction on each item separately with no reserve.
- $\text{SRev}(D, n)$: the revenue achieved by Myerson's optimal mechanism run separately on each item, when played by $n$ bidders whose values for the $m$ items are drawn from $D$. Note that for all $n$ and all distributions $D$ over additive valuations, $\text{SRev}(D, n) \ge \text{VCG}(D, n)$.
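The inequality $\text{SRev}(D, n) \ge \text{VCG}(D, n)$ can be sanity-checked numerically for a toy instance. The sketch below (our own illustration, not from the paper) takes a single item and two bidders with values uniform on $[0,1]$, for which Myerson's optimal auction is a second-price auction with reserve $1/2$:

```python
import random

def second_price_revenue(vals, reserve=0.0):
    # Revenue of a single-item second-price auction with a reserve price.
    top = sorted(vals, reverse=True)
    if top[0] < reserve:
        return 0.0
    second = top[1] if len(top) > 1 else 0.0
    return max(second, reserve)

def estimate_revenues(n_bidders=2, trials=200_000, seed=0):
    rng = random.Random(seed)
    vcg = srev = 0.0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n_bidders)]   # values ~ U[0,1]
        vcg += second_price_revenue(vals)                  # VCG: no reserve
        srev += second_price_revenue(vals, reserve=0.5)    # Myerson: reserve 1/2
    return srev / trials, vcg / trials

srev_est, vcg_est = estimate_revenues()
```

The exact values here are $5/12$ for Myerson and $1/3$ for VCG, so the gap is visible even with modest sampling.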
2.1 Myerson’s Lemma, BulowKlemperer, and Virtual Values
Here, we briefly recap basic facts about the theory of virtual values due to Myerson [Mye81]. We include some proofs and sketches in Appendix C, and refer the reader to [Har11] (Definition 3.11) for a deeper treatment of these concepts (or [CDW16], Definition 8, for discrete distributions). Note that much of the theory extends to independent (but non-i.i.d.) bidders with slightly more complex statements. As we only consider i.i.d. bidders, we omit the extra notation. Below, when we write $\bar\varphi(v)$ for a random variable $v$, we mean the ironed virtual value of $v$ with respect to the distribution from which it is drawn.
Definition 1 (Virtual Values and Ironing [Mye81]).
For a continuous single-variate distribution $D$ with CDF $F$ and PDF $f$, the virtual valuation function satisfies $\varphi_D(v) = v - \frac{1-F(v)}{f(v)}$. If $\varphi_D(\cdot)$ is monotone non-decreasing, $D$ is said to be regular. If not, $\bar\varphi_D(\cdot)$ denotes the ironed virtual value function, which is monotone non-decreasing (see [Har11] for a formal definition). When $D$ is regular, $\bar\varphi_D = \varphi_D$.
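For concreteness, here are the virtual valuation functions for two standard regular distributions, computed directly from the formula above (a sketch of our own, not from the paper):

```python
def phi_uniform(v):
    # U[0,1]: F(v) = v, f(v) = 1, so phi(v) = v - (1 - v) = 2v - 1.
    return v - (1 - v)

def phi_exponential(v, rate=1.0):
    # Exponential(rate): (1 - F(v)) / f(v) = 1 / rate (constant hazard rate),
    # so phi(v) = v - 1/rate.
    return v - 1.0 / rate
```

Both functions are monotone non-decreasing, so both distributions are regular and no ironing is needed.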
Theorem 1 ([Mye81]).
Let $D$ be any single-variate distribution, and let $M$ be any BIC single-item mechanism for $n$ bidders drawn i.i.d. from $D$, with allocation rule $x(\cdot)$. Then:
$$\text{Revenue}(M) \le \mathbb{E}_{\vec v \sim D^n}\Big[\sum_i \bar\varphi_D(v_i) \cdot x_i(\vec v)\Big],$$
with equality when $M$ is Myerson's optimal auction.
Fact 1.
For any single-variate distribution $D$, and any value $c$ with $\Pr_{v \sim D}[v \ge c] > 0$, let $D_{\ge c}$ denote the distribution $D$ conditioned on the value exceeding $c$. Then $\mathbb{E}_{v \sim D_{\ge c}}[\bar\varphi_D(v)] \ge c$.
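For regular distributions, Fact 1 in fact holds with equality; e.g., for $v \sim U[0,1]$ with $\varphi(v) = 2v - 1$, we have $\mathbb{E}[\varphi(v) \mid v \ge c] = c$ exactly. A quick numerical check (our own illustration, not from the paper):

```python
def cond_expected_virtual_value(c, steps=10_000):
    # E[phi(v) | v >= c] for v ~ U[0,1] with phi(v) = 2v - 1, via midpoint-rule
    # integration of phi over [c, 1], normalized by P[v >= c] = 1 - c.
    width = (1.0 - c) / steps
    integral = sum((2 * (c + (i + 0.5) * width) - 1) * width for i in range(steps))
    return integral / (1.0 - c)
```

Since the integrand is linear, the midpoint rule is exact up to floating-point error, and the output matches $c$ for any threshold.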
Finally, we recall the seminal result of Bulow and Klemperer [BK96]:
Theorem 2 ([BK96]).
For any regular single-variate distribution $D$ and any number of bidders $n$: $\text{VCG}(D, n+1) \ge \text{Rev}(D, n)$.
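Theorem 2 is easy to verify by simulation for a particular regular distribution. The sketch below (our own illustration, not from the paper) takes $D$ = Exponential(1), for which $\varphi(v) = v - 1$, so Myerson's optimal auction is a second-price auction with reserve $1$; it checks that VCG with $n+1 = 2$ bidders beats Myerson with $n = 1$ bidder:

```python
import math, random

def myerson_revenue_exp(n, trials=300_000, seed=1):
    # Myerson's optimal auction for i.i.d. Exponential(1) bidders: a second-price
    # auction with reserve phi^{-1}(0) = 1, since phi(v) = v - 1.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = sorted(rng.expovariate(1.0) for _ in range(n))
        if vals[-1] >= 1.0:
            second = vals[-2] if n > 1 else 0.0
            total += max(second, 1.0)
    return total / trials

def vcg_revenue_exp(n, trials=300_000, seed=2):
    # Second-price auction with no reserve: revenue is the second-highest value.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = sorted(rng.expovariate(1.0) for _ in range(n))
        total += vals[-2] if n > 1 else 0.0
    return total / trials

opt_one = myerson_revenue_exp(1)  # optimal revenue with n = 1 bidder, ~ 1/e
vcg_two = vcg_revenue_exp(2)      # VCG with n + 1 = 2 bidders, ~ 1/2
```

Here the exact values are $e^{-1} \approx 0.368$ and $1/2$, so a single extra bidder already closes the gap, as the theorem guarantees.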
2.2 Duality Benchmarks
Here we state an upper bound on $\text{Rev}(D, n)$ when $D$ is additive over independent items. The bound is derived using the duality framework of Cai et al. [CDW16], and was first used by Eden et al. [EFF17a] (it is also possible to derive this particular bound without duality [CMS15]). When referring to this benchmark in text, we call it the EFFTW benchmark. Parsing the benchmark requires additional notation:
- $v_{ij}$ denotes the value of bidder $i$ for item $j$.
- $D_j$ denotes the marginal distribution of item $j$. We use $F_j$ to denote its CDF.
- For a value $v$ drawn from $D_j$, if $D_j$ has no point-masses, then we simply define the quantile $q_j(v) = F_j(v)$. If $\Pr_{v' \sim D_j}[v' = v] > 0$, then we define $q_j(v)$ to be a random variable drawn uniformly from the interval $[\Pr_{v' \sim D_j}[v' < v], \Pr_{v' \sim D_j}[v' \le v]]$. Importantly, note that the random variable $q_j(v)$ is distributed uniformly on $[0,1]$ whenever $v$ is a random draw from $D_j$.
- For a distribution $D = \times_j D_j$, we partition the space of valuation vectors into $m$ disjoint regions. For each $j$, we define $R_j = \{\vec v : q_j(v_j) > q_k(v_k) \text{ for all } k \ne j\}$. That is, $\vec v$ is in region $R_j$ if item $j$ has the highest quantile. Observe that this partition may be randomized if $D$ has point masses (and is deterministic with probability 1 if $D$ has no point masses).
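The point-mass case of the quantile definition can be made concrete with a small sketch (our own illustration; `randomized_quantile` is a hypothetical helper, not from the paper). For a two-point distribution, drawing the quantile uniformly from $[\Pr[v' < v], \Pr[v' \le v]]$ makes the quantile of a random draw exactly uniform on $[0,1]$:

```python
import random

def randomized_quantile(v, pmf, rng):
    # Quantile of value v under a discrete distribution pmf = {value: prob}:
    # drawn uniformly from [P[V < v], P[V <= v]], as in the definition above.
    below = sum(p for x, p in pmf.items() if x < v)
    return below + pmf.get(v, 0.0) * rng.random()

rng = random.Random(0)
pmf = {0.0: 0.5, 1.0: 0.5}  # point masses at 0 and at 1
samples = []
for _ in range(100_000):
    v = 0.0 if rng.random() < 0.5 else 1.0  # a draw from pmf
    samples.append(randomized_quantile(v, pmf, rng))
mean_q = sum(samples) / len(samples)
frac_below_quarter = sum(s < 0.25 for s in samples) / len(samples)
```

A draw of $0$ maps to a uniform quantile in $[0, 1/2]$ and a draw of $1$ to a uniform quantile in $[1/2, 1]$, so the mixture is uniform on $[0,1]$.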
If we think of the Virtual Value $\Phi_j(\vec v)$ of bidder $\vec v$ for item $j$ as equal to Myerson's ironed virtual value, $\bar\varphi_j(v_j)$, if item $j$ has the highest quantile in $\vec v$ (i.e., $\vec v \in R_j$), and equal to the value, $v_j$, if not, then Theorem 3 claims that the expected revenue of the optimal mechanism does not exceed the sum over all items of the expected maximum virtual value for that item. Theorem 3 is an application of Corollary 28 in [CDW16], together with the observation that our defined regions are upwards-closed.
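For reference, the shape of the resulting bound (a hedged reconstruction of Theorem 3's statement from the surrounding text; the notation follows the definitions above):

```latex
\[
\text{Rev}(D, n) \;\le\; \sum_{j=1}^{m} \mathbb{E}_{\vec{v}_1, \ldots, \vec{v}_n \sim D}
\left[ \max_{i \in [n]} \Phi_j(\vec{v}_i) \right],
\qquad \text{where } \Phi_j(\vec{v}) =
\begin{cases}
\bar{\varphi}_j(v_j) & \text{if } \vec{v} \in R_j, \\
v_j & \text{otherwise.}
\end{cases}
\]
```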
3 WarmUp: Single Bidder
In this section, we illustrate one portion of our improved analysis via the single-bidder setting. This will also help identify one significant point of departure from [EFF17a]. Observe that the EFFTW benchmark simplifies significantly for a single bidder, as the maximum over bidders disappears, and the benchmark simply sums the virtual value of the item with the highest quantile plus the values of all other items.
3.1 Brief Recap of [EFF17a]
The main idea in the single-bidder approach of [EFF17a] is to couple draws of $m$ bidders for item $j$ with draws of a single bidder for $m$ items via their quantiles. Specifically, they observe the following: consider fixed quantiles $q_1, \ldots, q_m$ drawn independently and uniformly from $[0,1]$.
- Benchmark Analysis: Use the quantiles drawn to determine values for each of the $m$ items. If $q_j$ is the largest quantile drawn, then item $j$ contributes $\bar\varphi_j(F_j^{-1}(q_j))$ to the benchmark. If $q_j$ is not the largest quantile drawn, then item $j$ contributes $F_j^{-1}(q_j)$ to the benchmark.
- VCG Analysis: Use the quantiles drawn to determine values of each of $m$ bidders for item $j$. If $q_i$ is the largest quantile drawn, then bidder $i$ contributes $\bar\varphi_j(F_j^{-1}(q_i))$ to the virtual surplus of VCG. If $q_i$ is not the largest quantile drawn, then some other bidder wins the item and pays at least $F_j^{-1}(q_i)$, so at least $F_j^{-1}(q_i)$ is contributed by some bidder to the revenue.
The above reasoning is not far from a formal proof that $\text{VCG}(D, m) \ge \text{Rev}(D, 1)$. Some care is required to make sure Theorem 1 is applied correctly (since we wish to count bidder $i$'s contribution to the revenue of VCG using her ironed virtual value, but the other bidders' contributions directly via payments), but the above reasoning is the key step. The main idea is that if we couple the quantiles drawn for the benchmark with quantiles drawn for selling just item $j$, then the revenue achieved from selling just item $j$ to $m$ bidders drawn from $D_j$ exceeds the contribution of item $j$ to the benchmark for all quantiles drawn.
3.2 Our Analysis
The main challenge that the previous analysis overcomes is the following: the contribution of item $j$ to the benchmark is sometimes in the form of a virtual value, and sometimes in the form of a value. There is no "natural" random variable that takes exactly this form, and it is tricky to analyze directly. So the previous analysis finds a clever way to "recreate" it using this coupling argument. Unfortunately though, direct coupling arguments like this cannot hope to prove a competition complexity better than $m$, as there are $m$ random variables that need to be coupled.
Our approach instead is to reason about the contribution of item $j$ to the benchmark exclusively in terms of virtual values, using Fact 1. Specifically, consider the following proposition, which rewrites the contribution of item $j$ to the benchmark. Below, $Q_m$ denotes the following random variable: first, draw one quantile $q$ uniformly at random from $[0,1]$. Then, draw $m-1$ quantiles uniformly at random from $[0,1]$ and label them $q_1$ thru $q_{m-1}$. If $q_i < q$ for all $i$, then set $Q_m = q$. Otherwise, let $i$ denote a uniformly random element from $\{i : q_i \ge q\}$ and set $Q_m = q_i$.
Proposition 1.
For all $D$ and all items $j$, $\mathbb{E}_{\vec v \sim D}[\Phi_j(\vec v)] \le \mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m))]$.
Proof.
The main idea is to get a lot of mileage from Fact 1: ideally, any time item $j$ does not have the largest quantile, rather than contribute the value $v_j$ to the benchmark, we will contribute the virtual value of a random draw from $D_j$ conditioned on exceeding $v_j$. To begin, let's couple the quantiles $(p_1, \ldots, p_m)$ drawn for the benchmark with the quantiles drawn for the experiment defining $Q_m$ so that $q = p_j$, $q_k = p_k$ for $k < j$, and $q_{k-1} = p_k$ for $k > j$ (if $j < m$; otherwise there is no such $k$ to define). Observe that indeed the quantiles are all drawn independently and uniformly from $[0,1]$. Moreover, we have:

- Whenever $p_j$ is the largest quantile drawn, $Q_m = q = p_j$. Therefore, we conclude that:

(1) $\mathbb{E}[\bar\varphi_j(v_j) \cdot \mathbb{1}[\vec v \in R_j]] = \mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m)) \cdot \mathbb{1}[\vec v \in R_j]]$

- Conditioned on $p_j$ not being the largest quantile drawn, $Q_m$ is a uniformly random sample from $[p_j, 1]$. This is because there is some strictly positive number of $q_i$'s such that $q_i \ge q$. Conditioned on being $\ge q$, each such quantile is drawn uniformly from $[q, 1]$. And then $Q_m$ picks one of them uniformly at random. Using Fact 1, we therefore conclude that:

(2) $\mathbb{E}[v_j \cdot \mathbb{1}[\vec v \notin R_j]] \le \mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m)) \cdot \mathbb{1}[\vec v \notin R_j]]$

It is now easy to see that the left-hand sides of the two equations sum together to yield item $j$'s contribution to the benchmark, while the two right-hand sides sum together to yield $\mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m))]$, proving the proposition. ∎
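The distributional fact behind Equation (2) — conditioned on at least one of $k$ uniform draws exceeding a threshold $q$, a uniformly random one of those exceeding $q$ is itself uniform on $[q,1]$ — can be checked by simulation (our own sketch, with a fixed threshold for clarity):

```python
import random

def surviving_sample(q, k, rng):
    # Draw k uniforms in [0,1]; if any are >= q, return a uniformly random one
    # of those. Returns None when all k draws fall below q.
    draws = [rng.random() for _ in range(k)]
    exceed = [x for x in draws if x >= q]
    return rng.choice(exceed) if exceed else None

rng = random.Random(0)
q, k = 0.6, 3
kept = [x for x in (surviving_sample(q, k, rng) for _ in range(200_000)) if x is not None]
cond_mean = sum(kept) / len(kept)                   # uniform on [0.6, 1] has mean 0.8
frac_low = sum(x < 0.7 for x in kept) / len(kept)   # uniform on [0.6, 1]: (0.7-0.6)/0.4
```

Both statistics match the uniform distribution on $[q, 1]$, as the proof requires.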
Proposition 1 gives an upper bound on the contribution of item $j$ to the benchmark, written as the expectation of a virtual value of some distribution (namely, that of $F_j^{-1}(Q_m)$). This is convenient because we can write the revenue achieved by using Myerson's optimal auction for selling item $j$ to $N$ bidders as the expectation of a virtual value of another distribution (the maximum of $N$ draws from $D_j$). Therefore, if we can relate these two distributions (for instance, by proving that one stochastically dominates the other), we can relate these two expectations. Below, let $M_N$ denote the maximum of $N$ i.i.d. draws from the uniform distribution on $[0,1]$.

Corollary 1.
If $M_N$ stochastically dominates $Q_m$, then for all $D$ that are additive over independent items, $\text{Rev}(D, 1) \le \text{SRev}(D, N)$.
Proof.
Observe first that by Theorem 1 we have:
$$\text{SRev}(D, N) \ge \sum_j \mathbb{E}[\bar\varphi_j(F_j^{-1}(M_N))].$$
Observe that $\bar\varphi_j(\cdot)$ is a monotone non-decreasing function, and $F_j^{-1}(\cdot)$ is also monotone non-decreasing. As such, if $M_N$ stochastically dominates $Q_m$, then $\bar\varphi_j(F_j^{-1}(M_N))$ stochastically dominates $\bar\varphi_j(F_j^{-1}(Q_m))$, which allows us to conclude that $\mathbb{E}[\bar\varphi_j(F_j^{-1}(M_N))] \ge \mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m))]$. Therefore, we may conclude that if $M_N$ stochastically dominates $Q_m$, then $\text{Rev}(D, 1) \le \sum_j \mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m))] \le \sum_j \mathbb{E}[\bar\varphi_j(F_j^{-1}(M_N))] \le \text{SRev}(D, N)$. ∎
At this point, we've reduced the problem of deriving competition complexity upper bounds to a purely mathematical problem: relating the distributions $M_N$ and $Q_m$ via stochastic dominance. The proof of this claim for $n = 1$ is not an especially instructive special case, so we defer the final step to Section 5, and wrap up our warm-up by citing Theorem 8:
Corollary 2 (of Theorem 8).
When $N \ge \ln(1+m) + 2$, $M_N$ stochastically dominates $Q_m$.
Theorem 4.
Let $D$ be a distribution that is additive over $m$ independent items. Then $\text{Rev}(D, 1) \le \text{SRev}(D, N)$ for $N = \lceil \ln(1+m) \rceil + 2$. If each $D_j$ is regular, then also $\text{Rev}(D, 1) \le \text{VCG}(D, N+1)$.
Proof.
Theorem 3 upper bounds $\text{Rev}(D, 1)$ with the EFFTW benchmark. Proposition 1 further upper bounds the EFFTW benchmark with $\sum_j \mathbb{E}[\bar\varphi_j(F_j^{-1}(Q_m))]$, which is the sum over all items of the expected virtual value of a quantile drawn from $Q_m$. Corollary 1 argues that if $Q_m$ is stochastically dominated by $M_N$ (the maximum of $N$ i.i.d. draws uniformly from $[0,1]$), then we may replace $Q_m$ with $M_N$ in the upper bound, which yields exactly $\sum_j \mathbb{E}[\bar\varphi_j(F_j^{-1}(M_N))] \le \text{SRev}(D, N)$. Finally, Corollary 2 claims that indeed $M_N$ stochastically dominates $Q_m$ for this choice of $N$ (and the final claim when each $D_j$ is regular comes from going from SRev to VCG using Bulow-Klemperer). ∎
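While the full stochastic dominance claim is deferred to Section 5, a weaker consequence — that the maximum of $N$ uniform draws exceeds the benchmark variable in expectation — is easy to check numerically. The sketch below implements our reconstruction of the $Q_m$ experiment for $m = 4$ (so $N = \lceil \ln 5 \rceil + 2 = 4$); a direct calculation under this reconstruction gives $\mathbb{E}[M_4] = 4/5$ and $\mathbb{E}[Q_4] = 0.725$:

```python
import random

def benchmark_quantile(m, rng):
    # Reconstructed Q_m experiment: a base quantile q plus m - 1 extra quantiles;
    # output q if all extras fall below q, else a random extra that exceeds q.
    q = rng.random()
    exceed = [x for x in (rng.random() for _ in range(m - 1)) if x >= q]
    return q if not exceed else rng.choice(exceed)

rng = random.Random(0)
trials = 200_000
mean_bench = sum(benchmark_quantile(4, rng) for _ in range(trials)) / trials
mean_max = sum(max(rng.random() for _ in range(4)) for _ in range(trials)) / trials
```

Comparing means is of course much weaker than stochastic dominance, but it illustrates that a logarithmic (in $m$) number of draws already suffices in expectation.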
This concludes our exposition for a single bidder. Above, we introduced one new idea which departs from prior work: instead of directly treating the benchmark, which involves both values and virtual values, rewrite the benchmark to involve only virtual values, and reduce the problem to purely mathematical questions about stochastic dominance of $M_N$ and $Q_m$.
4 Multiple Bidders
In this section, we overview our approach for the general case. The key simplifying feature of the single-bidder case that allowed us to isolate one key idea is that for each item $j$, either that item has the highest quantile or it doesn't. In the multi-bidder case, there are multiple bidders, some of whom will have their highest quantile on item $j$, and some of whom won't. So we must actually engage with the "$\max_i$" in the benchmark. Our approach will be different depending on whether $n$ is big or little relative to $m$. We begin with the little $n$ case, as it is more similar to the single-bidder case.
4.1 Part One: When $n \le m$
Our key step is conceptually similar to Proposition 1, but the random variables involved are necessarily more complex. We first make the following observation (also made in [EFF17a]). Below, $v^{(1)}_j$ denotes the highest value for item $j$ (among all $n$ bidders). All omitted proofs can be found in Appendix D.
Observation 1.
Next, we want to rewrite the right-hand side above using random variables similar to $Q_m$. This time, let $Q_{n,m}$ denote the following random variable: first, draw $n$ quantiles independently and uniformly at random from $[0,1]$. Relabel them so that $q_1 \ge q_2 \ge \cdots \ge q_n$. Then, draw $m-1$ quantiles uniformly at random from $[0,1]$ and label them $r_1$ thru $r_{m-1}$. If $r_i < q_1$ for all $i$, then set $Q_{n,m} = q_1$. Otherwise, let $i$ be a uniformly random element from $\{i : r_i \ge q_1\}$ and set $Q_{n,m} = r_i$.
Proposition 2.
For all $D$, all $n$, and all items $j$:
Proposition 2 helps us replace instances of $v_{ij}$ in the benchmark with a randomly drawn virtual value, but we still need to do the same for $v^{(1)}_j$ (which so far has essentially just been rewritten as $F_j^{-1}(q_1)$). Now, let $u$ be a uniformly random draw from $[q_1, 1]$, and define $\hat Q_{n,m}$ to be the variable that outputs $u$ whenever $Q_{n,m}$ would output $q_1$ (and outputs $Q_{n,m}$ otherwise). By making use of Fact 1, we can conclude:
Corollary 3.
Now, we are nearly ready to wrap up the $n \le m$ case. Similarly to the single-bidder case, define $M_N$ to be the maximum of $N$ i.i.d. draws uniformly from $[0,1]$.
Corollary 4.
If $M_{n+c}$ stochastically dominates $\hat Q_{n,m}$, then $\text{Rev}(D, n) \le \text{SRev}(D, n+c)$. If each $D_j$ is regular, then $\text{Rev}(D, n) \le \text{VCG}(D, n+c+1)$.
Finally, Theorem 8 claims that $M_{n+c}$ indeed stochastically dominates $\hat Q_{n,m}$ when $c \ge n(\ln(1+m/n)+2)$. Combining Corollary 4 with Theorem 8 therefore concludes:
Theorem 5.
For all $D$ that are additive over independent items, $\text{Rev}(D, n) \le \text{SRev}(D, n + c)$ for $c = \lceil n(\ln(1+m/n)+2) \rceil$. If each marginal of $D$ is regular, then $\text{Rev}(D, n) \le \text{VCG}(D, n + c + 1)$.
When $n \le m$, this is tight up to constant factors, due to a lower bound of [FFR18] (see Appendix A for the construction). But when $n \gg m$, this is still linear in $n$. We therefore provide an alternative argument in the following section which achieves the optimal (up to constant factors) competition complexity achievable starting from the EFFTW benchmark, namely $O(\sqrt{nm})$.
4.2 Part Two: When $n \ge m$
At a high level, the main difference between how we should analyze the $n \le m$ case and the $n \ge m$ case is as follows: Observation 1 immediately upper bounds the EFFTW benchmark by upper bounding the values contributed to the benchmark with $v^{(1)}_j$. When $n \le m$, this upper bound is unlikely to be much of a relaxation, because it's likely that the bidder with the highest value for item $j$ contributes to the benchmark anyway. But when $n \gg m$, we're extremely unlikely to have this happen, and the upper bound is wasteful. Indeed, this step is what limits the analysis in [EFF17a] to a bound that is linear in $n$. The first step for the $n \ge m$ case is to address this.
Proposition 3.
For all items $j$, all $n$, and all distributions $D$ that are additive over independent items:
We now want to take a similar step to the previous case, and replace the remaining values with a randomly drawn virtual value using Fact 1. Here, define the random variable $Z$ as follows. First, draw $n$ quantiles independently and uniformly at random from $[0,1]$. Then, randomly draw one additional quantile uniformly from the interval above the relevant threshold (as in Fact 1), and set $Z$ equal to this final draw.
Lemma 1.
For any single-dimensional distribution $D_j$, and any $n$:
Corollary 5.
If $M_{n+c}$ stochastically dominates $Z$, then for any $D$ that is additive over independent items, $\text{Rev}(D, n) \le \text{SRev}(D, n+c)$. If each marginal is regular, then $\text{Rev}(D, n) \le \text{VCG}(D, n+c+1)$.
Finally, Theorem 7 states that $M_{n+c}$ stochastically dominates $Z$ whenever $c \ge 9\sqrt{nm}$. Setting $c = \lceil 9\sqrt{nm} \rceil$, we get:
Theorem 6.
For all $D$ that are additive over independent items, $\text{Rev}(D, n) \le \text{SRev}(D, n + \lceil 9\sqrt{nm} \rceil)$. If each marginal is regular, then $\text{Rev}(D, n) \le \text{VCG}(D, n + \lceil 9\sqrt{nm} \rceil + 1)$. In particular, when $n \ge m$, the competition complexity is sublinear in $n$.
5 Stochastic Dominance via Additional Samples
In this section, we consider purely mathematical questions about whether one distribution stochastically dominates another (Sections 3 and 4 outline the connection between these problems and our main result). Recall the following ingredients in our experiments:
- $q_1, q_2, \ldots$ are i.i.d. draws from the uniform distribution on $[0,1]$, and then relabeled so that they are in decreasing order.
- $r_1, r_2, \ldots$ are i.i.d. draws from the uniform distribution on $[0,1]$, and then relabeled so that they are in decreasing order.
- $s_1, s_2, \ldots$ are i.i.d. draws from the uniform distribution on $[0,1]$, and then relabeled so that they are in decreasing order.
- $u$ is a single random draw from the uniform distribution on $[0,1]$.
SRev Experiment: Output the maximum of $n + c$ i.i.d. draws from the uniform distribution on $[0,1]$.
Big Benchmark Experiment(): Output .
Little Benchmark Experiment(): Let be the largest index such that (if such a exists). If no such exists, output . Otherwise, pick an index uniformly at random from and output .
The main results of this section are as follows:
Theorem 7.
When $c \ge 9\sqrt{nm}$, the output of the SRev Experiment stochastically dominates the output of the Big Benchmark Experiment.
Theorem 8.
When $c \ge n(\ln(1+m/n)+2)$, the output of the SRev Experiment stochastically dominates the output of the Little Benchmark Experiment.
Intuitively, we might expect the SRev Experiment to begin stochastically dominating the benchmark experiments right around these values of $c$, because the expectations of the two outputs cross at approximately these thresholds. Of course, this observation doesn't come close to proving stochastic dominance, especially because the random variables within each experiment aren't independent. But it does give us an idea of the right ballpark to shoot for. The following proposition will be used in the proofs of both theorems.
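The back-of-the-envelope comparison of expectations relies on the standard identity that the maximum of $k$ i.i.d. draws from $U[0,1]$ has expectation $k/(k+1)$, which a quick simulation confirms (our own sketch, not from the paper):

```python
import random

def mean_max_uniform(k, trials=200_000, seed=0):
    # Monte Carlo estimate of E[max of k i.i.d. U[0,1] draws]; closed form: k/(k+1).
    rng = random.Random(seed)
    return sum(max(rng.random() for _ in range(k)) for _ in range(trials)) / trials

est = mean_max_uniform(5)  # closed form: 5/6
```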
Proposition 4.
Let . Then for all , . When , this can be improved to .
Before getting into the proof, let’s unpack the role of conditioning on . and are independent, so . On the other hand, and are positively correlated: the lower bound on the range from which is drawn is , which is positively correlated with . So certainly if we could prove the lemma without the conditioning on , the desired proposition would hold. This approach works for (and indeed shows up in our proof as a base case), but without conditioning the conclusion is otherwise false for larger .
Proof.
The proof will proceed by induction on . We begin with the base case, . is easy to reason about: is just the maximum of i.i.d. draws uniformly from . So:
(3) 
Now we turn to . As referenced in the foreword to the proof, for this case the proposition statement holds even without conditioning on . Indeed, observe that without conditioning on , is just the second-highest of i.i.d. draws uniformly from , and is drawn uniformly from , but conditioned on exceeding . That is, is actually identically distributed to , and is distributed according to the maximum of i.i.d. draws uniformly from . Therefore, when , is identically distributed to , and the conclusion holds. That is:
(4) 
As such, we have proved the base case (in fact, a slightly stronger claim): for all , and when , stochastically dominates . Now we turn to the inductive step, which is significantly more involved. As referenced in the foreword, we must take a different approach for larger , as the desired claim is false if we remove conditioning on .
To this end, we’ll first observe that when , , and when , . So the desired inequalities hold at both endpoints of , and we’d like to reason about . To accomplish this, it will actually be easier to compare to (observe that this simply multiplies both conditional probabilities in our original comparison by ), and consider the derivative with respect to .
So let denote the density of . Then . By Leibniz’ rule, the derivative of this with respect to is:
Let’s first unpack the leftmost term with the following lemma.
Lemma 2.
For all , .
Proof.
Observe that, conditioned on , are (sorted) i.i.d. draws uniformly at random from , and is the highest of them. Put another way, conditioned on is identically distributed to conditioned on . This therefore implies that conditioned on and conditioned on are identically distributed as well. ∎
Now we turn to the rightmost term.
Lemma 3.
For all ,