Smoothed Analysis of Multi-Item Auctions with Correlated Values

Consider a seller with m heterogeneous items for sale to a single additive buyer whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best "simple" mechanism achieves revenue at most one ([Briest et al. 15], [Hart and Nisan 13]), even when m=2. This result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of "independent items". In this work we initiate a smoothed analysis of such multi-item auction settings. We consider a buyer whose item values are drawn from an arbitrarily correlated multi-dimensional distribution, then randomly perturbed with magnitude δ under several natural perturbation models. On one hand, we prove that the ([Briest et al. 15], [Hart and Nisan 13]) construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains. On the other hand, we provide a smoothed model such that the approximation guarantee of simple mechanisms is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantees hold when n buyers have values drawn from an arbitrarily correlated mn-dimensional distribution (without any dependence on n). Taken together, these analyses further pin down key properties of correlated distributions that result in large gaps between simplicity and optimality.





1 Introduction

How should a revenue-maximizing seller sell heterogeneous goods to interested buyers? This problem has been extensively studied by economists and computer scientists alike, from a variety of perspectives. One major highlight from these works is a collection of impossibility results, essentially proving that one cannot hope to find a mechanism that is simple, yet optimal in general settings [5, 22, 23, 16]. A major highlight on the flip side is a collection of approximation results, where simple mechanisms are now known to be approximately optimal in quite general (but still structured) settings [10, 24, 11, 21, 2].

Perhaps the clearest example demonstrating the interaction of these two lines of work is the following. Consider a single additive buyer whose values for the two items are drawn jointly from a two-dimensional distribution D (that is, (v_1, v_2) is drawn from D, the buyer's value for receiving both items is v_1 + v_2, and their value for receiving just item j is v_j). Then there exist correlated distributions D such that the revenue-optimal mechanism achieves infinite revenue, yet the revenue of any simple mechanism is at most 1 [5, 22]. Without getting into details of exactly what "simple" means (formally, these results show lower bounds on a measure of simplicity called the menu-size complexity), this impossibility result rules out any hope of simple mechanisms that are even approximately optimal for all two-dimensional D. Still, a fantastic complementary result shows that if the two item values are drawn independently, selling separately (post a price p_j for each item j and let the buyer purchase a single item j for p_j, or both for p_1 + p_2) guarantees a 2-approximation [21]. So while the impossibility results for arbitrary D are quite strong, compelling positive results still exist under natural assumptions.
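To make "selling separately" concrete, here is a minimal simulation that computes the empirically revenue-maximizing posted price per item. The uniform marginals are an illustrative assumption; any independent marginals work the same way.

```python
import random

def best_posted_price(samples):
    """Empirically revenue-maximizing take-it-or-leave-it price for one item."""
    xs = sorted(samples, reverse=True)
    n = len(xs)
    # Pricing at xs[k] sells to the k+1 highest of the n sampled values.
    best_k = max(range(n), key=lambda k: xs[k] * (k + 1) / n)
    return xs[best_k]

random.seed(0)
# Two independent item values; uniform on [0, 1] is an illustrative assumption.
item1 = [random.random() for _ in range(20000)]
item2 = [random.random() for _ in range(20000)]
p1, p2 = best_posted_price(item1), best_posted_price(item2)
srev = p1 * sum(s >= p1 for s in item1) / len(item1) \
     + p2 * sum(s >= p2 for s in item2) / len(item2)
print(p1, p2, srev)  # prices near 1/2, selling-separately revenue near 1/2
```

For uniform marginals the optimal per-item price is 1/2, yielding revenue 1/4 per item; the empirical prices and revenue concentrate around these values.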

This avenue turned out to be quite productive: a long line of work recently culminated in a simple and approximately optimal mechanism for any number of buyers with subadditive valuations over any number of independent items [10, 11, 12, 25, 21, 27, 2, 35, 32, 7, 9, 6]. Modeling assumptions aside, this body of work constitutes a major contribution to the theory of optimal auction design.

The impact of these works notwithstanding, one key direction is left largely unaddressed: even as these works generalized in various directions (arbitrary feasibility constraints, combinatorial valuations, etc.), the "independent items" assumption remained. Even the few works that pose models of limited correlation have some underlying notion of independence (e.g. there are "independent features," and item values depend linearly on features) [12, 3]. While "independent items" is a perfectly natural assumption (and we have greatly deepened our understanding of mechanism design under it), it was never intended to be ubiquitous in all future works. This is especially true due to the nature of the motivating impossibility result: the distribution D witnessing these impossibilities is so carefully crafted (we overview the construction in Section 3) that it is far removed from any "real-world" motivation. That is, the constructions provided in [5, 22] require carefully building D by perfectly placing the infinitely many points in its support just so, and even a tiny deviation in the construction would cause the entire argument to collapse. The thoughtful reader may at this point be thinking: if this construction is so fragile that even a tiny deviation breaks it, perhaps a smoothed analysis might prove insightful. Indeed, this is the focus of this paper.

So, what might a "smoothed" distribution look like? Given an arbitrarily correlated distribution D, its smoothing D̂ first draws a valuation vector v from D, and then randomly perturbs v to v̂ (where the size of the perturbation is parameterized by some δ > 0). This makes sense for the same reason that it makes sense in all other applications of smoothed analysis: these distributions come from somewhere (e.g. past data), inevitably in the presence of tiny noise.

1.1 What Makes a Good Smoothed Model?

Once we've decided to do smoothed analysis, we also need to pick a model of random perturbation. We'll be interested in balancing relevance (e.g. how natural a model is) with transparent analysis (e.g. what insights can we derive?). To help illustrate this point, consider the following perturbation proposal: assume that D is supported on [0, 1]^m (the constructions of [5, 22] work subject to this restriction, but one has to replace "infinite gap" with "unbounded gap"), and perturb v by adding a uniformly random vector from [0, δ]^m.

This is indeed a natural starting point. Unfortunately, the model lends itself to a trivial analysis. Specifically, we can first conclude that the optimal revenue for D (respectively, D̂) is at most m (respectively, m(1 + δ)). Moreover, for D̂, we can set a price of mδ/2 on the grand bundle (that is, allow the buyer to pay mδ/2 to receive all items, or pay nothing to receive nothing), which sells with probability at least 1/2 (the added noise alone exceeds mδ/2 with probability 1/2). Therefore, the revenue achieved by bundling all items together (henceforth, BRev) is at least mδ/4, immediately guaranteeing an O(1/δ)-approximation. For the sake of completeness, we include an improved analysis in Appendix E showing that BRev in fact guarantees a sharper approximation in this model (and that this is nearly tight). So, while technically there's a "positive result" here, we simply don't learn much from this exercise. For readers especially interested in the modeling aspect, Appendix F overviews similar issues with alternative models.
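The grand-bundle pricing argument is easy to check numerically. The sketch below assumes the worst case for the seller, a point mass at the zero vector, so the buyer's entire value comes from the additive noise:

```python
import random

random.seed(1)
m, delta, trials = 5, 0.1, 20000

# Worst case for the seller: the original distribution is a point mass at the
# zero vector, so the buyer's entire value comes from the additive noise.
price = m * delta / 2  # grand-bundle price from the argument above
sold = sum(
    sum(random.uniform(0, delta) for _ in range(m)) >= price
    for _ in range(trials)
)
sell_prob = sold / trials
brev_lower = price * sell_prob
print(sell_prob, brev_lower)  # sell_prob is close to 1/2, so BRev >= m*delta/4
```

By symmetry of the sum of uniforms around mδ/2, the empirical sale probability concentrates at 1/2, confirming the BRev ≥ mδ/4 floor.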

From here, we consider two natural modifications. We call the first Rectangle-Shift, which essentially replaces the additive shift of the previous model with a multiplicative shift. In this model, values are perturbed to v̂ = (v_1 · (1 + δ_1), v_2 · (1 + δ_2)), where each δ_j is drawn independently from U[0, δ]. Our first main result proves that in fact the infinite gap persists in this model! That is, for all δ, there exists a correlated D such that the corresponding D̂ satisfies Rev(D̂) = ∞ but BRev(D̂) < ∞ (which implies that any "simple" mechanism achieves finite revenue as well).

Theorem 1.1.

For all δ > 0, there exists a bivariate distribution D such that for its corresponding perturbed distribution D̂ in the Rectangle-Shift model, Rev(D̂) = ∞ and BRev(D̂) < ∞.

We call our second model Square-Shift, which essentially replaces the absolute scale of the initial (additive) model with a scale proportional to the buyer's largest item value. In this model, values are perturbed to v̂ = (v_1 + δ_1, v_2 + δ_2), where each δ_j is drawn independently from U[0, δ · max{v_1, v_2}]. Our second main result shows that Rev(D̂) ≤ O(1/δ) · BRev(D̂) (and this is nearly tight, see Appendix C.2).

Theorem 1.2.

In the Square-Shift model, for a single additive buyer and two items, Rev(D̂) ≤ O(1/δ) · BRev(D̂).

Before continuing, we briefly share the distinguishing feature causing Rectangle-Shift to admit an infinite gap, yet Square-Shift to admit finite approximations: the angle by which a buyer’s valuation vector may be perturbed. Observe that in the Rectangle-Shift model, valuation vectors extremely close to an axis remain extremely close to that axis after perturbation. This fact turns out to be crucial in enabling a lower bound construction. On the other hand, valuation vectors in the Square-Shift model are likely to have their angle non-trivially perturbed. This property turns out to be crucial in establishing our approximation guarantee. Of course, both results require much more than this simple observation, but this property is the distinguishing factor causing results in the two models to diverge.

We conclude with two additional technical observations. First, note that in both models the magnitude of the perturbation is proportional to the buyer's values (unlike the initial additive model), so there is no absolute floor on BRev(D̂). This means that the entirety of our analysis rests on studying Rev(D̂), and determining whether the perturbations guarantee that it's finite. Second, note that in both models D̂ stochastically dominates D. Therefore, when we prove in Theorem 1.2 that the gap between Rev(D̂) and BRev(D̂) is bounded, this is not because we perturbed the buyer into valuing the items less, but really because this perturbation negates whatever bizarre properties of the original D led to an infinite gap. This serves as another example of revenue non-monotonicity [23].

Brief Discussion of Models. The refuted additive-noise model perhaps seems most natural from a mathematical perspective (indeed, it is the most similar to models used in prior smoothed analysis). From an economic perspective, this is somewhat natural as well (each consumer in the population makes errors on the scale of, e.g., $10, while no consumer values any item more than $1 million). The "Rectangle-Shift" model is also uncontroversially natural from both perspectives: a consumer's value for each item is inaccurately measured, proportionally to her value for that item. The "Square-Shift" model requires a touch more thought. From a mathematical perspective, it simply replaces the universal $1 million scale for all consumers in the population with a scale proportional to that consumer's value for the items. From an economic perspective, it may initially seem uncompelling that a high value for one item may cause larger error in estimating the value of the other item. However, numerous works in the behavioral economics literature suggest that bidders indeed value items differently in the presence or absence of other, more or less valuable items [1, 30, 26, 28]. For example, in an experiment of [30], the authors observed that a buyer's willingness to pay for a cheap item (a CD) was (statistically significantly) higher when a second item (a sweater) was being sold at $80 than when it was being sold at $10.

1.2 Extensions: Many Buyers and Many Items

After resolving the single-buyer, two-item case, the natural question is whether a similar analysis extends to multiple buyers or multiple items. For the Rectangle-Shift model, the impossibility results extend to the multi-item case, so we focus on the Square-Shift model. For n buyers and two items, we consider an arbitrarily correlated 2n-dimensional distribution D (denoting the n buyers' values for two items). Our perturbation then draws δ_{i,j} independently and uniformly from [0, δ] for all buyers i and items j, and maps v_{i,j} to v_{i,j} + δ_{i,j} · max{v_{i,1}, v_{i,2}}. In other words, while the buyers' values are correlated, the perturbations are independent (and only depend on that buyer's values).

In this model, we're again able to prove that DRev(D̂) ≤ O(1/δ) · RonRev(D̂) (and this is again nearly tight). Here, DRev denotes the revenue of the optimal dominant-strategy truthful mechanism (that is, it is in each buyer's interest to report their true value, no matter the behavior of the other buyers; contrast this with Bayesian truthful, where it is in each buyer's interest to report their true value as long as the other buyers do so as well; note that we cannot hope to replace DRev with BIC-Rev here due to [18] - see Section 2 for further discussion), and RonRev denotes the revenue achievable by running Ronen's simple single-item auction [31] for the grand bundle, i.e. treat the grand bundle as a single item and run Ronen's auction. See Section 2 for the definition of Ronen's auction — it is a second-price auction with reserve, but the reserve depends on the other buyers' bids. If D happens to be independent across buyers, but still correlated across items, we further get BIC-Rev(D̂) ≤ O(1/δ) · RonRev(D̂), where BIC-Rev is the revenue of the optimal BIC mechanism. Note that the guarantee does not depend on n.

Finally, we consider the single-buyer, multi-item case. The extension of the Square-Shift model is the obvious one: perturb each of the m values independently by an additive U[0, δ · max_j v_j] shift. Here, we show that our techniques extend, but give an approximation guarantee that is exponential in m. Furthermore, the exponential dependence on m is unavoidable: a simple counterexample provides a D such that the gap between Rev(D̂) and BRev(D̂) is exponential in m (Appendix C.3). Our analysis extends to the m-item n-buyer case, again obtaining an approximation guarantee exponential in m.

Theorem 1.3.

In the Square-Shift model, for a single additive buyer and m items, Rev(D̂) ≤ (1/δ)^O(m) · BRev(D̂).

Theorem 1.4.

For the n additive buyer, two item, Square-Shift model,

OPT(D̂) ≤ O(1/δ) · RonRev(D̂), where OPT = DRev for correlated buyers, and OPT = BIC-Rev for independent buyers.

Our analysis further extends to settings beyond those where the buyers' valuations are additive across items. For example, all of our positive results hold for the general class of valuations which are additive subject to downwards-closed constraints, at the cost of an additional multiplicative factor; we discuss this in detail in Section 5.

1.3 Related Work and Roadmap

The present paper is the first to explore smoothed analysis in auction design. Other hybrid worst-case/average-case guarantees have been studied, for instance, in the digital goods setting (e.g. [20, 19, 13]), where at a high level, auctions compete instance-by-instance against any auction that could potentially be optimal in the average-case. Work of Carroll [8] on robust mechanism design is also thematically related, and argues that simple mechanisms are optimal if the auctioneer wishes to obtain a worst-case guarantee against all value distributions consistent with given marginals (but neither directly nor indirectly addresses the concepts in the constructions of [5, 22]).

There is a growing body of related literature on multi-item auction design, largely proving impossibility results for arbitrarily correlated distributions (e.g. [5, 22]), or approximation results for distributions with “independent items” (e.g. [10, 11, 12, 25, 21, 27, 2, 35, 32, 7, 9, 6]). Limited work exists on models with limited correlation, but such models make use of the same tools, essentially replacing “independent items” with “independent features” and items that are linear combinations of features [12, 3]. The present paper is the first tractable multi-item model of limited correlation that doesn’t rely on these tools.

Smoothed analysis is a popular framework for analyzing algorithms on “real-world worst-case” inputs. Smoothed analysis most commonly refers to smoothed computational complexity (e.g. an algorithm might run in exponential time in the worst case, but in polynomial time if the worst-case inputs are randomly perturbed), and has been an extremely influential paradigm [34]. For instance, the Simplex Method for solving LPs is known to take exponential time in the worst case, but has smoothed-polynomial computational complexity [33]. More similar in spirit to the present paper is prior work that considers the smoothed competitive ratio of online algorithms [4] or smoothed approximation ratio of mechanisms [17]. The motivation for considering smoothed analysis in these works is, of course, similar to ours, but there is no similarity in techniques: the process of proving smoothed guarantees is a domain-specific process.

Roadmap: Section 2 poses our model and some preliminaries. Section 3 presents our lower bound for the Rectangle-Shift model. In Section 4 we present our results in the Square-Shift model, including its extension to multiple buyers and multiple items. Appendix A contains a detailed summary of our results in the Square-Shift model, as well as comparisons with known results for independent items. Extensions beyond additive buyers are presented in Section 5. Nearly tight bounds for the additive noise model can be found in Appendix E.

2 Preliminaries

Throughout this paper we study auctions with n buyers who are bidding for m items. Each buyer i draws its valuation vector v_i = (v_{i,1}, …, v_{i,m}) for the items from an m-dimensional distribution D_i with density f_i, and we refer to D as the joint mn-dimensional distribution. Buyer i has value v_{i,j} for the j-th item, with marginal distribution D_{i,j}. Our results do not assume any kind of independence between buyers or items. That is, v_{i,j} can be correlated with v_{i,j'} (same buyer, different item), as well as with v_{i',j'} (different buyer and possibly different item). For now, all buyers considered are additive, quasi-linear, and risk-neutral (or expected utility maximizers). See Section 5 for extensions to non-additive valuation functions.

Mechanisms and Benchmarks.

As with many results in approximate mechanism design we compare the optimal revenue attainable to benchmarks that result from simple mechanisms. For a single buyer, let Rev(D) be the optimal revenue that can be extracted by a truthful mechanism when valuations are sampled according to D. Let BRev(D) be the optimal revenue that can be attained by selling the grand bundle as a single item — that is to say, the mechanism posts a single price and allocates all the items if the bidder is willing to pay that price; otherwise it charges nothing and allocates nothing. For multiple buyers, let BIC-Rev(D) denote the optimal revenue that can be extracted by a BIC mechanism (see Appendix B for a formal definition for correlated buyers), DRev(D) denote the optimal revenue that can be extracted by a DSIC mechanism, and RonRev(D) denote the revenue extracted by Ronen's single-item auction (treating the grand bundle as a single item). Ronen's auction will always award the item to the highest bidder (or no one), and the price charged is the maximum of the second-highest bid and a per-buyer reserve (that depends on the other buyers' bids). Ronen shows that RonRev(D) ≥ DRev(D)/2 for any single-item setting. Fu et al. [18] show that DRev does not generally guarantee any finite approximation to BIC-Rev, but that under some assumptions it attains a 5-approximation. (All of these claims assume that the mechanism is required to be ex-post individually rational, due to seminal work of Cremer and McLean [14]: see Appendix B for further discussion.)
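The following sketch illustrates the lookahead structure of Ronen's auction described above. The empirical price search and the toy uniform instance are our own illustrative assumptions, not the paper's implementation:

```python
import random

def lookahead_price(cond_samples, floor):
    """Empirical revenue-maximizing take-it-or-leave-it price (at least floor)
    against samples from the top bidder's conditional value distribution."""
    xs = sorted(cond_samples, reverse=True)
    n = len(xs)
    best_p = floor
    best_rev = floor * sum(1 for s in xs if s >= floor) / n
    for k, p in enumerate(xs):
        if p < floor:
            break
        rev = p * (k + 1) / n  # price p sells to the k+1 highest of n samples
        if rev > best_rev:
            best_rev, best_p = rev, p
    return best_p

def ronen_auction(bids, cond_sampler, k=4000):
    """Lookahead auction (sketch): award to the highest bidder at the better of
    the second-highest bid and a reserve tuned to that bidder's value
    distribution conditioned only on the *other* bids."""
    i = max(range(len(bids)), key=lambda j: bids[j])
    others = [b for j, b in enumerate(bids) if j != i]
    second = max(others) if others else 0.0
    samples = [cond_sampler(i, others) for _ in range(k)]
    price = lookahead_price(samples, second)
    return (i, price) if bids[i] >= price else (None, 0.0)

random.seed(2)
# Toy instance (illustrative assumption): two bidders with independent U[0,1]
# values, so the conditional distribution ignores the other bids entirely.
winner, price = ronen_auction([0.9, 0.3], lambda i, others: random.random())
print(winner, price)
```

Note that the reserve faced by the winner never depends on the winner's own bid, which is what keeps the auction dominant-strategy truthful.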


We consider two different smoothing models and refer to the resulting distributions as D̂. The magnitude of the perturbation depends on a parameter δ. We write f̂ for the density of D̂. S_δ(v) is the set of points that v could map to under a certain model with parameter δ; the model will always be clear from the context and is omitted from notation. We also drop δ when clear from the context, and write S(v) for the set of points that v maps to. The models we consider are (see Figures 0(a) and 0(b) in Appendix A for illustrations):

  • Rectangle-Shift: buyer i's value v_{i,j} is replaced by v̂_{i,j} = v_{i,j} · (1 + δ_{i,j}), where each δ_{i,j} is sampled independently from U[0, δ]. Intuitively, this spreads the mass at v_i uniformly on the axis-parallel parallelepiped with side lengths δ · v_{i,1}, …, δ · v_{i,m} and v_i as its smallest vertex.

  • Square-Shift: buyer i's value v_{i,j} is replaced by v̂_{i,j} = v_{i,j} + δ_{i,j} · max_k v_{i,k}, where each δ_{i,j} is sampled independently from U[0, δ]. Intuitively, this spreads the mass at v_i uniformly on the m-dimensional cube with side length δ · max_k v_{i,k} and v_i as its smallest vertex.
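Both perturbations are straightforward to implement; the sketch below follows the two bullets above (the specific test vector is an illustrative assumption):

```python
import random

def rectangle_shift(v, delta):
    """Rectangle-Shift: scale each coordinate up by an independent factor in
    [1, 1 + delta], i.e. v_j -> v_j * (1 + U[0, delta])."""
    return [vj * (1 + random.uniform(0, delta)) for vj in v]

def square_shift(v, delta):
    """Square-Shift: add to each coordinate an independent bump of up to delta
    times the largest coordinate, i.e. v_j -> v_j + U[0, delta * max(v)]."""
    scale = delta * max(v)
    return [vj + random.uniform(0, scale) for vj in v]

random.seed(3)
v, delta = [0.001, 1.0], 0.2
r, s = rectangle_shift(v, delta), square_shift(v, delta)
# A vector hugging an axis keeps hugging it under Rectangle-Shift (its small
# coordinate grows by at most a (1 + delta) factor), but Square-Shift can push
# it far from the axis (the small coordinate can gain up to delta * max(v)).
print(r, s)
```

The comment in the last stanza is exactly the angle-perturbation distinction discussed in Section 1.1.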

As mentioned earlier, it is known that even for the single buyer, two items case, the gap between Rev(D) and BRev(D) is unbounded when there is correlation between the items [5, 22]. In this paper we compare Rev(D̂) and BRev(D̂) as a function of δ.

3 Persistence of Infinite Gap in the Rectangle-Shift Model

In this section we first overview the key ideas from [5, 22], and then present our construction witnessing an infinite gap in the Rectangle-Shift model. Proofs missing from this section can be found in Appendix C.1. Recall this section's main result, Theorem 1.1.

The key insight behind the [5, 22] construction is the following: assume that you have points x_1, x_2, … in the positive orthant of the unit hypercube, such that x_i · x_j < ‖x_i‖² for all j ≠ i. If the valuation vectors in the support of our distribution are exactly these points, we can design a mechanism such that when the bidder reports a valuation of x_i, the mechanism offers a randomized allocation (or lottery) of x_i for a price p_i just below ‖x_i‖² − max_{j ≠ i} x_i · x_j (which is positive by assumption). When the buyer has valuation vector x_i, they get utility ‖x_i‖² − p_i for the lottery designed for them, which is greater than the utility of reporting some other valuation x_j, j ≠ i. Indeed, their utility for purchasing the lottery tailored for them exceeds their value x_i · x_j for any other lottery (and therefore their utility for it as well). This guarantees that the buyer will always purchase the lottery designed for them. Observe that simply selling the grand bundle (at an appropriate price) achieves just as much revenue, so this idea is just the first building block.

From here, [5, 22] modify this construction to increase Rev while keeping BRev small. The second insight is that if a valuation vector in the support is scaled up to c · x_i for a large scalar c, then we can offer the lottery x_i for (roughly) c times the original price, and the buyer will still always prefer their tailored lottery to any other one, because their utility scales up by a factor of c as well and still exceeds their value for any other lottery. Therefore, if the buyer has value c · x_i with correspondingly small probability, Rev can be quite large, while BRev will remain small. Of course, there are still some details left to work out, such as exactly how to analyze Rev and BRev, and we've also simplified the key ideas at the cost of technical accuracy, but these are the main ideas we'll borrow from previous work: the point is that these constructions are packing points inside the unit hypercube with small pairwise dot products. Our construction is a "perturbation robust" version of that of [22], as we have to show not only that x_i will prefer the intended option, but that any smoothing of x_i will prefer it as well.
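Here is a toy numerical check of the first building block: for points on the unit quarter-circle (distinct unit vectors, so x_i · x_j < 1 = ‖x_i‖² for i ≠ j), a menu of tailored lotteries at a common price below 1 is incentive compatible. This checks only the first building block, not the full scaled construction:

```python
import math

# Points on the unit quarter-circle: distinct unit vectors, so x_i . x_j < 1
# = |x_i|^2 for all i != j, which is the separation condition sketched above.
angles = [0.1, 0.3, 0.6, 0.9, 1.2]
pts = [(math.cos(a), math.sin(a)) for a in angles]

def dot(x, y):
    return x[0] * y[0] + x[1] * y[1]

price = 0.9  # any common price below min_i |x_i|^2 = 1 works in this toy case

for i, xi in enumerate(pts):
    own = dot(xi, xi) - price  # utility for the tailored lottery: about 0.1
    assert own > 0
    for j, xj in enumerate(pts):
        if j != i:
            # Utility for any other lottery is strictly smaller.
            assert dot(xi, xj) - price < own
print("every type strictly prefers its tailored lottery")
```

The allocation coordinates are valid marginal probabilities since each point lies in the unit square.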

The intuition of the preceding paragraphs is captured by Lemma 1 below. For a set of points x_1, x_2, …, let γ(x_i) denote (roughly) the minimum angle between any smoothing of x_i and any x_j, j ≠ i. First, we show that given any such set of points, there exists a distribution D such that BRev(D̂) is at most a constant, but there exists an auction that extracts revenue proportional to Σ_i γ(x_i). Roughly speaking, D has a type for every point x_i, such that all post-perturbation types in S(x_i) prefer the randomized allocation x_i at its designated price over any other (allocation, price) pair on the menu. The proof of Lemma 1 is similar to Proposition 9 of [22] (this is clearer when compared to the corresponding proposition in the arXiv version of [22] uploaded April 22nd, 2013), after adjusting for our perturbations.

Lemma 1.

In the Rectangle-Shift model, for all δ > 0 and sequences of points x_1, x_2, …, there exists a bivariate distribution D, such that for its corresponding perturbed distribution D̂ it holds that BRev(D̂) = O(1) and Rev(D̂) = Ω(Σ_i γ(x_i)).

Once we have Lemma 1, the crux of the proof becomes constructing a set of points such that diverges. We indeed follow an outline similar to the construction of [22], but the requirement that any smoothing of has sufficiently large angle with all (versus just itself) poses additional challenges. We include a fairly detailed overview below, with proofs of the key steps.

Lemma 2.

For all δ > 0, there exists a countably infinite set of points x_1, x_2, …, such that Σ_i γ(x_i) diverges.


The points will lie on a sequence of spherical shells restricted to the non-negative orthant, with increasing radii. Specifically, shell ℓ will contain a consecutive subsequence of points, all with the same magnitude r_ℓ. We first make sure that the shells are sufficiently far apart from each other (i.e. that the ratio between consecutive radii is sufficiently large) so that the gap between a point x on shell ℓ and any point y on a shell ℓ' < ℓ is large. Roughly, r_ℓ is a product of ℓ factors, each slightly larger than 1, whose logarithms sum to a finite value (by the integral test); therefore r_ℓ is well-defined and bounded above by a constant for all ℓ.

Now for each shell ℓ, we'll place points with magnitude r_ℓ starting from a largest angle and going down, such that within each shell, the points are sufficiently far apart from each other. Specifically, we'll place points so that the angles form a geometric progression: the k-th point on shell ℓ will have angle proportional to ρ^k for an appropriate ratio ρ < 1, and we'll stop placing points on shell ℓ once the angle drops below a shell-dependent threshold (we'll determine later exactly what the maximum k is as a function of ℓ).
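For intuition, here is an illustrative instantiation of the shell construction. The growth factor between radii and the stopping threshold are placeholder values, not the constants from the actual proof (in particular, the actual construction keeps the radii bounded):

```python
import math

def shell_points(num_shells, delta, r0=1.0, growth=4.0, theta_max=math.pi / 4):
    """Illustrative instantiation of the shell construction. Shell l has radius
    r0 * growth**l (placeholder growth; the real construction keeps the radii
    bounded), and angles on each shell form a geometric progression with ratio
    1/(1 + delta), stopping below a placeholder threshold theta_max * delta."""
    pts = []
    for l in range(num_shells):
        radius = r0 * growth ** l
        theta = theta_max
        while theta >= theta_max * delta:
            pts.append((radius * math.cos(theta), radius * math.sin(theta)))
            theta /= 1 + delta  # the progression concentrates points near the x-axis
    return pts

pts = shell_points(num_shells=3, delta=0.1)
print(len(pts))  # 25 points per shell here, 75 in total
```

Note how the geometric progression piles points up near the x-axis, which is exactly where the Rectangle-Shift perturbation barely changes a point's angle.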

From here we have two tasks of analysis. First, we must figure out what γ(x) is for a point x on shell ℓ. Second, we must figure out how many points we can pack in each shell with the above construction. We begin by analyzing the gap for a point on shell ℓ with the following lemma.

Lemma 3.

Let x be a point on shell ℓ. Then γ(x) is lower bounded as sketched below (see Appendix C.1 for the precise bound).


In order to evaluate γ(x) for a point x in shell ℓ, we'll compare it to all points on shells ℓ' ≤ ℓ. Note that technically we don't need to compare x to every point on shells ℓ' < ℓ, but the analysis is more straightforward this way. For every point x we also need to find the point y for which the gap between the smoothings of x and y is minimized.

First, consider x and the set of points that have the same angle as x but are on earlier shells. Then, certainly the point y in that set that minimizes the gap is the one on shell ℓ − 1. Furthermore, the worst possible perturbation of x for that point is exactly x itself, since every x̂ ∈ S(x) has both coordinates greater than those of x. Therefore, the gap between these points is governed by the ratio r_ℓ / r_{ℓ−1}, which is sufficiently large by our choice of radii. Therefore, when comparing x to any y with the same angle as x, the gap is indeed sufficiently large.

Now, consider some y with a different angle. By construction, for any angle φ such that some point on a shell ℓ' < ℓ has angle φ, there is also a point on shell ℓ with angle φ. Therefore, the y inducing the smallest gap is clearly on shell ℓ itself. Moreover, it will either be the point immediately clockwise of x, or immediately counterclockwise (so as to maximize the dot product and minimize the gap).

Let θ be the angle of x on the ℓ-th shell, θ' be the angle of the y we're comparing to (also on the ℓ-th shell), and φ the angle of a smoothing x̂ ∈ S(x). Our plan is to show first that θ' must be far from θ (Claim 1), that φ must be close to θ (Claim 2), and finally that conditioned on these facts, the angle between x̂ and y must be large (Claim 3).

Claim 1.

The angle θ' of y is sufficiently far from the angle θ of x.

Claim 2.

The angle φ of any smoothing x̂ ∈ S(x) is sufficiently close to the angle θ of x.

Claim 3.

Let x and y both lie on shell ℓ, with θ' far from θ and φ close to θ in the sense of Claims 1 and 2. Then the angle between any x̂ ∈ S(x) and y is sufficiently large.

Given Claim 3, we simply need to observe that most of the terms in its bound are in fact independent of ℓ (up to constant factors); the only dependence on ℓ comes through the radius r_ℓ, which is bounded above and below by constants. So Claim 3 indeed concludes the desired lower bound on γ(x).

This completes the proof of Lemma 3: we have shown that the gap to any point on a lower shell is sufficiently large, and so is the gap to any point on the same shell. The final equality is simply because our construction stops placing points on shell ℓ once the angle drops below the stopping threshold. ∎

Now that we've computed γ(x) for any point x on shell ℓ, we need to compute the number of points on shell ℓ, and then we can analyze Σ_i γ(x_i).

Lemma 4.

The number of points on shell ℓ is logarithmic in the ratio between the starting angle and shell ℓ's stopping threshold.


Here, we simply need to figure out the largest value of k such that the angle of the k-th point remains above shell ℓ's stopping threshold. Since the angles form a geometric progression, taking logs of both sides and rearranging yields the count claimed in Lemma 4. ∎

We can now complete the proof of Lemma 2. We've just shown how many points our construction places on shell ℓ (Lemma 4), and that each such point has a sufficiently large gap (Lemma 3). This means that the total gap per shell is bounded below by the term of a series that diverges by the integral test, meaning that Σ_i γ(x_i) also diverges. ∎

Theorem 1.1 follows from combining Lemma 1 and Lemma 2. A technical takeaway from this section is that the ability to construct bad examples is heavily tied to the ability to pack points in a sphere “sufficiently far away” from each other. Without any smoothing, it’s possible to pack infinitely many such points [22], because the points claim none of the region around them. With rectangle smoothing, the points now claim some of the region around them, and constructions become trickier, but still exist (note, in particular, that because our choice of angles form a geometric progression, our sequence of points is heavily concentrated near the x-axis, where the angle is barely perturbed). On the other hand, in the Square-Shift model, the points now claim a large region around them and it becomes unclear how to pack so many points. Of course, this is just intuition for why these specific constructions no longer work; the following sections show that the Square-Shift model indeed allows for smoothed-finite approximations.

4 Square-Shift Upper Bounds

In this section we present our positive results for the Square-Shift model. We start with a complete proof in a toy "Angle-Shift" model. This will help us highlight some of the key insights without the technical barriers. In this model a buyer's value v is perturbed to a point v̂ with the same length as v, i.e. ‖v̂‖ = ‖v‖, whose angle is uniformly distributed in an interval of length δ starting at θ, where θ is the angle of v (see Figure 0(c) in Appendix A).
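A minimal sampler for this perturbation, assuming the perturbed angle is uniform in [θ, θ + δ]:

```python
import math
import random

def angle_shift(v, delta):
    """Angle-Shift (sketch): rotate v by an angle drawn uniformly from
    [0, delta], preserving its Euclidean length."""
    r = math.hypot(v[0], v[1])
    theta = math.atan2(v[1], v[0]) + random.uniform(0, delta)
    return (r * math.cos(theta), r * math.sin(theta))

random.seed(4)
v = (3.0, 4.0)
w = angle_shift(v, 0.2)
print(w, math.hypot(*w))  # the length stays (up to floating point) 5.0
```

Since the magnitude is untouched, any event defined in terms of ‖v‖ has the same probability under D and D̂, which is the simplification this toy model buys us.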

In Section 4.2 we restate all our results for the Square-Shift model. The proof for a single buyer and two items can be found in Section 4.3. We present the proof for multiple buyers and two items in Section 4.4. We include the single buyer, multi-item case in Section 4.5, and the multi-buyer, multi-item case (with arbitrary correlation across everything) in Section 4.6. Throughout this section we assume familiarity with polar coordinates. For a brief review, see Appendix D.

4.1 Angle-Shift Upper Bounds

In this section we describe our approach for the Angle-Shift model. We show that for this perturbation model, bundling recovers an Ω(δ) fraction of the optimal revenue for a single buyer interested in two correlated items. This proof highlights some key insights behind Theorem 1.2 without most of the technical barriers.

Theorem 4.1.

For the Two-Dimensional Angle-Shift model, Rev(D̂) ≤ O(1/δ) · BRev(D̂).


We break the proof into two big steps. The first step will be almost identical for all our positive results. In the second step we use model-specific facts.

Step one. We begin by writing the revenue of the optimal mechanism for D̂ in polar coordinates. Below, f̂ denotes the density of D̂ in polar coordinates, and f denotes the density of D in polar coordinates. Recall that the density in polar coordinates is not the same as the density in Cartesian coordinates. We also use the notation f̂(θ) to denote the marginal density of the angle, and f̂(r | θ) to denote the conditional density of the magnitude (note that f̂(· | θ) is the density of a one-dimensional distribution, which is the continuous analog of conditioning on the angle being θ). Finally, p(v̂) denotes the payment of a buyer with values v̂ in the optimal mechanism.

Let D̂_θ be the single-parameter distribution from which we first sample a magnitude r according to the density function f̂(r | θ), and then output the vector (r cos θ, r sin θ). Then, notice that the innermost integral is the expected payment according to p for valuations drawn from D̂_θ. Observe also that if our optimal mechanism is truthful on the entire domain, it is certainly also truthful on the domain restricted to the ray at angle θ. Therefore, this expected payment is upper bounded by Rev(D̂_θ), the revenue of the optimal truthful mechanism for the same distribution. Furthermore, since this is a single-parameter distribution, its revenue is equal to the revenue of a posted price auction (with some reserve) [29].


In going from line one to line two above, we are observing that integrates the density of over all with . The final step is simply a rearrangement for later convenience. This concludes step one, which upper bounds the optimal revenue by an integral over one-dimensional revenues. Step two, which follows, appeals to the model at hand to unpack .

Step two: model-specific analysis. For the Angle-Shift model we have that

Above, the equality is simply because each when sampled from sends a fraction of its density to . The first inequality observes that the fraction of ’s density that goes to is at most (it is exactly unless is near an axis). The final inequality observes that the third term integrates over a proper subset of possible , so the integral is certainly upper bounded by integrating over the entire region from to (similarly to above, we let ). This concludes step two, which is simply to use the specifics of the model to upper bound the density of the smoothed distribution by that of the original distribution.
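The chain described above can be sketched as follows. This is a hedged reconstruction, assuming the Angle-Shift model draws the perturbed angle uniformly from an interval of width $2\delta$ around the original angle; $k(\theta \mid \theta')$ is our notation for the transition kernel, and the paper's exact kernel may differ near the axes.

```latex
% Sketch: f is the original polar density, \hat{f} the smoothed one,
% and k(\theta \mid \theta') the perturbation kernel (at most 1/(2\delta)).
\hat{f}(r,\theta)
  = \int_{\theta-\delta}^{\theta+\delta} f(r,\theta')\, k(\theta \mid \theta')\, d\theta'
  \le \frac{1}{2\delta} \int_{\theta-\delta}^{\theta+\delta} f(r,\theta')\, d\theta'
  \le \frac{1}{2\delta} \int_{0}^{\pi/2} f(r,\theta')\, d\theta'.
```

The first inequality bounds the kernel pointwise; the second enlarges the range of integration of a non-negative integrand.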

We combine steps one and two directly to continue our derivation:

This completes the proof for the Angle-Shift model. Note that above the first inequality simply combines steps one and two. The second observes that integrating the density of a distribution yields (one minus) its CDF. The next line observes that the described event occurs with equal probability for samples from and (this is the step where angle-shifting saves some technical work over square-shifting). The fourth is basic geometry. The final inequality notes that the expression we are integrating over is the revenue of a posted-price mechanism for the bundle (specifically, with reserve ) and in particular can be no better than itself.

Let us now highlight the key step of the proof where we make use of the smoothed model. Step one applies for any distribution (even unsmoothed), and is in general very loose. In particular, it could be as high as the full welfare for some distributions. Smoothing gets us mileage in the first half of step two where we transition from terms that depend on to terms that depend on . Mathematically, we capitalize on the following: if there were no angle-shifting, then we would simply have had instead of the integral with respect to . This is important because integrals over density functions are probabilities, but density functions themselves are not probabilities! That is, if we concentrate on the first part of step two, this is claiming that an integral of a non-negative function over a smaller range is upper bounded by the integral of the same function over a larger range. Without smoothing, we would instead just have a single point of that function, which can’t generically be upper bounded by an integral over any region.
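The distinction drawn above — integrals of densities are probabilities, but pointwise density values are not — is easy to see concretely. A minimal sketch, with Uniform[0, 0.1] as our own toy choice of distribution:

```python
# A density can exceed 1 at a point, yet every integral of it over a
# region is a probability (hence at most 1).  Toy example: Uniform[0, 0.1]
# has density 10 everywhere on its support.

def uniform_density(x: float, width: float = 0.1) -> float:
    return 1.0 / width if 0.0 <= x <= width else 0.0

def integrate(f, lo: float, hi: float, n: int = 100000) -> float:
    """Simple midpoint-rule numerical integration."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

point_value = uniform_density(0.05)              # 10.0: not a probability
small = integrate(uniform_density, 0.02, 0.04)   # probability ~0.2
large = integrate(uniform_density, 0.0, 0.1)     # probability ~1.0
# The monotonicity used in step two: integrating a non-negative function
# over a larger range can only increase the value (small <= large).
```

Without smoothing, step two would be stuck with a single point value like `point_value`, which no integral over a region can be guaranteed to dominate.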

4.2 Square-Shift: Our Results

We remind the reader of our results for the Square-Shift model below. Section 4.1 captures the intuition for one core step which appears in each of the proofs, but in order to prove our main positive results we still need to overcome a number of technical obstacles.

See Theorem 1.2.

Theorem 4.2.

In the Square-Shift model, for additive buyers and items:

See Theorem 1.3.

See Theorem 1.4.

4.3 Square-Shift: One buyer, Two Items.

Proof of Theorem 1.2.

Our first few steps are identical to the Angle-Shift model. Recall that the first step in the Angle-Shift analysis was simply to upper bound the optimal multi-dimensional revenue by that of multiple single-dimensional distributions. We then write the revenue of a single-dimensional distribution as the revenue of a posted-price mechanism.


We now want to proceed with the analog of the second step: upper bound in terms of the original distribution. We start by moving from polar coordinates to Cartesian coordinates, using the transformation . Then, we can connect , the density of , to , the density of , using the following:


where is an indicator for the event that belongs to . The inequality follows from the fact that for to equal 1, we must have .
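The polar-to-Cartesian relation used here is the standard change of variables: with $v = (r\cos\theta, r\sin\theta)$, the polar density equals $r$ times the Cartesian density, the factor $r$ being the Jacobian. As an illustrative sanity check (our own toy density, not the paper's), we can verify that both coordinate systems assign the same probability to a quarter disk:

```python
import math

# Change of variables: f_polar(r, theta) = r * f(r cos(theta), r sin(theta)).
# Toy density (our assumption): independent Exp(1) coordinates,
# f(x, y) = exp(-(x + y)) on the positive quadrant.

def f_cart(x: float, y: float) -> float:
    return math.exp(-(x + y)) if x >= 0 and y >= 0 else 0.0

def f_polar(r: float, theta: float) -> float:
    # The extra factor r is the Jacobian of (r, theta) -> (x, y).
    return r * f_cart(r * math.cos(theta), r * math.sin(theta))

def mass_cartesian(R: float, n: int = 400) -> float:
    """Pr[|v| <= R] by integrating f over the quarter disk in (x, y)."""
    h = R / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if x * x + y * y <= R * R:
                total += f_cart(x, y)
    return total * h * h

def mass_polar(R: float, n: int = 400) -> float:
    """The same probability via the polar density (note the Jacobian r)."""
    hr, ht = R / n, (math.pi / 2) / n
    return sum(
        f_polar((i + 0.5) * hr, (j + 0.5) * ht)
        for i in range(n) for j in range(n)
    ) * hr * ht
```

Both double integrals agree up to discretization error, confirming the density relation.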

This is starting to look similar to the expression we had at the end of step two for Angle-Shift. The goal of the next few steps is to replace the innermost integral by the integral of the original density over some controlled region, and to get rid of the indicator function. At the end of this process, we hope to obtain a double integral (with respect to and ) that we can interpret as a probability. Our first step is to apply Fubini’s Theorem to swap the order of integration:

Now we argue about values of and where the indicator is non-zero.

Claim 4.

, if then .


Recall that means that could map to a point with length and angle . The length of a point that could map to via our perturbing process must be at least . Furthermore, this length is at most . Combining the two, we get . ∎
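Claim 4 bounds the length of any point that can map to a given target under the perturbation. As a numeric sanity check (not part of the proof), we assume a multiplicative square-shift in which each coordinate is scaled by an independent factor in [1, 1 + δ]; this is our own illustrative stand-in, and the paper's exact model and constants may differ.

```python
import random

# Assumed model (illustrative only): v' has coordinates v_i * s_i with
# s_i uniform in [1, 1 + delta].  Then |v| <= |v'| <= (1 + delta)|v|,
# so any pre-image of v' has length in [|v'| / (1 + delta), |v'|].

def length(p):
    return (p[0] ** 2 + p[1] ** 2) ** 0.5

random.seed(0)
delta = 0.1
violations = 0
for _ in range(10000):
    v = (random.uniform(0.1, 10.0), random.uniform(0.1, 10.0))
    vp = (v[0] * random.uniform(1.0, 1.0 + delta),
          v[1] * random.uniform(1.0, 1.0 + delta))
    r = length(vp)
    # Check the claimed pre-image length bounds (up to float tolerance).
    if not (r / (1.0 + delta) - 1e-12 <= length(v) <= r + 1e-12):
        violations += 1
```

Every sampled pre-image respects the claimed length window, as the algebra predicts.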

We apply Claim 4 to change the limit of integration:

We can now upper bound the value of the innermost integral.

Claim 5.

For the current Square-Shift model, for any ,


For a fixed angle and point , in order to find out where the indicator is non-zero we need to find the set of points on the line with slope that intersect the set of points to which can map. The line will first intersect the square defined by at some point with length (if it intersects at all), and leave the square at some point with length . The ratio between and is at most :

The claim follows from integrating the simplified upper bound. ∎
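The geometric fact behind Claim 5 — a ray from the origin enters and exits the perturbation square at radii whose ratio is bounded — can be checked numerically. As before, this sketch assumes a multiplicative model in which the square of $v=(x,y)$ is $[x,(1+\delta)x]\times[y,(1+\delta)y]$ (our own illustrative assumption); under it, every point of the square has length between $|v|$ and $(1+\delta)|v|$, bounding the exit/entry ratio by $1+\delta$.

```python
import math
import random

def ray_box_radii(theta, lo, hi):
    """Entry/exit distances of the ray t*(cos theta, sin theta), t >= 0,
    through the axis-aligned box [lo[0],hi[0]] x [lo[1],hi[1]] (slab method).
    Assumes theta is strictly inside the first quadrant."""
    d = (math.cos(theta), math.sin(theta))
    t_in, t_out = 0.0, float("inf")
    for k in range(2):
        a, b = lo[k] / d[k], hi[k] / d[k]
        t_in, t_out = max(t_in, min(a, b)), min(t_out, max(a, b))
    return (t_in, t_out) if t_in <= t_out else None

random.seed(1)
delta = 0.25
worst_ratio = 1.0
for _ in range(20000):
    x, y = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    lo, hi = (x, y), ((1 + delta) * x, (1 + delta) * y)
    # Aim the ray at a random point of the square so it surely intersects.
    px = random.uniform(lo[0], hi[0])
    py = random.uniform(lo[1], hi[1])
    t_in, t_out = ray_box_radii(math.atan2(py, px), lo, hi)
    worst_ratio = max(worst_ratio, t_out / t_in)
```

Over many random squares and rays, the worst observed exit/entry ratio never exceeds $1+\delta$, consistent with the claim's bounded-ratio structure.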

Once the indicator has been dealt with, the remaining upper bound looks similar to the one at the end of step two in the Angle-Shift model. The next couple of lines interpret the double integral as the probability that the buyer drew a sample whose bundle value was above the reserve price of that distribution. This is the moral analog of the final step in the Angle-Shift model. Applying Claim 5, we have