1 Introduction
A core field of study within Algorithmic Mechanism Design is the design of selling mechanisms, with one of the most fundamental questions being that of revenue maximization by a single seller, even when facing only a single buyer. The standard setting for this question is the Bayesian setting, where the seller knows a prior distribution from which the values of the buyer for the various items are drawn, and aims to maximize her revenue in expectation over this prior distribution. When the seller has only a single item for sale, the optimal mechanism in such a setting turns out to be a simple pricing mechanism, as established by Myerson [32] and Riley and Zeckhauser [33]. But for multiple items, even for simple valuation models such as additive or unit-demand valuations, and even assuming that the buyer’s values are independent across items, revenue-optimal mechanisms can be quite complex [20], hard to compute [16; 19], and exhibit unintuitive behavior such as non-monotonicity in the valuation distributions [28], with the general revenue-maximizing solution even in these settings continuing to elude a complete characterization to date. In such settings, the search for simple mechanisms, and in particular for pricing mechanisms (i.e., deterministic mechanisms, which price items and/or bundles of items) that give good revenue guarantees, has therefore spawned many important results [14; 13; 2; 26].
As one might expect, there are strong impossibility results [27; 6] for the Bayesian setting when the prior distribution (which is perfectly known to the seller) over the valuations of the various items exhibits correlations between these valuations (rather than the items being independently distributed). Nonetheless, Carroll [11] asked whether the situation may become more hopeful in a “partially known underlying distribution” scenario, in which the seller is only given the marginal distribution of each valuation and wishes to maximize her guaranteed expected revenue over any possible correlated valuation distribution with the given marginals.
In his pioneering paper, among other contributions, Carroll [11] considers the scenario of a single seller with a number of items to sell to a single buyer with an additive valuation, where the seller knows the distribution of the buyer’s valuation for each good separately. Remarkably, the mechanism that provides the highest “worst-case” (expected) revenue guarantee across all such correlations is an exceedingly simple pricing mechanism: it simply prices each item separately according to its optimal take-it-or-leave-it price à la Myerson / Riley and Zeckhauser. Interestingly, since this pricing mechanism prices only single items and not bundles of items, it has the appealing property that its expected revenue is independent of the correlation structure. One might therefore take away the intuitive message that in the absence of knowledge about such correlations, one should opt for a mechanism whose revenue is agnostic of these correlations. This message is further echoed in recent extensions to budgeted buyers [25] and optimal contract design [22]. (Footnote: In this robust contract design problem, the hidden information is not correlations but rather the higher moments of the reward distributions for the principal; the message nonetheless remains that the (expected) revenue of the (worst-case) optimal mechanism/contract is agnostic of the information that is hidden from the mechanism/contract designer, even though for other, non-optimal, mechanisms/contracts this information is needed to compute the revenue.) As such, one might naturally wonder whether such a principle of agnosticism holds more generally, even if only for approximate revenue maximization; or, failing that, whether one can always find simple pricing mechanisms (correlation-agnostic or not) that even only approximate the (worst-case) optimal revenue.
In this paper we study revenue-maximizing pricing in a correlation-robust setting where a seller with multiple items faces a single unit-demand buyer, and in particular consider the above question through the lens of this setting. The analysis of this setting was posed as an open problem by Gravin and Lu [25], who also explicitly posed the question of the tractability of finding a solution. We first ask: does the optimal correlation-robust mechanism take the form of a correlation-agnostic pricing mechanism that can be computed efficiently? (As is often done, we use “can be computed efficiently” to formalize the amorphous “simple” from the above discussion.) As it turns out, the answer is no; we present examples in which even the optimal choice of item prices (Footnote: A pricing mechanism, indeed any deterministic mechanism, for a unit-demand buyer without loss of generality simply sets a price for each item (the lowest price of any bundle containing the item), and need not offer any bundles, since a unit-demand buyer has no value for more than one item.) (a lower benchmark than the optimal mechanism) leads to unboundedly higher revenue than any correlation-agnostic choice of prices as the number of items grows large. This negative answer gives rise to a new challenge: identifying the revenue guarantee given by the (worst-case) optimal pricing.
Our first main result addresses this challenge and shows that given the optimal pricing, and in fact given any pricing, the revenue guarantee of that pricing can be efficiently calculated:
Theorem 1.1 (See Theorem 3.5).
Given n discrete (marginal) distributions from which a unit-demand buyer’s valuations of n respective items are drawn, and given n respective prices for these items, the correlation among the given distributions that yields the lowest (expected) revenue from the buyer, as well as that revenue itself, can be computed in polynomial time.
In other words, the high-dimensional problem of finding a revenue-minimizing correlation given prices is tractably solvable. What about the lower-dimensional problem of finding a pricing that well approximates the worst-case optimal revenue?
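To make the Adversary’s object concrete, the following minimal enumeration (our own illustration, not the paper’s algorithm; all function names are ours) finds the revenue-minimizing correlation for two items whose values are each uniform over a small multiset. For two such items, every compatible joint distribution is a mixture of perfect couplings (permutation matchings, by the Birkhoff-von Neumann theorem), so brute force over permutations finds the exact worst case for tiny instances:

```python
# Brute-force sketch of the revenue-minimizing correlation for two items with
# values uniform over equal-size multisets.  Every compatible joint
# distribution is a mixture of permutation matchings, so the minimum expected
# revenue is attained at some permutation.
from itertools import permutations

def revenue_of_profile(values, prices):
    """Price paid by a utility-maximizing unit-demand buyer.
    Ties are broken toward the lower-priced item (one allowed consistent rule)."""
    best, paid = None, 0.0
    for v, p in zip(values, prices):
        u = v - p
        if u >= 0 and (best is None or (u, -p) > best):
            best, paid = (u, -p), p
    return paid

def worst_coupling_revenue(vals1, vals2, prices):
    """Minimum expected revenue over all perfect couplings of the two multisets."""
    m = len(vals1)
    best_rev, best_perm = None, None
    for perm in permutations(range(m)):
        rev = sum(revenue_of_profile((vals1[k], vals2[perm[k]]), prices)
                  for k in range(m)) / m
        if best_rev is None or rev < best_rev:
            best_rev, best_perm = rev, perm
    return best_rev, best_perm
```

For example, with each value uniform on {0, 1} and both prices set to 1, the adversary couples the high value of one item with the low value of the other, so exactly one item is sold in each chain half the time. This exhaustive search is of course exponential; the polynomial-time algorithm is the subject of Section 3.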
A major hurdle in finding high-revenue pricings for a unit-demand buyer, a hurdle that does not arise in the additive case, is that of cannibalization, whereby one item that is offered for sale cannibalizes the revenue of another item. (Footnote: This cannibalization issue is common in the literature on assortment planning, where typically the prices are fixed and the decision-maker must choose which items to make available; see [29] for a survey.) Consider for example two items, one priced at a low price, say $1, and the other at a high price, say $1,000,000. Say that the buyer has realized values of $1.5 for the first item and $1,000,000.25 for the second item. Such a buyer would opt to buy the first item, resulting in a revenue of only $1, as the buyer’s utility from that item (at that price) is slightly higher. Since the buyer has unit demand, she would therefore not buy the second item. It is not hard to see that for this particular value realization, pricing the first item at infinity (i.e., not offering it for sale at all) leads to higher revenue. When the valuations are drawn from an underlying distribution, the extent to which cannibalization affects the seller’s revenue is of course intimately connected to the correlation between the values of different items.
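The two-item example above can be made concrete with a few lines of code (a minimal sketch using exactly the numbers from the paragraph; the function name is ours):

```python
# The cannibalization example above: the buyer buys the item maximizing her
# utility v - p among items with nonnegative utility (ties broken by index).
def purchase(values, prices):
    """Index of the item bought by a utility-maximizing unit-demand buyer, or None."""
    choices = [(v - p, i) for i, (v, p) in enumerate(zip(values, prices)) if v - p >= 0]
    return max(choices)[1] if choices else None

values = (1.5, 1_000_000.25)
prices = (1.0, 1_000_000.0)
# utilities are 0.5 vs. 0.25, so the $1 item cannibalizes the $1,000,000 sale
assert purchase(values, prices) == 0
# withdrawing the cheap item (pricing it at infinity) recovers the big sale
assert purchase(values, (float("inf"), 1_000_000.0)) == 1
```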
In the Bayesian setting, if item valuations are independently drawn, then while cannibalization manifests to some extent, it turns out that simple pricings can still achieve a constant approximation to the optimal revenue for any number of items [14; 13]. But correlations between item values can potentially amplify cannibalization. For example, one might correlate values so that the buyer often has just slightly higher utility from low-priced items than from high-priced items. Such a “bad” correlation could depend crucially on the specific choice of item prices, though: the prices determine which values are above the price, and the utility given by each of these values. Therefore, avoiding excessive cannibalization is in some sense even more challenging in the correlation-robust setting, as in this setting the worst-case correlation is effectively tailored to maximally cannibalize the revenue of the chosen prices (rather than given in advance, which would give the seller a chance to price in a way that mitigates the cannibalization caused by a specific correlation).
Keeping cannibalization under control, a challenge that is completely absent from the additive setting, is indeed the main challenge in our unit-demand setting. As we show, finding prices that best overcome this challenge is as hard as obtaining a relatively good approximation to the notoriously hard problem of finding a maximum independent set in a graph. Hence, it is not only hard to find prices that “thread the needle” by fighting cannibalization “just right” to optimize the (worst-case) revenue among all prices; it is in fact NP-hard to find prices that even coarsely approximate the optimal revenue among all prices to any reasonable extent:
Theorem 1.2 (See Theorem 4.1).
The max-min robust pricing problem is NP-hard: given n (marginal) distributions, each described using at most b bits, from which a unit-demand buyer’s valuations of n respective items are drawn, compute n respective prices for these items that maximize, among all possible choices of prices, the guaranteed (expected) revenue from the buyer over any correlation among the given distributions. Furthermore, it is NP-hard to find prices that even approximate this guaranteed revenue up to a factor of n^{1-ε} for any ε > 0.
This theorem strongly indicates that a “clean” characterization of (approximately) optimal robust prices is unlikely to exist. The theorem also immediately implies the same lower bound for finding prices that approximate the more ambitious benchmark of the (worst-case) optimal revenue of any (not necessarily pricing) mechanism. To the best of our knowledge, this is the first hardness result in the correlation-robust framework.
Given this main negative result, in the last part of our paper we ask whether certain standard assumptions on distributions, when satisfied by the marginals, can mitigate this impossibility, and possibly also mitigate some other undesirable phenomena that we identify. Our third and final main result gives a positive answer to this question, showing that if all of the marginal distributions exhibit a monotone hazard rate (MHR), a standard tail condition in mechanism design, then the worst-case optimal revenue (from any mechanism) can be approximated up to a constant factor using a simple pricing with many desirable properties: it depends on the marginals only through one single-dimensional statistic of each, its revenue is agnostic of the unknown correlations, and its revenue is monotone in the given marginals:
Theorem 1.3 (See Theorem 5.3).
Given n MHR (marginal) distributions from which a unit-demand buyer’s valuations of n respective items are drawn, let i* be an item such that the median of its distribution is (weakly) higher than the medians of all other distributions. Setting a suitable price for item i* (a price that depends only on that item’s marginal) and setting a price of infinity for every other item maximizes, up to a constant factor (across all mechanisms whatsoever), the guaranteed (expected) revenue from the buyer over any correlation among the given distributions.
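The shape of such a single-price rule can be sketched as follows. This is an illustration only: the theorem’s exact price is some one-dimensional statistic of the chosen marginal, and the median used below is our stand-in assumption, not the paper’s stated choice; the function name is also ours.

```python
# Illustrative sketch of a Theorem-1.3-style rule: sell only the item whose
# marginal has the highest (upper) median, pricing all other items at infinity.
# The specific price (here, the median itself) is an assumption; see lead-in.
import math

def single_price_rule(marginals):
    """marginals: one sorted list of values per item (empirical distributions).
    Returns a price vector that sells only the highest-median item."""
    medians = [vals[len(vals) // 2] for vals in marginals]
    i_star = max(range(len(marginals)), key=lambda i: medians[i])
    prices = [math.inf] * len(marginals)
    prices[i_star] = medians[i_star]  # assumed statistic, not the paper's choice
    return prices
```

Because every item other than i* is priced at infinity, the resulting revenue depends only on the marginal of item i*, which is exactly the correlation-agnosticism property stated above.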
Unfortunately, if the MHR condition is relaxed, we show that many hardships arise already for regular distributions (the “one notch weaker” standard tail condition in mechanism design), such as requiring linearly many different item prices (and hence requiring non-monotone mechanisms), and a nontrivial dependence on the marginals. This motivates the main question that we leave open:
Open Question 1.4.
Is there a computationally efficient algorithm that, given n regular distributions from which a unit-demand buyer’s valuations of n respective items are drawn, finds prices that maximize up to a constant factor (even only across all possible choices of prices) the guaranteed (expected) revenue from the buyer over any correlation among the given distributions?
We conclude this paper by presenting some observations and examples, which may be useful toward this open question.
1.1 Comparison with Bayesian Optimal Mechanisms under Independence
The notion of correlation-robust mechanism design stands in contrast to the literature that assumes that buyer values are independent across items. As such, it is worthwhile to draw comparisons between the two settings.
Given the positive results of Carroll [11] and of Gravin and Lu [25], one may have indeed wondered, for a given multi-item setting (or at least for a canonical simple multi-item setting) and given marginals, whether correlation-robust mechanism design is always “easier” in some sense than Bayesian mechanism design with the assumption of independence of the marginals. Our results give a negative answer to this hope, in a precise formal sense. Consider the best approximation factor to the optimal revenue (of any mechanism) that is obtainable by an efficiently computable pricing mechanism. In the Bayesian independent model, for both the additive and the unit-demand settings (two special cases of gross-substitutes valuations), this factor is a constant that is strictly greater (worse) than 1 [14; 13; 2; 37], and this extends even to subadditive settings [35], a strict superclass of gross substitutes. In the correlation-robust model, for the additive setting this factor is simply 1 [11] (i.e., optimal; an improvement compared to the Bayesian independent model). Yet, we show, in sharp contrast, that already for the unit-demand setting this factor is not only worse than in the independent model, but is actually unboundedly large as the number of items grows large, even when compared to the weaker benchmark of the optimal revenue of a pricing mechanism.
In this vein, it is also instructive to consider a recent paper by Bei et al. [4]. They study correlation-robust pricing in a single-item setting with multiple buyers, and give a pricing mechanism that maximizes the worst-case revenue up to a constant factor. (Footnote: To the best of our knowledge, Bei et al. are the first to study approximate revenue maximization in a correlation-robust setting.) Given the well-known connections, in the Bayesian setting, between the single-item multi-buyer case and the multi-item single-buyer-with-unit-demand case [13], one may be surprised by the stark contrast between the positive result of Bei et al. and our negative one. This contrast in fact highlights a key difference between their analysis and ours: the mechanism used by Bei et al. in the single-item multi-buyer setting offers the item to different buyers at different prices, and does so by offering it first to the buyer for whom the set price is highest, then to the buyer for whom the set price is second-highest, etc. That is, in their (single-item multi-buyer) setting the mechanism can force the item to be sold at the highest price that is below the corresponding value, while in our (multi-item single-buyer) setting, since there is a single buyer making the decision, we have no escape from the price to be paid being the one that is farthest below the corresponding value (i.e., the one generating the highest utility), allowing lower-priced items to cannibalize, as discussed above, the sale probability of higher-priced items. Indeed, to the best of our knowledge, our study of the unit-demand case is the first in a correlation-robust model where there is no solution that can be expressed as a composition of solutions to single-item auction settings. Approaching our research questions therefore required the development of new technical approaches to the correlation-robust model.
Taken in concert, the above observations highlight correlation-robust revenue maximization as a framework for which intuition from the Bayesian setting may fail, and for which completely new intuition may have to be developed.
1.2 Other Related Work
Gravin and Lu [25] give an alternative proof of Carroll’s result, and furthermore extend their study to solve the more general scenario of additive valuations with a buyer budget (the buyer’s fixed budget is known to the seller, but the correlation between valuation distributions is again adversarially chosen), for which, as noted above, they show that Carroll’s main message still holds: the optimal mechanism is simple (and easy to compute), and its revenue is agnostic of the correlation. Driven by a similar motivation of robust revenue maximization, albeit in a different setting of contract (rather than auction) design, Duetting et al. [22] study the worst-case optimal contract in a principal-agent setting where only the expected rewards of the agent’s actions for the principal are known (rather than the full reward distributions). Their identification of linear contracts as optimal in this sense once again features the same properties of simplicity, computational efficiency, and agnosticism to the hidden information. (Footnote: Earlier work of [10] gives a different sense in which linear contracts are max-min optimal, one that does not share this property of being agnostic to the unknown component.)
These works on robust mechanism design fit within a broader research agenda of robust optimization, which has been studied in operations research dating back to the classic paper of Scarf [36]. This approach has been applied to mechanism design, among other domains [3]. Within computer science, robust design ties in to “beyond worst-case” analysis approaches [34], and in particular to semi-random models [5]. From this perspective, we study a semi-random model in which one aspect of the model (the item values) is randomly drawn, whereas another (the correlation among values) is adversarially chosen. Hybrid models such as these have been gaining traction in recent years, in part due to their power to explain why certain algorithmic methods work better than expected.
Most of the work on mechanism design for a unit-demand buyer has been in the standard Bayesian model. If there is only a single item for sale, Myerson [32] characterizes the revenue-optimal mechanism, which for a single buyer (see also [33]) amounts to simply setting the monopoly price for the item (the price that maximizes the expected revenue given the item’s value distribution).
For the case of a single unit-demand buyer with a product distribution over item values, Chen et al. [16] show that computing a revenue-optimal pricing is NP-hard, even for identical distributions of support size 3 (but the problem can be solved in polynomial time for distributions of support size 2). Chawla et al. [13, 14] give a constant-factor approximation for the optimal pricing, which applies also with respect to the optimal randomized mechanism (i.e., pricing lotteries), by the observation that pricing lotteries cannot increase revenue by more than a constant factor in the case of product distributions [15]. Cai and Daskalakis [8, 9] give an additive PTAS for the case of bounded distributions, and also derive structural properties of the optimal solution for special cases. Among other properties, they show that if the buyer’s values are independently distributed according to MHR distributions, then a constant approximation can be obtained with a single price (which can be efficiently computed). Moreover, if the values are also identically distributed, then a single price yields near-optimal revenue.
For the case of correlated distributions, Briest and Krysta [6] show that the optimal pricing problem does not even admit a polynomial-time constant approximation. It has also been shown that, unlike for product distributions, pricing lotteries over items can increase the revenue (beyond item pricing) by an unbounded factor [37; 7].
Finally, another popular, fairly recent line of research that builds upon the Bayesian setting by having the seller possess only partial information about the underlying distribution (while keeping the optimal auction for the underlying distribution as the benchmark) is that of revenue maximization from samples, where the seller is shown samples from the underlying distribution rather than the whole distribution [18]. Within this context, for pricing for a unit-demand buyer see [31], and for revenue maximization for a unit-demand buyer see [24].
2 Preliminaries
2.1 Model
The main player in our model is a seller, who has n items for sale to a single unit-demand buyer. The buyer has a valuation profile v = (v_1, …, v_n), where v_i denotes her value for item i. The seller can set a pricing p = (p_1, …, p_n), i.e., a vector of item prices. Given a pricing p, the buyer purchases a single item i that maximizes her (quasi-linear) utility v_i - p_i, (Footnote: Tie-breaking is discussed below.) or nothing if her utility from purchasing any item would be negative.
Problem instance.
An instance of our model consists of n marginal distributions D_1, …, D_n for the item values v_1, …, v_n. The distributions are over supports known as value spaces. If the value space of D_i is bounded, then we denote its maximum value by max(D_i), and we denote its minimum value by min(D_i). Importantly, item values can be correlated, as long as for every item i the buyer’s value is marginally distributed according to D_i. (Footnote: Being marginally distributed according to D_i means the following: let F(x) be the probability that if we sample a valuation profile, the value for item i is at most x; then F(x) must coincide with the cumulative distribution function of D_i for every x.) Notice that the correlation is not part of the instance but rather will be adversarially chosen.
Given a value v in the value space of D_i, its quantile is q_i(v) = Pr_{w ~ D_i}[w <= v] (so low quantiles correspond to low values and vice versa). For every quantile q, we define its value to be the smallest v with q_i(v) >= q (notice that if q_i is strictly increasing then it is invertible, the inverse function is well defined, and the value of q is equal to the inverse q_i^{-1}(q)).
Compatible distributions.
For a given instance with marginals D_1, …, D_n, a compatible distribution is a joint distribution D over valuation profiles such that the marginals of D for the individual item values coincide with D_1, …, D_n. A natural class of compatible distributions is that of perfect couplings. In a perfect coupling, the value of any one of the items determines the values of all others. More formally, a distribution D is a perfect coupling if for every item i there exists a coupling function, a measure-preserving bijection f_i from the value space of D_1 to the value space of D_i, and a valuation profile is drawn from D by randomly drawing x ~ D_1 and taking the value of each item i to be v_i = f_i(x).
One particular perfect coupling that plays a role in our results is the following.
Definition 2.1.
The comonotonic distribution is the perfect coupling defined by drawing a single quantile q uniformly at random and setting the value of every item i to the value of q under D_i.
The comonotonic distribution appears in the work of [11] on correlation-robust pricing for an additive buyer. (Footnote: In that setting, as is shown there, when the marginals are regular distributions, the worst correlation from the seller’s perspective (in a sense formalized below) is the comonotonic distribution.) Intuitively, this distribution is the compatible distribution in which values are “as positively correlated as possible.” For example, observe that if all the marginals are identical, then in every valuation profile drawn from the comonotonic distribution, all the values are identical.
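The comonotonic distribution admits a very short sampling sketch (our own illustration, with marginals given as sorted multisets; the indexing implements the value-of-a-quantile map from above for a uniform multiset):

```python
# Sampling sketch of the comonotonic distribution: draw one uniform quantile q
# and give every item its value at that common quantile, making values "as
# positively correlated as possible".
import math, random

def value_at_quantile(sorted_vals, q):
    """Smallest value in the multiset whose CDF weight reaches q (generalized inverse)."""
    idx = max(0, math.ceil(q * len(sorted_vals)) - 1)
    return sorted_vals[min(len(sorted_vals) - 1, idx)]

def comonotonic_sample(marginals, rng=random):
    q = rng.random()
    return tuple(value_at_quantile(vals, q) for vals in marginals)
```

In particular, if all marginals are identical, every sampled profile has all coordinates equal, matching the observation above.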
2.2 The MaxMin Pricing Problem
Objective.
For a given instance with marginals D_1, …, D_n, denote by Rev_D(p) the seller’s expected revenue from setting a pricing p if the valuation profile is drawn from a compatible distribution D:
Rev_D(p) = E_{v ~ D}[p_{i*(v)}],
where i*(v) is the item purchased by the buyer (with the revenue taken to be 0 if no item is purchased). We note that the buyer has to break ties between items that give her the same utility. We assume that tie-breaking between any two items depends only on the identities of the two items that give the same utility, the value of that utility, and the prices of the two items. The tie-breaking rule must also be consistent (no cycles). For example, breaking ties in favor of higher-priced items and then by index, or in favor of lower-priced items and then by index, are both allowed.
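The objective above, the seller’s expected revenue from a pricing when the profile is drawn from an explicitly given finite joint distribution, can be sketched as follows (a minimal illustration; the tie-breaking rule used, favoring the higher-priced item and then the lower index, is one of the consistent rules allowed above, and the function name is ours):

```python
# Expected revenue of a pricing against an explicit finite joint distribution,
# given as a list of (probability, value-profile) pairs.  Tie-breaking:
# higher-priced item first, then lower index (a consistent rule).
def expected_revenue(joint, prices):
    total = 0.0
    for prob, values in joint:
        best = None  # maximize (utility, price, -index) over affordable items
        for i, (v, p) in enumerate(zip(values, prices)):
            u = v - p
            if u >= 0:
                key = (u, p, -i)
                if best is None or key > best[0]:
                    best = (key, p)
        if best is not None:  # otherwise the buyer buys nothing
            total += prob * best[1]
    return total
```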
In the max-min pricing problem, the goal is to find a pricing p that maximizes the minimum, over all compatible distributions D, of the expected revenue Rev_D(p).
We introduce the following notation to make this formal: let Rev(p) be the robust revenue guarantee of p, i.e., the worst expected revenue from p over all compatible distributions:
Rev(p) = inf_D Rev_D(p),
and let OPT be the optimal robust revenue guarantee over all pricings:
OPT = sup_p Rev(p).
A pricing p is α-max-min optimal, for α <= 1, if Rev(p) >= α · OPT; if α = 1, then we say that p is a max-min optimal pricing. The max-min pricing problem is to find, for a given problem instance, an α-max-min optimal pricing with α as close as possible to 1.
Zero-sum game and the Adversary’s perspective.
In the max-min pricing problem, the seller can be viewed as a player in a zero-sum game corresponding to the problem instance, in which the Adversary’s strategy space is the space of all compatible distributions. The seller’s payoff for “playing” a pricing p against a compatible distribution D is Rev_D(p). The Adversary’s goal is to choose a D that minimizes Rev_D(p). We refer to a distribution achieving Rev(p) (if it exists) as a best response of the Adversary to the pricing p. (More generally, one can consider an ε-best response, which is a distribution D achieving Rev_D(p) <= Rev(p) + ε.)
Simple solution classes.
Arguably the two simplest possible pricing classes are the following.
Definition 2.2.
A pricing is a single price if all prices but one are infinite.
Definition 2.3.
A pricing is uniform if all prices are equal.
A single price p has a particularly nice robustness property: it is correlation-agnostic. That is, its expected revenue is the same against any compatible distribution; i.e., Rev_D(p) = Rev(p) for every compatible D. Such robustness is a recurring theme in the literature on robust mechanism design [12]; in particular, it makes the task of showing that p is (α-)max-min optimal much simpler. (Footnote: E.g., it is sufficient to show a compatible distribution for which p is the seller’s best response, similar to [11], or to show for every other pricing p′ a compatible distribution D′ such that Rev_{D′}(p′) <= Rev(p), similar to [22].) In comparison, uniform pricings do not enjoy this robustness property; however, they “dominate” single prices in the following sense: one can naturally turn any single price p into a uniform pricing p′ (by setting the prices of all items to be the same as that single price) such that Rev_D(p′) >= Rev_D(p) for every compatible D.
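The correlation-agnosticism of a single price can be verified on a toy instance (our own sanity sketch; function name and the two example couplings are ours):

```python
# Under a single price (all other items priced at infinity), the expected
# revenue is p * Pr[v_i >= p], which depends only on the marginal of the one
# priced item, so any two compatible joint distributions agree on it.
def revenue(joint, prices):
    total = 0.0
    for prob, values in joint:
        affordable = [p for v, p in zip(values, prices) if v - p >= 0]
        if affordable:
            total += prob * min(affordable)  # tie-break toward the lower price
    return total

single = (1.5, float("inf"))
# two different couplings of the same two marginals
coupling_a = [(0.5, (1, 3)), (0.5, (2, 4))]
coupling_b = [(0.5, (1, 4)), (0.5, (2, 3))]
```

Both couplings yield revenue 1.5 · Pr[v_1 >= 1.5] = 0.75, whereas a pricing with several finite prices would generally distinguish between them.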
3 The Adversary’s Best Response, and Robust Revenue Guarantee Calculation
In this section, we first present an algorithm that finds the exact best response of the Adversary for any given pricing (and, along with the best response, also the robust revenue guarantee of this pricing) when every marginal is a uniform distribution over a finite multiset of values, and all such multisets are of the same size. This algorithm runs in time polynomial in the size of the input (for explicitly given such multisets). We then show that for arbitrary finite distributions (even if the probabilities are not rational numbers), we can still output the best response of the Adversary (and the robust revenue guarantee) using a slightly generalized version of this algorithm, and do so in polynomial time. Finally, we explain how to further use our algorithm to get arbitrarily close to the best response of the Adversary in the general case of arbitrary (not necessarily discrete) marginal distributions, where, as we will show, a precise best response may not exist. Proofs omitted from this section appear in Appendix A.
3.1 Perfect Couplings of Uniform Distributions over Multisets of Identical Sizes
In this section we assume that every marginal is a uniform distribution over a finite multiset of values, where the multisets corresponding to the various marginals are all of the same size m. (In Section 3.2 we will show that the algorithm that we present in the current section can be tweaked to work for any discrete distribution, and still run in polynomial time.)
Given a pricing, we wish to find the worst distribution for those prices that is compatible with the marginals. We will show that there is a perfect coupling that minimizes the revenue over all compatible distributions.
To handle the possibility of the buyer not buying any item, we assume, without loss of generality, that one of the items is a special “null item” that always has value 0 and is priced at 0, such that the buyer buying this null item corresponds to the buyer not buying any item. Other than that, we treat the null item as any other item (it has price 0 and its value is distributed uniformly over a multiset of m values that are all 0, and thus the corresponding utilities are also all 0).
Assume without loss of generality that the items are ordered such that prices are nondecreasing, p_1 <= p_2 <= … <= p_n. Let (v_{i,1}, …, v_{i,m}) be the vector of values of item i sorted in nonincreasing order; i.e., v_{i,1} >= v_{i,2} >= … >= v_{i,m} (recall that by assumption, the value of item i is drawn uniformly from these m values). Given the prices, we can transform this vector into a vector of utilities that all have the same probability (each has probability 1/m), with u_{i,k} = v_{i,k} - p_i for every item i and index k. We thus obtain a vector of utilities (u_{i,1}, …, u_{i,m}) of item i sorted in nonincreasing order; i.e., u_{i,1} >= u_{i,2} >= … >= u_{i,m}.
We note that the Adversary’s best response may depend on the tie-breaking rule used by the buyer to choose among items that yield the same utility. Our algorithm for the Adversary’s best response will break ties in the same way as the buyer does. We say that item i at utility u is dominated by item j at utility u′ if either u < u′, or u = u′ and the buyer, when facing the choice between buying item i and buying item j, each at utility u, breaks this tie in favor of item j.
A perfect coupling (or simply a coupling) in this setting corresponds to a bijection for each item i, from the indices 1, …, m to the multiset of utilities (equivalently, to the multiset of values) of item i, where to draw the utilities for the items, an index is drawn uniformly at random, and then the utility for each item is determined according to the bijection of that item, applied to this index. We will think about the image of each index under all n bijections as a chain, coupling together elements in the multisets of utilities, one element for each item. (So to draw the utilities for the items, one of these m chains is simply drawn uniformly at random.) Hence, a (perfect) coupling can be described using m chains that form a partition of all utilities, with each chain containing exactly one utility for every item (and every such collection of m chains describes a perfect coupling). Given a chain c in the coupling, we denote the utility of item i in c by u_i(c). A chain c in which item i’s utility dominates (in the above sense) the utility of every other item is said to be dominated by item i; this is the item that will be bought if chain c is drawn. The expected revenue from a given perfect coupling is simply the weighted average of the prices of the items, each weighted proportionally to the number of chains in this coupling that it dominates.
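The chain representation and its revenue can be sketched directly (a minimal illustration, with our own function name; the tie-breaking rule, favoring the lower-priced item, is one allowed consistent rule, and the null item is not modeled, so chains whose top utility is negative simply yield no sale):

```python
# A coupling as m chains, each holding one utility per item (items sorted by
# nondecreasing price).  A chain is dominated by its highest-utility item
# (ties toward the lower-priced item here); the coupling's revenue is the
# average price over chains whose dominating utility is nonnegative.
def coupling_revenue(chains, prices):
    total = 0.0
    for chain in chains:
        u_star, neg_i = max((u, -i) for i, u in enumerate(chain))
        if u_star >= 0:  # otherwise the chain yields no sale
            total += prices[-neg_i]
    return total / len(chains)
```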
3.1.1 The Adversary’s Algorithm
The algorithm for the Adversary’s best response is given in Algorithm 1. It receives as input the item prices and the utility vectors of the items, each sorted in nonincreasing order, and returns a coupling C. When the algorithm is run, C is initially empty, and it is augmented by chains (and has its chains modified at times) throughout the course of the algorithm. We abuse notation and write u ∈ C if u belongs to some chain of C (and write u ∉ C otherwise).
The algorithm first attempts to build as many chains as possible that are dominated by item 1 (Lines 3–9 in the first iteration), then as many chains as possible that are dominated by item 2 (Lines 3–9 in the second iteration), etc. When building a chain dominated by a certain utility for item i, the algorithm attempts to use the highest possible utility for each higher-priced item that would still be dominated by that utility for item i, in order to leave lower utilities for those items to possibly be dominated in future chains by lower utilities for item i or by utilities for later items. Just before turning to build chains dominated by item i, though, the algorithm has a transition stage (Lines 11–13) that recouples all of the chains built so far to use the lowest, rather than highest, utilities for item i, since from that moment onward a high utility for item i is no longer a liability that we attempt to dominate by utilities for lower-priced items (to have those items cannibalize item i), but rather an asset with which to attempt to dominate utilities for even higher-priced items (to have item i cannibalize those items). Figure 1 illustrates an execution of the algorithm.
3.1.2 Optimality of Algorithm 1
Algorithm 1 returns a (perfect) coupling that defines a distribution that is compatible with the marginals. In this section we will show that no other perfect coupling defines a distribution that generates worse revenue. (In Section 3.2 we will show that no other distribution that is compatible with the marginals, whether or not induced by a perfect coupling, can generate any worse revenue.) Before we formally state this claim as Theorem 3.1, let us introduce some notation.
Let be the maximum number of chains dominated by item in any coupling. Let be the maximum number of chains dominated by one of items in any coupling. Given a coupling , let (respectively, ) be the number of chains in dominated by item (resp., by one of items ). A coupling such that is said to realize . As standard, we denote the set by . Our main result for this section is:
Theorem 3.1.
The coupling output by Algorithm 1 simultaneously realizes for every .
The proof of Theorem 3.1 relies on Lemma 3.2 and Proposition 3.3, stated below.
Lemma 3.2.
The coupling output by Algorithm 1 satisfies: (1) , (2) for every , c’; i.e., maximizes the number of chains dominated by item fixing the number of chains dominated by items .
Proposition 3.3.
There exists a coupling that simultaneously realizes for every .
The combination of Lemma 3.2 and Proposition 3.3 implies Theorem 3.1, showing that no coupling defines a distribution that generates worse revenue than that defined by the coupling output by Algorithm 1. We now establish the proof of Proposition 3.3; the remaining proofs are relegated to Appendix A.
Proof of Proposition 3.3.
We first prove that there exists a coupling that simultaneously realizes and , and then show how to generalize this to any prefix. Suppose toward contradiction that for every coupling that realizes it is the case that . Let .
Let be the joint vector of utilities of items 1 and 2, sorted in nondecreasing order ( is of length ). We use the term top utilities of a utility vector to refer to the highest utilities in the vector, breaking ties in favor of lower indexes.
Let be the set of the top utilities of item 1 and the top utilities of item 2 in . Sort the utilities in in nonincreasing order and couple them (in this order) with the highestpossible utilities of items into disjoint chains. This gives disjoint chains, partitioned into those rooted at item 1 and those rooted at item 2. (Note that every chain rooted at item 1 can be augmented with a suitable utility of item 2 such that the obtained chain is dominated by item 1, and analogously for chains rooted at item 2, since at most utilities of are coupled and all other utilities are smaller than the corresponding coupled ones.)
We will prove that the lowest chain rooted at item 2 can be replaced by a new chain rooted at utility , still realizing , thus contradicting the maximality of .
We will now define a process that we call sift&lift, which removes the chain rooted at the lowest utility of item 2 (), and “lifts” all subsequent chains rooted at item 1 (i.e., whenever possible, uses newly available higher utilities of items ). The sift&lift process is formally specified in Algorithm 2 in Appendix B, and an illustration is given in Figure 2. The following lemma, whose proof we relegate to Appendix A, shows that after the sift&lift process, one more chain dominated by item 1 can be added to the coupling.
Lemma 3.4.
After removing the chain rooted at and lifting the subsequent chains rooted at item 1, for every it is the case that is uncoupled and .
Lemma 3.4 implies that one more chain rooted at item 1 can be added, at the expense of the bottom chain rooted at item 2. Thus, the resulting coupling contains chains dominated by item 1, and chains dominated by one of items 1,2, contradicting the maximality of . We conclude that there exists a coupling that simultaneously realizes and , as desired.
We next show how to extend the proof to any . Suppose toward contradiction that this is false. Let be the smallest value for which this is false, and let be the smallest value such that cannot be simultaneously realized with .
Repeat the same process as in the case of items 1,2, with items in the role of item 1, and items in the role of items 2. By the same argument as before, one can increase the number of chains dominated by one of items without affecting any for , contradicting the definition of . ∎
3.2 General Discrete Distributions, Computational Efficiency, and Further Extensions
Recall that there are three gaps between Theorem 3.1 and Theorem 3.5, which we bridge in this section. The first gap is that Algorithm 1, as stated, works only for marginals that are uniform over multisets (implying applicability also for discrete marginals whose probabilities for the various values are rational numbers, but not irrational ones). The second gap is that it only guarantees that the coupling that we find generates the worst revenue of any coupling, rather than the worst revenue of any correlated distribution.^{10} The third gap is that the computational efficiency of the presented algorithm is polynomial in , so when applied to discrete distributions with arbitrary rational probabilities it is actually polynomial in the lowest common denominator of all of these probabilities.

^{10} This is possibly the most subtle of the three gaps. To better understand it, note that for items, the Birkhoff–von Neumann theorem tells us that any distribution over pairs of utilities of the two items is a convex combination of distributions defined by perfect couplings, and so there is a perfect coupling that generates at most as much revenue. For items, though, it is well known that the Birkhoff–von Neumann theorem fails to hold [30], and so there are distributions over tuples of utilities of the items that cannot be generated by a twostep process of first drawing a perfect coupling and then drawing a chain of values from that coupling. We will therefore have to derive our result not using such a general tool, but as a property of the specific problem that we are considering.
In this section, we show how to modify Algorithm 1 to bridge all three of these gaps in one fell swoop, by reinterpreting Algorithm 1 as a waterfilling algorithm that arbitrarily discretizes (for ease of presentation and analysis, not due to any real constraint) its step size to . The idea underlying our modification of Algorithm 1 to bridge these three gaps is quite simple: instead of splitting every probability mass in advance of running the algorithm, we will split probability masses on demand. Formally, we will allow the algorithm to split each probability mass into several probability mass “nodes”; however, unlike in Section 3.1, these nodes may have different masses. The algorithm will output a coupling of such nodes, where nodes may be coupled only if they all have the same mass. This will allow us to handle irrational probabilities, to compare to arbitrary correlated distributions (as any correlated distribution can be represented as a perfect coupling of such nodes of unequal masses), and to additionally maintain computational efficiency that is polynomial in the size of the support of the given marginals. The pseudocode of the algorithm is in fact virtually unchanged from that of Algorithm 1; however, to interpret it for this scenario we introduce a few local semantic changes to our interpretation of this pseudocode:

We will continue to denote the vector of utilities from item price by , however we will now have the utilities in each be distinct, and we will say that only if the entire mass of is already coupled (that is, if this utility was split into smaller nodes, then all such nodes should be coupled for this to hold). The notation throughout Algorithm 1 should therefore be interpreted as saying that some mass of is still uncoupled.

In Line 7 of Algorithm 1, when we write , we mean the following: let be the lowest remaining uncoupled mass of any of these utilities; split a new node of mass from each of these utilities, couple all of these nodes together, and add the resulting chain to . (The reinterpreted algorithm can therefore be thought of as a waterfilling algorithm of sorts.) We note that each execution of Line 7 causes at least one utility of one item in to change from satisfying to satisfying , so the computational complexity does not explode.

In Line 12 of Algorithm 1, the line captioned “i.e., in chain , replace with the smallest ,” we will perform the following, which still implements the exact same comment: let be the mass of all nodes in . First, remove from , making that mass of uncoupled again (merging it back with any other uncoupled mass of ). Then, take the lowestutility uncoupled mass of item and add it to instead of the mass that we just removed from that chain (as in Section 3.1, there may be some mass that we removed and then added back, and this is fine). If the lowestutility uncoupled mass of item , which we wish to add to , happens to span more than one utility—say that it spans utilities—then we split the chain into chains with appropriate masses such that each of these utilities of could be coupled with a different one of these chains. (Once again, this is consistent with thinking of the reinterpreted algorithm as a waterfilling algorithm of sorts.) If this results in any chain that is over the exact same utilities as an already existing chain, then we merge such chains.
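To illustrate the on-demand node-splitting bookkeeping described above (a sketch under our own naming, not the paper's pseudocode), the snippet below couples the current front node of every item and splits off the minimum remaining mass. Algorithm 1 selects which utilities to couple far more carefully; the point here is only the mass accounting: each chain creation exhausts at least one node, so the number of chains stays polynomial in the support sizes.

```python
def waterfill_chains(marginals):
    """marginals: one list of (utility, mass) pairs per item, equal totals.
    Returns chains as (tuple_of_utilities, mass)."""
    fronts = [[list(node) for node in item] for item in marginals]
    idx = [0] * len(fronts)                    # current front node per item
    chains = []
    while all(i < len(item) for i, item in zip(idx, fronts)):
        nodes = [fronts[j][idx[j]] for j in range(len(fronts))]
        mass = min(node[1] for node in nodes)  # largest splittable mass
        chains.append((tuple(node[0] for node in nodes), mass))
        for j, node in enumerate(nodes):       # consume the coupled mass
            node[1] -= mass
            if node[1] == 0:
                idx[j] += 1                    # this node is exhausted
    return chains
```

For example, coupling marginals `[(5, 0.5), (1, 0.5)]` and `[(4, 0.25), (2, 0.75)]` front-to-front yields three chains of masses 0.25, 0.25, and 0.5: the node of mass 0.5 is split on demand to match the smaller node it is coupled with.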
To analyze the newlyinterpreted Algorithm 1, we redefine to be the maximum probability of item being sold in any correlated distribution compatible with the marginals, and to be the maximum probability of one of items being sold in any correlated distribution compatible with the marginals. Given a correlated distribution , let (respectively, ) be the probability of item (resp., one of items ) being sold in . A correlated distribution such that will be said to realize . Then, completely analogous arguments to those in the proof of Theorem 3.1 give that the newlyinterpreted Algorithm 1 simultaneously realizes for every , which we equivalently restate as the main result of this section:
Theorem 3.5.
Given any prices , and discrete marginals , the correlated distribution generated by Algorithm 1 with its semantics interpreted as in this section attains the lowest expected revenue over all correlated distributions that are compatible with these marginals. Furthermore, if each marginal is given explicitly as a list of valueprobability pairs , then this algorithm runs in time polynomial in its input size.
Interestingly, since for the case of marginals that are uniform over multisets of values both interpretations of Algorithm 1—from Section 3.1 and from this section—coincide, Theorem 3.5 implies that for such marginals even the interpretation from Section 3.1 in fact generates the worst revenue among all correlated distributions compatible with the marginals, and not only among those defined by perfect couplings.
In Appendix C we discuss extensions of the techniques of this section to nondiscrete distributions.
4 Hardness of Approximation
In this section we show hardness of approximation for the maxmin pricing problem. Our hardness result applies even when the support of every marginal is finite and the probability of sampling each value is a rational number. Note that this allows the distributions to be provided explicitly as input to a maxmin pricing algorithm, rather than through an oracle. We show that even under this direct access model, it is NPhard to compute prices that achieve an approximation, for any . This is true regardless of the way in which ties are broken in case of buyer indifference.
Theorem 4.1.
For any , it is NPhard to obtain an approximation to the maxmin pricing problem.
We will prove Theorem 4.1 by reducing from the maximum independent set (MIS) problem. Recall that in the MIS problem, the input is an unweighted graph and the goal is to find an independent set of maximum size.^{11} It is known to be NPhard to achieve an approximation to the MIS problem, for any fixed .

^{11} A set is an independent set if no two nodes in are adjacent.
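For concreteness, here is a brute-force (exponential-time) computation of the MIS objective on tiny graphs; it is ours, for illustration only, since the whole point of this section is that good polynomial-time approximations of MIS, and hence of maxmin pricing, are unlikely.

```python
from itertools import combinations

def max_independent_set_size(n, edges):
    """Exhaustively find the size of a maximum independent set of an
    n-vertex graph given as a list of edges. Exponential time."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):               # try sizes from large to small
        for subset in combinations(range(n), k):
            if not any(frozenset(pair) in edge_set
                       for pair in combinations(subset, 2)):
                return k
    return 0
```

On a triangle the answer is 1; on a path with four vertices it is 2.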
4.1 Reduction Construction and Proof Outline
Given an MIS instance on vertices, we will construct an instance of the maxmin pricing problem as follows. First, order the vertices of by labeling them , arbitrarily. We will define marginal distributions. For each , distribution takes on value with probability . Moreover, for each vertex such that , distribution will take on value with probability . With all remaining probability, will take on value .^{12}

^{12} Note that we can restrict attention to graphs for which is a sufficiently large perfect square, so that all of these values and probabilities are rational numbers.
The idea behind the above construction is as follows. First, to get nonnegligible (as a function of ) revenue from any item , even before taking into account any cannibalization, the price of item must be set close to . Indeed, such a price would yield revenue if that were the only item sold. A much lower price for that item (say, less than ) would not increase the sale probability of that item and would therefore yield negligible revenue from it, while any higher price for that item (say, greater than but at most for some ) would only sell item with probability around , resulting again in negligible revenue from it. So let us say that an item priced between and is reasonably priced.
If we reasonably price two items such that , then, under an appropriate correlation between and (which sets value for item only when item has this value as well), item would cannibalize a fraction of the sale probability of item . Taking this one step further, if we reasonably price an item and not one but lowerindex neighbors of , then, under an appropriate correlation of the marginals, the revenue from item would be completely cannibalized.^{13} As it turns out, these are essentially the only meaningful cannibalizations possible under any correlation structure.

^{13} So the factor of by which some probabilities in the above construction are multiplied was chosen to be small enough that the revenue from an “unreasonably highly priced” item would be negligible, but large enough that a few lowerpriced neighbors of an item could together effectively cannibalize the revenue from that item.
Reasonably pricing only items in an independent set, while pricing all other items at , would therefore obtain revenue at the order of the size of that independent set (see Lemma 4.2 below). Thus, if has a large independent set, then a high revenue guarantee can be obtained. On the other hand, if only has very small independent sets, then reasonably pricing many items would inevitably mean that most items have at least reasonablypriced neighbors, which makes it possible to cannibalize the revenue of most items (simultaneously) with an appropriate correlation structure (see Lemma 4.3 below). This paves the way to differentiating between instances with large and small independent sets based on their optimal robust revenue guarantees. In the remainder of this section we will make this argument precise.
4.2 Proof
To make the dependence on clear, we will write for the th marginal distribution for the instance constructed above. We will also write for the robust revenue guarantee of pricing for the corresponding problem instance, and similarly for and .
We will now establish upper and lower bounds on as a function of the size of the maximum independent set in . We begin with a lower bound on , which applies to any independent set (not just the maximum independent set).
Lemma 4.2.
If has an independent set , then .
Proof.
Given , we will construct a pricing such that . For each item , choose price . For each item , choose . Let be any distribution of values compatible with the marginals , and consider the revenue obtained under .
Choose some item . With probability we will have , in which case the utility the buyer obtains from buying item is . Since is an independent set, there is no such that and are adjacent in , and therefore we cannot have for any . Therefore, if item is not purchased, then it must be that there is some item with value where . Taking a union bound over items, the probability of this event is at most . Thus the probability that and item is purchased is at least .
Taking a sum over all items in , the total revenue obtained is therefore at least
We next prove an upper bound on the maxmin revenue as a function of the size of the maximum independent set in . This direction is more subtle, as we must argue that no pricing achieves more than the claimed revenue bound.
Lemma 4.3.
If is a maximum independent set of , then .
Proof.
Choose any pricing . We will partition the items into three sets, based on their price: set contains all items for which . Set contains all items for which . Set contains all items for which . We think of as the items whose prices are much lower than their (singleitem) revenuemaximizing price, and of as the items whose prices are higher than their revenuemaximizing price. We will bound the maximum revenue obtainable from each of these three sets.
For each , the probability that is at most . So since , the total revenue generated through sales of is at most . Since , the total revenue generated from items in (in any distribution compatible with the marginals) is at most .
Next consider . Choose some , and suppose that where . Then the probability that is at most . So since , the total revenue generated through sales of is at most . Since , the total revenue generated from items in (in any distribution compatible with the marginals) is at most .
Finally we consider the set of all items for which . This is the most interesting case. Note that if with , we must have (from the definition of ). For each , write . That is, is the set of neighbors of with lower index. Write for the subset of nodes . That is, contains all nodes of that have fewer than neighbors with lower index within .
We claim that there is a distribution compatible with the marginals for which the revenue generated from the items in is at most . We define this (correlated) distribution by describing a process for sampling from the distribution. First, choose at most one item to have value , consistent with the marginals (i.e., with probability ). If , choose some uniformly at random and set , and set all other values (including the values of items not in ) to . Note that since implies , this process sets with probability at most , which is consistent with the marginal . In the event that or if no item has value , values can be correlated arbitrarily.
Under this distribution , when item has value , there is exactly one other item with such that , and all other items have value . Since we have , so item will be sold. Since we know that , we conclude that the total revenue that can be generated from the event that is at most . Further, the total probability that is at most , so the total revenue generated from sales of item due to events where is at most .
On the other hand, for any item , we know that and the total probability that is at most . Thus the total revenue generated by selling item is at most . Since , we conclude that the total revenue obtained from sales of items in is at most as claimed.
Finally, we claim that , where recall that is a maximum independent set in . We will prove the claim by using to construct an independent set . Start with . Starting with the highestindexed node from , say , add to and remove and all of ’s neighbors from . From the definition of , this removes at most nodes from . Then take the highestindexed node still in , add it to , and remove it and its neighbors from . Repeat this process until is empty. As we removed at most nodes from on each step, we have . And by construction is an independent set. By maximality of , we must therefore have , and hence . Rearranging yields as claimed.
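The greedy extraction in this last claim can be sketched as follows (a hedged sketch with names of our own choosing; because the highest-indexed remaining vertex has only lower-indexed neighbors, removing all its neighbors is the same as removing its lower-indexed ones):

```python
def greedy_independent_set(nodes, adj):
    """nodes: vertex indices; adj: dict mapping a vertex to its neighbor set.
    Repeatedly takes the highest-indexed remaining vertex, so every neighbor
    it removes has a lower index than it does."""
    remaining = set(nodes)
    indep = []
    while remaining:
        v = max(remaining)
        indep.append(v)
        remaining -= adj[v] | {v}   # delete v and all of its neighbors
    return indep
```

If every vertex has fewer than d neighbors inside the starting set, each step removes at most d + 1 vertices, which yields the size bound used in the claim. On the path 1–2–3–4 the sketch returns the independent set {4, 2}.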
The total revenue obtained from sales of items in is therefore at most . Adding in the revenue contribution from and (and recalling that our bounds for and hold for any distribution and therefore for ), the total revenue generated under distribution is at most . ∎
We are now ready to complete the proof of Theorem 4.1.
Proof of Theorem 4.1.
The hardness result for MIS directly implies the following: there is a class of graphs in which the maximum independent set size is either less than or greater than and it is NPhard to decide whether a given graph falls into the former category or the latter.
Given an algorithm for the maxmin pricing problem, consider the following procedure for the MIS decision problem described above. Given a graph , let be the pricing returned by algorithm on input instance . Given , compute the Adversary’s best response distribution (using the algorithm from Section 3) and then use this to compute . If , our procedure declares that has an independent set of size at least ; otherwise, it declares that its maximum independent set size is less than .
We now claim that if the pricing algorithm has approximation factor at most , then the procedure above classifies graph instances correctly. Suppose has an independent set of size at least . Then by Lemma 4.2, . By the supposed approximation factor of the pricing algorithm, this means that for sufficiently large . The procedure therefore classifies such graphs correctly. On the other hand, suppose that the maximum independent set of has size at most . Then by Lemma 4.3, for sufficiently large , and hence as well. The procedure therefore classifies this class of graphs correctly as well, and is therefore correct in all cases.
We conclude that it is NPhard to achieve an approximation factor better than for any . Setting completes the proof. ∎
5 Uniform Pricing
In this section we study maxmin pricing instances where setting a uniform pricing (all items have the same price) or even a single price (a single item is offered at less than ) is maxmin optimal for constant . This kind of pricing is desirable due to its great simplicity, and in the case of a single price, also its agnosticism to the correlation (see Section 2.2). Since uniform pricings dominate single prices, we will mostly prove our positive guarantees for single prices and our lower bounds for uniform pricings. This only serves to strengthen our results, although in practice the seller may prefer uniform pricing, as this can only increase revenue.^{14}

^{14} For the same reason, and since tiebreaking only arises for uniform pricings and not for single prices, in this section we can assume essentially without loss of generality that the buyer breaks ties in favor of higherpriced items.
Warmup: Identical marginal distributions.
It is not hard to see that simple uniform pricing and in fact single prices arise naturally as the optimal maxmin pricing in the case of symmetric marginals: When marginals are identical, the adversary can choose as the correlation, such that all items have the same value at every value profile in the support. The seller can do no better than to use a single price in this case, since the buyer will always purchase a lowestpriced item. The maxmin pricing is thus the Myerson monopoly price of the shared marginal distribution, set as a single price for an arbitrary item. To summarize:
Observation 5.1.
For every setting with identical marginals there exists a single price that is maxmin optimal.
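As a sketch of Observation 5.1 in code (our helper, assuming a discrete marginal given explicitly as value–probability pairs), the monopoly price of a single marginal maximizes price times sale probability; restricting attention to support values is without loss for a single item.

```python
def monopoly_price(dist):
    """dist: list of (value, probability) pairs of a discrete marginal.
    Returns a support value maximizing price * Pr[value >= price]."""
    def revenue(p):
        return p * sum(q for v, q in dist if v >= p)
    return max((v for v, _ in dist), key=revenue)
```

Under identical, perfectly correlated marginals, posting this price as a single price (for an arbitrary item) is maxmin optimal, per the observation above.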
We emphasize that the takeaway message from the identical marginals case is not that the seller should price items at their individual monopoly prices—in fact, this pricing strategy could have dire robust revenue guarantees even in extremely simple cases due to cannibalization.^{15} This makes the unitdemand setting very different from the additive one. We expand upon this point in Section 5.2.

^{15} For example, consider items with marginals and where can be arbitrarily large. Let be the pricing based on individual monopoly prices; then since the buyer always prefers item 1. However, for the robust revenue guarantee is .
Section overview.
In this section we explore the extent to which we can extend the approach of using just one price to get maxmin (near)optimality. In Section 5.1 we show that if the marginals all have a monotone hazard rate (MHR), then a single price achieves a approximation (roughly of the maxmin optimum). The class of MHR marginals includes many wellstudied distributions such as uniform, normal, exponential, and logconcave distributions. In Section 5.2 we show the limitations of uniform pricing, by constructing an instance with regular marginals for which such a pricing is no better than a maxmin approximation. Our example highlights aspects in which (approximate) maxmin optimal pricing can be complex.
Section preliminaries.
In this section it will be convenient to assume that marginals are continuous and differentiable distributions with density functions denoted by for distribution .^{16} We further assume that is strictly increasing for every . The inverse function is thus welldefined, and for every the value at quantile is also welldefined (namely, ). In Section 5.2 we also consider truncated versions of such distributions, in which we allow a nonzero probability mass on . The inverse function of a truncated distribution can still be easily defined: if then let .

^{16} The results hold for discretized versions as well, with appropriate definitions of discrete MHR and discrete regularity as have already been developed in the literature (see, e.g., [23]).
5.1 ConstantFactor Approximation for MHR Marginals
A distribution with density is MHR if its hazard rate is (weakly) increasing as a function of the value . Our starting point is the following observation, which shows we cannot hope for better than a constant factor approximation:
Observation 5.2.
There is an instance with items and MHR marginals (in fact, uniform distributions) such that for some constant , no uniform pricing is maxmin optimal.
The proof is in Section D.1, and proceeds by considering uniform marginals and .
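To make the MHR definition concrete, here is a small numeric check (ours, not from the paper) that a distribution's hazard rate h(v) = f(v)/(1 - F(v)) is weakly increasing along a grid of values; the exponential distribution, with constant hazard rate, is the borderline MHR case invoked below for Lemma 5.4.

```python
import math

def is_mhr(pdf, cdf, grid, tol=1e-9):
    """Numerically check that the hazard rate pdf/(1 - cdf) is weakly
    increasing along the given sorted grid of values."""
    rates = [pdf(v) / (1.0 - cdf(v)) for v in grid if cdf(v) < 1.0]
    return all(r2 >= r1 - tol for r1, r2 in zip(rates, rates[1:]))

# The exponential distribution has constant hazard rate lam, making it the
# "extreme" MHR distribution: lowest hazard rates and heaviest tail.
lam = 2.0
exp_pdf = lambda v: lam * math.exp(-lam * v)
exp_cdf = lambda v: 1.0 - math.exp(-lam * v)
```

By contrast, a Pareto distribution with hazard rate proportional to 1/v is regular but not MHR, and the check above rejects it.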
Our main result in this section is that for MHR marginals, setting the maximum of the medians of the marginals as a single price (for the item with the corresponding marginal) is approximatelymaxmin optimal up to a small constant factor (recall that in practice one may prefer to use this price for all items uniformly):
Theorem 5.3 (Max median as a single price).
Consider a setting with MHR marginals and let be the maximum of the medians. Then a single price of for an item with median is maxmin optimal, and achieves a robust revenue guarantee against any joint distribution, where is the expected welfare with joint distribution .
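The pricing rule of Theorem 5.3 is simple to implement. The sketch below (our estimator; the theorem itself assumes exact medians, while sample-based estimation is only alluded to in the text) posts the largest estimated median as a single price for the corresponding item.

```python
import statistics

def max_median_single_price(samples_per_item):
    """samples_per_item: one list of independently sampled values per item.
    Returns (item_index, price): post the largest median as a single price."""
    medians = [statistics.median(s) for s in samples_per_item]
    best = max(range(len(medians)), key=lambda i: medians[i])
    return best, medians[best]
```

Note that each item's median is estimated from its own samples, consistent with having no access to the joint distribution.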
We now provide intuition for pricing using the maximum median. We remark that an advantage of such pricing is that even if the marginals for individual items are not fully known, the price can be easily estimated from a small number of samples [1] (this can be done with separate samples for individual items since, recall, there is no access to the joint distribution). Another immediate implication of Theorem 5.3 is that, since the approximation guarantee for MHR marginals is achieved with respect to expected welfare, this guarantee holds for any mechanism, even complex randomized ones.

5.1.1 Proof of Theorem 5.3
We begin by stating a property of MHR distributions that will be useful in the proof. This property reflects the fact that the exponential distribution is the “extreme” MHR distribution in the sense of lowest hazard rates and thus heaviest tail. Given an upper bound of on the value at quantile of an MHR distribution , the next lemma provides an upper bound on its value at quantile , using the value at of an exponential distribution with parameter (i.e., the exponential distribution that at quantile has value ).

Lemma 5.4 (MHR property).
Let be an MHR distribution and let be a quantile. If then