1 Introduction
It is by now quite well understood that optimal mechanisms are far from simple: they may be randomized [Tha04, BCKW10, HN13], behave nonmonotonically [HR15, RW15], and be computationally hard to find [CDW13, DDT14, CDP14, Rub16]. To cope with this, much recent attention has shifted to the design of simple, but approximately optimal, mechanisms (e.g. [CHK07, CHMS10, HN12, BILW14]). However, the majority of these works take a binary view on simplicity, developing simple mechanisms that guarantee constant-factor approximations. Only recently have researchers started to explore the tradeoff space between simplicity and optimality through the lens of menu complexity.
Hart and Nisan first proposed menu complexity as one quantitative measure of simplicity; it captures the number of different outcomes that a buyer might see when participating in a mechanism [HN13]. For example, the mechanism that offers only the grand bundle of all n items at some price p (or nothing, at price 0) has menu complexity one. The mechanism that offers each single item i at some price p_i (or nothing, at price 0) has menu complexity n, and randomized mechanisms may have infinite menu complexity.
Still, all results to date regarding menu complexity have been more qualitative than quantitative. For example, only just now is the state of the art able to show that for a single additive bidder with independent values for multiple items and any ε > 0, the menu complexity required for a (1 − ε)-approximation is finite [BGN17] (and even reaching this point was quite nontrivial). On the quantitative side, the best known positive results for a single additive or unit-demand bidder with independent item values require menu complexity exp(n) for a (1 − ε)-approximation, but the best known lower bounds have yet to rule out that poly(n) menu complexity suffices for a (1 − ε)-approximation in either case. In this context, our work provides the first nearly tight quantitative bounds on menu complexity in any multidimensional setting.
1.1 One-and-a-half dimensional mechanism design
The setting we consider is the so-called "FedEx Problem," first studied in [FGKK16]. Here, there is a single bidder with a value v for the item and a deadline d for receiving it, and the pair (v, d) is drawn from an arbitrarily correlated distribution where the number n of possible deadlines is finite. The buyer's value for receiving the item by her deadline is v, and her value for receiving the item after her deadline (or not at all) is 0. While technically a two-dimensional problem, optimal mechanisms for the FedEx problem don't suffer the same undesirable properties as "truly" two-dimensional problems. Still, the space of optimal mechanisms is considerably richer than in single-dimensional problems (hence the colloquial term "one-and-a-half dimensional"). More specifically, while the optimal mechanism might be randomized, it has menu complexity at most 2^n − 1, and there is an inductive closed-form solution describing it. Additionally, there is a natural condition on each F_i (the marginal distribution of v conditioned on deadline i) guaranteeing that the optimal mechanism is deterministic (and therefore has menu complexity at most n).^{1}^{1}1This condition is called "decreasing marginal revenues," and asserts that the marginal revenue is monotone nonincreasing; equivalently, the revenue curve (defined below) is concave, so no ironing is needed.
A number of recent (and not-so-recent) works examine similar settings, such as when the buyer has a value and a budget [LR96, CG00, CMM11, DW17], or a value and a capacity [DHP17], and observe similar structure in the optimal mechanism. Such settings are quickly gaining interest within the algorithmic mechanism design community, as they are rich enough for optimal mechanisms to be highly nontrivial, but not quite so chaotic as truly multidimensional settings.
1.2 Our results
We study the menu complexity of optimal and approximately optimal mechanisms for the FedEx problem. Our first result proves that the upper bound on the menu complexity of the optimal mechanism provided by Fiat et al.’s algorithm is exactly tight:
Theorem 1.1.
For all n, there exist instances of the FedEx problem on n days where the menu complexity of the optimal mechanism is 2^n − 1.
From here, we turn to approximation and prove our main results. First, we show that polynomial menu complexity suffices for a (1 − ε)-approximation. The guarantee below is always polynomial in n and 1/ε, but is often improved for specific instances. Below, if the FedEx instance happens to have integral support and the largest value is H, we can get an improved bound (but if the support is continuous or otherwise nonintegral, we can just take the term depending only on ε instead).^{2}^{2}2Actually, our bounds can be improved to replace H with many other quantities that are always at most H and remain well-defined for continuous distributions; more on this in Section 4.
Theorem 1.2.
For all ε > 0 and all instances of the FedEx problem on n days, there exists a mechanism of menu complexity polynomial in n, 1/ε, and min{ln H, ln(1/ε)} guaranteeing a (1 − ε)-approximation to the optimal revenue.
In Theorem 1.2, observe that for any fixed instance, as ε → 0, our bound grows like the ln H term (because eventually ln(1/ε) will exceed ln H). Similarly, our bound is always polynomial in n and 1/ε for any instance. Both of these dependencies are provably tight for our approach (discussed shortly in Section 1.3), and in general tight up to a small polynomial factor in n.^{3}^{3}3The gap comes as our upper bound approach requires that we lose only a small fraction of the revenue "per day," while our lower bound approach shows that any mechanism with lower menu complexity loses the entire fraction on some single day.
Theorem 1.3.
For all ε > 0 and all n, there exists an instance of the FedEx problem on n days, with H bounded as a function of n and 1/ε, such that the menu complexity of every mechanism guaranteeing a (1 − ε)-approximation to the optimal revenue nearly matches the upper bound of Theorem 1.2.
1.3 Our techniques
We’ll provide an intuitive proof overview for each result in the corresponding technical section, but we briefly want to highlight one aspect of our approach that should be of independent interest.
It turns out that the problem of revenue maximization with bounded menu complexity really boils down to a question of how well piecewise linear functions with a bounded number of segments can approximate concave functions (we won't get into the details of why this is the case until Section 4). This is a quite well-studied problem called polygon approximation (e.g. [Rot92, YG97, BHR91]). Questions asked here are typically of the form: "for a concave function f on an interval [l, r] (suitably normalized), what is the minimum number of segments a piecewise linear function L must have to guarantee |f(x) − L(x)| ≤ ε for all x ∈ [l, r]?"
The answer to the above question is Θ(√(1/ε)) segments [Rot92, YG97]. This bound certainly suffices for our purposes to get some bound on the menu complexity of approximate auctions, but it would be much weaker than what Theorem 1.2 provides (we'd have linear instead of logarithmic dependence on the scale of the instance, and no option to remove it from the picture completely). Interestingly though, for our application, absolute additive error doesn't tightly characterize what we need (again, we won't get into why this is the case until Section 4). Instead, we are really looking for the following kind of guarantee, which is a bit of a hybrid between additive and multiplicative: for a concave function f on [l, r] (suitably normalized), what is the minimum number of segments a piecewise linear function L must have to guarantee f(x) − L(x) ≤ ε·x for all x ∈ [l, r]?
At first glance it seems like this really shouldn't change the problem at all: why not just rescale the error target and plug into the upper bounds of Rote? This is indeed completely valid, and we could again chase through and obtain some weaker version of Theorem 1.2 that also references additional parameters in unintuitive ways. But it turns out that for all examples in which that dependence is tight, there is actually quite a large gap between the additive and the hybrid guarantees, and a greatly improved bound is possible (which replaces the linear dependence on the scale with logarithmic dependence, and provides an option to remove the scale from the picture completely at the cost of a worse dependence on ε).
Theorem 1.4.
For any ε > 0 and any concave function f on [l, r] (suitably normalized), there exists a piecewise linear function L satisfying f(x) − L(x) ≤ ε·x for all x ∈ [l, r] whose number of segments depends only logarithmically on the scale, and this is tight.
If one wishes to remove the dependence on the scale, then one can replace the bound with one depending only on ε, which is also tight (among bounds that don't depend on the scale).
The proof of Theorem 1.4 is self-contained and appears in Section 4. Both the statement of Theorem 1.4 and our proof will be useful for future work on menu complexity, and possibly outside of mechanism design as well; to the best of our knowledge, these kinds of hybrid guarantees haven't been previously considered.^{4}^{4}4Interestingly (and completely unrelated to this work), hybrid additive-multiplicative approximations for core problems in online learning have also found use in other recent directions in AGT [DJF16, SBN17].
1.4 Related work
Menu complexity. Initial results on menu complexity prove that for a single additive or unit-demand bidder with arbitrarily correlated values over just two items, there exist instances where the optimal (randomized, with infinite menu complexity) mechanism achieves infinite revenue, while any mechanism of finite menu complexity achieves only finite revenue (so no finite approximation is possible with bounded menu complexity) [BCKW10, HN13]. This motivated follow-up work subject to assumptions on the distributions, such as a generalized hazard rate condition [WT14], or independence across item values [DDT13, BGN17]. Even for a single bidder with independent values for two items, the optimal mechanism could have uncountable menu complexity [DDT13], motivating the study of approximately optimal mechanisms subject to these assumptions. Only just recently did we learn that the menu complexity is indeed finite for this setting [BGN17].
It is also worth noting that other notions of simplicity have been previously considered as well, such as the sample complexity (how many samples from a distribution are required to learn an approximately optimal auction?). Here, quantitative bounds are known for the single-item setting (where the menu complexity question is trivial: optimal mechanisms have menu complexity one) [CR14, HMR15, DHP16, GN17], but again only binary bounds are known for the multi-item setting: few samples suffice for a constant-factor approximation if values are independent [MR15, MR16], while exponentially many samples are required when values are arbitrarily correlated [DHN14]. In comparison to the works of the previous paragraphs, we are the first to nail down "the right" quantitative menu complexity bounds in any multidimensional setting.
One-and-a-half dimensional mechanism design. One-and-a-half dimensional settings have been studied for decades by economists, the most notable example possibly being that of a single buyer with a value and a budget [LR96, CG00]. Recently, such problems have become popular within the AGT community, as optimal auctions are more involved than in single-dimensional settings, but not quite so chaotic as in truly multidimensional settings [FGKK16, DW17, DHP17]. Each of these works focuses exclusively on exactly optimal mechanisms (and exclusively on positive results). In comparison, our work is both the first to prove lower bounds on the complexity of (approximately) optimal mechanisms in these settings, and the first to provide nearly optimal mechanisms that are considerably less complex.
Polygon approximation. Prior work on polygon approximation is vast, and includes, for instance, core results on univariate concave functions [Rot92, BHR91, YG97], the study of multivariate functions [Bro08, GG09, DDY16], and even applications in robotics [BGO07]. The more recent work has mostly pushed toward better guarantees for higher-dimensional functions. To the best of our knowledge, the kinds of guarantees we target via Theorem 1.4 haven't been previously considered, and could prove more useful than absolute additive guarantees for some applications.
1.5 Organization
In Section 2, we formally describe the FedEx problem and recap the main result of [FGKK16]. In Section 3, we present an instance of the FedEx problem whose menu complexity for optimal auctions is exponential, the worst possible. In Section 4, we present a mechanism that guarantees a (1 − ε) fraction of the optimal revenue with the menu complexity promised by Theorem 1.2. We also explain the connection between approximate auctions and polygon approximation. In Section 5, we present an instance of the FedEx problem that requires high menu complexity in order to approximate the optimal revenue within a (1 − ε) factor. In Section 7, we use techniques similar to those of Section 3 to construct an example resolving an open question of [DW17].^{5}^{5}5Specifically, [DW17] ask whether the optimal mechanism for a single buyer with a private budget and a regular value distribution conditioned on each possible budget is deterministic. The answer is yes if we replace "regular" with "decreasing marginal revenues," or "private budget" with "public budget." We show that the answer is no in general: the optimal mechanism, even subject to regularity, could be randomized.
2 Preliminaries
We consider a single bidder whose type depends on two parameters: a value v and a deadline d. Deterministic outcomes that the seller can award are just a day on which to ship the item, or to not ship the item at all (and the seller may also randomize over these outcomes). The buyer receives value v if the item is shipped by her deadline, and 0 if it is shipped after her deadline (or not at all).
The types are drawn from a known (possibly correlated) distribution D. Let q_i denote the probability that the bidder's deadline is i, and let F_i denote the marginal distribution of v conditioned on a deadline of i. For simplicity of exposition, in several parts of this paper we'll assume that each F_i is supported on {1, ..., H}. This assumption is w.l.o.g., and all results extend to continuous distributions, or distributions with arbitrary discrete support, if desired [CDW16]. In Appendix A, we provide the standard linear program whose solution yields the revenue-optimal auction for the FedEx problem. We note here only the relevant incentive compatibility constraints (observed in [FGKK16]). First, note that w.l.o.g., whenever the buyer has deadline i, the optimal mechanism can ship her the item (if at all) exactly on day i. Shipping the item earlier doesn't make her any happier, but might make a buyer whose deadline is in fact earlier interested in misreporting and claiming a deadline of i. Next note that, subject to this, the buyer never has an incentive to overreport her deadline, but she still might have an incentive to underreport her deadline (or misreport her value).

We will be interested in understanding the menu complexity of auctions, which is the number of different outcomes that, depending on the buyer's type, are ever selected. Viewing an outcome as the probability of receiving the item (on the relevant day) together with the accompanying payment, we define the deadline-i menu complexity to be the number of distinct options ever selected on deadline i. The menu complexity then just sums the deadline menu complexities, and we will sometimes also refer to the "deadline menu complexity" of a mechanism, meaning the maximum of its per-deadline menu complexities.
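To make the bookkeeping concrete, here is a minimal sketch (our own illustration; the instance below is hypothetical and not taken from the paper) of how menu complexity is tallied, representing each deadline's menu as a list of (probability, price) options:

```python
# Minimal sketch (hypothetical instance): tallying menu complexity.
# Each deadline's menu is a list of options (q, p): "receive the item
# on that day with probability q, at price p".

def deadline_menu_complexity(menu):
    """Number of distinct options ever offered for one deadline."""
    return len(set(menu))

def menu_complexity(mechanism):
    """Total menu complexity: the sum of the per-deadline counts."""
    return sum(deadline_menu_complexity(menu) for menu in mechanism)

# A hypothetical 3-day mechanism: deterministic on days 1 and 2,
# randomized (two lottery options) on day 3.
mech = [
    [(1.0, 5.0)],              # day 1: ship for sure at price 5
    [(1.0, 4.0)],              # day 2: ship for sure at price 4
    [(1.0, 4.0), (0.5, 1.5)],  # day 3: full price, or a half-lottery
]
# menu_complexity(mech) == 1 + 1 + 2 == 4
```

Under this representation, the "deadline menu complexity" in the maximum sense above would be 2 for this instance (attained on day 3).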
2.1 Optimal auctions for the FedEx problem
Here, we recall some tools from [FGKK16] regarding optimal mechanisms for the FedEx problem. The first tool they use is the notion of a revenue curve.^{6}^{6}6For those familiar with revenue curves, note that this revenue curve is intentionally drawn in value space, and not quantile space.
Definition 2.1 (Revenue curves).
For a given deadline i, define the revenue curve R_i so that R_i(p) = p · Pr_{v ~ F_i}[v ≥ p].
Intuitively, R_i(p) captures the achievable revenue from setting price p exclusively for consumers with deadline i. It is also necessary to consider the ironed revenue curve, defined below.
Definition 2.2 (Ironed revenue curves).
For any revenue curve R_i, define R̄_i to be its upper concave envelope.^{7}^{7}7That is, R̄_i is the smallest concave function satisfying R̄_i(p) ≥ R_i(p) for all p. We say R_i is ironed at p if R̄_i(p) > R_i(p), and we call (a, b) an ironed interval of R_i if R_i is not ironed at a or b, but is ironed at every p ∈ (a, b).
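The two definitions above can be illustrated with a small sketch (our own; the discrete grid, distribution, and function names are hypothetical). It computes a revenue curve on a price grid and its upper concave envelope via a standard upper-hull sweep:

```python
# Sketch (our own illustration; grid and distribution are hypothetical):
# a revenue curve R(p) = p * Pr[v >= p] on a price grid, and its ironed
# version, the upper concave envelope, via a monotone-chain upper hull.

def revenue_curve(values, probs, grid):
    """R(p) = p * Pr[v >= p] for each price p in the grid."""
    return [p * sum(q for v, q in zip(values, probs) if v >= p) for p in grid]

def upper_concave_envelope(xs, ys):
    """Smallest concave function above the points (xs[i], ys[i]),
    evaluated back on xs (upper hull + linear interpolation)."""
    hull = []  # indices of upper-hull vertices, left to right
    for i in range(len(xs)):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies (weakly) below the chord from a to i
            if (ys[b] - ys[a]) * (xs[i] - xs[a]) <= (ys[i] - ys[a]) * (xs[b] - xs[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    out, j = [], 0
    for i in range(len(xs)):
        while j + 1 < len(hull) and xs[hull[j + 1]] < xs[i]:
            j += 1
        if i == hull[j] or j + 1 == len(hull):
            out.append(ys[hull[j]])
        else:
            a, b = hull[j], hull[j + 1]
            t = (xs[i] - xs[a]) / (xs[b] - xs[a])
            out.append((1 - t) * ys[a] + t * ys[b])
    return out

grid = [1, 2, 3, 4]
R = revenue_curve([1, 2, 4], [0.5, 0.3, 0.2], grid)  # ~ [1.0, 1.0, 0.6, 0.8]
ironed = upper_concave_envelope(grid, R)             # ~ [1.0, 1.0, 0.9, 0.8]
# price 3 is "ironed": the envelope (~0.9) strictly exceeds R (~0.6) there,
# and (2, 4) is the ironed interval.
```

In this toy instance the ironed interval is (2, 4): the revenue curve dips at price 3, and the envelope bridges the dip with a line segment.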
Of course, it is not sufficient to consider each possible deadline of the buyer in isolation. In particular, offering certain options on day i constrains what can be offered on later days, subject to incentive compatibility. For instance, if some type receives the item with probability 1 on day i for price p, no bidder with a later deadline will ever choose to pay more than p (she could instead report deadline i and still receive the item by her deadline). So we would also like a revenue curve that captures the optimal revenue we can make from the later days, conditioned on selling the item deterministically at price p on day i. It's not obvious how to construct such a curve, but this is one of the main contributions of [FGKK16], stated below.
Definition 2.3.
Let R^{(n)} = q_n · R_n, and define, for i from n − 1 down to 1: R^{(i)} = q_i · R_i + R̄^{(i+1)}, where R̄^{(i+1)} denotes the upper concave envelope of R^{(i+1)}.
Lemma ([FGKK16]).
The day-i curve of Definition 2.3, evaluated at p, is the optimal revenue of any mechanism that satisfies the following:

• The buyer can either receive the item on day i and pay p, or receive nothing and pay nothing.

• The buyer cannot receive the item on any earlier day.

Moreover, for any day i and any menu of lottery options for day i (interpreted, as below, as a distribution over prices), the corresponding mass-weighted average of the day-i curve is the optimal revenue of any mechanism that satisfies the following:

• The buyer can receive the item on day i with the probability and price of any offered lottery option (or not receive the item on day i and pay nothing).

• The buyer cannot receive the item on any earlier day.
Finally, we describe the optimal mechanism provided by [FGKK16], which essentially places mass optimally upon each day's revenue curve, subject to constraints imposed by the decisions of previous days (a more detailed description appears in Appendix A, but the description below will suffice for our paper). First, simply set any price maximizing the day-1 curve of Definition 2.3 to receive the item on day 1 (as day 1 is unconstrained by previous days). Now inductively, assume that the options for day i − 1 have been set and we're deciding what to do for day i. If the menu options offered on day i − 1 are lotteries (q_j, p_j) (interpret such an option as "charge p_j to ship the item on day i − 1 with probability q_j"), think of this menu instead as a distribution over prices, where each marginal price carries the corresponding marginal probability mass.^{8}^{8}8This is the standard transformation between "lotteries" and "distributions over prices" (e.g. [RZ83]). Each price p carrying mass then undergoes one of the following three operations to become an option for day i (below, p* denotes a maximizer of the day-i curve of Definition 2.3, and "ironed" refers to that curve):

• If p ≤ p*, move all mass from p to p*.

• If the curve is not ironed at p, and p > p*, keep all mass at p.

• If the curve is ironed at p, and p > p*, let (a, b) denote the ironed interval containing p. Move a (b − p)/(b − a) fraction of the mass at p to a, and a (p − a)/(b − a) fraction of the mass at p to b.

Once the mass is resettled, the distribution over prices is converted back into a menu of lotteries for day i: for each price p carrying mass, the buyer has the option to receive the item on day i with probability equal to the total mass on prices at most p, at the corresponding total price (or to receive nothing at all). Note that due to case three in the transformation above, there could be up to twice as many menu options on day i as on day i − 1.
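The third ("split") operation can be sketched as follows. Note that the split fractions shown, chosen so that the expected price is preserved, are our assumption for illustration; they are not quoted verbatim from [FGKK16]:

```python
# Sketch of the "split" operation (case three). The fractions below are
# chosen so that the expected price is preserved; this mean-preserving
# split is our assumption for illustration, not a quote from [FGKK16].

def split_mass(p, mass, a, b):
    """Split `mass` at price p inside ironed interval (a, b) to its endpoints."""
    assert a < p < b
    t = (b - p) / (b - a)              # fraction sent to the left endpoint a
    return {a: t * mass, b: (1 - t) * mass}

moved = split_mass(p=3.0, mass=0.4, a=2.0, b=4.0)
# moved == {2.0: 0.2, 4.0: 0.2}; the mean price (1.2 in total) is
# unchanged: 2.0 * 0.2 + 4.0 * 0.2 equals 3.0 * 0.4
```

Since one unit of mass becomes two, repeated splits are exactly what drives the doubling of menu options from day to day.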
Theorem 2.1 ([FGKK16]).
The allocation rule described above is the revenueoptimal auction.
3 Optimal Mechanisms Require Exponential Menu Complexity
In this section we overview our construction of an instance of the FedEx problem with integral values for each day, on n days, where the deadline-i menu complexity of the optimal mechanism is 2^{i−1} for every day i (and this is the maximum possible [FGKK16]), implying that the menu complexity is 2^n − 1. Note that the deadline-i menu complexity is always upper bounded by 2^{i−1}, so our construction is the worst possible.
At a high level, constructing the example appears straightforward once one understands Fiat et al.'s algorithm (end of Section 2). Every menu option from day i − 1 is either "shifted" (to the maximizer), "copied," or "split." If the option is shifted or copied, it spawns only a single menu option on day i, while if split it spawns two (hence the upper bound of 2^{i−1}). So the goal is just to construct an instance where every option is split on every day.
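The worst-case counts implied by this shift/copy/split view can be checked directly: day 1 carries a single option, a split doubles the count from one day to the next, and the per-day counts sum to 2^n − 1 (a quick sanity check in Python):

```python
# Quick sanity check of the worst case: day 1 offers a single price,
# and every option splits into two on each subsequent day.

def worst_case_counts(n):
    counts = [1]                       # day 1: one menu option
    for _ in range(n - 1):
        counts.append(2 * counts[-1])  # every option splits
    return counts

print(worst_case_counts(5))       # [1, 2, 4, 8, 16]
print(sum(worst_case_counts(5)))  # 31, i.e. 2**5 - 1
```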
Unfortunately, this is not quite so straightforward: whether or not an option is split depends on whether it lies inside an ironed interval of the relevant day's curve, which is itself the sum of revenue curves (some ironed and some not), and going back and forth between distributions and sums of revenue curves is somewhat of a mess. So really what we'd like to do is construct the curves directly, and be able to claim that there exists a FedEx input inducing them. While not every profile of curves is valid, we do provide a broad class of curves for which it is somewhat clean to show that there exists a FedEx input inducing them.
From here, it is then a matter of ensuring that we can find the revenue curve profiles we want (where for every day, every menu option is split, because it lies inside an ironed interval of the next day's curve) within our class. We'll highlight parts of our construction below, but most details are in Appendix B.
Lemma .
For any n (and a suitably large H), there exists an input to the FedEx problem such that:

• The day-1 revenue curve is maximized at a unique price and has no ironed intervals.

• For all i > 1, the day-i revenue curve has a maximizer at a prescribed price and has the prescribed ironed intervals.

• The ironed revenue curve has the same constant slope across all of its ironed intervals.^{9}^{9}9Note that it is possible for two disjoint ironed intervals to have the same slope.

• The day-i curve of Definition 2.3 has the same ironed intervals as the day-i ironed revenue curve; in fact, the two differ only by an additive constant.
We include in Figure 2 a picture of the generated revenue curves for a small instance. As a result of this construction, we see that each day's curve has ironed intervals whose endpoints themselves lie in ironed intervals of the next day's curve. This guarantees that all menu options from day i (which are guaranteed to be endpoints of ironed intervals) are split into two options on day i + 1. The proof of Theorem 3.1 (which implies Theorem 1.1) formalizes this.
Theorem 3.1.
The optimal mechanism for any instance satisfying the conditions of Lemma 3 has deadline-i menu complexity 2^{i−1} for every day i, and menu complexity 2^n − 1.
4 Approximately Optimal Mechanisms with Small Menus
In this section, we describe a mechanism that attains at least a (1 − ε) fraction of the optimal revenue for any FedEx instance, with the menu complexity promised by Theorem 1.2, which proves that theorem. Most proofs appear in Appendix C, but we overview our approach here.
Our main approach is to apply the polygon approximation of concave functions to revenue curves. For a sequence of points x_1 < ... < x_k in the domain of a function f, the polygon approximation of f with respect to the sequence is the piecewise linear function L formed by connecting the consecutive points (x_j, f(x_j)) by line segments. Thus, if the sequence has k points, the function L will have k − 1 segments. For a concave function f, the line joining (x_j, f(x_j)) and (x_{j'}, f(x_{j'})), for any two points x_j and x_{j'}, lies entirely below f between them. Thus, for concave f and any sequence, we have L(x) ≤ f(x) throughout [x_1, x_k]. Typically, for a "good" polygon approximation, one requires that f(x) − L(x) be small for all x.
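As a concrete (if naive) illustration of polygon approximation, the following sketch, our own and not the construction of [Rot92], greedily extends each chord over a finite grid until the additive error would exceed ε; for concave f the chord lies below f, so the error at x is f(x) − L(x):

```python
# Naive greedy polygon approximation of a concave function on a grid
# (our own sketch, not the construction of [Rot92]): extend the current
# segment while the chord stays within additive error eps at every
# grid point it spans.

def greedy_breakpoints(f, xs, eps):
    """Indices of breakpoints so that each chord errs by at most eps."""
    bps, i = [0], 0
    while i < len(xs) - 1:
        j = i + 1
        while j + 1 < len(xs):
            a, b = xs[i], xs[j + 1]
            chord = lambda x: f(a) + (f(b) - f(a)) * (x - a) / (b - a)
            # for concave f the chord lies below f, so the gap is f - chord
            if max(f(x) - chord(x) for x in xs[i:j + 2]) <= eps:
                j += 1
            else:
                break
        bps.append(j)
        i = j
    return bps

xs = [k / 1000 for k in range(1001)]
f = lambda x: x ** 0.5                 # concave on [0, 1]
bps = greedy_breakpoints(f, xs, eps=0.01)
# len(bps) - 1 segments suffice for additive error 0.01 on this grid
```

Note that steep regions of f (near 0 for the square root) force many short segments; the hybrid guarantee discussed in Section 1.3 is designed precisely to relax the error target in such regions.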
It turns out that the question of approximating revenue with low menu complexity boils down to a question of approximating revenue curves with piecewiselinear functions of few segments. The connection isn’t quite obvious, but isn’t overly complicated. Without getting into formal details, here is a rough sketch of what’s going on:
• Recall the Fiat et al. procedure to build the optimal mechanism: menu options from deadline i − 1 might be "split" into two options for deadline i if they lie inside an ironed interval of the day-i curve. This might cause the menu complexity to double from one deadline to the next.
• Instead, we want to create a small number of "anchoring points" on each revenue curve. For a menu option from deadline i − 1, instead of distributing it to the endpoints of its ironed interval, we distribute it to the two nearest anchor points.
• By subsection 2.1, we know exactly how to evaluate the revenue lost by this change, and it turns out this is captured by the maximum gap between the curve and the polygon approximation through the anchor points (this isn't obvious, but not hard; see Appendix C).
• Finally, it turns out that the deadline menu complexity is bounded by the number of anchoring points on the corresponding curve (also not quite obvious, but also not hard). So the game is to find few anchoring points that obtain a good polygon approximation to each revenue curve. The corollary below (proved in subsection C.4) describes the reduction formally, but all related proofs are in Appendix C.
Corollary .
Consider a FedEx instance with n deadlines. For each deadline, fix a sequence of anchor points on the corresponding curve of Definition 2.3, and suppose the polygon approximation through these points is within a given additive error of that curve. Then there exists a mechanism whose deadline menu complexities are bounded by the numbers of anchor points (and whose menu complexity is bounded by their sum), and whose revenue is at least OPT minus the total of the errors.
Here, OPT denotes the optimal revenue of the FedEx instance.
At this point, it seems like the right approach is to just set each error target proportionally to ε · OPT and plug into the best existing bounds on polygon approximation. In some sense this is correct, but the menu complexity bounds one would obtain are far from optimal. The main insight is that we know something about the curves we wish to approximate beyond concavity, and we want to leverage this fact if it can give us better guarantees. Additionally, if all values are integral in the range {1, ..., H}, we wish to leverage this fact as well, as it implies that a small additive loss is also OK (since OPT ≥ 1). It turns out that both facts can indeed be leveraged to obtain much stronger approximation guarantees than what was already known, stated in Theorem 4.1 below.
Theorem 4.1.
For any ε > 0 and any concave function f satisfying the appropriate boundary conditions (stated via one-sided derivatives),^{10}^{10}10We use f'_+ to denote the right-hand derivative and f'_− to denote the left-hand derivative. there exists a sequence of points, of the length promised by Theorem 1.4, such that the polygon approximation L through these points satisfies the hybrid guarantee f(x) − L(x) ≤ ε·x for all x.
The proof of Theorem 1.2 follows from the corollary above and Theorem 4.1, together with a little bit of algebra, and is deferred until Appendix C.
Finally, we remark on some alternative terms that can be taken to replace H in Theorem 1.2. It will become clear why these replacements are valid after reading the proof of Theorem 1.2, but we will not further justify them here.
• First, for instances with integral valuations, we may replace H everywhere with a smaller instance-specific quantity. This is essentially because we don't actually need to approximate each curve on the entire interval [1, H], but only on a subinterval.
• We may further define an analogous quantity for any (not necessarily integral, possibly continuous) instance, and replace H everywhere with it, even for nonintegral instances. This is essentially because we only used the integrality assumption to guarantee that OPT ≥ 1.
• Finally, letting q denote the probability that the buyer has value at least a given price and deadline at least a given day, observe that OPT is at least that price times q. So if the probability of sale at each offered option is bounded below, we may bound the quantity from the previous bullet accordingly and use it in place of H.
The bullets above suggest that the "hard" instances (where some instance-specific parameter shows up in order to maintain optimal dependence on ε) are those where most of the revenue comes from very infrequent events where the buyer has an unusually high value. Due to the intricate interaction between different deadlines, these parameters can't be circumvented with simple discretization arguments, or by improved polygon approximations (provably; see Section 4.1), but it is certainly interesting to see whether other arguments might allow one to remove the instance-specific parameter entirely.
4.1 A tight example for polygon approximation
It turns out that the guarantees provided by Theorem 4.1 are tight. Specifically, if no dependence on the scale is desired, then the ε-only bound is the best achievable. Also, if it's acceptable to depend on both ε and the scale, then the main bound of Theorem 4.1 is tight. Taken together, this means that Theorem 4.1 lies on the Pareto frontier of the dependences achievable as a function of both parameters. The examples proving tightness of these bounds are actually quite simple, and provably the worst possible examples (the proof of the claim below appears in Appendix C).
Proposition .
Let f be a concave function on an interval, and suppose there is no polygon approximation of f using a given number of segments for a given additive error. Then there exists a concave function g over a corresponding interval satisfying:

• There is no polygon approximation of g using the same number of segments for the corresponding hybrid error.

• g satisfies the boundary conditions of Theorem 4.1.

• g is piecewise linear with few segments.
5 Tightness of the approximation scheme
Finally, we construct an instance of the FedEx problem that is hard to approximate with small menu complexity. We reason similarly to the example constructed in Section 3, but things are trickier here. In particular, the challenge in Section 3 was in mapping between distributions and revenue curves. But once we had the revenue curves, it was relatively straightforward to plug through Fiat et al.'s algorithm [FGKK16] and ensure that the optimal auction had high menu complexity.
Already nailing down the behavior of an optimal auction was tricky enough, but we now have to consider every approximately optimal auction (almost all of which don't result from Fiat et al.'s algorithm; see, e.g., Section 4). Indeed, one can imagine doing all sorts of strange things on any day that are suboptimal, but might somehow avoid the gradual buildup in the deadline menu complexity.^{11}^{11}11For example, an approximate menu could randomize between two prices for shipment on any day, or do something much more chaotic.
To cope with this, our approach has two phases: first, we characterize a restricted class of auctions that we call clean. At a very high level, clean auctions never make "bizarre" choices on day i that both decrease the revenue gained on day i and strictly increase the constraints on choices available for future days. To have an example in mind: if the revenue on day i is maximized by setting a price of p, it might make sense to set a higher price to receive the item on day i instead, as this relaxes constraints on future days, and maybe this somehow helps when also constrained by menu complexity. But it makes no sense to instead set a lower price: this only decreases the revenue achieved on day i, and imposes stricter constraints on future days (as the buyer now has the option to get the item on day i at a cheaper price).
For our example, we first show that all clean auctions that maintain a good approximation ratio must have high menu complexity. We then follow up by making the claims in the previous paragraph formal: any auction of low menu complexity can be derived by "muddling" a clean auction, a process which never increases the revenue. A little more specifically, cleaning the menu for deadline i can only increase the revenue and allow more options on later deadlines, without increasing the menu complexity. Formal definitions and claims related to this appear in Appendix D. We conclude with a formal statement of our lower bound, which proves Theorem 1.3.
Theorem 5.1.
Any mechanism for the FedEx instance described in subsection D.1 whose number of menu options on each day falls below the stated bound has revenue at most a (1 − ε) fraction of the optimal revenue.
6 Conclusions and Future Work
We provide the first nearly tight quantitative results on menu complexity in a multidimensional setting. Along the way, we design new polygon approximations for a hybrid additive-multiplicative guarantee that turns out to be just right for our application (as evidenced by the nearly matching lower bounds obtained from the same ideas).
There remains much future work in the direction of menu complexity, most notably the push for tighter quantitative bounds in "truly" multidimensional settings, where the gaps between upper (exponential) and lower (polynomial) bounds are vast. We believe that continuing the polygon approximation approach is likely to yield fruitful results. After all, there is a known connection between concave functions and any mechanism design setting via utility curves, and low menu complexity exactly corresponds to piecewise linear utility curves with few segments. Still, there are two serious barriers to overcome: first, these utility curves are now multidimensional, instead of single-dimensional revenue curves. And second, the relationship between utility curves and revenue is somewhat odd (expected revenue is equal to an integral over the support of the value distribution), whereas the relationship between revenue curves and revenue is more direct. There are also intriguing directions for future work along the lines of one-and-a-half dimensional mechanism design, the most pressing of which is understanding multi-bidder instances (as all existing work, including ours, is still limited to the single-bidder setting).

7 Instances with regular distributions may require randomness
For single-dimensional settings, it's well understood that "the right" technical condition on value distributions to guarantee a simple optimal mechanism is regularity. This guarantees that "virtual values" are nondecreasing and removes the need for ironing, even in multi-bidder settings. Interestingly, "the right" technical condition to guarantee a simple optimal mechanism in one-and-a-half dimensional settings is no longer regularity, but decreasing marginal revenues. For example, if all marginals satisfy decreasing marginal revenues, the optimal mechanism is deterministic for the FedEx problem [FGKK16], for selling a single item to a budget-constrained buyer [CG00, DW17], and for a capacity-constrained buyer [DHP17].
Still, regularity seems to buy something in these problems. For instance, Fiat et al. show that when there are only two possible deadlines, regularity suffices to guarantee that the optimal mechanism is deterministic. It has also been known since early work of Laffont and Robert that regularity suffices to guarantee that the optimal mechanism is deterministic when selling to a budget-constrained buyer with only one possible budget [LR96]. But the extent to which regularity guarantees simplicity remained open (and was explicitly stated as such in [DW17]). In this section, we show that regularity guarantees nothing beyond what was already known. In particular, there exists an instance of the FedEx problem with three possible deadlines where all marginals are regular but the optimal mechanism is randomized. This immediately implies an example for a budget-constrained buyer and three possible budgets as well (for instance, by setting all three budgets large enough that they never bind).
We now describe our instance of the FedEx problem where the optimal auction is randomized, despite all marginals being regular and there being only three possible deadlines (recall that Fiat et al. show that the optimal auction remains deterministic for regular marginals and two deadlines). Throughout this section, we use the revenue-curve parameterization of [FGKK16].
7.1 The setting
Consider a single bidder whose value is drawn from a different distribution on each of days 1, 2, and 3.
The bidder's deadline is drawn from a distribution over the three days.
Note that the three distributions are regular but don’t have decreasing marginal revenues.
7.2 Analysis
We use the iterative procedure described in [FGKK16] to find the optimal auction.
All real values written are approximate. The entry in boldface denotes the ironed interval. The optimal price falls inside that ironed interval; hence the optimal auction has to be randomized.
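The ironing step of this procedure amounts to taking an upper concave hull of a discrete revenue curve. The following is a minimal sketch (our own code, not the paper's); the function `iron` and its interval-reporting convention are our names. The [FGKK16] procedure applies such a hull computation day by day.

```python
import numpy as np

def iron(values, revenues):
    """Ironing of a discrete revenue curve via its upper concave hull.

    Returns the ironed revenues and the maximal index intervals on which
    the curve was strictly ironed.
    """
    hull = []  # indices of upper-hull vertices, in increasing value order
    for i in range(len(values)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # Pop i1 if it lies on or below the segment from i0 to i.
            cross = ((values[i1] - values[i0]) * (revenues[i] - revenues[i0])
                     - (values[i] - values[i0]) * (revenues[i1] - revenues[i0]))
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    ironed = np.interp(values, values[hull], revenues[hull])
    intervals = [(hull[k], hull[k + 1]) for k in range(len(hull) - 1)
                 if hull[k + 1] > hull[k] + 1]
    return ironed, intervals

# A revenue curve with a dip at value 2 gets ironed flat across indices 1..3.
vals = np.array([0., 1., 2., 3., 4.])
revs = np.array([0., 2., 1., 2., 0.])
ironed, ivs = iron(vals, revs)
print(ironed)  # [0. 2. 2. 2. 0.]
print(ivs)     # [(1, 3)]
```

An optimal price that lands strictly inside one of the reported intervals is exactly the situation that forces randomization in our instance.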
Appendix A Additional preliminaries
In this appendix, we summarize the approach in [FGKK16] to obtain the optimal mechanism for an arbitrary “FedEx” instance. We begin with the linear program that encodes this optimization problem.
maximize  ∑_i q_i · E_{v∼F_i}[ p_i(v) ]  (1)
subject to  v·a_i(v) − p_i(v) ≥ v·a_i(v′) − p_i(v′)  for all i and all v′ ≤ v  (Leftwards IC)
v·a_i(v) − p_i(v) ≥ v·a_i(v′) − p_i(v′)  for all i and all v′ ≥ v  (Rightwards IC)
v·a_i(v) − p_i(v) ≥ v·a_j(v′) − p_j(v′)  for all i, all j < i, and all v′  (Downwards IC)
0 ≤ a_i(v) ≤ 1  for all i and v  (Feasibility)
v·a_i(v) − p_i(v) ≥ 0  for all i and v  (Individual Rationality)
Here a_i(v) and p_i(v) denote the allocation probability and payment of a bidder with value v and deadline i, F_i is the value distribution on day i, and q_i is the probability of deadline i.
Note that we have not included the constraints where the bidder misreports a higher deadline. No rational bidder would consider these deviations, since they would always yield non-positive utility. We now formally present the allocation curves described in Section 2. This, combined with the definitions of optimal revenue curves, provides a clean characterization of optimal auctions for any instance of the FedEx problem.
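To make the LP concrete, here is a minimal sketch (our own, not from the paper) that solves a toy two-day FedEx instance with value uniform on {1, 2} on each day, using scipy's LP solver; the instance, variable layout, and helper names are ours.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: two equally likely days, value uniform on {1, 2} on each day.
# Variables: allocation a_i(v) and payment p_i(v) per (day i, value v),
# flattened into one vector of length 8.
days, values = [1, 2], [1, 2]
q = {1: 0.5, 2: 0.5}                       # distribution over days
f = {(i, v): 0.5 for i in days for v in values}

def A(i, v): return 4 * (i - 1) + (v - 1)       # index of a_i(v)
def P(i, v): return 4 * (i - 1) + 2 + (v - 1)   # index of p_i(v)

n = 8
c = np.zeros(n)
for i in days:
    for v in values:
        c[P(i, v)] = -q[i] * f[(i, v)]     # linprog minimizes, so negate revenue

rows, rhs = [], []
def leq(terms):                            # register sum(coef * x[idx]) <= 0
    row = np.zeros(n)
    for idx, coef in terms:
        row[idx] += coef
    rows.append(row); rhs.append(0.0)

for i in days:
    for v in values:
        for w in values:
            if w != v:                     # Leftwards/Rightwards IC within day i
                leq([(A(i, w), v), (P(i, w), -1), (A(i, v), -v), (P(i, v), 1)])
        leq([(P(i, v), 1), (A(i, v), -v)])  # Individual Rationality
for v in values:
    for w in values:                       # Downwards IC: day 2 may use day-1 menu
        leq([(A(1, w), v), (P(1, w), -1), (A(2, v), -v), (P(2, v), 1)])

bounds = [(0, 1), (0, 1), (0, None), (0, None)] * 2   # Feasibility on a; p >= 0
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bounds)
print(round(-res.fun, 6))  # 1.0: posting price 1 on both days is optimal here
```

For this instance the optimum is 1.0 (each day's Myerson-optimal revenue is 1, which upper-bounds the LP, and posting price 1 everywhere achieves it); richer instances, like the one in Section 7, are where the LP's optimal solution becomes randomized.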
Definition A.1 (Optimal allocation curves [FGKK16]).
Let be the largest such that . For any , consider two cases:

is not ironed at . Then

is ironed at . Let be the largest such that is not ironed at and be the smallest such that is not ironed at . Let be such that
Define
Then set as follows
where is the probability of allocating the item with value on day .
Lemma .
Remark A.1.
Every solution to LP (1) is an optimal mechanism for the FedEx problem.
In addition, we present a simple claim, unrelated to the FedEx problem itself, that will be useful in later sections.
Claim A.1.
Let F be a distribution with density f, and let φ and R be the corresponding virtual value function and revenue curve, respectively. Then, for all q ∈ (0, 1), R′(q) = φ(F⁻¹(1 − q)).
Proof.
This follows from some simple calculations. Write R(q) = q · F⁻¹(1 − q) and let v = F⁻¹(1 − q), so that q = 1 − F(v). Differentiating, R′(q) = F⁻¹(1 − q) + q · (d/dq) F⁻¹(1 − q) = v − q/f(v) = v − (1 − F(v))/f(v) = φ(v).
∎
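As a numerical sanity check (our own; it assumes the claim is the standard identity relating virtual values to quantile-space revenue curves), the relation R′(q) = φ(F⁻¹(1 − q)) can be verified for the uniform distribution on [0, 1].

```python
import numpy as np

# Uniform distribution on [0, 1]: F(v) = v, phi(v) = 2v - 1,
# and R(q) = q * F^{-1}(1 - q) = q(1 - q).
q = np.linspace(0.01, 0.99, 99)
R = q * (1 - q)

# Central differences are exact for a quadratic, up to floating-point error.
dR = (R[2:] - R[:-2]) / (q[2:] - q[:-2])
phi_at_quantile = 1 - 2 * q[1:-1]        # phi(F^{-1}(1-q)) = 2(1-q) - 1 = 1 - 2q

ok = bool(np.max(np.abs(dR - phi_at_quantile)) < 1e-9)
print(ok)  # True
```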
Appendix B Omitted proofs and perturbed example of Section 3
B.1 Omitted proofs
Lemma .
Fix . Let and let be a sequence of real numbers such that . If , there exists a distribution such that .
Proof.
By our choice of , all elements in the sequence are greater than . Let and define
We will show that this indeed corresponds to a valid distribution function by showing that it is nonnegative and nondecreasing. Once we have shown this, it becomes clear that the desired property holds, proving the lemma.
Claim B.1.
is nonnegative, nondecreasing.
Proof.
For , . Suppose . Since , it follows that . To show it is nondecreasing, consider the difference between two consecutive terms. For all ,
For ,
∎
∎
We are now ready to explicitly construct an example that achieves deadline menu complexity of where . Fix and construct sequences of length , then
where denotes the th least significant bit in the binary expansion of . Lemma B.1 implies that there exist distributions whose revenue curves are given by these sequences. In our construction, the type distribution of each day corresponds to the respective distribution, and the distribution over days is uniform. To show that this construction achieves the claimed menu complexity, we first need to characterize the revenue curves for all days, and then show that an optimal auction exists where prices can be set so as to create a large menu. The intuition for these revenue curves is that their ironed intervals are nested: prices at the endpoints of ironed intervals on one day are the midpoints of new ironed intervals on the next. In addition, after ironing, the curves look like constants; therefore, the optimal revenue curves have the same ironed intervals as their original counterparts. We design the revenue curve of the first day to be maximized at the median value. On the next day this price belongs to an ironed interval, meaning that any optimal auction must offer a lottery over two prices that are not ironed for day 2. The size of the lottery offered directly translates into the minimum number of options a menu for that day must have. By the nesting construction, the number of options offered doubles from each day to the next.
We can now use the above construction, combined with Claim A.1 and Lemma B.1, to show a more general result, Lemma 3.
Proof of Lemma 3.
By simple examination, the revenue curve for day is just a line that increases until it reaches and then decreases, so it is maximized at . Consider day . The sequence that generates its revenue curve is nonzero iff has remainder . Since there are values in the sequence, there will be nonzero values. These values alternate, and each alternation creates one ironed interval: the revenue decreases by , stays at that value for a while, and increases by again. This gives ironed intervals for the revenue curve of day . Moreover, the last price where the revenue increases is at , making it a valid maximizer. The revenue remains constant from there on, so any higher value is also a maximizer.
For the third point, note that takes values in , alternating between them for intervals whose lengths depend on . , the upper concave hull of , is then a constant function with value everywhere.
We prove the last point by induction, starting from . This holds because . Note that, from our previous claim, is a constant function. Suppose that for some . For , . For , . In either case, the term added to is a constant (and it is the same constant) by the inductive hypothesis, so the claim follows. ∎
Lemma .
The series of optimal revenue curves induced by the distributions is such that on any day , an optimal allocation curve as constructed by [FGKK16] is a step function with jumps and takes the following form:
where . Moreover, these prices where the function jumps on day will belong to ironed intervals for the optimal revenue curve of the following day, for all .
Proof.
We also prove this by induction, going from the first day to the last. As stated before, is maximized at and has no ironed intervals; therefore the optimal allocation curve corresponds to a step function at . By construction, this price belongs to an ironed interval of . Thus the base case holds. Suppose that the statement is true for day . We want to understand the optimal allocation curve for the next day.
First note that the places where the function jumps on day , , belong to ironed intervals of (which are the same intervals as ). This is because at these prices the function takes a value of . The nearest places where it is nonzero are exactly at and , where the function takes values and , meaning we are inside an ironed interval (the revenue decreases and then increases by ). Thus, by the optimal allocation curves suggested in [FGKK16], we observe that if on day we offered a price of with positive probability , we must also allocate at prices and on day with positive probability.
The probability of allocating the item at price depends on the parity of . If is odd, then the values in this range correspond to a non-ironed interval on the previous day, meaning they preserve that day's probability of allocation. The probability of allocating on the interval with endpoints and , which contains our new interval of interest, is . If is even, then we belong to an ironed interval on the previous day, meaning that the probability of allocation is the average of the allocation probabilities on the two intervals of the previous day that intersect this one. These intervals are and . Thus the probability of allocating at is just the average of the two. ∎
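The doubling phenomenon in this proof can be mimicked by a toy recursion (our own illustration, not the construction itself): on each new day, every ironed interval's allocation probability is the average of the two neighboring step values from the previous day, so the number of menu options doubles per day.

```python
def next_day_levels(levels):
    """Insert between adjacent allocation probabilities their average,
    mimicking how an ironed interval averages its two neighbors."""
    out = [levels[0]]
    for a, b in zip(levels, levels[1:]):
        out.append((a + b) / 2)  # averaged probability on the new ironed interval
        out.append(b)
    return out

levels = [0.0, 1.0]              # day 1: a single posted price, one jump
for day in range(2, 5):
    levels = next_day_levels(levels)
print(levels)                    # 9 levels by day 4, i.e. 2^(day-1) = 8 jumps
```

Each application doubles the number of jumps in the step function, matching the growth of the lottery menu from one day to the next.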
B.2 Perturbed case
In this appendix, we show how to tweak the example in Section 3 so that no optimal auction has menu complexity below the claimed bound. The problem with the example in Section 3 is that we don’t have to follow the allocation suggestion of [FGKK16] in order to achieve an optimal auction: we could simply choose larger ironed intervals on every day that span the whole spectrum of prices and (because of the simplicity of the construction) still recover all the revenue. We add a small nonlinear term to the revenue curve of each day to rule this out while still preserving the ‘nested’ structure of ironed intervals. Consider now the sequences
where and , the weight of the nonlinear term, is . The distribution for the first day is the same as the one we used in the previous case. Note that the nonlinear term added is maximized at . Again, Lemma B.1 and Claim A.1 allow us to conclude that there are revenue curves and valid distributions whose changes are dictated by the sequence . We restate the earlier results with the appropriate adjustments.
Lemma .
The series of revenue curves induced by the distributions satisfy the following properties:

is maximized at and has no ironed intervals.

for has a unique maximizer at price and has ironed intervals.

has the same ironed intervals as .
Proof.
The first point remains true since we haven’t changed . Since was a maximizer of the function before adding the nonlinear term, it remains a maximizer, as it also maximizes the nonlinear term. In fact, it is now the unique maximizer. This implies that . The last point follows from an inductive proof similar to the one in the original case. It is easy to see that now will be maximized at , for all . Then by the characterization from [FGKK16] we have that for any