Computing Bayes-Nash Equilibria in Combinatorial Auctions with Verification

12/05/2018 · Vitor Bosshard et al. · Universität Zürich and Boston University

Combinatorial auctions (CAs) are mechanisms that are widely used in practice, and understanding their properties in equilibrium is an important open problem when agents act under uncertainty. Finding such Bayes-Nash equilibria (BNEs) is difficult, both analytically and algorithmically, and prior algorithmic work has been limited to solving simplified versions of the auction games being studied. In this paper, we present a fast, precise, and general algorithm for computing pure ϵ-BNEs in CAs with continuous values and actions. Our algorithm separates the search phase (for finding the BNE) from the verification step (for estimating ϵ), and computes best responses using the full (continuous) action space. We thoroughly validate our method in the well-studied LLG domain, against a benchmark of 16 CAs for which analytical BNEs are known. We further apply our algorithm to higher-dimensional auctions, which would have previously been considered intractable, by first introducing the new Multi-Minded LLLLGG domain with eight goods and six bidders, and then finding accurate and expressive equilibria in this domain. We release our code under an open source license, making it available to the broader research community.


1 Introduction

A combinatorial auction (CA) is a mechanism used to allocate multiple, indivisible goods to multiple bidders. CAs allow bidders to express complex preferences over the space of all bundles of goods, taking into account that goods can be complements or substitutes [Cramton et al. 2006]. They have found widespread use in practice, including for the sale of radio spectrum licenses [Cramton 2013], for the procurement of industrial goods [Sandholm 2013], and for the allocation of TV ad slots [Goetzendorff et al. 2015].

Unfortunately, the strategyproof VCG mechanism [Vickrey 1961, Clarke 1971, Groves 1973] has several serious flaws: most notably, it can lead to very low or even zero revenues despite high competition for the goods [Ausubel and Milgrom 2006]. Furthermore, it incentivizes shill bidding [Day and Milgrom 2008]. For these reasons, many CAs conducted in practice do not use VCG, instead opting for other mechanisms. One important class of payment rules are those which select a payment from the core [Day and Milgrom 2008]. Informally speaking, core payments are those that guarantee an envy-free auction outcome, even in the presence of bidder coalitions. While this class includes simple rules such as first price, in practice a payment rule selecting a point from the minimum revenue core (MRC) is often chosen.

Core-selecting auctions are not strategyproof in general, and the behaviour of bidders in them is not well understood. If we want to predict auction outcomes in terms of desirable properties (such as incentives, revenue, efficiency), we must therefore study them in equilibrium instead of at truth. As a first step, this requires us to choose a suitable equilibrium concept.

1.1 Equilibria in CAs

In the full information setting, significant theoretical work has been done towards characterizing the equilibria of CAs. For every MRC-selecting payment rule, every point in the MRC (relative to true bidder valuations) is supported by a Nash equilibrium [Day and Raghavan 2007]. The equivalent result for first price auctions was shown in a classical paper by Bernheim and Whinston (1986); see [Milgrom 2004, Chapter 8.2] for a modern treatment.

While the full information Nash equilibrium (NE) may be a good approximation of bidder behaviour in some settings (e.g. repeated auctions where bidders know each other well), it has several issues: there can be a high multiplicity of equilibria, and each equilibrium must be supported by very precise bids on losing packages, even though bidders (who know they are going to lose) have no incentive whatsoever to bid in this way. Furthermore, many real-world, high-stakes auctions are only conducted once, and bidders work hard to keep their private information secret.

In an incomplete information setting, such private information is explicitly modelled by bidders who know their own valuation, but only have a prior belief (i.e. a distribution) over the valuations of others. This leads to the solution concept of the Bayes-Nash Equilibrium (BNE), where bidders maximize their expected utility over many possible auction outcomes weighted according to their beliefs. The inherent uncertainty of this process also contributes to a more stable bidding behaviour.

Some analytical research into BNEs already exists. Non-combinatorial single-item auctions have been studied extensively, of course [Klemperer 1999]. Comparatively little is known about multi-item auctions, as the difficulty of finding BNEs by hand markedly increases, requiring the solution of challenging differential equations. For this reason, only a few analytical results exist in small settings, most notably the Local-Local-Global (LLG) domain (which we define in Section 2.2). Ausubel and Baranov (2018) as well as Goeree and Lien (2013) have independently derived the analytical BNE of the VCG-nearest or “Quadratic” rule, which is the payment rule most commonly used in practice [Day and Cramton 2012]. Furthermore, Ausubel and Baranov (2013) have also derived analytical BNEs of three other core-selecting rules. For the first price payment rule, Baranov (2010) provides some necessary properties of BNEs, but does not fully characterize them.

In order to be able to study more and especially larger auction settings in BNE, algorithms capable of numerically finding such equilibria are clearly needed.

1.2 Prior Algorithmic Work on Computing BNEs

Computer scientists have long worked on algorithms for computing equilibria in non-cooperative games. The Gambit software package provides a number of algorithms to find NEs and BNEs [McKelvey and McLennan 1996, McKelvey et al. 2016], but only for finite games (with finite type and action spaces). Solving auction games with even a modest number of types (valuations) and actions quickly becomes infeasible with these general solvers; therefore, infinite games can only be modeled with significant loss of fidelity.

This is why researchers have turned towards developing special-purpose algorithms for computing BNEs in CAs. To make the computation tractable, all methods that have been proposed to date are limited in some way: restricting the type space, the action space, the size and complexity of the game, or considering a simpler equilibrium concept. Importantly, all numerical algorithms actually search for an ϵ-BNE, i.e., a strategy profile where each player can only benefit (in expectation) by at most ϵ in utility by deviating unilaterally. One important class of BNE algorithms is based on iterated best response (also known as fictitious play). The algorithms proposed by Reeves and Wellman (2004), Vorobeychik and Wellman (2008) and Rabinovich et al. (2013) belong to this class. To keep the computation manageable, all three algorithms restrict the strategy space: using piecewise linear strategies, multiplicative shading strategies, or a finite set of actions. One limitation of these algorithms is that they can only solve games with restricted strategy spaces, because their ϵ-BNEs are only valid within the space over which they search for best responses. (Note that Vorobeychik and Wellman (2008, Sec. 7.4) correctly state this limitation of their algorithm. Rabinovich et al. (2013) also handle this issue correctly by only claiming to find the BNE in the “game with the restricted strategy space.”) Reeves and Wellman (2004) restrict themselves to a class of auctions where the best response is guaranteed to lie in the restricted strategy space. Extrapolating the BNEs produced by their algorithm to obtain a solution for the auction game played in the full strategy space would lead to the false precision problem, which we discuss in Section 3.

1.3 Overview of our Approach

In this paper, we introduce a fast, general algorithm for computing pure ϵ-BNEs in CAs with continuous values and actions. Our approach is also based on the iterated best response algorithm, but highly optimized for CAs. We introduce two key ideas that separate us from prior work: (1) our algorithm is split into a search phase to find the BNE (where we operate with a coarse estimate of the ϵ), and a verification step to robustly compute ϵ for the found ϵ-BNE; (2) in every best response calculation, we use the full (continuous) action space. These ideas allow us to achieve highly accurate results, while properly dealing with the difficulties introduced by infinite games.

In Section 4, we introduce the algorithm in detail. In the search phase, we use piecewise linear strategies, similar to Reeves and Wellman (2004). However, the usage of the verification step is novel, and we present two possible variants of it: we can either estimate ϵ with high accuracy, or construct an upper bound on ϵ. The latter approach is laid out in Section 8. It works for CAs with quasi-linear utilities and independently distributed valuations, and the bound it produces considers all possible valuations, thus allowing us to formally prove that a strategy profile is an ϵ-BNE. Our approach is the first to achieve such a guarantee for infinite games without restricting the strategy space. (Note that an infinite number of values is no more difficult to handle than a finite, but very large number of them. In both situations, it is impossible to perform computations for each individual value, forcing us to reason in terms of ranges of values. See Section 8 for details.)

In Sections 5 and 6, we offer numerous techniques for reducing the runtime of the BNE algorithm. We benchmark these techniques in the widely used LLG domain, matching known results with high precision, as shown in Section 7. In Section 9, we introduce the new Multi-Minded LLLLGG domain, with eight goods and six bidders, and apply our algorithm to find an estimated 0.01-BNE in this domain. To the best of our knowledge, our algorithm is the first to find such an accurate BNE in a CA of this size. Finally, we release the full source code of our algorithm, and we provide a high-level overview of our codebase in Section 10.

2 Preliminaries

2.1 Formal Model

Combinatorial Auctions

A combinatorial auction (CA) is a mechanism used to sell a set of goods to a set of bidders. For each bundle of goods k, each bidder i has a value v_i(k) ≥ 0, and submits a (possibly non-truthful) bid b_i(k) ≥ 0. We assume that each bidder only bids on a limited number m_i of bundles of interest (typically a strict subset of all possible bundles). For a fixed m_i, the bid can thus be represented by a point b_i in the action space R_{≥0}^{m_i}, with bids on all other bundles implicitly being zero. The bid profile b = (b_1, ..., b_n) is the vector of all bids, and the bid profile of every bidder except i is denoted b_{-i}. The CA has an allocation rule assigning a bundle X_i(b) to each bidder i, which always produces the efficient allocation: it maximizes reported social welfare (the sum of all winning bids) subject to the allocation being feasible (every item given to at most one bidder), by solving what is known as the winner determination problem.

The CA also has a payment rule, which is a function assigning a payment p_i(b) to each bidder. We let u_i(v_i, b_i, b_{-i}) denote bidder i's utility, given his own valuation v_i, bid b_i and all other bidders' bids b_{-i}. Note that u_i implicitly encompasses the allocation and payment rule of the mechanism.

CAs as Bayesian Games

We model the process of bidding in a CA as a Bayesian game. (In the game theory literature, a bidder would be called a player, and his valuation would be called his type.) Each bidder i knows his own valuation v_i, but he only has probabilistic information (i.e. a prior) over each other bidder j's valuation v_j, represented by the random variable V_j. The joint prior is common knowledge and consistent between bidders. Each bidder chooses a strategy s_i, which is a function mapping all his possible valuations to bids. We assume that all strategies are pure, i.e. s_i is a deterministic function mapping values to bids. The expected utility of bidder i with value v_i when bidding b_i is given by

ū_i(v_i, b_i, s_{-i}) = E_{B_{-i}} [ u_i(v_i, b_i, B_{-i}) ]    (1)

where B_{-i} = s_{-i}(V_{-i}) is the random variable corresponding to the bids of all other bidders, which can depend on the realization of V_i (when the distributions of valuations are not mutually independent). Whenever a bidder submits a bid that is not optimal, he is “leaving on the table” a certain amount of utility. We call this quantity the utility loss, given by

l_i(v_i) = sup_{b_i} ū_i(v_i, b_i, s_{-i}) − ū_i(v_i, s_i(v_i), s_{-i})    (2)

Note that we take the supremum over bids instead of the maximum, because the maximum might not exist due to discontinuities in the utility. (To see this, consider e.g. a single-item first price auction with complete information. If opponents bid a maximum amount of x, the best response is often to outbid them by an arbitrarily small amount, i.e. bid just above x. Analogues of this situation can arise even in incomplete information settings, where a discontinuity arises due to bidder i's bid crossing over a threshold where his probability of winning a certain bundle jumps by a discrete amount. Such thresholds are caused by point masses in the distribution of B_{-i}, which can occur even if the distribution of V_{-i} itself is smooth, e.g. when strategies have flat segments.)

Bidders are in an ϵ-equilibrium when the utility loss is small for all possible valuations of all bidders, i.e. no bidder has a profitable deviation from the equilibrium netting him more than ϵ utility:

Definition 1.

An ex-interim ϵ-Bayes-Nash equilibrium (ϵ-BNE) is a strategy profile s = (s_1, ..., s_n) such that, for all bidders i and all valuations v_i, the utility loss satisfies l_i(v_i) ≤ ϵ.

We take the ϵ-BNE as the solution concept because we use numerical algorithms with limited precision to find the BNEs. Thus, when we say that we solve a CA, we mean that we find an ϵ-BNE, where ϵ is a suitably small constant.

Remark 1.

While we present our results using the absolute utility loss to establish the notion of an ϵ-BNE, our approach can alternatively use the relative utility loss, i.e. the utility loss divided by the expected utility of the best response.

Only some minor technical adjustments for Theorem 1 are needed.

2.2 The LLG Domain

We study the performance of our algorithm both in a small domain, where analytical results are available, and later in a novel larger domain (see Section 9). For the former we turn to the widely-used Local-Local-Global (LLG) domain [Ausubel and Milgrom 2006]. In LLG there are two local bidders, each of whom is interested in a single good, and a global bidder who is interested in the package of both goods. Ausubel and Baranov (2013) study the case where the global bidder's valuation is drawn from U[0, 2], while the local bidders' valuations are each distributed according to U[0, 1] and are perfectly correlated with probability γ. Within this framing, they provide analytical results for four different core-selecting payment rules (Quadratic, Nearest-Bid, Proxy and Proportional). Adopting their results as our benchmark, we assemble a set of 16 auction settings to be used as a test suite: the four payment rules, each applied to four settings of the correlation parameter γ.

To match the analytical results of Ausubel and Baranov (2013), we search for symmetric equilibria (though this simplification is not essential to our algorithm). Furthermore, in LLG any auction with a core-selecting payment rule is truthful for the global bidder, so we do not need to explicitly model his strategy. Accordingly, an LLG strategy profile is described by the symmetric local bidder strategy s.
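As a concrete illustration, the following Python sketch samples one valuation profile from the LLG prior under this setup. The mixture form of the correlation (the two local values coincide with probability γ and are drawn independently otherwise) is our reading of the model; all names are illustrative.

import random

def sample_llg_valuations(gamma, rng=random):
    """Sample one LLG valuation profile (local1, local2, global_bidder).

    Assumes local values uniform on [0, 1], the global value uniform on
    [0, 2], and a mixture correlation: with probability gamma the two
    local bidders have identical values, otherwise independent draws.
    """
    local1 = rng.uniform(0.0, 1.0)
    if rng.random() < gamma:
        local2 = local1                 # perfectly correlated draw
    else:
        local2 = rng.uniform(0.0, 1.0)  # independent draw
    global_bidder = rng.uniform(0.0, 2.0)
    return local1, local2, global_bidder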

3 The False Precision Problem

In this section, we discuss several limitations of prior algorithmic approaches that could lead to a false precision problem. When an algorithm for finding BNEs is designed, there are many modelling choices that can be made to speed up the resulting algorithm. However, such choices often distort the auction game in meaningful ways, and thus the equilibrium that is calculated might not be as good as the algorithm reports. In some sense, the algorithm would need to be able to “look at itself from the outside” to calculate the magnitude of such a distortion. Our algorithm framework introduced in Section 4 is based on this idea of introspection.

To make the discussion more concrete, we present three different examples of this problem: (1) using a restricted action space, (2) computing ex-ante instead of ex-interim BNEs, and (3) not considering the full bundle space of bidders.

3.1 Restricted Action Spaces

Many BNE algorithms restrict the action space in some way during the search for the ϵ-BNE, to keep the computation time manageable. However, a problem may arise when the algorithm has converged to the final strategy profile, and the ϵ of the ϵ-BNE must be reported. The reason is that restricting the action space induces a separate game distinct from the original auction game. If the final strategy profile is only evaluated in the restricted game, then the computed ϵ is accurate in the restricted game, but not in the original game. Consider the following simple but striking thought experiment: we search for a BNE in a non-strategyproof CA, restricting the action space to only one action, namely bidding truthfully. Any iterated best response algorithm will immediately find an ϵ-BNE with ϵ = 0, as there is no beneficial deviation. Obviously, this ϵ-BNE only “survives” in the restricted action space, but not in the full action space. This example illustrates that, if one is interested in finding the BNE of the game with the original action space, then this needs to be handled explicitly.

Restricting the action space of a game is also known as action abstraction [Sandholm 2015]. For some types of finite games, abstraction methods have been developed which guarantee that any equilibrium of the abstract game can be translated into an equilibrium of the original game, with a bound on how much the translation affects the solution quality [Sandholm and Singh 2012]. Unfortunately, no such methods exist for infinite games. A recent algorithm proposed by Bosshard et al. (2018) could be interpreted as a kind of action abstraction for auctions, but it is only applicable to auctions with a specific kind of payment rule.

Our algorithm sidesteps this issue by always considering the full action space in the best response calculation, and finding best responses using numerical methods.

3.2 Ex-ante vs. ex-interim BNEs

The second problem with “false precision” refers to the equilibrium concept being used: whether an ex-ante or an ex-interim ϵ-BNE is being computed. Interestingly, this issue mostly shows up when the strategy space is restricted to one-parameter multiplicative or additive shading strategies (as in Lubin and Parkes (2009), Schneider et al. (2015), and Lubin et al. (2015)).

The underlying assumption of a one-parameter shading strategy is that one shading factor is applied uniformly, across all valuations. E.g. in the case of multiplicative shading, strategies are of the form s_i(v_i) = α · v_i, with parameter α ∈ [0, 1]. This means that a bidder must choose his strategy and commit to it ex-ante, before knowing his own valuation, giving rise to the notion of an ex-ante ϵ-BNE: every bidder only knows the distribution of his own and others' valuations, and chooses a best response from a limited set of functions, e.g. linear functions. The ϵ then bounds the average benefit from deviating from this strategy across all valuations. This is in contrast to the ex-interim ϵ-BNE we compute: each bidder knows his own valuation, the best response in the ϵ-BNE is computed separately for each possible valuation, and the ϵ bounds the maximum benefit from deviating across all types.

It is clear that ex-interim strategies can be arbitrary functions (mapping each valuation to a bid, independently of each other) and are thus fully expressive. Since the best responses that are used to compute ϵ also benefit from this expressiveness, an ex-interim ϵ-BNE provides a significantly stronger guarantee than an ex-ante one. Under the former equilibrium concept, ϵ bounds the maximum gain possible by deviating from equilibrium at any valuation. Under the latter, only the average gain from deviating at every valuation at the same time is bounded.

Sometimes, an auction designer may truly be interested in an ex-ante rather than an ex-interim BNE. However, ex-interim BNEs are arguably more realistic/interesting, because it makes sense to assume that bidders know their own valuation. Thus, when using multiplicative/additive shading strategies, the use of an ex-ante BNE arises as an artifact of the choice of the strategy space, and is not otherwise justified. Therefore, we make the fundamental design choice that our algorithm should always search for ex-interim BNEs.

We want to make clear that using ex-ante BNEs as a solution concept can be the correct choice in some settings. For an example outside auction theory, consider the game of poker, where we have an informational setup corresponding exactly to the BNE framework: each player has a private type, namely the cards in his hand, and knows the exact distribution of all other players' types. When playing a single hand of poker, the ex-interim BNE is clearly the correct solution concept, maximizing the expected payoff of whatever hand was drawn. However, when playing many hands in a row, the situation changes. Poker is a sequential game where actions are observed by all opponents, and every action leaks part of a player's private information. This makes it possible to increase the expected profit of some hands by decreasing that of others. (In a nutshell, this is achieved through bluffing: occasionally overplaying weak hands makes strong hands less detectable by opponents, thus increasing the expected profit of strong hands.) The ex-ante solution concept makes most sense in this case, since players' goal is to maximize the sum of profits over the entire sequence of hands, not the profit of a single hand considered in isolation.

3.3 Restricted Set of Bundles

The last example of false precision is harder to overcome than the previous ones. It concerns the space of bundles a bidder is allowed to bid on. In our formal model (Section 2.1) we introduced the notion of “bundles of interest”, encoding the assumption that bidders only bid on an exogenously fixed subset of m_i distinct bundles, rather than the whole bundle space (of size 2^g for g goods). The bundles of interest are usually chosen to model “straightforward bundle bidding”, where bundles of interest are those with positive marginal value. (We say that a bundle has positive marginal value if it contains no goods that are redundant or useless for the bidder. This encompasses all bundles for which it is not possible to remove any subset of goods without strictly decreasing the bundle's value. Formally, these are the bundles k with v_i(k') < v_i(k) for every proper subset k' of k.)

This is in line with how most of the literature has treated e.g. the LLG domain, with the notable exception of [Beck and Ott 2013], who explicitly study the phenomenon of overbidding. Overbidding is a family of strategic manipulations in which a bidder bids above his value, typically on a bundle which he has no intention of winning, to achieve a lower payment for the bundle of goods he does intend to win. When a BNE algorithm restricts its search to strategies only involving bundles of interest, it may miss such overbidding opportunities and thus report an ϵ that is too low. Note that overbidding can occur in equilibrium, even in settings with as few as two bundles of interest (see Section 9 for a novel example).

Unfortunately, designing a BNE algorithm that scales well in m_i is an unsolved, possibly unsolvable problem. If the full bundle space were considered, the integral needed to compute expected utilities (Equation 1) would have exponentially many dimensions, making even the best Monte Carlo methods impractical. The best we can do in practice is to point out any restrictions that we impose on the bundle space (as we do in this paper), and highlight where our algorithm is critically dependent on the parameter m_i.

4 BNE Algorithm Framework

The key property distinguishing our algorithm from prior work is that we separate the search phase (finding the BNE) from the verification step (robustly estimating the ϵ of the found BNE).

4.1 The Search Phase

The Iterated Best Response Algorithm

Data: Mechanism, distribution of bidder valuations
Result: ϵ-BNE strategy profile s, bound ϵ
s ← truthful strategies
repeat
    foreach bidder i do
        s'_i ← BestResponseStrategy(i, s)
    end foreach
    s ← Update(s, s')
until estimated utility loss of s is small enough
s ← ConvertStrategies(s)
ϵ ← Verify(s)
return (s, ϵ)
Algorithm 1: Iterated Best Response (with Verification)

At the core of our algorithm’s search phase is the well-known iterated best response algorithm, also known as fictitious play [BrownBrown1951], presented in Algorithm 1. This algorithm proceeds in rounds. In each round, each bidder’s new strategy is computed via BestResponseStrategy as a response to the strategy profile from the previous round. The algorithm terminates when the utility loss across all bidders is small enough.
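The following Python sketch shows the skeleton of this loop. It is a minimal illustration of Algorithm 1 rather than our implementation: best_response_strategy, estimated_utility_loss and verify are hypothetical stand-ins for the components described in the remainder of the paper, and the dampened update with weight w anticipates Section 5.1.

def mix(old, new, w):
    """Convex combination of two strategies given as lists of
    (control_point, bid) pairs defined on the same grid."""
    return [(v, w * b_new + (1 - w) * b_old)
            for (v, b_old), (_, b_new) in zip(old, new)]

def iterated_best_response(bidders, initial_strategies, target_eps,
                           best_response_strategy, estimated_utility_loss,
                           verify, w=0.5, max_iters=1000):
    """Skeleton of the search phase followed by the verification step."""
    strategies = dict(initial_strategies)   # start from truthful strategies
    for _ in range(max_iters):
        # Best response of every bidder against last round's profile.
        responses = {i: best_response_strategy(i, strategies) for i in bidders}
        # Dampened update at each control point.
        strategies = {i: mix(strategies[i], responses[i], w) for i in bidders}
        if estimated_utility_loss(strategies) <= target_eps:
            break
    # Verification step: robustly estimate (or bound) eps for the candidate.
    eps = verify(strategies)
    return strategies, eps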

The best response of bidder i is a function maximizing i's utility at each possible valuation v_i:

s_i^BR(v_i) ∈ argmax_{b_i} ū_i(v_i, b_i, s_{-i})    (3)

where the expected utility is taken with respect to the strategy profile of the previous round. Prior applications of iterative best response have mostly been in the realm of finite games, where there are only finitely many valuations and actions.

When we try to apply this paradigm to our (continuous) auction settings, we run into two problems. First, in order to optimize (3), we need to do so over the entire continuum of possible valuations v_i, which means that the search for a best response needs to be performed over the space of all functions. Second, at each valuation v_i, finding the maximum of ū_i is computationally expensive, and can only be approximated with numerical algorithms.

Next, we show in detail how we instantiate Algorithm 1 to deal with these two problems. Fortunately, iterated best response is a very robust procedure, so it still converges very often, even when given only approximate best responses.

Modeling Strategies

As mentioned above, a correct implementation of BestResponseStrategy would need to conduct its search over the space of all functions, which is clearly infeasible. To address this problem, we specify a restricted strategy space to be used in the search, namely a family of functions parametrized by a finite set of control points. While the search for best responses is performed within this restricted space, we later verify that the final BNE we find is still valid in the full space.

There are many restricted strategy spaces that one might use (e.g., piecewise constant functions, splines, etc.); in this work, we adopt piecewise linear functions for our strategy space. When using piecewise linear strategies, the control points are simply elements of the value space. A strategy then only specifies the bid at each control point, and bids are interpolated for valuations between control points. We find piecewise linear strategies to be particularly attractive as they are simple (and thus fast to evaluate) but can approximate any bounded function well, given a sufficient number of control points. For LLG, we use a fixed grid of control points (unless otherwise noted), which is sufficient for convergence to our target ϵ in all auction settings in our test suite.
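Such a strategy can be represented directly by its control points and evaluated by interpolation, as in the following sketch (Python; the class and example are illustrative).

import bisect

class PiecewiseLinearStrategy:
    """A pure strategy defined by bids at a finite set of control points.

    Control points are values v_1 < ... < v_m; bids for values between
    control points are obtained by linear interpolation, as in the search
    phase of the algorithm.
    """
    def __init__(self, control_points, bids):
        assert len(control_points) == len(bids)
        self.cp = list(control_points)
        self.bids = list(bids)

    def __call__(self, value):
        if value <= self.cp[0]:
            return self.bids[0]
        if value >= self.cp[-1]:
            return self.bids[-1]
        j = bisect.bisect_right(self.cp, value) - 1
        frac = (value - self.cp[j]) / (self.cp[j + 1] - self.cp[j])
        return self.bids[j] + frac * (self.bids[j + 1] - self.bids[j])

# Example: the truthful strategy on [0, 1] with 5 control points.
truthful = PiecewiseLinearStrategy([0.0, 0.25, 0.5, 0.75, 1.0],
                                   [0.0, 0.25, 0.5, 0.75, 1.0])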

Pointwise Best Responses

To compute a best response with piecewise linear functions, we need to compute many pointwise best responses, i.e. maximize ū_i for a fixed valuation v_i corresponding to a control point.

Unfortunately, finding the expected utility for a single valuation/bid pair requires solving a computationally challenging integral, since the expectation ranges over the strategies of all other bidders, as shown in Equation 1. For an auction with n bidders, each of which has m bundles of interest, the dimensionality of this integral is (n − 1) · m. To make matters worse, we must potentially try many different bids b_i, and thus solve this integral many times.

We devote Section 5 to methods for making this calculation and the Update step in Algorithm 1 practical. This is no easy task, since the expected utility may be non-convex and/or non-differentiable in b_i (as discussed in Section 2.1), and is only given in black-box form. Thus, numerical methods may only find an approximate local optimum. Given this, we employ a sophisticated version of pattern search to compute best responses that are as accurate as possible, even in multiple dimensions (see Section 5.5).

4.2 The Verification Step

In the search phase, we are free to simplify the strategy space or make use of any other heuristics, as described above. As a consequence, we only have a rough estimate of the ϵ of our current strategy profile, namely the utility loss at each control point. This estimate is precise enough to allow for a stopping criterion that decides when to break out of the search, returning a candidate equilibrium s*. However, to show that s* is an ϵ-BNE, it is required to bound the utility loss over the entire value space.

To cope with this, we employ a Verification step in our algorithm, as shown in Algorithm 1. This step makes sure that the utility loss is small not only at the control points, but also at values between control points, where bids are interpolated linearly and not directly optimized.

Since the value space is continuous, it is not possible to check the loss for all individual valuations. We have two ways of dealing with this: either estimate ϵ more precisely than in the search step, or find an upper bound for ϵ by theoretical methods.

Estimating the ϵ

The former approach is very straightforward. To get a more precise estimate of ϵ, we simply need to compute the utility loss at more points than the original control points. We choose a very fine grid of verification points and compute a best response at each of those points, using twice the number of Monte Carlo samples and twice the number of function evaluations in our pattern search (which will be introduced in Section 5). The maximum loss we find in this way is a lower bound for ϵ, but in practice we have found it to be a very good approximation of the true ϵ (see Section 8 for a comparison of lower and upper bounds).

Bounding the ϵ

While estimating the ϵ through a tight lower bound works well in practice, we would prefer the theoretical guarantee that comes with an upper bound.

Finding such an upper bound for ϵ is not an easy task, however: there is a continuum of valuations to consider, and the ϵ-BNE solution concept requires taking the maximum utility loss found at any valuation. The maximum is not a well-behaved statistic, in the sense that even as we consider more and more valuations, we cannot guarantee that we will reduce the gap between the worst utility loss we have observed and the worst utility loss that exists. This is a different dynamic than e.g. computing the average of a function, where taking more and more samples guarantees convergence towards the exact solution, even in continuous domains. It is still possible to bound the utility loss over the entire value space if we reason about entire intervals of values, using properties of the interval endpoints to bound the loss at any valuation between them.

In Section 8, we provide one way of achieving such a bound, given as Theorem 1. The technique we use to achieve this bound does not work with arbitrary strategy profiles, but requires all strategies to be piecewise-constant. Therefore, we need a ConvertStrategies step between the convergence of the iterated best response procedure and the verification step. Algorithm 1 returns both the bound on the utility loss, as well as the strategy profile to which the bound applies.

It is possible that ConvertStrategies actually takes us further away from equilibrium, but we have not observed this in practice, and the change to the strategy profile required by Theorem 1 is very minor. Even if this were to happen, the bound that we compute with the help of the theorem would still be accurate.

5 Computing Best Responses

In this section, we focus on the computation of pointwise best responses as given in (3). This optimization problem sits in the innermost loop of our algorithm and is solved a very large number of times. Making the best response computation as fast as possible is therefore essential to keep the runtime of the overall algorithm manageable. Given a fixed value v_i, computing the expected utility for a single bid b_i requires solving the integral

ū_i(v_i, b_i, s_{-i}) = ∫ u_i(v_i, b_i, s_{-i}(v_{-i})) f_{-i}(v_{-i}) dv_{-i}    (4)

where f_{-i} is the joint PDF associated with the distribution of valuations of all other bidders, marginalized over v_i. In this paper, we approximate the value of Equation (4) via Monte Carlo (MC) integration (i.e., numerical integration via random sampling) because it is robust to discontinuities and scales to high-dimensional spaces. This is important when we turn to larger CAs (our ultimate goal) in Section 9. However, we first consider the simpler LLG setting because it will enable us to compare our ϵ-BNE solutions against known analytical results in Section 7.
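A plain Monte Carlo estimator of Equation (4) can be sketched as follows (Python; sample_opponent_values and utility are hypothetical stand-ins for the prior and the mechanism).

def expected_utility_mc(value, bid, opponent_strategies,
                        sample_opponent_values, utility, num_samples=10_000):
    """Estimate the expected utility of `bid` for a bidder with `value`
    by averaging the realized utility over sampled opponent valuations."""
    total = 0.0
    for _ in range(num_samples):
        v_others = sample_opponent_values()                    # draw v_{-i}
        b_others = [s(v) for s, v in zip(opponent_strategies, v_others)]
        total += utility(value, bid, b_others)                 # allocation + payment inside
    return total / num_samples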

Remark 2.

The integral in Equation (4) in the simple LLG setting is only two-dimensional and thus may alternatively be solved by numerical quadrature in less time. However, to keep the presentation of the algorithmic techniques comparable throughout the paper, we evaluate their performance using MC integration exclusively.

In the following, we present a baseline algorithm for computing best responses, and then offer a series of improvements, each building upon the last. Runtime results for reaching our target ϵ-BNE are presented in Table 1. Each algorithm is run on each of our 16 auction settings with 50 different random seeds, and we report the average of these 800 runs. This setup makes our experimental results deterministic while still capturing the effects of randomness on our algorithm's runtime. Each run is performed single-threaded on a 2.8 GHz Intel Xeon E5-2680 v2.

Several of our techniques make a trade-off between speed and accuracy, so it is important to evaluate their effectiveness as a whole, to capture how changes in accuracy affect the convergence rate of the overall algorithm. Therefore, we don't just measure a single best response calculation in isolation, but measure the runtime of the entire algorithm from its start at the truthful strategy profile until reaching convergence. Furthermore, to avoid conflating our runtime measurements with the accuracy of the algorithm's stopping condition, we omit the stopping criterion and instead run the algorithm for a large fixed number of iterations. Then, in an ex post facto analysis, we find the iteration and runtime at which the estimated ϵ crossed the threshold for the first time. This extrinsic stopping criterion is equivalent to the algorithm automatically knowing when it has converged, and stopping after the exact number of iterations needed. In Section 6, we will study intrinsic stopping criteria that only use information available to the algorithm while running to determine when to stop and proceed to verification.

5.1 Naive Monte Carlo Algorithm

We first present a basic algorithm where we create an evenly-spaced grid of control points over the value space. To maximize the bidder's expected utility at each control point we use Brent search [Brent 1971], a commonly-used form of unconstrained optimization, and we use a naive version of Monte Carlo integration to find the expectation.

In a normalized CA, the expected utility will be zero when a bidder bids too little to win any bundle, and positive above this, with a discontinuity at the boundary. In the LLG domain, it is straightforward to find this boundary, enabling us to sample only from the positive region in our integration, effectively implementing a variant of importance sampling. Without this technique, convergence is nearly impossible at low valuations where the bidder only wins rarely. Therefore, we include it even in the most basic version of the algorithm. In addition to this, after each best response computation, we perform a dampened update, making the current strategy a convex combination of the previous strategy and the best response, with update weight w. This reduces the risk of overshooting the equilibrium strategy, and thus avoids oscillations around the solution without convergence, a phenomenon typical of any procedure that iteratively searches for fixed points.

Even with these basic optimizations, the naive algorithm fails to converge to our target ϵ, even with a large budget of MC samples, due to high variance in the computation of the expected utility. We therefore omit the runtime data for this algorithm and use our first enhancement, presented next, as the baseline for our experiments.

Algorithm                          Average Iterations    Average Runtime in Seconds    Speedup Factor
Quasi-Random Numbers (baseline)    ItersQuasi ()         RuntimeQuasi ()               -
+ Common Random Numbers            ()                    ()                            SpeedCRand x
+ Adaptive Dampening               ()                    ()                            SpeedAdDamp x
+ Pattern Search                   ()                    ()                            SpeedPat x
+ Adaptive Control Points          ()                    ()                            SpeedAdPt x

Table 1: Runtimes to achieve an estimated ϵ-BNE for several algorithms, averaged over 50 runs in each of our 16 auction settings. Standard deviations shown in parentheses.

5.2 Quasi-Random Numbers

In Monte Carlo integration, one of the main challenges is managing the variance of the sample estimate. Any reduction in variance is always desirable, of course, but in our application this consideration is especially important. Computing an equilibrium is fundamentally a dynamic process, where the output of one iteration is fed as input into the next. When we have high variance in the expected utility computation, this causes the computed best response to deviate from the true best response in a random direction at each control point. This, in turn, makes the best response computation of the next iteration even noisier, propagating errors further down the line. On top of this, an ϵ-BNE is defined by the worst-case utility loss over all valuations of all bidders. Thus, a large error in the best response computation at any single control point of our strategy profile prevents the algorithm from converging. This can produce the counter-intuitive effect that increasing the number of control points without increasing sampling accuracy can actually hurt convergence. These factors taken together explain the bad performance of the naive algorithm.

One effective method for reducing variance is to replace standard pseudo-random numbers with quasi-random numbers in the sampling process. Quasi-random numbers are low-discrepancy sequences that cover the sampled region more evenly than the same quantity of random numbers [Morokoff and Caflisch 1995]. In our implementation we use a multi-dimensional Sobol sequence; this modification enables convergence to our target ϵ, using 200,000 samples. On average, this algorithm converges in ItersQuasi iterations and RuntimeQuasi seconds (see Table 1).
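A sketch of drawing a block of Sobol points for the opponent-value integral (Python, using scipy.stats.qmc; the mapping from the unit cube to opponent valuations is domain specific and left out here).

from scipy.stats import qmc

def sobol_samples(dim, num_samples, seed=0):
    """Return `num_samples` quasi-random points in the `dim`-dimensional
    unit cube, to be mapped to opponent valuations before integration."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    return sampler.random(num_samples)   # array of shape (num_samples, dim)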

Remark 3.

The number of samples required for convergence might seem to be surprisingly high. Many of our auction instances would also converge using considerably fewer samples, but 200,000 is the number required to make even the most difficult of our 16 auction settings converge in 50 out of 50 runs. In order to make our experimental setup simple and consistent, we chose to keep the number of samples equal for all LLG instances.

5.3 Common Random Numbers

In the best response computation, we repeatedly compare the expected utility of two different bids for a bidder with a given valuation. If U_1 and U_2 are the random variables representing the utility associated with two bids, then we want to determine whether E[U_1] − E[U_2] is greater or smaller than zero. Using common random numbers [Glasserman and Yao 1992], we can estimate E[U_1 − U_2] instead and get the same result with lower variance. This idea is implemented by using the same sequence of samples to compute both expected utilities. The samples used for both integrals are pairwise perfectly correlated, but still quasi-random when considering each of the integrals in isolation. Adding this technique, we get convergence to our target using only 10,000 samples, i.e. 5% of the samples needed by the baseline, resulting in a SpeedCRand-fold speedup. Note that we get more than a 20-fold speedup because, in addition to saving a factor of 20 in the expected utility computation, this change decreases the number of function evaluations used by the Brent search and additionally makes the algorithm converge in slightly fewer iterations.
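The point of common random numbers is to reuse the same sample block when comparing two bids, so that much of the noise cancels in the difference. A minimal sketch (Python; values_from_sample and utility are hypothetical helpers):

def compare_bids_crn(value, bid_a, bid_b, opponent_strategies,
                     shared_samples, values_from_sample, utility):
    """Estimate E[utility(bid_a) - utility(bid_b)] on a shared sample block.

    `shared_samples` is the same (quasi-random) block used for both bids,
    so the two per-sample utilities are perfectly correlated and the
    variance of the estimated difference is much lower than with
    independent sample blocks.
    """
    diff = 0.0
    for sample in shared_samples:
        v_others = values_from_sample(sample)    # map unit-cube point to v_{-i}
        b_others = [s(v) for s, v in zip(opponent_strategies, v_others)]
        diff += utility(value, bid_a, b_others) - utility(value, bid_b, b_others)
    return diff / len(shared_samples)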

5.4 Adaptive Dampening of Strategy Updates

To obtain more consistent convergence, we employ adaptive dampening. (Note that we are not the first to use adaptive forms of dampening; see, e.g., [Fudenberg and Levine 1995, Lubin and Parkes 2009].) Instead of using a constant update factor like in the baseline, we now set the weights dynamically, based on how close to a solution we expect to be:

(5)

where c is a new constant. Equation 5 creates a weight between a minimum and a maximum value, separately for each control point. When the utility loss of the current strategy at the control point is small, the weight is also small, resulting in a more conservative update step. This allows us to set the maximum weight higher than the constant weight of 0.5 used before. We have found suitable values for these constants empirically. Adding this technique results in a cumulative SpeedAdDamp-fold speedup over the baseline.

5.5 Pattern Search

In the best response calculation, the function being maximized is an integral computed via Monte Carlo sampling, and is thus very expensive. To reduce these costs, we replace the Brent search with pattern search, an optimization procedure that requires many fewer function evaluations. Furthermore, pattern search easily scales to any number of dimensions, while Brent search only works in the one-dimensional case.

Pattern search is a type of hierarchical local search that evaluates a number of points around the best solution currently known, according to a fixed pattern. If a better solution is found, it moves the center of the pattern there and continues searching. If not, it decreases the size of the pattern and continues searching at the current point. The search normally terminates when the pattern reaches a sufficiently small scale. However, choosing the correct time to stop is not an easy problem: high precision is wasteful when we are far away from an -BNE, but is required to converge with high accuracy. We adjust the required precision adaptively to match the context by giving our search procedure a fixed budget of steps, and consume more of this budget when taking a step that moves the pattern than when taking a step that reduces the pattern size in place. This has the effect of adaptively reducing the number of steps performed by pattern search when the current bid is far from optimal. Overall, this method is much cheaper to compute than Brent search when high precision is unnecessary, and almost as accurate when it is needed. Using pattern search results in a SpeedPat-fold speedup over the baseline.
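A one-dimensional sketch of this idea is given below (Python; the budget accounting with move_cost and shrink_cost is illustrative and not the exact scheme of our implementation).

def pattern_search_1d(f, x0, lo, hi, step, budget=30,
                      move_cost=2, shrink_cost=1):
    """Maximize f on [lo, hi], starting from x0, with a simple pattern search.

    Moving the pattern center costs `move_cost` budget units and shrinking
    the pattern costs `shrink_cost`, so runs that start far from the optimum
    spend less of their budget on high-precision refinement.
    """
    best_x, best_f = x0, f(x0)
    while budget > 0:
        candidates = [max(lo, best_x - step), min(hi, best_x + step)]
        improved = False
        for x in candidates:
            fx = f(x)
            if fx > best_f:
                best_x, best_f, improved = x, fx, True
        if improved:
            budget -= move_cost      # pattern moved to a better point
        else:
            step /= 2.0              # no improvement: refine the pattern
            budget -= shrink_cost
    return best_x, best_f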

5.6 Adaptive Control Point Placement

BNE strategies often have regions of both high and low curvature, and thus using an equal spacing of control points is inefficient, because it requires many unnecessary points in straight regions to achieve sufficient accuracy in curved regions. To avoid this, we initialize our algorithm with an evenly spaced grid of only a few control points. We then repeatedly place additional points at the midpoint of those segments where the curvature of the best response function is largest. We estimate the curvature by approximating the second derivative with finite differences. This allows us to reduce the total number of control points significantly without losing accuracy, because they are spaced further apart in regions of low curvature, where linear interpolation gives a better approximation. Using this method, we obtain convergence in all 16 auction settings with far fewer control points than the baseline requires, resulting in an overall SpeedAdPt-fold speedup.
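The refinement can be sketched as follows (Python; a simplified illustration in which curvature is estimated by a finite-difference second derivative at interior control points and the new bid is initialized by interpolation, to be re-optimized in the next best-response pass).

def refine_control_points(points, bids, num_new_points):
    """Insert control points where the estimated curvature is largest.

    `points` and `bids` describe a piecewise linear strategy; curvature is
    approximated by a second-order finite difference on the (possibly
    non-uniform) grid.
    """
    points, bids = list(points), list(bids)
    for _ in range(num_new_points):
        if len(points) < 3:
            break
        best_j, best_curv = 1, -1.0
        for j in range(1, len(points) - 1):
            h1 = points[j] - points[j - 1]
            h2 = points[j + 1] - points[j]
            second = 2.0 * (bids[j - 1] / (h1 * (h1 + h2))
                            - bids[j] / (h1 * h2)
                            + bids[j + 1] / (h2 * (h1 + h2)))
            if abs(second) > best_curv:
                best_j, best_curv = j, abs(second)
        # Split the wider of the two segments adjacent to the chosen point.
        left, right = points[best_j - 1], points[best_j + 1]
        if points[best_j] - left >= right - points[best_j]:
            new_v = 0.5 * (left + points[best_j])
            new_b = 0.5 * (bids[best_j - 1] + bids[best_j])
            insert_at = best_j
        else:
            new_v = 0.5 * (points[best_j] + right)
            new_b = 0.5 * (bids[best_j] + bids[best_j + 1])
            insert_at = best_j + 1
        points.insert(insert_at, new_v)
        bids.insert(insert_at, new_b)
    return points, bids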

6 Intrinsic Stopping Criterion

Algorithm           Average Iterations    Average Runtime in Seconds    Speedup Factor
Naive (baseline)    ()                    ()                            -
Naive-every-5       ()                    ()                            stopNaive5 x
Adaptive            ()                    ()                            stopTwoStage x

Table 2: Runtimes for the algorithm with an intrinsic stopping criterion, excluding the runtime of the verification step. Results are averaged over 50 runs in each of our 16 auction settings. Standard deviations are shown in parentheses.

In Section 5, we employed an extrinsic stopping criterion for the search phase as an experimental tool to focus on the performance of the best response computation. But when used in practice, our algorithm needs an intrinsic stopping criterion to determine when the target ϵ has been reached.

While the iterated best response procedure as described in Algorithm 1 provides a good basic framework, the utility loss during the search phase can only be estimated. Therefore, there is a risk that the algorithm will proceed into the verification step before actually converging to an equilibrium, and the final ϵ computed by the verification step will be higher than the intended target.

To prevent the algorithm from breaking out of the search phase too early, we employ a two-loop approach inside our algorithm. The inner loop corresponds to the standard BNE search. When this search converges, control goes into the outer loop, where a higher-precision best response computation is performed. Only if the outer loop estimates ϵ to be small enough do we move into the verification step. Otherwise, we return to the inner loop. In this way, the outer loop acts as a gate between search and verification.

In practice, when choosing the precision of the outer loop, there is an application-specific trade-off between the algorithm's runtime and the probability that the verification step will fail. In our experiments, we set the outer loop to use the same high number of control points as the verification step itself, eschewing adaptive control points, so that with very high probability the verification step does not fail. The only remaining difference between the outer loop and the verification step is that the latter computes best responses with higher precision, using twice as many MC samples and pattern search points.

Given that the outer loop is much more expensive than the inner loop, we must avoid running it too frequently. We tested three intrinsic stopping criteria, with runtime results shown in Table 2. For the naive variant, we simply go into the outer loop after every iteration of the inner loop. A better approach is to go into the outer loop less frequently; we tested going into the outer loop only every 5th iteration, which leads to a stopNaive5-fold speedup. Alternatively, we can make the transition to the outer loop adaptive, basing it on the coarser estimate of ϵ from the inner loop. To account for the lower accuracy of the inner loop, we use a stricter target that must be reached before breaking out of the inner loop. Furthermore, when the outer loop fails, we require at least two inner loop iterations before going into the outer loop again. This approach yields a stopTwoStage-fold improvement (see Table 2). In all three variants of our stopping criterion, the strategy profiles we find pass the verification step in each of the 800 runs we perform.
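The adaptive gate between the two loops can be sketched as follows (Python; inner_search_iteration, coarse_loss, precise_loss and the stricter inner target factor are illustrative placeholders for the components described above).

def search_with_gate(strategies, target_eps, inner_search_iteration,
                     coarse_loss, precise_loss, inner_factor=0.5,
                     min_inner_iters_after_fail=2, max_iters=1000):
    """Run the inner BNE search; only enter the expensive outer check once
    the coarse loss estimate reaches a stricter inner target."""
    iters_since_outer = 0
    for _ in range(max_iters):
        strategies = inner_search_iteration(strategies)
        iters_since_outer += 1
        # The stricter inner target accounts for the coarse estimate's noise.
        if (coarse_loss(strategies) <= inner_factor * target_eps
                and iters_since_outer >= min_inner_iters_after_fail):
            if precise_loss(strategies) <= target_eps:
                return strategies          # ready for the verification step
            iters_since_outer = 0          # outer check failed: back to inner loop
    return strategies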

7 Robustness Analysis

So far, we have shown that our algorithm works well in reaching an ϵ-BNE. We would like to have further reassurance that the behaviour of the algorithm is consistent: even when given different random seeds as input, it should reach a similar equilibrium, both in terms of the strategy profile and of the ϵ that is reached. Furthermore, a mechanism designer may be interested in knowing the distance between the found equilibrium and an exact BNE of the auction. Since we have chosen our test set of 16 LLG settings to ensure that a unique BNE exists and is analytically known, this distance is easy to compute. We determined the distance between our numerical BNEs and the analytical ones from [Ausubel and Baranov 2013]. The highest distance over 50 runs for each setting is shown in Table 3. Over a total of 800 runs, all strategies we find are within AnMax of the corresponding analytical solution, showing that, in LLG, our algorithm consistently finds an ϵ-BNE very close to the exact BNE.

Determining the variance of the ϵ among different runs is not so straightforward, because the ϵ is intrinsically computed as part of the algorithm itself. In order to deal with this, we consider the search and verification steps in isolation from each other, leading us to two separate experiments. In the first one, we run the BNE search 50 times with different random seeds, but we fix the seed of the verification step. The resulting variance of ϵ is caused only by the search phase. In the second experiment, we do the inverse, fixing the random seed in the search phase (thus always leading to the same candidate equilibrium), and then running the verification step 50 times with different seeds. The resulting variance of ϵ is caused only by the verification step.

The results of these two experiments are shown in Table 3 as well. We observe that the standard deviation of ϵ is extremely small in all cases, never exceeding StdevMax. This shows that the 10,000 samples we use for Monte Carlo integration in Section 5 are more than enough to converge to the true expected utility.

[Table 3 covers the 16 auction settings: the four payment rules (Nearest bid, Proportional, Proxy, Quadratic), each under four correlation settings. Its columns report the distance to the analytical BNE and the standard deviation of ϵ in the search phase and in the verification step; the largest distance (AnMax) and the largest standard deviation (StdevMax) occur under the Proxy rule.]

Table 3: Distance between our ϵ-BNEs and analytical results from the literature for our set of 16 auction settings, as well as the noise caused by Monte Carlo integration in the search phase and verification step, measured as the standard deviation of ϵ.

8 A Theoretical Bound on ϵ

So far, we have estimated ϵ numerically by computing the utility loss at a finite number of valuations. In this section, we show that, in some auction settings, we can derive a theoretical bound on ϵ over the entire value space, thus proving formally that a strategy profile is in fact a true ϵ-BNE. We also show that our numerical estimates for ϵ are essentially identical to the theoretical bound.

8.1 Deriving the Theoretical Bound

Our theorem requires the following three assumptions:

Assumption 1 (Quasi-linear Utilities).

Utility functions are quasi-linear, i.e.

u_i(v_i, b_i, b_{-i}) = v_i(X_i(b)) − p_i(b)    (6)
Assumption 2 (Bounded Value Space).

The valuations are non-negative random variables with bounded support, i.e. there exists a valuation v̄ such that v_i(k) ≤ v̄ for all bidders i and bundles k.

Assumption 3 (Independently Distributed Valuations).

The valuations are mutually independent random variables.

Assumption 1 is standard in auction theory, and not very restrictive. Assumption 2 is also not very restrictive. In contrast, Assumption 3 is more restrictive. By excluding all CAs with interdependent valuations, it implies that our theorem does not apply to eight of the 16 settings we defined in Section 2.2.

As a first step, we state and prove a theorem for the upper bound in one dimension, when all bidders are single-minded (i.e. we assume that m_i = 1 for all bidders i). The general case is more technically involved, as it requires considering the topology of high-dimensional partitions of the value space. It is stated and proved as Theorem 2 in Appendix A.

Theorem 1 stated below requires the use of piecewise constant strategies. In the one-dimensional case, a piecewise constant strategy is uniquely defined by a finite set of grid points v^1 < v^2 < ... < v^G, and a bid s_i(v^g) for each grid point v^g. The grid must cover all valuations, that is, v^G must be at least as large as the maximum valuation v̄ from Assumption 2. The strategy is then extended to the entire value space: for a valuation v_i, we have that s_i(v_i) = s_i(v^g), where v^g is the largest grid point that is not larger than v_i.

Theorem 1.

Let s be a strategy profile of a CA with quasi-linear utilities, bounded value spaces and independently distributed valuations. If each strategy s_i is a one-dimensional piecewise constant function with grid points v^1 < ... < v^G, then s is an ϵ-BNE with

ϵ = max_i max_{1 ≤ g < G} [ sup_{b_i} ū_i(v^{g+1}, b_i, s_{-i}) − ū_i(v^{g}, s_i(v^{g}), s_{-i}) ]    (7)

Before proceeding to the proof, we introduce an auxiliary lemma:

Lemma 1 (Expected utility is monotonic under fixed bids).

Let b_i be a fixed bid of bidder i. Then, ū_i(v_i, b_i, s_{-i}) is monotonically increasing in v_i.

Proof.

Let v_i ≤ v_i' be two valuations. When bidder i bids b_i, the distribution over the other bidders' bids is the same whether i's value is v_i or v_i', because the valuations are independently distributed. This implies that the distributions of the assignment and payment are also identical in both cases. Since we have quasi-linear utilities, a bid of b_i yields a weakly higher expected utility at v_i' than at v_i. ∎

Proof of Theorem 1.

To establish that s is an ϵ-BNE, we need to show that

l_i(v_i) ≤ ϵ  for all bidders i and all valuations v_i.    (8)

Consider an arbitrary bidder i and valuation v_i. Choose the grid point v^g such that v^g ≤ v_i < v^{g+1}, and thus also s_i(v_i) = s_i(v^g). We have that

l_i(v_i) = sup_{b_i} ū_i(v_i, b_i, s_{-i}) − ū_i(v_i, s_i(v_i), s_{-i})
         = sup_{b_i} ū_i(v_i, b_i, s_{-i}) − ū_i(v_i, s_i(v^g), s_{-i})
         ≤ sup_{b_i} ū_i(v^{g+1}, b_i, s_{-i}) − ū_i(v^{g}, s_i(v^{g}), s_{-i})
         ≤ ϵ,

where the first inequality follows after applying Lemma 1 twice, to show that sup_{b_i} ū_i(v_i, b_i, s_{-i}) ≤ sup_{b_i} ū_i(v^{g+1}, b_i, s_{-i}) and that ū_i(v^{g}, s_i(v^{g}), s_{-i}) ≤ ū_i(v_i, s_i(v^{g}), s_{-i}), and the second inequality follows directly from the definition of ϵ in (7). ∎

Application

Interestingly, Theorem 1 is constructive: we can use this result to obtain a strategy profile s* and a bound ϵ, together with a proof that s* is a true ϵ-BNE. For this, we consider an arbitrary strategy profile s (a candidate BNE). We transform the strategies in s such that they are piecewise constant: we take a grid over the value space, set s*_i(v^g) = s_i(v^g) at each grid point, and extend s*_i between grid points to be piecewise constant. This step corresponds to the ConvertStrategies step of Algorithm 1 and is crucial, since we need to make sure that our strategy profile fulfills the condition of the theorem.
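A sketch of this conversion step (Python; strategy is any callable from values to bids, and the names are illustrative):

import bisect

def convert_to_piecewise_constant(strategy, grid):
    """Convert a strategy (any callable from values to bids) into the
    piecewise constant strategy required by Theorem 1: on [v^g, v^{g+1})
    the converted strategy always bids the original bid at v^g."""
    grid = sorted(grid)
    bids = [strategy(v) for v in grid]

    def converted(value):
        # index of the largest grid point not larger than `value`
        g = max(0, bisect.bisect_right(grid, value) - 1)
        return bids[g]

    return converted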

Once we have the converted strategy profile s*, we compute a pointwise best response at each grid point v^g, which provides the information we need to compute ϵ according to (7). There are two considerations to keep in mind regarding this process:

First of all, the theorem can be applied no matter how we come up with the candidate BNE s. In this paper, we use an iterated best response algorithm, but this can be exchanged for any other equilibrium-finding procedure.

Second, we are free to choose how the grid points should be spaced. The most straightforward way is to space them evenly from the minimum to the maximum possible valuation of bidder i. However, looking at the structure of Theorem 1, it becomes clear that the strength of the bound depends on the difference in utility achieved (in equilibrium) between adjacent pairs of grid points. As valuations get higher, the equilibrium bids get higher and thus the probability of winning the bundle increases. Therefore, it makes sense to cluster the grid points closer together as valuations increase. We use a method described in Appendix B to calculate grid spacings that help us achieve bounds that are as tight as possible, given a fixed number of grid points. This is especially relevant for higher-dimensional instances.

Approximation Guarantees of Pointwise Best Responses

One additional consideration is that, in order to use Theorem 1, we would need to know the utility loss exactly, but the best response algorithm discussed in Section 5 is not guaranteed to find the global optimum, and thus we can only approximate it. (We could substitute a best response algorithm with global convergence guarantees as used in [Vorobeychik and Wellman 2008], though it should be noted that even such an algorithm is not guaranteed to produce an exact best response in finite computation time.) Fortunately, small errors do not get amplified by our theorem; instead they only slightly affect the quality of our equilibria: if our best response algorithm were to always return a bid that achieves a utility within δ of the highest utility possible, then we could extend the analysis of the theorem to prove that we achieve an (ϵ + δ)-BNE. Next, we give a special case where we can bound this δ. In general, such a bound cannot be obtained, but we expect the error term to be negligible compared to ϵ.

Consider a single-item first price auction. For a fixed valuation v_i, it is clear that when bidder i increases his bid, his probability of winning the item increases monotonically, but so does his expected payment. When the utility is quasi-linear, these two factors directly weigh against each other: the expected utility of a bid is the probability of winning times v_i, minus the expected payment.

It follows that the bid b* maximizing the expected utility is the one that optimally balances the contribution of both terms. (In this simple setting, b* could of course be found analytically, but our proposition below applies to other settings as well.) In order to numerically find a bid very close in utility to b*, we exploit this decomposition of the utility into two monotonic terms. If we have two bids b_k and b_{k+1} with b_k ≤ b_{k+1}, then a bid in between them cannot have a higher probability of winning than at b_{k+1}, and cannot have a lower expected payment than at b_k. Combining these two facts, we get an upper bound on how much utility can be gained by any bid between b_k and b_{k+1}. To make this argument, we made use of two properties of the auction, namely that each bidder bids on a single bundle (i.e., bidders are single-minded), and that the expected payment monotonically increases with the bid.
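Concretely, writing W(b) for bidder i's probability of winning at bid b and P(b) for his expected payment (notation introduced only for this sketch), monotonicity of both terms gives, for every bid b with b_k ≤ b ≤ b_{k+1},

u_i(v_i, b) = W(b) · v_i − P(b) ≤ W(b_{k+1}) · v_i − P(b_k),

so the utility that any bid strictly between two adjacent grid bids can gain over the better of the two grid bids is bounded by a quantity we can evaluate numerically.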

Proposition 1.

Let s be a strategy profile of a CA with quasi-linear utilities, single-minded bidders, and a payment rule that is monotonically increasing in the bid. Fix a bidder i with valuation v_i and consider a grid of bids b_0 < b_1 < … < b_K, with b_K being high enough to ensure that

where B_i is bidder i's bundle of interest. Then, we have that for all bids b_i,

Proof.

See Appendix C. ∎

It is possible to state similar results in settings with multi-minded bidders as well. However, this is technically much more challenging: if bidder i increases his bid on one bundle, his probability of winning actually decreases for every other bundle he bids on. Furthermore, the concept of monotonicity is harder to pin down when outcomes for bidders are more complex than a binary win/lose. We leave a full exploration of this type of result to future work.

8.2 The Bound on ϵ vs. the Estimated ϵ

In Section 4.2, we estimated ϵ by computing the utility loss at finitely many verification points. With Theorem 1 at our disposal, we can now check how tight this lower bound really is. For this, we consider the 8 of our original 16 auction settings that are subject to Theorem 1, namely those with independently distributed valuations. We take the strategy profiles obtained as described in Sections 5 and 6 and compare the lower bound from Section 4.2 to the upper bound given by Theorem 1, using the same number of verification points in both cases. A sketch of the lower-bound computation is given below.
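For reference, the lower bound is simply the largest utility loss observed at any verification point; a minimal sketch (with hypothetical interfaces, not our repository's classes) is:

// Illustrative sketch: the estimated epsilon (a lower bound on the true epsilon)
// is the largest utility loss found at any verification point.
public final class LowerBoundEstimate {

    public interface UtilityOracle {
        double equilibriumUtility(double value);   // utility of playing the candidate strategy
        double bestResponseUtility(double value);  // utility of the best response found
    }

    public static double estimateEpsilon(double[] verificationPoints, UtilityOracle oracle) {
        double epsilon = 0.0;
        for (double v : verificationPoints) {
            double loss = oracle.bestResponseUtility(v) - oracle.equilibriumUtility(v);
            epsilon = Math.max(epsilon, loss);
        }
        return epsilon;
    }
}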

Figure 1: Lower and upper bound for ϵ of the ϵ-BNEs obtained by our algorithm (average over eight different auction settings). The lower bound is computed as described in Section 4.2, while the upper bound is given by Theorem 1.

The result is shown in Figure 1 as a log-log plot, with the number of verification points increasing along the horizontal axis, and the upper and lower bounds being the average over the 8 auction settings. We observe that the lower bound remains practically constant, while the upper bound converges towards it. At the largest number of verification points considered, the theoretical bound guarantees an ϵ that is below our target. The results are qualitatively the same when considering each auction separately. This illustrates how attractive our estimated ϵ (i.e., the lower bound) really is: it is highly accurate and only uses a tiny fraction of the verification points that the upper bound requires.

9 The Multi-Minded LLLLGG Domain

Bidder   Bundle 1   Bundle 2
L1       AB         BC
L2       CD         DE
L3       EF         FG
L4       GH         HA
G1       ABCD       EFGH
G2       CDEF       GHAB
Table 4: The Multi-Minded LLLLGG domain has 8 goods and 6 bidders. Each bidder is interested in exactly two bundles.

We next introduce the new Multi-Minded LLLLGG domain, which represents a significant increase in complexity over LLG. The domain has 8 goods A through H, and 6 bidders, each of which is interested in two bundles, as enumerated in Table 4. Each local bidder L1–L4 draws its two bundle values from a common distribution, while the global bidders G1 and G2 draw their two bundle values from a distribution whose maximum is twice as high; all draws are independent. Because the domain exhibits significant symmetries, we can search for symmetric equilibria where all local bidders play one strategy and both global bidders play another. However, unlike in LLG, these strategies are two-dimensional. Thus, the strategy profile is described by a pair of strategies s_L and s_G. Even though these symmetries can be exploited to reduce the dimensionality of the problem, computing the expected utility of a given report remains very expensive: the random variable over which we integrate has 10 dimensions, i.e., 2 bundle values for each of the 5 other bidders. Solving an integral of this dimensionality is only amenable to our MC approaches. A sketch of how the bundle structure can be encoded is shown below.
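For concreteness, the bundle structure of Table 4 can be encoded compactly as bit sets over the eight goods. The following sketch is purely illustrative (class and method names are ours, and it omits the value distributions):

import java.util.*;

// Illustrative encoding of the Multi-Minded LLLLGG bundle structure from Table 4.
public final class LLLLGGDomain {
    // Goods A..H are identified with bit positions 0..7.
    static int bundle(String goods) {
        int mask = 0;
        for (char g : goods.toCharArray()) mask |= 1 << (g - 'A');
        return mask;
    }

    // bidder -> its two bundles of interest
    static final Map<String, int[]> BUNDLES = new LinkedHashMap<>();
    static {
        BUNDLES.put("L1", new int[]{bundle("AB"),   bundle("BC")});
        BUNDLES.put("L2", new int[]{bundle("CD"),   bundle("DE")});
        BUNDLES.put("L3", new int[]{bundle("EF"),   bundle("FG")});
        BUNDLES.put("L4", new int[]{bundle("GH"),   bundle("HA")});
        BUNDLES.put("G1", new int[]{bundle("ABCD"), bundle("EFGH")});
        BUNDLES.put("G2", new int[]{bundle("CDEF"), bundle("GHAB")});
    }

    // Two bundles conflict if they share a good; a feasible allocation assigns
    // each good to at most one winning (bidder, bundle) pair.
    static boolean conflict(int bundleA, int bundleB) {
        return (bundleA & bundleB) != 0;
    }
}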

Figure 2: Equilibria of the Multi-Minded LLLLGG domain. (a) First Price payment rule. (b) Quadratic payment rule. For each payment rule, the top two plots show the BNE strategies for the local bidders for their two bundles of interest, and the bottom two show the same for the global bidders.

We apply our algorithm to find an ϵ-BNE for two payment rules in this domain: quadratic and first price. We use one grid of control points for the inner loop and another for the outer loop. For each iteration, we update all points in the grid as described in Section 5, using 100,000 Monte Carlo samples to compute the expected utility. For the verification step, we use separate grids of points for the local and the global bidders (the global bidders have a maximum valuation twice as high as the local ones and thus require a finer grid to achieve our desired ϵ). Furthermore, we increase the number of samples to 200,000.

Given this setup, our goal is to find good equilibria and then use the higher-dimensional version of Theorem 1 from Section 8 to give upper bounds on ϵ. For the first price rule, our BNE algorithm terminates after a moderate number of iterations, with a total computational cost, including verification, measured in core-hours. Computing an equilibrium for the quadratic rule requires fewer iterations, because there is an equilibrium much closer to truthful bidding. However, reaching this equilibrium takes significantly more time: our algorithm requires many more core-hours in total. (At this domain complexity, it is to be expected that the quadratic rule is orders of magnitude slower to evaluate than first price, even employing constraint generation [Day & Raghavan 2007; Bünz, Seuken & Lubin 2015] to find prices satisfying all core constraints.) The equilibria for both of these payment rules are proven to be ϵ-BNEs. Considering only the utility loss directly at the control points of the verification grids, we also compute lower bounds on ϵ for first price and quadratic, respectively. The final strategy profiles found for both rules are depicted in Figure 2. Note that to enhance visual clarity, we show the equilibrium strategies in their piecewise linear form, before being converted to piecewise constant, as is required for our upper bound (see Section 8).

Overbidding in Equilibrium

These BNE calculations, in particular the BNE for the quadratic rule, are the first of their kind in such a complex domain with such rich strategies. Some interesting observations can be made about the resulting equilibria. In Figure 2(b), we observe that under the quadratic rule, global bidders overbid on modestly valued bundles when their other bundle has a very high value. This manipulation makes sense as it shifts the VCG reference point: suppose a global bidder wins its high-valued bundle, while two other bidders win the goods contained in its modestly valued bundle. The global bidder then has an incentive to overstate the value for that modestly valued bundle: this will cause the VCG reference point to increase for the two other winners. If the value of the won bundle is very high, this manipulation carries little risk of accidentally winning the low-valued bundle, and it decreases the global bidder's expected payment. Such a causal chain makes intuitive sense, and one could easily speculate about the overbidding behaviour of such a bidder, or even analyze it in a full information setting. But it was not clear, until now, that such behaviour persists even in Bayesian Nash equilibrium.

10 Software Implementation of our BNE Solver

In this section, we describe the software implementation of our BNE algorithm, which we released publicly under an open-source license at https://github.com/UZH-CERG/CA-BNE. The algorithm is implemented in Java 8, and it was written with performance, ease of use, and extensibility in mind. These three goals are sometimes at odds with each other, but we have attempted to do justice to each of them, applying best practices of software engineering. The full code can be found in the repository, illustrated with three examples: the quadratic rule on LLG, the first price rule on LLG (which is more challenging since the global bidder is no longer truthful), and the first price rule on Multi-Minded LLLLGG (with a more modest default configuration that can be run on a laptop). We only give a high-level overview here, to point out the design choices that we have made.

The algorithm is set up in a fundamentally modular way: all of its pieces, such as the pointwise best response calculator, the update rule, and so on, are implemented by separate classes. This makes it easy to, e.g., swap out Brent search for pattern search, which are just two different implementations of the same interface. There is one class, BNESolverContext, that is in charge of coordination and is given pointers to instances of all the algorithm pieces. When one piece needs to make use of another (e.g., when pattern search needs to know the expected utility for a certain bid), it accesses it only through this context. (This design pattern ensures that the algorithm pieces are only loosely coupled, which reduces the incidence of bugs and preserves configurability and extensibility. It is known as the strategy pattern [Gamma 1995] or, more broadly, as the component pattern [Nystrom 2014].) An example of creating and configuring the context is shown in Algorithm 2.
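To illustrate the pattern itself, here is a much simplified sketch; the interfaces and the GridSearchOptimizer below are invented for illustration and do not correspond to the actual classes in our repository:

// Simplified sketch of the context-based component pattern we use.
// The real BNESolverContext manages many more pieces and is generic in the
// value and bid types; the names below are illustrative only.
interface Optimizer { double bestResponse(double value); }
interface Integrator { double expectedUtility(double value, double bid); }

final class SolverContext {
    private Optimizer optimizer;
    private Integrator integrator;

    void setOptimizer(Optimizer o) { this.optimizer = o; }
    void setIntegrator(Integrator i) { this.integrator = i; }
    Optimizer getOptimizer() { return optimizer; }
    Integrator getIntegrator() { return integrator; }
}

// A concrete optimizer never holds a reference to a concrete integrator;
// it asks the context for whatever integrator is currently configured.
final class GridSearchOptimizer implements Optimizer {
    private final SolverContext context;
    GridSearchOptimizer(SolverContext context) { this.context = context; }

    @Override
    public double bestResponse(double value) {
        double bestBid = 0.0, bestUtility = Double.NEGATIVE_INFINITY;
        for (double bid = 0.0; bid <= value; bid += 0.01) {
            double u = context.getIntegrator().expectedUtility(value, bid);
            if (u > bestUtility) { bestUtility = u; bestBid = bid; }
        }
        return bestBid;
    }
}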

The auction instance itself is described by two classes: a BidSampler class that creates a conditional distribution of bid profiles, given a strategy profile s and a valuation v_i, and a Mechanism class that maps a bid profile to the utility of a single bidder i. At first glance, it might not be clear why we have chosen to split up the code in this way; it seems like we are violating the principle of modularity by mixing together unrelated matters, i.e., the auction's payment rule and the assumption of quasi-linear utilities for bidders. However, from the algorithm's point of view, one task is to generate samples from a distribution, and another task is to compute the expectation of a function of those samples (with the samples being bid profiles drawn from the conditional distribution, and the function being bidder i's utility). We could of course explicitly model the allocation rule, payment rule, and utility functions. However, this would greatly impact performance, because many short-lived objects representing intermediate results (allocations, payment vectors, etc.) would be created and destroyed every time we evaluated the utility. Our code is structured in a way that this explicit modelling is possible to do (when code clarity and flexibility are the main concerns) but can be avoided when high performance is required. The BNE algorithm itself is implemented in a class that is initialized with the context and a starting strategy profile.
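The following signatures are an illustrative sketch of this split between sampling and utility computation; the actual classes in our repository are generic and differ in their details:

import java.util.Random;

// Illustrative sketch of the BidSampler / Mechanism split (not the real API).
interface BidSampler {
    // Draw one full bid profile, conditional on bidder i having valuation v_i
    // and all bidders following the given strategy profile.
    double[][] sampleBidProfile(int bidderI, double[] valuationI, Random rng);
}

interface Mechanism {
    // Map a complete bid profile directly to bidder i's (quasi-linear) utility,
    // without materializing intermediate allocation and payment objects.
    double utility(int bidderI, double[] valuationI, double[][] bidProfile);
}

final class ExpectedUtility {
    // Monte Carlo estimate: generate samples with the BidSampler, average the
    // utility computed by the Mechanism over them.
    static double estimate(int bidderI, double[] valuationI, BidSampler sampler,
                           Mechanism mechanism, int numSamples, Random rng) {
        double sum = 0.0;
        for (int k = 0; k < numSamples; k++) {
            double[][] bids = sampler.sampleBidProfile(bidderI, valuationI, rng);
            sum += mechanism.utility(bidderI, valuationI, bids);
        }
        return sum / numSamples;
    }
}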

// create the context and load the run configuration from file
BNESolverContext<Double, Double> context = new BNESolverContext<>();
String configfile = args[0];
context.parseConfig(configfile);
// plug in the individual algorithm pieces: the optimizer used for pointwise
// best responses, the Monte Carlo integrator, the random number generator,
// the (dampened) update rule, the best response calculators for the inner
// and outer loops, and the verifier used in the verification step
context.setOptimizer(new PatternSearch<>(context, new UnivariatePattern()));
context.setIntegrator(new MCIntegrator<>(context));
context.setRng(2, new CommonRandomGenerator(2));
context.setUpdateRule(new UnivariateDampenedUpdateRule(0.2, 0.7, 0.5 / context.getDoubleParameter("epsilon"), true));
context.setBRC(new AdaptivePWLBRCalculator(context));
context.setOuterBRC(new PWLBRCalculator(context));
context.setVerifier(new BoundingVerifier1D(context));
Algorithm 2: Sample code for creating and configuring the context that manages the program's configuration.

10.1 Generality and Extensions

Our software architecture is very flexible, and can be easily configured and extended. For example, our software effortlessly supports arbitrary joint distributions of valuations, which cannot be handled by the analytical methods of AusubelBaranov2018core and Goeree2013OnTheImpossibilityOfCoreSelectingAuctions, or by the numerical methods of Rabinovich2013ComputingBNEs. A natural way to model such joint distributions is via copulae [Sklar 1959; Lubin, Bünz & Seuken 2017]; a hypothetical example is sketched below. We can also easily capture settings where bidder utility depends on the allocation of other bidders, a situation that is natural in the important application of spectrum auctions, where the market positions resulting from the auction must be taken into account. Another direction we are already pursuing is to study the effect of bidders overbidding, as identified by beck2013incentives. To include the possibility of overbidding in the strategy space, the number of bundles of interest must be increased, which makes finding BNEs more challenging. However, the inclusion of overbidding in LLG is still less complex than the new Multi-Minded LLLLGG domain we presented in Section 9. We also highlight the study of asymmetric equilibria, which has largely been absent from prior work due to its complexity, but which can be directly studied using our approach: all that is required is to track all bidders separately and to initialize their strategies distinctly. The former adds only linearly more computational effort, and the latter introduces a requirement to run the algorithm multiple times with random restarts.
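As a hypothetical illustration of the copula approach (this exact construction is not part of our released code), a Gaussian copula with correlation rho can generate two correlated uniform draws that are then pushed through the bidders' marginal value distributions:

import java.util.Random;

// Hypothetical sketch: sampling two correlated bidder valuations via a
// Gaussian copula with correlation rho, using uniform marginals on [0, maxValue].
final class CopulaSampler {
    private final Random rng = new Random();

    double[] sampleValues(double rho, double maxValue) {
        // correlated standard normals via the 2x2 Cholesky factor
        double z1 = rng.nextGaussian();
        double z2 = rho * z1 + Math.sqrt(1.0 - rho * rho) * rng.nextGaussian();
        // map to correlated uniforms with the standard normal CDF
        double u1 = stdNormalCdf(z1);
        double u2 = stdNormalCdf(z2);
        // push through the (here: uniform) marginal value distributions
        return new double[]{u1 * maxValue, u2 * maxValue};
    }

    // Abramowitz-Stegun style approximation of the standard normal CDF.
    private static double stdNormalCdf(double x) {
        double t = 1.0 / (1.0 + 0.2316419 * Math.abs(x));
        double d = Math.exp(-x * x / 2.0) / Math.sqrt(2.0 * Math.PI);
        double p = d * t * (0.319381530 + t * (-0.356563782
                + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        return x >= 0 ? 1.0 - p : p;
    }
}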

Most important of all, extending our code to novel settings only requires implementing a few additional classes, allowing for easy exploratory research and hypothesis testing. This might turn out to be the most significant contribution of the present work.

11 Conclusion

In this paper, we have introduced a fast, general algorithm for finding -BNEs in CAs with continuous values and actions. In contrast to prior work, we address the issues that arise when working with continuous spaces head-on. We compute best responses in the full action space, and we ensure that the utility loss is small at all possible valuations. For the special case where bidders have independently distributed valuations, we can formally bound the utility loss. These ideas taken together allow us to avoid the false precision problem. Our approach manages to be very fast while still fulfilling the desideratum of high precision. We achieve this by splitting our algorithm into a search phase and a verification step, and carefully implementing many optimizations.

We have first verified the accuracy of the resulting algorithm in the well-known LLG domain, where analytical benchmark results are known. We have then shown the power of our algorithm by providing the first ϵ-BNE with expressive strategies in a novel, large CA domain with eight goods and six multi-minded bidders.

We release our code to the public under an open source license, so that it may be used by other researchers. Our code follows sound software engineering principles, and is thus easy to extend to novel combinatorial auction settings. We believe this will facilitate the study of many mechanisms and domains that were not previously amenable to analytic or algorithmic analysis.

Appendix A Generalization of Theorem 1

Here we present a version of Theorem 1 in the general (multi-minded) case. To state the theorem precisely, we first need to develop some notation.

Definition 2 (Cell).

A cell C is a half-open, axis-aligned, d-dimensional hyperrectangle with lower corner ℓ and upper corner u, i.e.,

C = [ℓ_1, u_1) × … × [ℓ_d, u_d).

For convenience, whenever ℓ_k = u_k, we consider the degenerate interval [ℓ_k, u_k) to be the singleton {ℓ_k}, instead of the empty set.

Definition 3 (Cell Partition).

A cell partition P of a bounded value space is a set of cells such that each point in the value space falls into exactly one cell, i.e., the cells are pairwise disjoint and their union covers the entire value space.

See Figure 3 for some examples of cell partitions on the unit cube. Note that the upper boundary of the cube needs to be covered by cells as well, which is why the partitions include not only 2-dimensional rectangles, but also 1-dimensional lines and even a 0-dimensional point.

Figure 3: Three ways of partitioning the 2-dimensional unit cube. Individual cells are shown on the bottom, including lower-dimensional boundary cells. Left: regular grid. Center: non-uniform grid. Right: partition not based on a grid.
Definition 4 (Piecewise Constant Strategy).

Let P be a cell partition. A strategy s_i is piecewise constant on P if every value has the same bid as the lower corner of the cell it belongs to, i.e., s_i(v_i) = s_i(ℓ(C)) for every cell C ∈ P and every v_i ∈ C, where ℓ(C) denotes the lower corner of C.

Theorem 2.

Let s* be a strategy profile of a CA with quasi-linear utilities, bounded value spaces and independently distributed valuations. If each strategy s*_i is piecewise constant on some cell partition P_i, then s* is an ϵ-BNE with

(15)
Proof.

Analogous to the proof of Theorem 1, making use of the fact that Lemma 1 also holds in multi-minded settings. ∎

Appendix B Grid Spacing Algorithm

Here we explain the details and parametrization of the non-uniform grid spacing algorithm that we use to get tighter bounds on ϵ out of Theorem 1. Recall that in order to apply the theorem, the equilibrium strategy profile needs to consist of piecewise constant strategies. If those strategies are created with control points on a regularly spaced grid, then at high valuations there is a higher difference in utility between adjacent grid points. Our goal is to take any strategy s_i and create a piecewise constant strategy s*_i such that the difference in utility is roughly even for all pairs of adjacent grid points.

For one-dimensional strategies, we could use an iterative algorithm that places grid points one at a time, trying to space them as evenly as possible: the algorithm would place the first two points at the lower and the higher end of the value space, then recursively place a point in the middle of the interval with the highest spread of utilities (a sketch of this idea is given below). While such an algorithm would clearly yield good results, it doesn't generalize well to higher dimensions, especially if we add the constraint that the resulting strategy should be fast to evaluate (i.e., it should eschew complex geometric data structures for point location). We devised a method that is very straightforward and easily generalizes to higher dimensions, while yielding most of the benefit of the above algorithm.
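A minimal sketch of this iterative one-dimensional placement (illustrative only; it is not the method we actually use, as explained next) could look as follows:

import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// Illustrative sketch of the iterative 1D grid placement: repeatedly split the
// interval whose endpoints differ the most in (equilibrium) utility.
final class IterativeGridPlacement {
    static double[] placeGridPoints(double low, double high, int numPoints,
                                    DoubleUnaryOperator utilityAtValue) {
        List<Double> points = new ArrayList<>();
        points.add(low);
        points.add(high);
        while (points.size() < numPoints) {
            int worst = 0;
            double worstSpread = -1.0;
            for (int k = 0; k + 1 < points.size(); k++) {
                double spread = Math.abs(utilityAtValue.applyAsDouble(points.get(k + 1))
                        - utilityAtValue.applyAsDouble(points.get(k)));
                if (spread > worstSpread) { worstSpread = spread; worst = k; }
            }
            // split the interval with the highest utility spread at its midpoint
            points.add(worst + 1, (points.get(worst) + points.get(worst + 1)) / 2.0);
        }
        return points.stream().mapToDouble(Double::doubleValue).toArray();
    }
}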

We first show the algorithm for one-dimensional strategies. Let v̄ be the highest possible valuation, and let n be the number of grid points the grid should have. We introduce a parameter c ≤ 1, with the idea being that the two highest grid points are closer together than on a regularly spaced grid, by a factor c. The remaining grid points are then placed by fitting a quadratic polynomial f to the following constraints:

Note that f maps valuations to the index of the interval they fall in, i.e., if f(v) lies between k and k+1, then v falls between grid points k and k+1. We can now define the piecewise constant strategy as follows:

In words, we apply f to find the interval that v falls in, then round down and apply the inverse of f, which recovers the grid point directly below v. The bid at v is defined to be the same as the bid at that grid point.

The idea can be easily extended to higher dimensions by applying the same algorithm to each dimension. This results in a “separable” grid where the point location problem can be solved very efficiently (see Figure 3, center). Note that we can set c = 1 to recover an evenly spaced grid. In practice, we used values of c below 1 for both LLG and Multi-Minded LLLLGG. These values make the optimal tradeoff: any lower, and the error at the lowest intervals starts exceeding the error at the highest intervals.

It is straightforward to calculate how much speedup this technique yields: a regularly spaced grid that produces an equally good bound at high valuations would need a factor of 1/c more control points in each dimension, so the speedup for, e.g., Multi-Minded LLLLGG compounds over the two dimensions of each bidder's strategy.

Appendix C Proof of Proposition 1

Proof.

We make a case split over the deviating bid b_i.

Case 1: b_i ≤ b_K. Let k be the largest integer such that b_k ≤ b_i. It follows that

The first inequality follows from the monotonicity of the allocation rule (increasing the bid while keeping the other bids fixed increases the probability that bidder i wins their bundle of interest)