1 Introduction
Incentive compatibility (Hurwicz, 1972) is a fundamental concept in mechanism design. Under an incentive compatible mechanism, it is in every agent's best interest to report their type truthfully. Nonetheless, practitioners have long employed mechanisms that are not incentive compatible, also called manipulable mechanisms. This is the case in many settings for selling, buying, matching (such as school choice), voting, and so on. For example, most real-world auctions are implemented using the first-price mechanism. In multi-unit sales settings, the U.S. Treasury has used discriminatory auctions, a variant of the first-price auction, to sell treasury bills since 1929 (Krishna, 2002). Similarly, electricity generators in the U.K. use discriminatory auctions to sell their output (Krishna, 2002). Sponsored search auctions are typically implemented using variants of the generalized second-price auction (Edelman et al., 2007; Varian, 2007). In the past year, many major display ad exchanges including AppNexus, Rubicon, and Index Exchange have transitioned to first-price auctions, driven by ad buyers who believe it offers a higher degree of transparency (Parkin, 2018; Harada, 2018). Finally, nearly all fielded combinatorial auctions are manipulable. Essentially all combinatorial sourcing auctions are implemented using the first-price mechanism (Sandholm, 2013). Combinatorial spectrum auctions are conducted using a variety of manipulable mechanisms. Even the "incentive auction" used to source spectrum licenses back from low-value broadcasters—which has sometimes been hailed as obviously incentive compatible—is manipulable once one takes into account the fact that many owners own multiple broadcasting stations, or the fact that stations do not only have the option to keep or relinquish their license, but also the option to move to (two) less desirable spectrum ranges (Nguyen and Sandholm, 2015).
Many reasons have been suggested why manipulable mechanisms are used in practice. First, the rules are often easier to explain. Second, incentive compatibility ceases to hold even in the relatively simple context of the Vickrey auction when determining one's own valuation is costly (for example, due to computation or information gathering effort) (Sandholm, 2000). Third, bidders may have even more incentive to behave strategically when they can conduct computation or information gathering on each others' valuations, and if they can incrementally decide how to allocate valuation-determination effort (Larson and Sandholm, 2001, 2005). Fourth, in combinatorial settings, well-known incentive-compatible mechanisms such as the Vickrey-Clarke-Groves (VCG) mechanism require bidders to submit bids for every bundle, which generally requires a prohibitive amount of valuation computation (solving a local planning problem for potentially every bundle, and each planning problem, itself, can be NP-complete) or information acquisition (Sandholm, 1993; Parkes, 1999; Conen and Sandholm, 2001; Sandholm and Boutilier, 2006). Fifth, in settings such as sourcing, single-shot incentive compatible mechanisms such as the VCG are generally not incentive compatible when the bid-taker uses bids from one auction to adjust the parameters of later auctions in later years (Sandholm, 2013). Sixth, incentive compatible mechanisms may leak the agents' sensitive private information (Rothkopf et al., 1990). Seventh, incentive compatibility typically ceases to hold if agents are not risk neutral (i.e., the utility functions are not quasilinear). There are also sound theoretical reasons why the designer sometimes prefers manipulable mechanisms. Specifically, there exist settings where the designer does better than under any incentive compatible mechanism if the agents cannot solve hard computational or communication problems, and equally well if they can (Conitzer and Sandholm, 2004; Othman and Sandholm, 2009).
Due in part to the ubiquity of manipulable mechanisms, a growing body of additional research has explored mechanisms that are not incentive compatible (Kothari et al., 2003; Archer et al., 2004; Conitzer and Sandholm, 2007; Dekel et al., 2010; Lubin and Parkes, 2012; Mennle and Seuken, 2014; Dütting et al., 2015; Azevedo and Budish, 2018; Feng et al., 2018; Golowich et al., 2018; Dütting et al., 2017). A popular and widely studied relaxation of incentive compatibility is $\epsilon$-incentive compatibility (Kothari et al., 2003; Archer et al., 2004; Dekel et al., 2010; Lubin and Parkes, 2012; Mennle and Seuken, 2014; Dütting et al., 2015; Azevedo and Budish, 2018), which requires that no agent can improve his utility by more than $\epsilon$ when he misreports his type.
1.1 Our contributions
Much of the literature on approximate incentive compatibility rests on the strong assumption that the agents' type distribution is known. In reality, this information is rarely available. We relax this assumption and instead assume we only have samples from the distribution (Likhodedov and Sandholm, 2004, 2005; Sandholm and Likhodedov, 2015). We present techniques with provable guarantees that the mechanism designer can use to estimate how far a mechanism is from incentive compatible. We analyze both the ex-interim and ex-ante settings.¹ In the ex-interim case, we bound the amount any agent can improve his utility by misreporting his type, in expectation over the other agents' types, no matter his true type. In the weaker ex-ante setting, the expectation is over the agent's true type as well.

¹We do not study ex-post approximate incentive compatibility because it is a worst-case, distribution-independent notion. Therefore, we cannot hope to measure the ex-post approximation factor using samples from the agents' type distribution.
Our estimate is simple: it measures the maximum utility an agent can gain by misreporting his type on average over the samples, when his true and reported types are restricted to a finite subset of the type space. We bound the difference between our incentive compatibility estimate $\hat{\epsilon}$ and the true incentive compatibility approximation factor $\epsilon$. To our knowledge, this is the first paper to provide theoretical guarantees for estimating approximate incentive compatibility from the mechanism designer's perspective. In settings where we can solve for the true approximation factor $\epsilon$, we provide experiments demonstrating that our estimates quickly converge to $\epsilon$.
We apply our estimation technique to a variety of auction classes. We begin with the first-price auction, in both single-item and combinatorial settings. Our guarantees can be used by display ad exchanges, for instance, to measure the extent to which incentive compatibility will be compromised if the exchange transitions to using first-price auctions (Parkin, 2018; Harada, 2018). In the single-item setting, we prove that the difference between our estimate and the true incentive compatibility approximation factor shrinks at a rate of $\widetilde{O}(1/\sqrt{N})$, with polynomial dependence on $n$ and $\kappa$, where $n$ is the number of bidders, $N$ is the number of samples, and $[0, \kappa]$ contains the range of the density functions defining the agents' type distribution. We prove the same bound for the second-price auction with spiteful bidders (Brandt et al., 2007; Morgan et al., 2003; Sharma and Sandholm, 2010; Tang and Sandholm, 2012), where each bidder's utility not only increases when his surplus increases but also decreases when the other bidders' surpluses increase.
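As a concrete reference point for the two single-item mechanisms just mentioned, the following sketch computes a bidder's utility under the first-price rule and under a second-price rule with spiteful bidders. The tie-breaking convention and the spite coefficient `alpha` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def first_price_utility(values, bids, i):
    """Utility of bidder i in a sealed-bid first-price single-item auction.
    The highest bidder wins and pays their own bid; ties go to the
    lowest-indexed bidder (an assumed convention)."""
    winner = int(np.argmax(bids))
    return values[i] - bids[i] if winner == i else 0.0

def spiteful_second_price_utility(values, bids, i, alpha=0.5):
    """Utility of a spiteful bidder i in a second-price auction: the
    winner pays the second-highest bid, and a losing spiteful bidder
    suffers disutility proportional (via the hypothetical spite
    coefficient alpha) to the winner's surplus."""
    order = np.argsort(bids)[::-1]            # bidders sorted by bid, descending
    winner, runner_up = int(order[0]), int(order[1])
    surplus = values[winner] - bids[runner_up]
    return surplus if winner == i else -alpha * surplus
```

Note that under the first-price rule, bidding truthfully (`bids[i] == values[i]`) yields zero utility even upon winning, which is exactly why bidders shade their bids and why the mechanism is manipulable.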
In a similar direction, we analyze the class of generalized second-price auctions (Edelman et al., 2007), where sponsored search slots are for sale. The mechanism designer assigns a real-valued weight per bidder, collects a bid per bidder indicating their value per click, and allocates the slots in order of the bidders' weighted bids. In this setting, we prove that the difference between our incentive compatibility estimate and the true incentive compatibility approximation factor again shrinks at a rate of $\widetilde{O}(1/\sqrt{N})$, up to problem-specific factors.
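The weighted ranking and pricing rule just described can be sketched as follows; the convention that the lowest-ranked bidder pays zero (i.e., no reserve price) is an assumption made for illustration.

```python
def gsp_outcome(weights, bids):
    """Simplified generalized second-price auction: rank bidders by
    weighted bid (weight * bid); the bidder in slot k pays, per click,
    the smallest bid that would keep them in slot k, i.e. the next
    slot's weighted bid divided by their own weight."""
    scores = [w * b for w, b in zip(weights, bids)]
    ranking = sorted(range(len(bids)), key=lambda i: -scores[i])
    prices = {}
    for slot, i in enumerate(ranking):
        if slot + 1 < len(ranking):
            prices[i] = scores[ranking[slot + 1]] / weights[i]
        else:
            prices[i] = 0.0  # assumed: no reserve price for the last slot
    return ranking, prices
```

With equal weights this reduces to the familiar rule that each slot's price is the next-highest bid, which is incentive compatible only in the single-slot case.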
We also analyze multi-parameter mechanisms beyond the first-price combinatorial auction, namely, the uniform-price and discriminatory auctions, which are used extensively in markets around the world (Krishna, 2002). In both, the auctioneer has $m$ identical units of a single good to sell. Each bidder submits $m$ bids indicating their value for each additional unit of the good. The number of units the auctioneer allocates to each bidder equals the number of bids that bidder has among the top $m$ bids. In both cases, we prove that the difference between our incentive compatibility estimate and the true incentive compatibility approximation factor again shrinks at a rate of $\widetilde{O}(1/\sqrt{N})$, up to problem-specific factors.
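To make the two multi-unit payment rules concrete, here is a minimal sketch. The choice of the highest losing bid as the uniform clearing price is one common convention (assumed here; other variants use the lowest winning bid), and tie-breaking among equal bids is arbitrary.

```python
def multiunit_outcome(bid_matrix, m, rule="uniform"):
    """Allocate m identical units given each bidder's vector of marginal
    bids (bid_matrix[i][j] = bidder i's bid for their (j+1)-st unit).

    Each bidder wins as many units as they have bids among the top m.
    Payments: 'discriminatory' -> pay your own winning bids;
    'uniform' -> every unit is priced at the highest losing bid."""
    flat = [(bid, i) for i, row in enumerate(bid_matrix) for bid in row]
    flat.sort(reverse=True)                       # all bids, highest first
    units = [0] * len(bid_matrix)
    payments = [0.0] * len(bid_matrix)
    clearing = flat[m][0] if len(flat) > m else 0.0  # highest losing bid
    for bid, i in flat[:m]:                       # the m winning bids
        units[i] += 1
        payments[i] += bid if rule == "discriminatory" else clearing
    return units, payments
```

Running both rules on the same bids shows why only the payment rule, not the allocation, distinguishes the two auctions.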
A strength of our estimation techniques is that they are application-agnostic. For example, they can be used as a tool in incremental mechanism design (Conitzer and Sandholm, 2007), a subfield of automated mechanism design (Conitzer and Sandholm, 2002; Sandholm, 2003), where the mechanism designer gradually adds incentive compatibility constraints to her optimization problem until she has met a desired incentive compatibility guarantee. One line of work in the spirit of incremental mechanism design has studied mechanism design via deep learning (Feng et al., 2018; Dütting et al., 2017; Golowich et al., 2018). The learning algorithm receives samples from the distribution over agents' types. The resulting allocation and payment functions are characterized by neural networks, and thus the corresponding mechanism may not be incentive compatible. In an attempt to make these mechanisms nearly incentive compatible, the authors of these works add constraints to the deep learning optimization problem enforcing that the resulting mechanism be incentive compatible over a set of buyer values sampled from the underlying, unknown distribution. However, they provide no guarantees indicating how far the resulting mechanism is from incentive compatible. One of the goals of this paper is to provide techniques for relating a mechanism's empirical incentive compatibility approximation factor to its expected incentive compatibility approximation factor.
Key challenges.
To prove our guarantees, we must estimate the value $\epsilon$ defined such that no agent can misreport his type in order to improve his expected utility by more than $\epsilon$, no matter his true type. We propose estimating $\epsilon$ by measuring the extent to which an agent can improve his utility by misreporting his type on average over the samples, when his true and reported types are restricted to a finite subset of the type space.² We denote this estimate as $\hat{\epsilon}$. The challenge is that by searching over a subset of the type space, we might miss pairs of types $\theta_i$ and $\hat{\theta}_i$ where an agent with type $\theta_i$ can greatly improve his expected utility by misreporting his type as $\hat{\theta}_i$. Indeed, utility functions are often volatile in mechanism design settings. For example, under the first- and second-price auctions, nudging an agent's bid from below the other agents' largest bid to above it will change the allocation, causing a jump in utility. Thus, there are two questions we must address: which finite subset should we search over, and how do we relate $\hat{\epsilon}$ to $\epsilon$?

²This is the approach also taken in mechanism design via deep learning (Dütting et al., 2017; Feng et al., 2018; Golowich et al., 2018), but without guarantees.
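The estimate described above can be sketched as follows, for a single agent. The `utility(true_type, reported_type, others)` signature is a hypothetical stand-in for whatever mechanism is being audited, and the sampled profiles play the role of the other agents' types.

```python
import itertools
import numpy as np

def empirical_ic_estimate(utility, cover, sampled_profiles):
    """Largest average gain any true/reported type pair in the finite
    cover achieves, averaged over sampled profiles of the other agents.
    `utility(v, v_hat, others)` is a hypothetical interface for the
    agent's utility under the mechanism being audited."""
    best = 0.0
    for v, v_hat in itertools.product(cover, repeat=2):
        gain = np.mean([utility(v, v_hat, o) - utility(v, v, o)
                        for o in sampled_profiles])
        best = max(best, gain)
    return float(best)
```

With a first-price utility and a coarse cover, the estimate reflects only the misreports expressible on the cover, which is precisely the source of the discretization error analyzed in Section 3.1.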
We provide two approaches to constructing the cover. The first is to run a greedy procedure based on a classic algorithm from learning theory. This approach is extremely versatile: it provides strong guarantees no matter the setting. However, depending on the domain, it may be difficult to implement. Meanwhile, implementing our second approach is straightforward: the cover is simply a uniform grid over the type space (assuming the type space equals $[0,1]^d$ for some integer $d$). The efficacy of this approach depends on a "niceness" property that holds under mild assumptions. To analyze this second approach, we must understand how the edge-length of the grid affects our error bound relating $\hat{\epsilon}$ to $\epsilon$. To do so, we rely on the notion of dispersion, introduced by Balcan et al. (2018a) in the context of online and batch learning as well as private optimization. Roughly speaking, a set of piecewise Lipschitz functions is dispersed if every ball of radius $w$ in the domain contains discontinuities of only a few of the functions. Given a set of samples from the distribution over agents' types, we analyze the set of functions measuring the utility of an agent with type $\theta_i$ and reported type $\hat{\theta}_i$ when the other agents' true and misreported types are represented by one of the samples. We show that if these functions are dispersed, we can use a grid with edge-length $w$ to discretize the agents' type space. We then prove that if the intrinsic complexity of the agents' utility functions is not too large (as measured by the learning-theoretic notion of pseudo-dimension (Pollard, 1984)), then $\hat{\epsilon}$ quickly converges to $\epsilon$ as the number of samples grows.
Finally, we show that for a wide range of mechanism classes, dispersion holds under mild assumptions. As we describe in Section 3.1.2, this requires us to prove that with high probability, each function sequence from an infinite family of sequences is dispersed. This facet of our analysis is notably different from prior research by Balcan et al. (2018a): in their applications, it is enough to show that with high probability, a single, finite sequence of functions is dispersed. Our proofs thus necessitate that we carefully examine the structure of the utility functions that we analyze.

1.2 Additional related research
Sample complexity of revenue maximization.
A long line of research has studied revenue maximization via machine learning from a theoretical perspective (Balcan et al., 2008; Alon et al., 2017; Elkind, 2007; Cole and Roughgarden, 2014; Huang et al., 2015; Medina and Mohri, 2014; Morgenstern and Roughgarden, 2015; Roughgarden and Schrijvers, 2016; Devanur et al., 2016; Gonczarowski and Nisan, 2017; Bubeck et al., 2017; Morgenstern and Roughgarden, 2016; Balcan et al., 2016; Syrgkanis, 2017; Medina and Vassilvitskii, 2017; Balcan et al., 2018b; Gonczarowski and Weinberg, 2018; Cai and Daskalakis, 2017). The mechanism designer receives samples from the type distribution which she uses to find a mechanism that is, ideally, nearly optimal in expectation. That research has only studied incentive compatible mechanism classes. Moreover, in this paper, it is not enough to provide generalization guarantees; we must both compute our estimate of the incentive compatibility approximation factor and bound our estimate's error. This error concerns utility functions rather than revenue functions. These factors in conjunction mean that our research differs significantly from prior research on generalization guarantees in mechanism design.

In a related direction, Chawla et al. (2014, 2016, 2017) study counterfactual revenue estimation. Given two auctions, they provide techniques for estimating one auction's equilibrium revenue from the other auction's equilibrium bids. They also study social welfare in this context. Thus, their research is tied to selling mechanisms, whereas we study more general mechanism design problems from an application-agnostic perspective.
Strategyproofness in the large.
Azevedo and Budish (2018) propose a variation on approximate incentive compatibility called strategyproofness in the large (SPL). SPL requires that it is approximately optimal for agents to report their types truthfully in sufficiently large markets. As in our paper, SPL is a condition on ex-interim incentive compatibility. The authors argue that SPL approximates, in large markets, attractive properties of a mechanism such as strategic simplicity and robustness. They categorize a number of mechanisms as either SPL or not. For example, they show that the discriminatory auction is manipulable in the large whereas the uniform-price auction is SPL. Measuring a mechanism's SPL approximation factor requires knowledge of the distribution over agents' types, whereas we only require sample access to this distribution. Moreover, we do not make any large-market assumptions: our guarantees hold regardless of the number of agents.
Comparing mechanisms by their vulnerability to manipulation.
Pathak and Sönmez (2013) analyze ex-post incentive compatibility without any connection to approximate incentive compatibility. They say that one mechanism $M$ is at least as manipulable as another mechanism $M'$ if every type profile that is vulnerable to manipulation under $M'$ is also vulnerable to manipulation under $M$. They apply their formalism in the context of school assignment mechanisms, the uniform-price auction, the discriminatory auction, and several keyword auctions. We do not study ex-post approximate incentive compatibility because it is a worst-case, distribution-independent notion. Therefore, we cannot hope to measure an ex-post approximation factor using samples from the agents' type distribution. Rather, we are concerned with providing data-dependent bounds on the ex-interim and ex-ante approximation factors. Another major difference is that our work provides quantitative results on manipulability while theirs provides boolean comparisons as to the relative manipulability of mechanisms. Finally, our measure applies to all mechanisms, while their measure cannot rank all mechanisms because in many settings, pairs of mechanisms are incomparable according to their boolean measure.
Incentive compatibility from a buyer’s perspective.
Lahaie et al. (2018) also provide tools for estimating approximate incentive compatibility, but from the buyer’s perspective rather than the mechanism designer’s perspective. As such, the type of information available to their estimation tools versus ours is different. Moreover, they focus on ad auctions, whereas we study mechanism design in general and apply our techniques to a wide range of settings and mechanisms.
2 Preliminaries and notation
There are $n$ agents who each have a type. We denote agent $i$'s type as $\theta_i$, which is an element of a (potentially infinite) set $\Theta_i$. A mechanism takes as input the agents' reported types, which it uses to choose an outcome. We denote agent $i$'s reported type as $\hat{\theta}_i$. We denote all agents' types as $\theta = (\theta_1, \dots, \theta_n)$ and reported types as $\hat{\theta} = (\hat{\theta}_1, \dots, \hat{\theta}_n)$. For an agent $i$, we use the standard notation $\theta_{-i}$ to denote all agents' types except agent $i$'s. Using this notation, we denote the type profile representing all agents' types as $\theta = (\theta_i, \theta_{-i})$. Similarly, $\hat{\theta} = (\hat{\theta}_i, \hat{\theta}_{-i})$. We assume there is a distribution $\mathcal{D}$ over all agents' types, and thus the support of $\mathcal{D}$ is contained in $\Theta = \Theta_1 \times \cdots \times \Theta_n$. We use $\mathcal{D}_{-i}$ to denote the conditional distribution over $\theta_{-i}$ given $\theta_i$, so the support of $\mathcal{D}_{-i}$ is contained in $\Theta_{-i} = \times_{j \neq i} \Theta_j$. We assume that we can draw samples independently from $\mathcal{D}$. This is the same assumption made in a long line of work on mechanism design via machine learning (Balcan et al., 2008; Alon et al., 2017; Elkind, 2007; Cole and Roughgarden, 2014; Huang et al., 2015; Medina and Mohri, 2014; Morgenstern and Roughgarden, 2015; Roughgarden and Schrijvers, 2016; Devanur et al., 2016; Gonczarowski and Nisan, 2017; Bubeck et al., 2017; Morgenstern and Roughgarden, 2016; Balcan et al., 2016; Syrgkanis, 2017; Medina and Vassilvitskii, 2017; Balcan et al., 2018b; Gonczarowski and Weinberg, 2018; Cai and Daskalakis, 2017; Dütting et al., 2017; Feng et al., 2018; Golowich et al., 2018; Likhodedov and Sandholm, 2004, 2005; Sandholm and Likhodedov, 2015).
Given a mechanism $M$ and agent $i$, we use the notation $u_i^M(\theta, \hat{\theta})$ to denote the utility agent $i$ receives when the agents have types $\theta$ and reported types $\hat{\theta}$. We assume it maps to $[-1, 1]$. When $\theta = \hat{\theta}$, we use the simplified notation $u_i^M(\theta)$.
At a high level, a mechanism is incentive compatible if no agent can ever increase her utility by misreporting her type. A mechanism is $\epsilon$-incentive compatible if each agent can increase her utility by an additive factor of at most $\epsilon$ by misreporting her type (Kothari et al., 2003; Archer et al., 2004; Conitzer and Sandholm, 2007; Dekel et al., 2010; Lubin and Parkes, 2012; Mennle and Seuken, 2014; Dütting et al., 2015; Azevedo and Budish, 2018; Feng et al., 2018; Golowich et al., 2018; Dütting et al., 2017). In the main body, we concentrate on ex-interim approximate incentive compatibility (Azevedo and Budish, 2018; Lubin and Parkes, 2012). A mechanism $M$ is ex-interim $\epsilon$-incentive compatible if for each agent $i$ and all $\theta_i, \hat{\theta}_i \in \Theta_i$, agent $i$ with type $\theta_i$ can increase her expected utility by an additive factor of at most $\epsilon$ by reporting her type as $\hat{\theta}_i$, so long as the other agents report truthfully. In other words,
$$\mathbb{E}_{\theta_{-i} \sim \mathcal{D}_{-i}}\left[u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big)\right] \le \mathbb{E}_{\theta_{-i} \sim \mathcal{D}_{-i}}\left[u_i^M(\theta_i, \theta_{-i})\right] + \epsilon.$$
In Appendix F, we study ex-ante approximate incentive compatibility, where the above definition holds in expectation over the agent's true type $\theta_i$ as well.
3 Estimating approximate exinterim incentive compatibility
In this section, we show how to estimate the ex-interim incentive compatibility approximation guarantee using data. We assume there is an unknown distribution $\mathcal{D}$ over agents' types, and we operate under the common assumption (Lubin and Parkes, 2012; Azevedo and Budish, 2018; Cai and Daskalakis, 2017; Yao, 2014; Cai et al., 2016; Goldner and Karlin, 2016; Babaioff et al., 2017; Hart and Nisan, 2012) that the agents' types are independently distributed. In other words, for each agent $i$, there exists a distribution $\mathcal{D}_i$ over $\Theta_i$ such that $\mathcal{D} = \mathcal{D}_1 \times \cdots \times \mathcal{D}_n$. (In Appendix F, we extend our analysis to approximate ex-ante incentive compatibility under no assumption about the underlying distribution, which is a weaker notion of incentive compatibility, but also a weaker assumption on the distribution.) For each agent $i$, we receive a set $S_i$ of samples independently drawn from $\mathcal{D}_i$; we write $S_{-i}$ for the induced set of $N$ sampled profiles of the other agents' types. For each mechanism $M$ from a class $\mathcal{M}$, we show how to use the samples to estimate a value $\hat{\epsilon}$ such that:
With probability $1 - \delta$ over the draw of the sets of samples $S_1, \dots, S_n$, for any agent $i$ and all pairs $\theta_i, \hat{\theta}_i \in \Theta_i$,
$$\mathbb{E}_{\theta_{-i} \sim \mathcal{D}_{-i}}\left[u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big)\right] \le \mathbb{E}_{\theta_{-i} \sim \mathcal{D}_{-i}}\left[u_i^M(\theta_i, \theta_{-i})\right] + \hat{\epsilon} + \eta,$$
where $\eta$ is an error term that converges to zero as the number of samples grows.
To this end, one simple approach, informally, is to estimate $\hat{\epsilon}$ by measuring the extent to which any agent with any type can improve his utility by misreporting his type, averaged over all profiles in $S_{-i}$. In other words, we can estimate $\hat{\epsilon}$ by solving the following optimization problem:
$$\hat{\epsilon} = \max_{i \in [n]}\ \sup_{\theta_i, \hat{\theta}_i \in \Theta_i}\ \frac{1}{N} \sum_{\theta_{-i} \in S_{-i}} \Big( u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big) - u_i^M(\theta_i, \theta_{-i}) \Big) \qquad (1)$$
Unfortunately, in full generality, there might not be a finite-time procedure to solve this optimization problem, so in Section 3.1, we propose more nuanced approaches based on optimizing over finite subsets of $\Theta_i$. As a warm-up and a building block for our main theorems in that section, we prove that with probability $1 - \delta$, for all mechanisms $M \in \mathcal{M}$,
$$|\epsilon - \hat{\epsilon}| \le \eta, \qquad (2)$$
where $\eta$ is an error term that converges to zero as the number of samples grows. Its convergence rate depends on the intrinsic complexity of the utility functions corresponding to the mechanisms in $\mathcal{M}$, which we formalize using the learning-theoretic tool pseudo-dimension. We define pseudo-dimension below for an abstract class $\mathcal{F}$ of functions mapping a domain $\mathcal{X}$ to $[-1, 1]$.
[Pseudo-dimension (Pollard, 1984)] Let $X = \{x_1, \dots, x_N\} \subseteq \mathcal{X}$ be a set of elements from the domain of $\mathcal{F}$ and let $z_1, \dots, z_N \in \mathbb{R}$ be a set of targets. We say that $z_1, \dots, z_N$ witness the shattering of $X$ by $\mathcal{F}$ if for all subsets $T \subseteq X$, there exists some function $f \in \mathcal{F}$ such that for all $x_j \in T$, $f(x_j) \le z_j$, and for all $x_j \notin T$, $f(x_j) > z_j$. If there exists some vector $(z_1, \dots, z_N)$ that witnesses the shattering of $X$ by $\mathcal{F}$, then we say that $X$ is shatterable by $\mathcal{F}$. Finally, the pseudo-dimension of $\mathcal{F}$, denoted $\mathrm{Pdim}(\mathcal{F})$, is the size of the largest set that is shatterable by $\mathcal{F}$.

Theorem 3 provides an abstract generalization bound in terms of pseudo-dimension.

[Pollard (1984)] Let $\mathcal{D}$ be a distribution over $\mathcal{X}$. With probability $1 - \delta$ over the draw of $x_1, \dots, x_N \sim \mathcal{D}$, for all $f \in \mathcal{F}$,
$$\left| \frac{1}{N} \sum_{j=1}^{N} f(x_j) - \mathbb{E}_{x \sim \mathcal{D}}[f(x)] \right| \le \eta, \quad \text{where } \eta = O\left(\sqrt{\frac{1}{N}\left(\mathrm{Pdim}(\mathcal{F}) + \ln\frac{1}{\delta}\right)}\right).$$

We now use Theorem 3 to prove that the error term $\eta$ in Equation (2) converges to zero as $N$ increases. To this end, for any mechanism $M$, any agent $i$, and any pair of types $\theta_i, \hat{\theta}_i \in \Theta_i$, let $f_{M, \theta_i, \hat{\theta}_i}$ be the function that maps the types of the other agents to the utility of agent $i$ with type $\theta_i$ and reported type $\hat{\theta}_i$ when the other agents report their types truthfully. In other words, $f_{M, \theta_i, \hat{\theta}_i}(\theta_{-i}) = u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big)$. Let $\mathcal{F}_i$ be the set of all such functions defined by mechanisms from the class $\mathcal{M}$. In other words, $\mathcal{F}_i = \big\{ f_{M, \theta_i, \hat{\theta}_i} : M \in \mathcal{M},\ \theta_i, \hat{\theta}_i \in \Theta_i \big\}$. We now analyze the convergence rate of the error term $\eta$. The full proof is in Appendix C.
With probability $1 - \delta$ over the draw of the sets $S_1, \dots, S_n$, for all mechanisms $M \in \mathcal{M}$ and agents $i$, the expected utility gain from any misreport $\hat{\theta}_i$ by an agent with type $\theta_i$ differs from its empirical counterpart (averaged over $S_{-i}$) by at most $\eta_i$,

where $\eta_i = O\big(\sqrt{(d_i + \ln(n/\delta))/N}\big)$ and $d_i = \mathrm{Pdim}(\mathcal{F}_i)$. In particular, $|\epsilon - \hat{\epsilon}| \le \max_i \eta_i$.
Proof sketch.
Fix an arbitrary bidder $i$. By Theorem 3, we know that with probability $1 - \delta$, for every pair of types $\theta_i, \hat{\theta}_i \in \Theta_i$ and every mechanism $M \in \mathcal{M}$, the expected utility of agent $i$ with type $\theta_i$ and reported type $\hat{\theta}_i$ (in expectation over $\theta_{-i} \sim \mathcal{D}_{-i}$) is close to his average utility (averaged over $S_{-i}$). We use this fact to show that when agent $i$ misreports his type, he cannot increase his expected utility by more than an additive factor of $\hat{\epsilon} + \eta_i$. ∎
3.1 Incentive compatibility guarantees via finite covers
In the previous section, we presented an empirical estimate of the ex-interim incentive compatibility approximation factor (Equation (1)) and we showed that it quickly converges to the true approximation factor. However, there may not be a finite-time procedure for computing Equation (1) in its full generality, restated below:
$$\hat{\epsilon} = \max_{i \in [n]}\ \sup_{\theta_i, \hat{\theta}_i \in \Theta_i}\ \frac{1}{N} \sum_{\theta_{-i} \in S_{-i}} \Big( u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big) - u_i^M(\theta_i, \theta_{-i}) \Big) \qquad (3)$$
In this section, we address that challenge. A simple alternative approach is to fix a finite cover of $\Theta_i$, which we denote as $C_i$, and approximate Equation (3) by measuring the extent to which any agent can improve his utility by misreporting his type when his true and reported types are elements of the cover $C_i$, averaged over all profiles in $S_{-i}$. In other words, we estimate Equation (3) as:
$$\hat{\epsilon}_C = \max_{i \in [n]}\ \max_{\theta_i, \hat{\theta}_i \in C_i}\ \frac{1}{N} \sum_{\theta_{-i} \in S_{-i}} \Big( u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big) - u_i^M(\theta_i, \theta_{-i}) \Big) \qquad (4)$$
This is the approach also taken in the recent line of research on mechanism design via deep learning (Dütting et al., 2017; Feng et al., 2018; Golowich et al., 2018). This raises two natural questions: how do we select the cover $C_i$, and how close are the optimal solutions to Equations (3) and (4)? We provide two simple, intuitive approaches to selecting the cover $C_i$. The first is to run a greedy procedure (see Section 3.1.1) and the second is to create a uniform grid over the type space (assuming $\Theta_i = [0,1]^d$ for some integer $d$; see Section 3.1.2).
3.1.1 Covering via a greedy procedure
In this section, we show how to construct the cover of $\Theta_i$ greedily, based on a classic learning-theoretic algorithm. We then show that when we use the cover to estimate the incentive compatibility approximation factor (via Equation (4)), the estimate quickly converges to the true approximation factor. This greedy procedure is summarized by Algorithm 1.
For any pair $\theta_i, \hat{\theta}_i \in \Theta_i$, we use the notation $\mathbf{u}_{\theta_i, \hat{\theta}_i}$ to denote the vector of agent $i$'s utilities $\big(u_i^M((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i}))\big)_{\theta_{-i} \in S_{-i}}$ on the sampled profiles. To simplify notation, let $V = \{\mathbf{u}_{\theta_i, \hat{\theta}_i} : \theta_i, \hat{\theta}_i \in \Theta_i\}$ be the set of vectors defined in Algorithm 1:

Note that the solution to Equation (3) can be written as a maximization over the vectors in $V$. The algorithm greedily selects a set of vectors $\hat{V} \subseteq V$, or equivalently, a set of type pairs $C_i$, as follows: while the set of vectors not yet within the accuracy parameter of a selected vector is nonempty, it chooses an arbitrary vector in the set, adds it to $\hat{V}$, and adds the pair $(\theta_i, \hat{\theta}_i)$ defining the vector to $C_i$. Classic results from learning theory (Anthony and Bartlett, 2009) guarantee that this greedy procedure will repeat for at most $T$ iterations, where $T$ is bounded in terms of the pseudo-dimension of the utility class and the accuracy parameter.
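A minimal sketch of the greedy selection follows, assuming we can enumerate a finite candidate pool of type pairs and compute each pair's vector of utilities on the samples (the enumeration itself is the computationally hard part noted in Section 3.1.2; the sup-norm distance and the accuracy parameter `gamma` are illustrative choices).

```python
import numpy as np

def greedy_cover(candidate_vectors, gamma):
    """Greedily pick type pairs whose sample-utility vectors are not yet
    within sup-norm distance gamma of a previously chosen vector.
    `candidate_vectors` maps type pairs to numpy arrays of utilities on
    the sampled profiles (a hypothetical finite stand-in for the full,
    possibly infinite, set of pairs)."""
    chosen = {}
    for pair, vec in candidate_vectors.items():
        # keep this pair only if no chosen vector already covers it
        if all(np.max(np.abs(vec - c)) > gamma for c in chosen.values()):
            chosen[pair] = vec
    return list(chosen.keys())
```

Every candidate vector ends up within `gamma` of some chosen vector, so maximizing the average utility gain over the chosen pairs loses at most `gamma` relative to maximizing over all candidates.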
We now relate the true incentive compatibility approximation factor to the solution to Equation (4) when the cover is constructed using Algorithm 1. The full proof is in Appendix C.
Given the sets $S_1, \dots, S_n$, a mechanism $M$, and accuracy parameter $\gamma$, let $C_i$ be the cover returned by Algorithm 1. With probability $1 - \delta$ over the draw of the sets $S_1, \dots, S_n$, for every mechanism $M \in \mathcal{M}$ and every agent $i$,
$$\epsilon \le \hat{\epsilon}_C + \gamma + \eta_i, \qquad (5)$$
where $\eta_i = O\big(\sqrt{(\mathrm{Pdim}(\mathcal{F}_i) + \ln(n/\delta))/N}\big)$. Moreover, with probability 1, the cover $C_i$ is finite.
3.1.2 Covering via a uniform grid
The greedy approach in Section 3.1.1 is extremely versatile: no matter the type space $\Theta_i$, when we use the resulting cover to estimate the incentive compatibility approximation factor (via Equation (4)), the estimate quickly converges to the true approximation factor. However, implementing the greedy procedure (Algorithm 1) might be computationally challenging because at each round, it is necessary to check if the set of uncovered vectors is nonempty and, if so, select a vector from the set. In this section, we propose an alternative, extremely simple approach to selecting a cover: using a uniform grid over the type space. The efficacy of this approach depends on a "niceness" property that holds under mild assumptions, as we prove in Section 3.2. Throughout this section, we assume that $\Theta_i = [0,1]^d$ for some integer $d$. We propose covering the type space using a grid $G$ over $[0,1]^d$, by which we mean a finite set of vectors in $[0,1]^d$ such that for all $\theta \in [0,1]^d$, there exists a vector $\theta' \in G$ such that $\|\theta - \theta'\|_\infty \le w$. For example, if $d = 1$, we could define $G = \{0, w, 2w, \dots, 1\}$. We will estimate the expected incentive compatibility approximation factor using Equation (4) with $C_i = G$. Throughout the rest of the paper, we discuss how the choice of $G$ affects the error bound. To do so, we will use the notion of dispersion, defined below.

[Balcan et al. (2018a)] Let $f_1, \dots, f_N$ be a set of functions where each $f_j$ is piecewise Lipschitz with respect to a norm $\|\cdot\|$ over a partition $\mathcal{P}_j$ of $\Theta$. We say that $\mathcal{P}_j$ splits a set $A$ if $A$ intersects with at least two sets in $\mathcal{P}_j$. The set of functions is $(w, k)$-dispersed if for every point $\theta \in \Theta$, the ball $B(\theta, w)$ of radius $w$ centered at $\theta$ is split by at most $k$ of the partitions $\mathcal{P}_1, \dots, \mathcal{P}_N$.
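A uniform grid with edge-length `w` over `[0, 1]^d` (the assumed type space) can be built in a few lines; clipping the last grid point to 1 is a minor convenience so the grid covers the whole cube even when `1/w` is not an integer.

```python
import itertools
import numpy as np

def uniform_grid(d, w):
    """Finite grid over [0, 1]^d such that every point of the cube is
    within w of some grid point in every coordinate."""
    axis = np.minimum(np.arange(0.0, 1.0 + w, w), 1.0)
    return list(itertools.product(axis, repeat=d))
```

For `d = 2` and `w = 0.5` this yields the nine points of `{0, 0.5, 1}^2`; shrinking `w` tightens the cover at the cost of roughly `(1 + 1/w)^d` grid points, which is the usual curse-of-dimensionality trade-off for this construction.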
The smaller $k$ is and the larger $w$ is, the more "dispersed" the functions' discontinuities are. Moreover, the more jump discontinuities a set of functions has, the more difficult it is to approximately optimize its average using a grid. We illustrate this phenomenon in Example 3.1.2.

Example 3.1.2. Suppose there are two agents and $M$ is the first-price single-item auction. For any trio of types $\theta_1, \hat{\theta}_1, \theta_2$, agent 1's utility is $(\theta_1 - \hat{\theta}_1) \cdot \mathbb{1}\{\hat{\theta}_1 > \theta_2\}$. Fix agent 1's type $\theta_1$. Figure 1(a) displays the average of the functions $\hat{\theta}_1 \mapsto u_1^M$ over one set of samples of agent 2's type, and Figure 1(b) displays the same average over a second set of samples whose elements are clustered more closely together.
In Figure 1, we evaluate each function on the grid $G$. In Figure 1(a), the maximum over $G$ better approximates the maximum over the full type space than it does in Figure 1(b). In other words, writing $\bar{u}_a$ and $\bar{u}_b$ for the averaged utility functions displayed in Figures 1(a) and 1(b),
$$\sup_{\hat{\theta}_1 \in [0,1]} \bar{u}_a(\hat{\theta}_1) - \max_{\hat{\theta}_1 \in G} \bar{u}_a(\hat{\theta}_1) \qquad (6)$$
is smaller than
$$\sup_{\hat{\theta}_1 \in [0,1]} \bar{u}_b(\hat{\theta}_1) - \max_{\hat{\theta}_1 \in G} \bar{u}_b(\hat{\theta}_1). \qquad (7)$$
Intuitively, this is because the functions we average over in Figure 1(a) are more dispersed than the functions we average over in Figure 1(b). The differences described by Equations (6) and (7) are represented by the shaded regions in Figures 1(a) and 1(b), respectively.
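The phenomenon in this example can be reproduced numerically. The sketch below uses a bidder with value 0.8 facing two sampled opposing bids (0.33 and 0.57, chosen arbitrarily for illustration): a report just above a sampled bid can beat every point of a coarse grid, because the grid can miss the jump discontinuities.

```python
import numpy as np

def avg_utility(v, samples):
    """Average first-price utility of a bidder with value v reporting b,
    over sampled opposing bids (two-bidder auction, higher bid wins)."""
    return lambda b: float(np.mean([(v - b) if b > o else 0.0
                                    for o in samples]))

u = avg_utility(0.8, [0.33, 0.57])
grid_max = max(u(i / 10) for i in range(11))  # grid {0, 0.1, ..., 1}
# bidding just above the sampled bid 0.33 beats every grid point,
# since the grid straddles the jump discontinuities at 0.33 and 0.57
assert u(0.34) > grid_max
```

Shrinking the grid's edge-length closes this gap, which is exactly the trade-off the dispersion analysis quantifies.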
We now state a helpful lemma which we use to prove this section’s main theorem. Informally, it shows that we can measure the average amount that any agent can improve his utility by misreporting his type, even if we discretize his type space, so long as his utility function applied to the samples demonstrates dispersion. The full proof is in Appendix C.
Suppose that for each agent $i$, there exist $L$, $w$, and $k$ such that with probability $1 - \delta$ over the draw of the sets $S_1, \dots, S_n$, for each mechanism $M \in \mathcal{M}$ and agent $i$, the following conditions hold:

1. For any type $\theta_i$, the functions $\big\{\hat{\theta}_i \mapsto u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big) : \theta_{-i} \in S_{-i}\big\}$ are piecewise $L$-Lipschitz and $(w, k)$-dispersed.

2. For any reported type $\hat{\theta}_i$, the functions $\big\{\theta_i \mapsto u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big) : \theta_{-i} \in S_{-i}\big\}$ are piecewise $L$-Lipschitz and $(w, k)$-dispersed.

Then with probability $1 - \delta$ over the draw of the sets $S_1, \dots, S_n$, for all mechanisms $M \in \mathcal{M}$ and agents $i$, the supremum over $\Theta_i$ in Equation (3) exceeds the maximum over the grid $G$ in Equation (4) by at most an additive term of
$$O\left(Lw + \frac{k}{N}\right). \qquad (8)$$
Proof sketch.
By definition of dispersion, the following conditions hold with probability $1 - \delta$:

Condition 1. For all mechanisms $M \in \mathcal{M}$, agents $i$, types $\theta_i$, and pairs of reported types $\hat{\theta}_i, \hat{\theta}_i'$, if $\hat{\theta}_i$ and $\hat{\theta}_i'$ are within distance $w$, then the averages over $S_{-i}$ of the corresponding utilities differ by at most $Lw + 2k/N$.

Condition 2. For all mechanisms $M \in \mathcal{M}$, agents $i$, reported types $\hat{\theta}_i$, and pairs of types $\theta_i, \theta_i'$, if $\theta_i$ and $\theta_i'$ are within distance $w$, then the averages over $S_{-i}$ of the corresponding utilities differ by at most $Lw + 2k/N$.

We claim that Inequality (8) holds so long as Conditions 1 and 2 hold. To see why, suppose they do both hold, and fix an arbitrary agent $i$, mechanism $M$, and pair of types $\theta_i, \hat{\theta}_i \in \Theta_i$. Consider the average amount agent $i$ with type $\theta_i$ can improve his utility by misreporting his type as $\hat{\theta}_i$:
$$\frac{1}{N} \sum_{\theta_{-i} \in S_{-i}} \Big( u_i^M\big((\theta_i, \theta_{-i}), (\hat{\theta}_i, \theta_{-i})\big) - u_i^M(\theta_i, \theta_{-i}) \Big). \qquad (9)$$
By definition of the grid $G$, we know there are points $\theta_i', \hat{\theta}_i' \in G$ such that $\|\theta_i - \theta_i'\|_\infty \le w$ and $\|\hat{\theta}_i - \hat{\theta}_i'\|_\infty \le w$. Based on Conditions 1 and 2, we show that if we "snap" $\theta_i$ to $\theta_i'$ and $\hat{\theta}_i$ to $\hat{\theta}_i'$, we will not change Equation (9) by more than an additive factor of $O(Lw + k/N)$. Since $\theta_i'$ and $\hat{\theta}_i'$ are elements of $G$, Inequality (8) holds so long as Conditions 1 and 2 hold. Since Conditions 1 and 2 hold with probability $1 - \delta$, the lemma statement holds. ∎
In Section 3.2, we prove that under mild assumptions, for a wide range of mechanisms, Conditions 1 and 2 in Lemma 1 hold with values of $L$, $w$, and $k$ for which the error term in Equation (8) is $\widetilde{O}(1/\sqrt{N})$, ignoring problem-specific multiplicands. Thus, we find that this error quickly converges to 0 as $N$ grows.
Theorem 3 and Lemma 1 immediately imply this section's main theorem. At a high level, it states that any agent's average utility gain when restricted to types on the grid is a close estimate of the true incentive compatibility approximation factor, so long as his utility function applied to the samples demonstrates dispersion. The full proof is in Appendix C.
Suppose that for each agent $i$, there exist $L$, $w$, and $k$ such that with probability $1 - \delta$ over the draw of the sets $S_1, \dots, S_n$, for each mechanism $M \in \mathcal{M}$ and agent $i$, the following conditions hold:

For any , the functions