Peeking Behind the Ordinal Curtain: Improving Distortion via Cardinal Queries

Georgios Amanatidis et al. · July 18, 2019

Aggregating the preferences of individuals into a collective decision is the core subject of study of social choice theory. In 2006, Procaccia and Rosenschein considered a utilitarian social choice setting, where the agents have explicit numerical values for the alternatives, yet they only report their linear orderings over them. To compare different aggregation mechanisms, Procaccia and Rosenschein introduced the notion of distortion, which quantifies the inefficiency of using only ordinal information when trying to maximize the social welfare, i.e., the sum of the underlying values of the agents for the chosen outcome. Since then, this research area has flourished and bounds on the distortion have been obtained for a wide variety of fundamental scenarios. However, the vast majority of the existing literature is focused on the case where nothing is known beyond the ordinal preferences of the agents over the alternatives. In this paper, we take a more expressive approach, and consider mechanisms that are allowed to further ask a few cardinal queries in order to gain partial access to the underlying values that the agents have for the alternatives. With this extra power, we design new deterministic mechanisms that achieve significantly improved distortion bounds and, in many cases, outperform the best-known randomized ordinal mechanisms. We paint an almost complete picture of the number of queries required to achieve specific distortion bounds.


1 Introduction

Social choice theory (Brandt et al., 2016) is concerned with aggregating the preferences of individuals into a joint decision. In an election, for instance, the winner should represent well (in some precise sense) the viewpoints of the voters. Similarly, the expenditure of public funds is typically geared towards projects that increase the well-being of society. Most traditional models assume that the preferences of individuals are expressed through ordinal preference rankings, where each agent sorts all alternatives from the most to the least favorable according to her. Underlying these ordinal preferences, it is often assumed that there exists a cardinal utility structure, which further specifies the intensity of the preferences (Von Neumann and Morgenstern, 1947; Bogomolnaia and Moulin, 2001; Barbera et al., 1998). That is, there exist numerical values that indicate how much an agent prefers an outcome to another. Given this cardinal utility structure, usually expressed via valuation functions, one can define meaningful quantitative objectives, with the most prominent one being the maximization of the utilitarian (or social) welfare, i.e., the sum of the values of the agents for the chosen outcome.

The main rationale justifying the dominance of ordinal preferences in the classical economics literature is that the task of asking individuals to express their preferences in terms of numerical values is arguably quite demanding for them. In contrast, performing simple comparisons between the different options is certainly more easily conceivable. To quantify how much the lack of cardinal information affects the maximization of quantitative objectives like the social welfare, Procaccia and Rosenschein (2006) defined the notion of distortion for mechanisms as the worst-case ratio between the optimal social welfare (which would be achievable using cardinal information) and the social welfare of the outcome selected by the mechanism, which has access only to the preference rankings of the agents. Following their agenda, a plethora of subsequent works studied the distortion of mechanisms in several different settings, such as normalized valuation functions (Boutilier et al., 2015), metric preferences (Anshelevich et al., 2018; Anshelevich and Postl, 2017), committee elections (Caragiannis et al., 2017), and participatory budgeting (Benade et al., 2017).

Somewhat surprisingly, the different variants of the distortion framework studied in this rich line of work differentiate between two extremes: we either have complete cardinal information or only ordinal information. Driven by the original motivation for using ordinal preferences, it seems quite meaningful to ask whether improved distortion guarantees can be obtained if one has access to limited cardinal information, especially in settings for which the worst-case distortion bounds are already quite discouraging (Boutilier et al., 2015). We formulate this idea via the use of cardinal queries, which elicit cardinal information from the agents. These queries can be as simple as asking the value of an agent for a possible outcome, or even asking an agent whether an outcome is at least a given factor better than some other outcome, according to her underlying valuation function. Note that questions of the latter form are much less demanding than eliciting a complete cardinal utility structure, and thus are much more realistic as an elicitation device (see also the discussion below).

In this paper, we enhance the original distortion setting of Procaccia and Rosenschein (2006) and Boutilier et al. (2015) on single-winner elections, by allowing the use of cardinal queries. In their setting, there are n agents that have cardinal values over m alternatives, and the goal is to elect a single alternative that (approximately) maximizes the social welfare, while having access only to ordinal information. Procaccia and Rosenschein (2006) proved a lower bound on the distortion of any deterministic mechanism when agents have unit-sum normalized valuation functions (i.e., the sum of the values of each agent for all possible alternatives is 1), which was later improved to Ω(m²) by Caragiannis et al. (2017). Under the same assumption, Boutilier et al. (2015) proved that the distortion of any (possibly randomized) mechanism is between Ω(√m) and O(√m · log* m). Here we show how – with only a limited number of cardinal queries – deterministic mechanisms can significantly outperform any mechanism that has access only to ordinal information, even randomized ones.

1.1 Our Contributions

We initiate the study of trade-offs between the number of cardinal queries per agent that a mechanism uses and the distortion that it can achieve. In particular, we show results of the following type:

The distortion of a mechanism that makes at most k queries per agent is at most f(k, m), for an appropriate function f.

What our results suggest is that we can drastically reduce the distortion by exploiting only a small amount of cardinal information.

Query Model

We consider two different types of cardinal queries, namely value queries and comparison queries.

  • A value query takes as input an agent and an alternative, and returns the agent’s value for that alternative.

  • A comparison query takes as input an agent, two alternatives a and b, and a real number d, and returns “yes” if the agent’s value for a is at least d times her value for b, and “no” otherwise.

Note that value queries are in general stronger than comparison queries, as they reveal much more detailed information. On the other hand, comparison queries are quite attractive as an elicitation device, since the cognitive complexity of the question that they pose is not much higher than that of forming a preference ranking. Additionally, comparison queries can also be interpreted under the original utility framework defined by Von Neumann and Morgenstern (1947). The idea there is that a cardinal scale for utility is possible because agents are capable of not only performing comparisons between alternatives, but also between lotteries over alternatives. For example, an agent should be able to tell whether she prefers alternative a with certainty, or alternative b with some probability p. Assuming risk-neutrality, this is equivalent to asking the comparison query with parameters (a, b, p).

It should be clear that upper bounds (distortion guarantees) for comparison queries are stronger than those for value queries, whereas lower bounds (inapproximability bounds) are stronger when proven for value queries. All of our lower bounds are for value queries, while our main upper bounds extend to comparison queries as well.

Results and Techniques

We warm up in Section 3 by using k simple prefix value queries per agent (i.e., we ask each agent about the first k positions of her preference ranking). By selecting the alternative with the highest social welfare restricted to the query answers (the revealed welfare), we obtain an improvement in the distortion that is linear in k. We show that this result is asymptotically optimal, among all mechanisms that use k prefix queries per agent.

In Section 4, we devise a class of more sophisticated mechanisms that achieve much improved trade-offs between the distortion and the number of queries. In particular, our class contains

  • a mechanism that achieves constant distortion using at most O(log² m) queries per agent, and

  • a mechanism that achieves a distortion of O(√m) using O(log m) queries, matching the performance of the best possible randomized mechanism in the setting of (Boutilier et al., 2015), and outperforming all known randomized mechanisms for that setting.

Our mechanisms are based on a binary search procedure, which, for every agent, finds the last alternative x in the agent’s preference ranking such that the agent’s value for x is at least α times her value for her most-preferred alternative, for some chosen parameter α. Then, the mechanism simulates the value of the agent for all alternatives that the agent ranks between her most-preferred alternative and x by her value for x, and outputs the alternative that maximizes the simulated welfare. By repeatedly applying this idea for appropriately chosen values of α, we explore the trade-offs between the distortion and the number of queries per agent.

In Section 5, we significantly improve on our result (second bullet above) for the fundamental case of very few queries per agent. We present a mechanism that achieves a distortion of O(√m), using only two queries per agent. The core idea behind this mechanism is choosing an appropriate threshold and then carefully querying the agents based on this threshold. First, the mechanism queries every agent at the first position, and the remaining queries (one per agent) are made in successive steps. During the j-th step, the mechanism queries about alternatives that are ranked at the first j positions by sufficiently many agents, but only if such a query is meaningful and possible; we never repeat a query and we never ask an agent more than twice. The query process terminates after a bounded number of steps and the mechanism returns an alternative with maximum revealed welfare. This result demonstrates that with the clever use of very limited cardinal information, one can outperform all possible mechanisms (even randomized) in the purely ordinal setting.

In Section 6 we extend the ideas of Section 4 to show that the mechanism which achieves constant distortion using O(log² m) value queries can actually be transformed into a mechanism that uses asymptotically the same number of comparison queries. In particular, we show how to approximate an agent’s value for her most-preferred alternative using only comparison queries.

In Section 7 we present several lower bounds on the achievable trade-offs between the number of queries and the distortion. These bounds follow from explicit instances in which we carefully define a single ordinal preference profile, as well as the cardinal information that may be revealed by the value queries of any mechanism. This information is defined in such a way that, no matter how the mechanism makes its selection, it is always possible to create a superconstant gap between the optimal social welfare and the social welfare of the winning alternative.

We conclude the paper in Section 8 with several interesting open problems, and a particular set of very challenging conjectures about the tight trade-offs between the number of queries and distortion.

An overview of our main results can be found in Table 1. An alternative representation of our results is given in Figure 1, which depicts the trade-offs between the number of queries and the distortion.

Remark 1 (Normalization assumptions).

We remark here that all of our upper bounds for value queries hold without any normalization assumption on the cardinal values, in contrast to the results of (Procaccia and Rosenschein, 2006) and almost all subsequent works in the related literature, which typically assume that values are normalized according to the unit-sum normalization. We do use the unit-sum normalization in Section 6, where we use comparison queries. Actually, our results hold even if one uses other reasonable normalizations. In particular, for the other common normalization assumption in the literature (Caragiannis et al., 2018; Feige and Tennenholtz, 2010; Filos-Ratsikas and Miltersen, 2014), the unit-range normalization, where the value of an agent for her most-preferred alternative is 1 and all other values are in the interval [0, 1], the results of Section 4 extend verbatim to the case of comparison queries. For the lower bounds, we prove bounds both for normalized and unrestricted values.

Remark 2 (Noisy queries).

Throughout this work we implicitly assume that agents can accurately answer all value or comparison queries. In fact, this is not necessary for any of our positive results! That is, we may assume that the answers to the queries are noisy, e.g., because it requires extra effort for the agents to precisely determine these answers. As long as each inaccurate answer is at most a (multiplicative) constant factor away from the truth, all our upper bound proofs go through, at the expense of worse constants. Note that lower bounds are stronger when proven for exact queries, as is the case here.

Number of queries Upper Bounds Lower Bounds
(ordinal, deterministic) (Caragiannis and Procaccia, 2011)   (Caragiannis et al., 2017)
(ordinal, randomized)   (Boutilier et al., 2015)   (Boutilier et al., 2015)
(value)   [-PRV, Theorem 1]   [Theorem 9]
  [Theorem 11]
, constant (value)   [-TRV, Theorem 6]   [Corollary 5]
(value)   [-TRV, Theorem 6]   [Corollary 5]
(value)   [-ARV, Corollary 2]
(value)   [-ARV, Corollary 2]
(comparison)   [-ARV, Corollary 4]
Table 1: An overview of the most important results in the paper. All our results are for deterministic mechanisms. Results for unit-sum valuation functions are highlighted; everything else is for unrestricted valuation functions.
Figure 1: A graphical representation of the trade-offs between the number of queries and the distortion. Points and lines in red represent upper bounds, while points and lines in blue represent lower bounds. The black points correspond to tight bounds, and the red square point corresponds to the distortion of the best-known randomized ordinal mechanism.

1.2 Related Work

The distortion framework was introduced by Procaccia and Rosenschein (2006), and has been studied subsequently in a series of papers, most prominently by Boutilier et al. (2015), who consider a general social choice setting, under the unit-sum normalization; this general model was also previously studied by Caragiannis and Procaccia (2011) who considered different methods to translate the values of the agents for the alternatives into rankings (embeddings), and more recently by Filos-Ratsikas et al. (2019) who bounded the distortion of deterministic mechanisms in district-based elections. A related model is that of distortion of social choice functions in a metric space, which was initiated by Anshelevich et al. (2018), and has since then been studied extensively (Anshelevich and Postl, 2017; Goel et al., 2017; Fain et al., 2019; Goel et al., 2018; Anshelevich and Zhu, 2018; Pierczynski and Skowron, 2019; Gross et al., 2017; Cheng et al., 2017, 2018; Feldman et al., 2016; Ghodsi et al., 2019; Borodin et al., 2019; Munagala and Wang, 2019). In this setting, there is no normalization of values (or costs), but the valuation (or cost) functions are assumed to satisfy the triangle inequality. Similar distortion frameworks, in a metric space or under normalizations, have also been studied for other related problems, such as matching and clustering (Anshelevich and Sekar, 2016; Abramowitz and Anshelevich, 2018; Anshelevich and Zhu, 2017, 2018; Filos-Ratsikas et al., 2014).

Two related variants of the problem are k-winner elections, where k alternatives are to be elected instead of one (Caragiannis et al., 2017; Benade et al., 2019), and participatory budgeting, where every alternative is associated with a cost, and one or more alternatives have to be elected in a manner that ensures that the total cost does not exceed a pre-specified budget constraint (Lu and Boutilier, 2011). Benade et al. (2017) studied the k-winner participatory budgeting problem, but interestingly, they considered a more expressive model for the preferences of the agents, compared to simple preference rankings. In particular, they considered the knapsack votes model of (Goel et al., 2016), rankings by value, rankings by value-for-money, and threshold votes. While the first three are not very relevant for our purposes, the latter can be thought of as a different type of (more expressive) query, in which a numerical value is specified, and every agent is asked to return the set of alternatives for which her value is above this threshold. Bhaskar et al. (2018) used a different model with randomly drawn thresholds to construct a randomized social choice function that approaches the best possible distortion as the number of agents approaches infinity.

Very recently, Mandal et al. (2019) studied a model related to ours, in which agents are asked to provide cardinal information, but there is a restriction on the number of bits to be communicated to the mechanism. Hence, they study trade-offs between the number of transmitted bits and distortion. This is markedly different from what we do here, as a query in their setting has access to the (approximate) values of an agent for many alternatives simultaneously, and is therefore much too expressive when translated into our setting. On the other hand, the setting of Mandal et al. (2019) does not assume “free” access to the ordinal preferences, which are also considered as part of the elicitation process. We consider our work complementary to theirs, as they are mostly motivated by the computational limitations of elicitation (corresponding to a communication complexity approach), whereas we are motivated by the cognitive limitations of eliciting cardinal values, as often highlighted in the classical literature of social choice (corresponding to a query complexity approach).

Finally, at the same time as and independently of our work, Abramowitz et al. (2019) also introduce a setting in which the mechanism designer has access to some cardinal information on top of the ordinal preferences. This enables the design of improved mechanisms in terms of distortion. While the motivation of their paper is the same as ours, the approaches are inherently different. Besides the fact that Abramowitz et al. (2019) study a metric distortion setting, whereas we study a general setting with valuation functions that are either unrestricted or normalized according to unit-sum, there is another fundamental distinction. The access to the cardinal information in (Abramowitz et al., 2019) is not via queries. Instead, it is given explicitly as part of the input in terms of a threshold, which allows the designer to know, for pairs of alternatives, the number of agents whose distance to one alternative is at most the threshold times their distance to the other alternative.

2 The model

We consider a standard social choice setting, in which there is a set A of m alternatives and a set N of n agents. Our goal is to elect a single alternative based on the preferences of the agents, which are expressed through valuation functions that map alternatives to non-negative real numbers. For notational convenience, we write v_i(x) to denote the cardinal value of agent i for alternative x, and refer to the matrix v of all these values as a valuation profile. We denote by V the set of all possible valuation profiles. Clearly, the valuation function of agent i also defines a preference ranking for her, i.e., a linear ordering of A in which x precedes y whenever v_i(x) > v_i(y); we assume that ties are broken according to a deterministic tie-breaking rule, e.g., according to a fixed global ordering of the alternatives. (It would be equivalent to allow ties at this point, obtain pre-linear orderings instead, and leave the tie-breaking to the mechanisms when necessary.) We refer to the collection of the agents’ preference rankings as an (ordinal) preference profile.

In this work, we consider the following two families of valuation functions:

  • Unrestricted valuation functions, which may take any non-negative real values.

  • Unit-sum valuation functions, which are such that the values of each agent i sum up to 1 over all alternatives, i.e., ∑_{x∈A} v_i(x) = 1.

The social welfare of an alternative x with respect to the valuation profile v is the total value of the agents for x, i.e., SW(x | v) = ∑_{i∈N} v_i(x). Our goal is to output one of the alternatives that maximize the social welfare, i.e., an alternative in argmax_{x∈A} SW(x | v). This is clearly a trivial task if one has full access to the valuation profile. However, we assume limited access to these cardinal values. In particular, we assume that we only have access to the preference profile and can also learn cardinal information by asking queries. We consider two types of queries: value queries that reveal the value of an agent for a given alternative, and comparison queries that reveal whether the value of an agent for an alternative is a given multiplicative factor larger than her value for some other alternative.
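To fix ideas, the following minimal Python sketch (ours, not part of the paper) illustrates these objects, assuming a valuation profile stored as one dictionary of values per agent; all names are hypothetical.

    # Hypothetical representation: values[i][x] is the value of agent i for alternative x.
    values = [
        {"a": 0.6, "b": 0.3, "c": 0.1},  # agent 0 (unit-sum: her values add up to 1)
        {"a": 0.2, "b": 0.5, "c": 0.3},  # agent 1
    ]

    def social_welfare(x, values):
        # SW(x | v): total value of the agents for alternative x.
        return sum(v[x] for v in values)

    def optimal_alternative(values):
        # An alternative maximizing the social welfare.
        return max(values[0], key=lambda x: social_welfare(x, values))

    def is_unit_sum(values, tol=1e-9):
        # Checks the unit-sum normalization: each agent's values sum to 1.
        return all(abs(sum(v.values()) - 1.0) <= tol for v in values)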

Definition 1.

Given a preference profile, a query about the underlying cardinal values is called

  • A value query, if it takes as input an agent i and an alternative x and returns the value v_i(x) of that agent for that alternative. This is implemented via an oracle function that, given i and x, returns v_i(x). We say that agent i is queried at position j if x is the alternative that she ranks j-th and we make the corresponding value query.

  • A comparison query, if it takes as input an agent i, two alternatives a and b, and a real number d, and returns yes if v_i(a) ≥ d · v_i(b), and no otherwise. This is implemented via an analogous oracle function.

Clearly, value queries reveal more information than comparison queries. Note that the information obtained by a comparison query can be obtained by at most two value queries. On the other hand, however, without any cardinal information or any normalization assumption, it is impossible to even approximate the information obtained by a value query using only comparison queries. In this sense, value queries are considerably stronger than comparison queries.
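As a minimal illustration of Definition 1 (using our own hypothetical function names, not the paper’s notation), the two oracles could look as follows; the last function reflects the observation that a comparison query is answered by at most two value queries.

    def value_query(values, i, x):
        # Reveals the value of agent i for alternative x.
        return values[i][x]

    def comparison_query(values, i, a, b, d):
        # Is the value of agent i for a at least d times her value for b?
        return values[i][a] >= d * values[i][b]

    def comparison_via_values(values, i, a, b, d):
        # The answer to a comparison query can be recovered from two value queries.
        return value_query(values, i, a) >= d * value_query(values, i, b)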

Definition 2.

A mechanism with access to a (value or comparison) oracle takes as input a preference profile and returns an alternative. In particular, it consists of the following two parts:

  • An algorithm that takes as input the preference profile, adaptively makes queries to the oracle, and returns the set of answers to these queries.

  • A mapping that takes as input the preference profile and the set of answers to the queries above, and outputs a single alternative. Such a mapping is called a social choice function.

By the description above, it is clear that the mechanism is free to choose the positions at which each agent will be queried, and those can depend not only on the preference profile, but also on the answers to the queries already asked. The performance of a mechanism is measured by its distortion.
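A mechanism in the sense of Definition 2 can then be sketched as follows, again under our own naming; rankings[i] is assumed to list the alternatives of agent i from most to least preferred, and oracle(i, x) stands for a value query.

    def run_mechanism(rankings, oracle, query_algorithm, social_choice_function):
        # Part 1: adaptively gather answers to (value or comparison) queries.
        answers = query_algorithm(rankings, oracle)
        # Part 2: the social choice function maps the ordinal profile and the answers to a winner.
        return social_choice_function(rankings, answers)

    def query_top_only(rankings, oracle):
        # Example querying algorithm: ask each agent only about her favorite alternative.
        return {(i, rnk[0]): oracle(i, rnk[0]) for i, rnk in enumerate(rankings)}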

Definition 3.

The distortion of a mechanism M is

dist(M) = sup_{v} [ max_{x∈A} SW(x | v) ] / SW(M(≻_v) | v),

where SW(x | v) is the social welfare of alternative x given the valuation profile v, ≻_v is the preference profile induced by v, and M(≻_v) is the output of the mechanism on input ≻_v (with oracle access to v).

Throughout our proofs, it will be useful to partition the social welfare of an alternative into two separate quantities, depending on the cardinal information we obtain from the queries. This is particularly relevant when we deal with value queries, but even for comparison queries we use a similar decomposition in Section 6.

Definition 4.

The revealed welfare of an alternative x is the contribution to its social welfare of the agents that have been queried for x via value queries, i.e., the sum of v_i(x) over all agents i for which the corresponding value query has been made. The remaining quantity is called the concealed welfare of x.
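In code, assuming answers maps each asked value query (agent, alternative) to the revealed value (our own hypothetical representation), the two quantities of Definition 4 could read:

    def revealed_welfare(x, answers):
        # Contribution to the welfare of x from agents queried about x.
        return sum(val for (i, y), val in answers.items() if y == x)

    def concealed_welfare(x, values, answers):
        # Contribution to the welfare of x from agents never queried about x.
        return sum(values[i][x] for i in range(len(values)) if (i, x) not in answers)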

3 Warm-Up: Mechanisms Using Fixed-Position Value Queries

We start the presentation of our technical results with the class of mechanisms that query every agent at the first k positions. A particular member of this class is the mechanism that uses the Range Voting (RV) social choice function to decide the outcome. Formally, RV takes as input the whole valuation profile and elects an alternative with maximum social welfare. In our case, since the valuation profile is not fully known, we deploy RV only on the revealed valuation profile, where any unknown value is assumed to be zero.

To be more specific, let N_j(x) be the set of agents that rank alternative x at position j. Our mechanism first queries every agent at each of the first k positions of her preference ranking. Then, it elects an alternative that maximizes the revealed welfare. We refer to this mechanism as k-Prefix Range Voting (Mechanism 1, k-PRV).

Mechanism 1: k-PRV
  for every agent i do
      for j = 1, …, k do
          make a value query to learn the value of agent i for her j-th favorite alternative
  let w be an alternative achieving the best revealed welfare
  return w
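The following Python sketch (ours) mirrors Mechanism 1 under the same hypothetical representations as in Section 2: rankings[i] lists the alternatives of agent i from most to least preferred and oracle(i, x) answers a value query.

    def k_prefix_range_voting(rankings, oracle, k):
        # Query every agent at each of her first k positions.
        answers = {}
        for i, rnk in enumerate(rankings):
            for x in rnk[:k]:
                answers[(i, x)] = oracle(i, x)
        # Elect an alternative with maximum revealed welfare (ties broken arbitrarily).
        alternatives = {x for rnk in rankings for x in rnk}
        def revealed(x):
            return sum(val for (i, y), val in answers.items() if y == x)
        return max(alternatives, key=revealed)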
Theorem 1.

The distortion of k-PRV is O(m/k), even for unrestricted valuation functions.

Proof.

Consider some instance with valuation profile v. Let o be an alternative that maximizes the social welfare according to v, and let w be the alternative that is elected by k-PRV. Recall that here the revealed welfare of any alternative x, denoted RW(x), is the total value for x of the agents that rank x among their first k positions. Since SW(w | v) ≥ RW(w), it suffices to bound SW(o | v) in terms of RW(w). To this end, we will bound the revealed and the concealed welfare of o separately.

Since w is an alternative that maximizes the revealed welfare, we have that RW(x) ≤ RW(w) for every alternative x, and therefore

RW(o) ≤ RW(w).   (1)

Now, consider the agents that rank o below position k. They are not queried about their value for o, and therefore contribute to the concealed welfare of o. For every such agent i there exist k different alternatives that she ranks above o, for each of which she has value at least v_i(o) and about all of which she is queried. Consequently, the concealed welfare of o is at most

(1/k) · ∑_x RW(x) ≤ (m/k) · RW(w).   (2)

The statement now follows by (1) and (2). ∎

Clearly, the distortion guarantee of k-PRV improves linearly in the number of queries k. Nevertheless, it is interesting to see for which values of k the mechanism achieves distortion O(√m) and constant distortion. These are given by the following statement.

Corollary 1.

The distortion of k-PRV is O(√m) for k = Θ(√m), and O(1) for k = Θ(m).

Next, we show that, in terms of distortion, k-PRV is the best possible mechanism among those that make at most k prefix value queries.

Theorem 2.

Any mechanism that makes k prefix value queries per agent has distortion Ω(m/k), even for unit-sum valuation functions.

Proof.

Consider an instance with n agents and m = n alternatives, among which two distinguished alternatives a and b. We define the following ordinal profile:

  • The first k favorite alternatives of agent i are k consecutive alternatives in a fixed cyclic ordering of all alternatives, shifted by one from agent to agent and listed in decreasing order of preference, where all indices are considered modulo m. Hence, all alternatives appear exactly once at each of the first k positions.

  • Alternatives a and b are placed at position k + 1 in the agent rankings in which they do not appear at the first k positions, each of them an equal number of times. Observe that, by definition, a and b do not appear together at the first k positions in any preference ranking, and there are multiple ways to decide in which rankings each of them appears at position k + 1; any such construction works for our purposes.

  • For every agent, the remaining alternatives are arbitrarily ordered at positions k + 2 up to m.

See Table 2 for a specific example of the ordinal profile.

 

Table 2: An example of the ordinal profile used in the proof of Theorem 2.

The valuation profile is such that each agent has value 1/(k + 1) for each of her first k favorite alternatives. It is without loss of generality to assume that any mechanism that knows the ordinal information of this instance and also makes k prefix value queries must elect either a or b. To see this, first notice that, given the revealed cardinal information, the revealed welfare of all alternatives is the same. Further, given the particular preference profile, it is easy to always complete the valuation profile in a way that guarantees that no alternative has more concealed welfare than a and b; indeed, the two possible valuation profiles we consider have this property.

So, assume that the mechanism selects alternative a (the case of b being completely symmetric). Now the remaining values of the agents are such that the agents who rank b at position k + 1 have value 1/(k + 1) for b and 0 for the remaining alternatives, while the agents who rank a at position k + 1 have value 1/((k + 1)(m − k)) for all alternatives at positions k + 1 up to m.

Given this valuation profile, the social welfare of the winner a is at most 1, since a receives value 1/(k + 1) from each of the k agents that rank her among their first k positions and only negligible value from the rest.

In contrast, the social welfare of the optimal alternative b is at least k/(k + 1) plus 1/(k + 1) for each of the Ω(m − k) agents that rank b at position k + 1.

Therefore, the distortion of any mechanism that makes k prefix value queries is Ω(m/k). ∎

We now turn our attention to a slightly more general class of mechanisms which query all agents at the same fixed positions, and show that k-PRV remains best possible among the mechanisms of this class for unrestricted valuation functions. In Section 7 we further show that 1-PRV is best possible among all mechanisms that make one query per agent for unrestricted valuation functions.

Theorem 3.

For unrestricted valuation functions, any mechanism that makes k fixed-position value queries per agent has distortion Ω(m/k).

Proof.

Consider any mechanism of this class, and let p be the first position at which it does not query the agents. Observe that if p = k + 1, then the mechanism only makes prefix value queries. In this case, the bound follows from Theorem 2, which holds for unit-sum valuation functions, and thus for unrestricted ones as well. So, we may assume that p ≤ k.

Now, we consider an instance that is very similar to the one presented in the proof of Theorem 2. Essentially, we substitute position k + 1 with position p, and we have that all alternatives appear exactly once at each of the first p − 1 positions, while two alternatives a and b appear at position p in the remaining rankings. The remaining alternatives for every agent are arbitrarily ordered at the positions after p.

The valuation profile is such that each agent has value 1 for each of her first p − 1 favorite alternatives, and value 0 for the alternatives at positions p + 1 up to m. Observe that the revealed welfare of all alternatives is exactly the same. Given the revealed cardinal information and the particular ordinal profile, we can argue exactly as in the proof of Theorem 2 that it is without loss of generality to assume that the mechanism elects either a or b. So, assume that the mechanism selects alternative a; the case of b is symmetric. The remaining values of the agents are such that the agents who rank b at position p have value 1 for b, while the agents who rank a at position p have value 0 for a.

Given this valuation profile, the social welfare of the winner a is p − 1, while the social welfare of the optimal alternative b is larger by an additive term proportional to m − p. Therefore, the distortion of the mechanism is Ω(m/p) = Ω(m/k). ∎

4 Achieving Constant Distortion

Our goal in this section is to further explore the additional power that cardinal queries provide, and focus on the design of mechanisms with improved distortion guarantees. Mechanism k-PRV is a good first step in this direction, but it needs to make quite a large number of queries per agent in order to do so; in particular, by Corollary 1, it achieves distortion O(√m) for k = Θ(√m) and constant distortion for k = Θ(m). Therefore, it is natural to ask whether it is possible to design mechanisms that achieve similar distortion bounds, but require far fewer queries per agent. We manage to answer this question positively.

For any λ ≥ 1, we define a mechanism which we call λ-Acceptable Range Voting (Mechanism 2, λ-ARV). Let α_1 > α_2 > … > α_λ be thresholds in (0, 1]. For every agent i, we first query her value for her favorite alternative. Then, using binary search, we compute the maximal α_j-acceptable set of agent i for every j, i.e., the longest prefix of her ranking consisting of alternatives for which her value is at least α_j times her value for her favorite alternative. We continue by constructing a new approximate valuation profile, where the values of every agent i are

  • her value for her favorite alternative, for the favorite alternative itself;

  • α_j times her value for her favorite alternative, for every other alternative that belongs to her α_j-acceptable set but not to her α_{j−1}-acceptable set (where the α_0-acceptable set consists of just her favorite alternative);

  • 0, for every alternative outside her α_λ-acceptable set.

We finally elect an alternative that maximizes the social welfare according to the approximate valuation profile (the simulated welfare).

Mechanism 2: λ-ARV
  for every agent i do
      make a value query to learn the value of agent i for her favorite alternative
      for j = 1, …, λ do
          using the binary search procedure BSearch (one value query per step), find the last position
              in the ranking of agent i whose alternative she values at least α_j times her value for
              her favorite alternative; the alternatives up to that position form her α_j-acceptable set
      define the approximate values of agent i as described above
  for every alternative x do
      compute the simulated welfare of x, i.e., the social welfare of x with respect to the approximate valuation profile
  let w be an alternative achieving the best simulated welfare
  return w

  Procedure BSearch(agent i, positions lo and hi, threshold α):
      standard binary search over the positions lo, …, hi of the ranking of agent i, using one value
      query per iteration, which returns the last position whose alternative has value at least α times
      the value of agent i for her favorite alternative
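The following Python sketch (ours) shows one way to implement the two key ingredients of λ-ARV: the binary search that locates an α-acceptable set with O(log m) value queries, and the election by simulated welfare. The exact way alternatives are credited (α_j times the top value inside the α_j-acceptable set, zero outside all acceptable sets) is our reading of the description above, not a verbatim transcription of the mechanism.

    def acceptable_prefix_length(rnk, oracle, i, top_value, alpha):
        # Binary search for the last position p such that agent i's value for her
        # p-th ranked alternative is at least alpha * (her value for her favorite).
        lo, hi = 1, len(rnk)                      # position 1 always qualifies (alpha <= 1)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if oracle(i, rnk[mid - 1]) >= alpha * top_value:
                lo = mid
            else:
                hi = mid - 1
        return lo

    def lambda_arv(rankings, oracle, alphas):
        # alphas: thresholds 1 >= alpha_1 > ... > alpha_lambda > 0.
        approx = []                               # approximate (simulated) valuation profile
        for i, rnk in enumerate(rankings):
            top_value = oracle(i, rnk[0])         # first query: value for the favorite alternative
            v_tilde = {x: 0.0 for x in rnk}       # assumption: zero outside all acceptable sets
            v_tilde[rnk[0]] = top_value
            covered = 1
            for alpha in alphas:
                p = acceptable_prefix_length(rnk, oracle, i, top_value, alpha)
                for x in rnk[covered:p]:          # alternatives entering the acceptable set at this level
                    v_tilde[x] = alpha * top_value
                covered = max(covered, p)
            approx.append(v_tilde)
        # Elect an alternative with maximum simulated welfare.
        return max(approx[0], key=lambda x: sum(v[x] for v in approx))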

Now, we proceed by proving an upper bound on the distortion achieved by λ-ARV as a function of λ and the thresholds.

Theorem 4.

The mechanism λ-ARV makes O(λ log m) value queries per agent; its distortion is bounded in terms of λ and the thresholds α_1, …, α_λ.

Proof.

Consider any instance with valuation profile v. Since mechanism λ-ARV executes a binary search in order to compute the α_j-acceptable set of each agent for each j, it requires a total of O(λ log m) value queries per agent. The rest of the proof is dedicated to bounding the distortion of λ-ARV. First, we define some useful notation:

  • w is the alternative elected by λ-ARV;

  • is a welfare-maximizing alternative for the valuation profile , which is such that the value of agent for alternative is

    That is, .

  • o is a welfare-maximizing alternative for the true valuation profile v.

Also, let be the set of agents with strictly positive value for alternative . We use the following easy fact about welfare-maximizing alternatives.

Lemma 1.

If , then .

To prove the statement, we will bound the social welfare of o in terms of the social welfare of w for the true valuation profile v. In particular, we will show that

(3)

Then, the claimed bound on the distortion of λ-ARV follows.

We partition the social welfare of o into the following two quantities: the contribution of the agents that place o in their α_λ-acceptable set, and the contribution of the remaining agents, who have small value for o. By definition, any agent who does not place o in her α_λ-acceptable set has value for o smaller than α_λ times her value for her favorite alternative, and therefore

We first consider the term , and have that

(4)

where

  • the first inequality follows by the definition of , the simple fact that , and Lemma 1;

  • for the second inequality it suffices to notice that for any there exists an such that , and thus ;

  • the third inequality follows by the definition of , the simple fact that , and Lemma 1;

  • the fourth inequality follows by the fact that , for every and ;

  • the last inequality is obvious.

Next, we consider the term . By the definition of , for every it holds that , and hence . Using this, we obtain

(5)

where is the set of agents whose favorite alternative is , and for whom . Since it is the alternative that maximizes the quantity , for every we have that

Combining the above inequality together with the fact that for every agent , we have that

Using this last inequality, (5) becomes

(6)

Finally, the desired inequality (3) follows by combining inequalities (4) and (6). ∎

The next statement follows by appropriately setting the parameters in Theorem 4, and shows how mechanism λ-ARV improves upon the distortion guarantees of k-PRV using far fewer value queries per agent.

Corollary 2.

We have that

  • for an appropriate single threshold, 1-ARV achieves distortion O(√m) using O(log m) value queries per agent;

  • for λ = Θ(log m) and appropriately chosen thresholds, λ-ARV achieves constant distortion using O(log² m) value queries per agent.

We conclude this section by showing that the analysis of λ-ARV is tight.

Theorem 5.

The distortion bound of Theorem 4 for λ-ARV is asymptotically tight.

Proof.

Recall that and consider the following instance with alternatives and agents. To simplify our discussion, let and . The valuation profile is such that the values of agent are

  • ,

  • , and

  • for .

In the ordinal profile which is given as input to the mechanism, we assume without loss of generality that agent ranks alternative ahead of .

Since , -ARV defines only one acceptable set per agent using . In particular, the algorithm sets for every agent . Then, the approximate valuation profile is such that the values of agent are

  • ,

  • , and

  • for

For the approximate valuation profile , the social welfare of both alternatives and is

while any other alternative has social welfare

Hence, -ARV might select alternative as the winner instead of , and the distortion is then

as desired. ∎

5 Achieving Distortion O(√m) with Two Value Queries

Here we present a more sophisticated mechanism, which makes two value queries per agent, and refines the main ideas of the previous two sections. Like before, the first query is used to learn the value of each agent for her favorite alternative. However, we would like to avoid making a naive second query as we do with k-PRV. Ideally, we would like to ask each agent about an alternative that is qualitatively similar to the one identified by λ-ARV; in other words, we would like to reveal for each agent the position where her value drops below a suitable fraction of her value for her favorite alternative. Although maintaining the same guarantee as λ-ARV while substituting each binary search with a single query seems far-fetched, we do come very close. By utilizing the available ordinal information globally rather than per agent, our mechanism achieves distortion O(√m) with just two value queries, under a mild assumption on the relation between the number of agents and the number of alternatives. The crucial idea is that the second query for each agent depends on the number of appearances of the alternatives in the whole ordinal preference profile.

For any threshold τ we define a mechanism called τ-Threshold Range Voting (Mechanism 3, τ-TRV). Before stating the mechanism formally, we give a short high-level description. As noted above, the first query for each agent is used to ask about her favorite alternative. The remaining queries are made in successive steps. During the j-th step we make queries about alternatives that are ranked within the first j positions by a sufficiently large number of agents, as determined by the threshold τ. These queries are made only if they are meaningful and possible: we never repeat a query and we never ask an agent more than twice. After a bounded number of steps, τ-TRV returns an alternative with maximum revealed welfare.

We state the mechanism τ-TRV, as well as the main result of this section, Theorem 6, with respect to a general threshold τ. Depending on the ranges of n and m, we appropriately set τ to get Corollary 3. Recall that N_j(x) is the set of agents that rank alternative x at position j. Thus, the set of eligible alternatives at the j-th step contains all alternatives x for which the number of agents ranking x within their first j positions, |∪_{ℓ≤j} N_ℓ(x)|, is large enough relative to τ.

Mechanism 3: τ-TRV
  for every agent i do
      make a value query to learn the value of agent i for her top alternative
  for each step j = 1, 2, … do
      compute the set of eligible alternatives of step j, i.e., the alternatives that are ranked within
          the first j positions by sufficiently many agents, as determined by the threshold τ
      for every eligible alternative x and every agent i who ranks x within her first j positions do
          if agent i has not yet been asked a second query and has not been queried for x then
              make a value query to learn the value of agent i for x; agent i becomes permanently inactive
  let w be an alternative achieving the best revealed welfare
  return w
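The querying schedule of τ-TRV can be sketched in Python as follows (ours); since the exact eligibility rule depends on the threshold, it is abstracted into a caller-supplied predicate is_eligible(x, j, rankings), a hypothetical interface, while the rest follows the description above: one query for every agent's favorite alternative, then at most one additional query per agent, asked in steps for eligible alternatives.

    def threshold_range_voting(rankings, oracle, is_eligible, max_steps):
        n = len(rankings)
        answers = {}
        active = set(range(n))                          # agents still available for a second query
        for i, rnk in enumerate(rankings):
            answers[(i, rnk[0])] = oracle(i, rnk[0])    # first query: favorite alternative
        alternatives = {x for rnk in rankings for x in rnk}
        for j in range(1, max_steps + 1):
            for x in alternatives:
                if not is_eligible(x, j, rankings):
                    continue
                for i in list(active):
                    # query agent i about x only if it is meaningful and possible:
                    # she ranks x within her first j positions and was not asked about x before
                    if x in rankings[i][:j] and (i, x) not in answers:
                        answers[(i, x)] = oracle(i, x)
                        active.discard(i)               # never ask an agent more than twice
        # Elect an alternative with maximum revealed welfare.
        def revealed(x):
            return sum(val for (i, y), val in answers.items() if y == x)
        return max(alternatives, key=revealed)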
Theorem 6.

The mechanism τ-TRV makes at most two value queries per agent and its distortion is bounded as a function of τ, n, and m, even for unrestricted valuation functions.

Proof.

Consider any instance with valuation profile v. If the threshold is so large that no alternative can ever be eligible, then τ-TRV coincides with mechanism 1-PRV from Section 3, and by Theorem 1 the distortion is at most O(m). Consequently, in the rest of the proof we focus on the case where some alternatives do become eligible. Let w be the alternative elected by τ-TRV, and o be an alternative that maximizes the social welfare. By the definition of the revealed welfare, the social welfare of w is at least its revealed welfare. We are going to show that the social welfare of o is at most the claimed factor times the revealed welfare of w; then the statement follows.

By partitioning the optimal welfare into the revealed and the concealed welfare of o, we have

(7)

where the inequality follows from the optimality of w with respect to the revealed welfare.

Next, we focus on bounding the concealed welfare of o. Let t be the first step in which alternative o becomes eligible (if it ever does). We can further partition the concealed welfare of o as

(8)

where

  • the first term is the contribution to the concealed welfare of o of the agents who rank o at some position smaller than t;

  • the second term is the contribution to the concealed welfare of o of the agents who rank o at some position at least t.

If o never becomes eligible, then it is straightforward to bound the first term. So, assume that it does. Observe that, since o is not eligible at any step before t, there can be only few agents who rank o before position t. If i is such an agent, an obvious upper bound for the corresponding concealed value is her value for her favorite alternative, and by the queries made in the main loop of mechanism τ-TRV, we further know that