1. Introduction
Legal definitions of discrimination, such as disparate treatment Barocas:2016aa; Zimmer:1996aa and disparate impact Barocas:2016aa; Rutherglen:1987aa,^{1}^{1}1Disparate treatment and disparate impact are the two main legal definitions of discrimination. They describe, respectively, situations where an individual is intentionally treated differently based on the individual’s membership in a protected group, and situations where a policy disproportionately or indirectly disadvantages individuals in a protected group compared to other individuals. and recent social movements (e.g., Black Lives Matter) have demonstrated the importance of group fairness in various settings. Ensuring group fairness, in terms of equality or equity among groups of individuals in our society, is desirable in many domains such as voting (see e.g., Endriss:2017aa; Bredereck:2018aa), fair division (see e.g., Conitzer:2019aa; Fain:2018aa; SegalHalevi:2018aa; Todo:2011aa; Barman:2018aa; Suksompong:2018aa), matching (see e.g., Ahmed:2017aa; Benabbou:2019aa), and scarce resource allocation (e.g., kidneys and homeless youth housing Bertsimas:2013aa; Azizi:2018aa). This is especially crucial in settings where the social planner seeks socially desirable outcomes for optimization problems with predefined social objectives that provide fair opportunities and accessibility to various groups of agents. For instance, determining the best location of a public facility, such as a park or a library (see Figure 1),
to serve a subset of agents so as to provide fair access to different groups (e.g., based on race, gender, or age), or determining the best candidate(s) to select in an election within an organization so as to ensure fair representation for groups of agents, are typical examples the social planner encounters frequently. The common feature of many of the social planner’s problems is that the underlying optimization problems require input (e.g., agents’ locations or agents’ candidate preferences) from strategic agents belonging to various groups. As a result, the social planner faces the difficult challenge of generating group-fair outcomes while taking into account that the self-reported information from the strategic agents may not be truthful. To address this challenge, we propose to consider the approximate mechanism design without money paradigm with group-fair objectives.
In particular, we are interested in ensuring group fairness for groups of agents in the setting of approximate mechanism design without money Procaccia:2009aa; Dekel:2008aa; Meir:2008aa; Meir:2009aa, the algorithmic mechanism design paradigm explicitly coined and advocated by Procaccia and Tennenholtz in 2009 Procaccia:2009aa. This paradigm studies the design of strategyproof and approximately optimal mechanisms without payments: strategyproofness is achieved by trading off solution optimality in game-theoretic optimization problems whose inputs are (possibly untruthfully) provided by strategic agents and whose optimal solutions cannot induce strategyproofness. In such a setting, the social planner aims to ensure that the mechanisms’ solutions are fair to groups of agents in specific structured domains where payment is not allowed by regulations or by design (i.e., to avoid the classical social choice impossibility results Gibbard:1973aa; Satterthwaite:1975aa; Barbera:1990aa).
As our main case study, we consider facility location problems (FLPs), the initial case study in the seminal approximate mechanism design without money work Procaccia:2009aa, where the agents have the well-known and well-studied single-peaked preferences (see e.g., Moulin:1980aa; Barbera:2001aa; Sprumont:1995aa; Schummer:2002aa; Border:1983aa; Barbera:1998aa). In the most basic FLP, each agent has a privately known location on the real line, and the optimization problem is to locate a facility (e.g., a public library, park, or candidate) on the real line to minimize some (e.g., total or maximum) cost objective of the agents with respect to the facility. Naturally, the goal is to derive a mechanism (without payment) that elicits true location information from the agents and locates the facility to (approximately) optimize the given objective. This basic setting is well understood by now – some mechanisms (e.g., median and endpoint) are strategyproof and approximately optimal for various cost objectives (e.g., total cost and maximum cost), and subsequent works have explored settings with more than one facility Procaccia:2009aa; Lu:2010aa; Fotakis:2014aa; paolo2015heterogeneous; DBLP:conf/aaai/ChenL020; DBLP:conf/aaim/ChenFLD20; DBLP:conf/atal/ZouL15; DBLP:conf/ijcai/Li0Z20, various objectives and settings Sui:2013aa; Sui:2015aa; FilosRatsikas:2017aa; Cai:2016aa; feldman2013strategyproof; limei2016newobjective, capacity constraints Aziz:2019aa; Aziz:2020aa, and automated mechanism design Narasimhan:2016aa; Golowich:2018aa. See also survey for a survey on approximate mechanism design without money for facility location problems.
In this paper, we consider the previously unexplored group-fair facility location problems in the approximate mechanism design without money setting. Here, the social planner aims to design strategyproof mechanisms that locate a facility so that the costs of the groups are fair subject to a given objective. In particular, we address the following key questions.
(1) How should one define group-fair objectives?
(2) How should one design strategyproof mechanisms to (approximately) optimize a given group-fair objective?
Beyond the initial case study on facility location problems, approximate mechanism design without money has also been applied to settings such as generalized assignment Dughmi:2010aa, voting Alon:2011aa; Feldman:2016aa, fair division Cole:2013aa, classification Meir:2012aa, and scheduling Koutsoupias:2014aa.
1.1. Our Contribution
Motivated by the importance of ensuring group fairness among groups of agents, we consider group-fair FLPs, where the set of agents is partitioned into groups based on some criteria (e.g., gender, race, or age), and aim to design mechanisms that elicit true location information from the agents and locate the facility so as to ensure desired forms of group fairness under appropriate group-fair objectives. Although previous works in this area have not considered the notion of group fairness (e.g., for some ), a few works have considered some form of individual fairness (i.e., the maximum cost objective when ) and envy in general (e.g., see Cai:2016aa). We note that there is a line of work (see e.g., mcallister1976equity; marsh1994equity; Mulligan:1991ug) on the optimization version of FLPs that considers various group fairness objectives, which we will consider and elaborate on later in the paper (see Sections 2.2 and 3.1).
We first consider several natural group-fair (cost) objectives motivated by existing algorithmic fairness and FLP studies (see e.g., marsh1994equity). We then introduce objectives that capture intergroup and intragroup fairness (IIF) for agents within each group, an important characteristic in the social science literature on fairness in group dynamics (see e.g., haslam2014social). Our results are summarized in Table 1.
Table 1. Summary of our approximation bounds.

Objectives                                           UB    LB
Maximum total group cost                              3     2
Maximum average group cost                            3     2
Maximum average group cost (randomized)               2     1.5
IIF, two separate indicators (Section 4)              4     4
IIF, one combined per-group indicator (Section 4)     4     4

We show that not every group-fair objective has a bounded approximation ratio.

For the group-fair objective that aims to minimize the maximum total group cost, we show that the classic median mechanism Procaccia:2009aa is strategyproof but, in contrast to the classical setting where the median mechanism is optimal, its approximation ratio can grow with the number of groups. We then propose a new strategyproof mechanism that leverages group information and obtains an improved approximation ratio of 3. We complement this result by providing a lower bound of 2 for this objective.

For the group-fair objective of minimizing the maximum average group cost, we show that the median mechanism is strategyproof and has an approximation ratio of 3. We also provide a lower bound of 2 for this objective. We further design a randomized mechanism with an approximation ratio of 2 and provide a randomized lower bound of 1.5.

We introduce the notion of intergroup and intragroup fairness (IIF), which considers fairness both among groups and within groups. We consider two objectives based on the IIF concept: the first treats the two fairness measures as two separate indicators, while the second combines them into one indicator for each group. For both objectives, we establish matching upper bounds (from the median mechanism) and lower bounds of 4. Interestingly, when one considers only intragroup fairness, only an additive approximation is obtainable Cai:2016aa; when it is combined with intergroup fairness, a multiplicative approximation can be obtained.
We note that while our proposed mechanisms are simple, rigorous and non-trivial proofs are needed to derive the approximation ratios. As our bounds (Table 1) leave only small gaps for these objectives under simple mechanisms, we did not explore more complicated mechanisms beyond the randomized one. In fact, the results are quite positive, as they demonstrate the effectiveness of simple mechanisms.
1.2. Related Work
To the best of our knowledge, group-fair facility location problems, where groups of agents are explicitly modeled with group-fair objectives, have not been studied before in the context of approximate mechanism design without money. In this section, we highlight works that consider notions of fairness in mechanism design settings, group fairness in fair division settings, and general diversity constraints in optimization contexts and applications related to our work.
Fair Division.
In fair division, one wishes to allocate a set of (indivisible or divisible) resources/items to a set of agents. Each agent has a utility function specifying the value for each subset of items. The goal is to ensure that (1) the allocation is fair for each agent or (2) no agent “envies” another agent. There are measures for both (1) (e.g., proportionality and maximin share) and (2) (e.g., envy-freeness and its relaxations) Walsh:2020aa; Klamler:2010aa. Group envy-freeness concepts (see e.g., Berliant:1992aa; Varian:1974aa; Schmeidler:1972aa; Husseinov:2011aa; Conitzer:2019aa; Suksompong:2018aa; SegalHalevi:2018aa) seek to ensure that each possible pair of groups of agents does not envy under a redistribution of the other group’s items (resp. that each group of agents does not envy other groups without redistribution). These works do not consider mechanism design settings. From the mechanism design perspective, previous works have considered fair allocation without group fairness optimization objectives, for divisible and indivisible items, under general welfare objectives or individual fairness objectives (see e.g., Chen:2013aa; Bei:2020aa; Bei:2017aa; Barman:2019aa; Cole:2013aa; Maya:2012aa; Amanatidis:2016aa; Aumann:2016aa; Mossel:2010aa; Sinha:2015aa).
Applications with Diversity Objectives or Constraints.
Optimization problems with diversity objectives or constraints have been considered in various applications such as influence maximization (see e.g., Ali:2019aa; Tsang:2019aa; Farnad:2020aa), voting (see e.g., Endriss:2017aa; Bredereck:2018aa), fair division (see e.g., Conitzer:2019aa; Fain:2018aa; SegalHalevi:2018aa; Todo:2011aa; Barman:2018aa; Suksompong:2018aa), matching (see e.g., Ahmed:2017aa; Benabbou:2019aa), and scarce resource allocation (e.g., kidneys and homeless youth housing Bertsimas:2013aa; Azizi:2018aa; Patel:2020aa). We note that there are fairness notions for the classic stable matching problems/algorithms Gale:1962aa; Iwama:2008aa; Dubins:1981aa; Huang:2006aa (e.g., Rawlsian justice via the algorithmic ordering of the agents MASARANI:1989aa; Pini:2011aa; RomeroMedina:2001aa, procedural justice via a uniform lottery Klaus:2006aa, and sex-fairness Nakamura:1995aa). Other stable matching works consider upper and/or lower quotas on each type/group, as well as other diversity and ratio constraints in the matching settings (see e.g., Abdulkadiroglu:2005aa; Bo:2016aa; Ehlers:2014aa; Agoston:2018aa; Fragiadakis:2017aa; Hafalir:2013aa; Kominers:2013aa; Gonczarowski:2019aa; Yahiro:2020aa; Nguyen:2017aa), but without group fairness optimization objectives.
Facility Location Problems.
Below, we elaborate on the works on facility location problems (FLPs) most related to ours, namely those that consider some form of individual fairness or envy (i.e., a special case of group fairness). From the optimization perspective, early works (see e.g., mcallister1976equity; marsh1994equity; Mulligan:1991ug) on facility location problems have examined objectives that quantify various equity notions. For instance, marsh1994equity considers a group-fairness objective (i.e., the Center objective in marsh1994equity) that is equivalent to our group-fair objective and similar to our group-fair objective. However, these works do not consider FLPs from the mechanism design without money perspective.
The seminal work on approximate mechanism design without money in FLPs Procaccia:2009aa considers the design of strategyproof mechanisms that approximately minimize certain cost objectives, such as the total cost or the maximum cost. An individually fair objective considered in Procaccia:2009aa is the maximum cost objective, which aims to minimize the cost of the agent farthest from the facility. For the maximum cost objective, Procaccia:2009aa establishes tight upper and lower bounds of 2 for deterministic mechanisms and 3/2 for randomized mechanisms. However, applying these mechanisms directly to some of our objectives, such as and the randomized part of , would yield worse approximation ratios.
In terms of envy, there are notions such as minimax envy Cai:2016aa; DBLP:conf/aaim/ChenFLD20, which aims to minimize the (normalized) maximum difference between any two agents’ costs, and the envy ratio DBLP:conf/aaim/LiuDCFN20; ding2020facility, which aims to minimize the maximum ratio between any two agents’ utilities. Other works and variations on facility location problems can be found in a recent survey survey.
2. Preliminary
In this section, we define group-fair facility location problems and consider several group-fair social objectives. We then show that some of these objectives have unbounded approximation ratios.
2.1. Group-Fair Facility Location Problems
Let be a set of agents on the real line and be the set of (disjoint) groups of agents. Each agent has a profile where is the location reported by the agent and is the group membership of agent . We use to denote the number of agents in group . Without loss of generality, we assume that . A profile is a collection of the location and group information. A deterministic mechanism is a function which maps a profile to a facility location . A randomized mechanism is a function which maps a profile to a facility location , where is the set of probability distributions over . Let be the distance between two points. Naturally, given a deterministic (or randomized) mechanism and a profile , the cost of agent is defined as (or the expected distance ). Our goal is to design mechanisms that enforce truthfulness while (approximately) optimizing an objective function.
Definition 0.
A mechanism is strategyproof (SP) if and only if no agent can ever benefit by reporting a false location. More formally, given any profile , let be the profile in which agent reports the false location . We have where is the profile reported by all agents except agent .
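To make the definition concrete, the following is a minimal sketch (our own illustration, not part of the paper) of a brute-force strategyproofness check over small profiles on a finite grid. The mechanism `mean_mech` is a deliberately manipulable non-example; all names here are ours.

```python
from itertools import product

def median_mech(xs):
    """Left median of the reported locations (strategyproof on the line)."""
    xs = sorted(xs)
    return xs[(len(xs) - 1) // 2]

def mean_mech(xs):
    """Average of the reports -- an illustrative NON-strategyproof mechanism."""
    return sum(xs) / len(xs)

def is_strategyproof(mechanism, grid):
    """Exhaustively check: for every small profile over `grid`, no single
    agent can strictly reduce its own cost |y - x_i| by misreporting."""
    for n in (2, 3):
        for profile in product(grid, repeat=n):
            y = mechanism(list(profile))
            for i, xi in enumerate(profile):
                for lie in grid:
                    deviated = list(profile)
                    deviated[i] = lie
                    if abs(mechanism(deviated) - xi) < abs(y - xi) - 1e-12:
                        return False  # found a beneficial misreport
    return True

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
print(is_strategyproof(median_mech, grid))  # True
print(is_strategyproof(mean_mech, grid))    # False
```

Note that such a check can only refute strategyproofness on the chosen grid; passing it is evidence, not a proof.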
In the following, we discuss several group-fair social objectives that model some form of equity. We invoke the legal notions of disparate treatment Barocas:2016aa; Zimmer:1996aa and disparate impact Barocas:2016aa; Rutherglen:1987aa, the optimization version of FLPs mcallister1976equity; marsh1994equity; Mulligan:1991ug, and recent studies in optimization problems Tsang:2019aa; Celis:2018aa to derive and motivate the following fairness objectives.
Group-fair Cost Objectives.
We define the group cost from two perspectives. The first is the total group cost, which is the sum of the costs of all the group members. Through this objective, we hope to ensure that each group as a whole is not too far from the facility. Hence, our first group-fair social objective is to minimize the maximum total group cost (mtgc). More specifically, given a true profile and a facility location ,
Our second group-fair objective is to minimize the maximum average group cost (magc). Therefore, we have
and we hope to ensure that each group, on average, is not too far from the facility. We measure the performance of a mechanism by comparing the objective value that achieves with the objective value achieved by the optimal solution. If there exists a number such that, for any profile , the objective value of the output of is within times that of the optimal solution, then we say the approximation ratio of is .
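As a concrete illustration (our own sketch; the names, the (location, group) profile encoding, and the finite candidate grid are assumptions, not from the paper), the two objectives and an empirical approximation ratio can be computed as follows:

```python
def group_costs(profile, y):
    """profile: list of (location, group_id) pairs; returns the per-group
    lists of distances |x - y| to a facility at y."""
    costs = {}
    for x, g in profile:
        costs.setdefault(g, []).append(abs(x - y))
    return costs

def mtgc(profile, y):
    """Maximum total group cost: max over groups of the sum of member costs."""
    return max(sum(c) for c in group_costs(profile, y).values())

def magc(profile, y):
    """Maximum average group cost: max over groups of the mean member cost."""
    return max(sum(c) / len(c) for c in group_costs(profile, y).values())

def empirical_ratio(objective, facility, profile, candidates):
    """Objective value at the mechanism's facility divided by the best value
    over a finite set of candidate locations (a stand-in for the optimum)."""
    return objective(profile, facility) / min(objective(profile, y) for y in candidates)

profile = [(0.0, "G1"), (1.0, "G1"), (1.0, "G2")]
print(mtgc(profile, 0.0))  # 1.0: both groups have total cost 1
print(magc(profile, 1.0))  # 0.5: G1 averages 0.5, G2 averages 0.0
```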
2.2. Alternative Group-Fair Social Objectives
In addition to the objectives defined earlier, we can also consider the following natural objectives for group-fair facility location problems:
where is a function that can be (i) , (ii) , or (iii) , so that each of (a) and (b) yields three different group-fair objectives. In general, (a) and (b) capture the difference between groups in terms of differences and ratios, respectively, under the chosen function. Objectives of type (a) can be seen as group envy, extending the individual envy works Cai:2016aa; DBLP:conf/aaim/ChenFLD20, and type (a) with function (i) is exactly measure (7) in marsh1994equity. Objectives of type (b) can be seen as a group envy ratio, extending previous individual envy ratio studies DBLP:conf/aaim/LiuDCFN20; ding2020facility.
Surprisingly, we show that any deterministic strategyproof mechanism for these objectives has an unbounded approximation ratio.
Theorem 2.
Any deterministic strategyproof mechanism does not have a finite approximation ratio for minimizing the three different objectives (i), (ii), and (iii) under (a).
Proof.
We prove the theorem by contradiction. Assume that there exists a deterministic strategyproof mechanism with a finite approximation ratio for these objective functions. Consider a profile with one agent in group at and one agent in group at . The optimal location is for all of (i), (ii), (iii) under (a), and all of their objective values are . Therefore, must output ; otherwise the objective value of is not , and the approximation ratio is a nonzero number divided by zero, which is unbounded.
Now consider another profile with one agent in group at and one agent in group at . The optimal location is for all of (i), (ii), (iii) under (a), and all of their objective values are , so must output . But then, given the profile , the agent at can benefit by misreporting to 1, thus moving the facility location from to , contradicting the strategyproofness of . ∎
Theorem 3.
Any deterministic strategyproof mechanism does not have a finite approximation ratio for minimizing the three different objectives (i), (ii), and (iii) under (b).
Proof.
We reuse the profiles from Theorem 2 and prove the theorem by contradiction. Assume that there exists a deterministic strategyproof mechanism with a finite approximation ratio. Consider a profile with one agent in group at and one agent in group at . The optimal location is , and without loss of generality we assume that , .
Now consider another profile with one agent in group at and one agent in group at . Then can output any location except and ; otherwise all of (i), (ii), (iii) under (b) are unbounded while the optimal objective value is , so the approximation ratio is unbounded. But then, given the profile , the agent in group can benefit by misreporting to , contradicting strategyproofness. ∎
Notice that marsh1994equity considers 20 group-fair objectives in total. However, using a technique similar to the proof of Theorem 2, we can show that any deterministic strategyproof mechanism does not have a finite approximation ratio for any of the objectives in marsh1994equity except measure (1), which is in our paper (we explore this objective in Section 3.1). The main reason is that the other objectives in marsh1994equity contain a difference form, such as one group cost minus another group cost (e.g., objective type (a) with function (i) mentioned earlier), so we can easily construct profiles similar to those in the proof of Theorem 2 where the optimal value is .
3. Total and Average Group Cost
In this section, we consider two social objectives, maximum total group cost (mtgc) and maximum average group cost (magc).
3.1. Maximum Total Group Cost
We begin with the standard median mechanism considered in Procaccia:2009aa.
Mechanism 1.
Given a profile , place the facility at the median of the reported locations.
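In code, Mechanism 1 ignores the group structure entirely; a minimal sketch (our own, with an assumed (location, group) profile encoding and the left median as tie-breaking):

```python
def mechanism1(profile):
    """Mechanism 1: place the facility at the (left) median of all reported
    locations, ignoring group membership."""
    xs = sorted(x for x, _ in profile)
    return xs[(len(xs) - 1) // 2]

profile = [(0.0, "G1"), (0.4, "G2"), (1.0, "G2")]
print(mechanism1(profile))  # 0.4
```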
Theorem 4.
Mechanism 1 is strategyproof and has an approximation ratio of for minimizing .
Proof.
We can use an argument similar to that for the classic facility location problems Procaccia:2009aa to show that Mechanism 1 is strategyproof. Below, we derive its approximation ratio for the objective.
Let be the output of Mechanism 1, we observe that
Let be the optimal facility location. We have that
since is the median point which is optimal for total cost. Then we can obtain
Thus, the ratio is
∎
The following example shows that the bound given in Theorem 4 is tight.
Example 0.
If there are many groups, Mechanism 1 has a large approximation ratio. To derive a strategyproof mechanism with a better approximation ratio, we can leverage group information and favor the group with more members. We therefore propose the following new group-based mechanism, which has not been considered before.
Mechanism 2.
Let be the largest group (breaking ties by choosing the smallest index). Put the facility at the median of group .
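A minimal sketch of Mechanism 2 (our own encoding; we index groups by integers so that the stated tie-breaking by smallest group index is well defined):

```python
def mechanism2(profile):
    """Mechanism 2: place the facility at the (left) median of the largest
    group, breaking size ties in favor of the smallest group index."""
    groups = {}
    for x, g in profile:
        groups.setdefault(g, []).append(x)
    # sort key: larger groups first, then smaller group index
    g_star = min(groups, key=lambda g: (-len(groups[g]), g))
    xs = sorted(groups[g_star])
    return xs[(len(xs) - 1) // 2]

profile = [(0.0, 1), (0.2, 1), (0.9, 1), (1.0, 2)]
print(mechanism2(profile))  # 0.2: group 1 is largest; its left median is 0.2
```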
Theorem 6.
Mechanism 2 is strategyproof and has an approximation ratio of 3 for minimizing .
Proof.
Given any profile , let be the output of Mechanism 2, be the optimal location and without loss of generality we assume that and is achieved by . Then from
we have
Therefore, we have
Moreover, because there are at least agents on the left side of . Then we have the approximation ratio
∎
The following example shows that the bound given in Theorem 6 is tight.
Example 0.
Theorem 8.
Any deterministic strategyproof mechanism has an approximation ratio of at least for minimizing mtgc.
Proof.
Assume that there exists a strategyproof mechanism whose approximation ratio is smaller than . Consider a profile with one agent in group at and one agent in group at . Without loss of generality, we assume that and . Now, consider the profile with one agent in group at and one agent in group at . The optimal solution puts the facility at , which has a maximum total group cost of . If the mechanism is to achieve an approximation ratio better than 2, the facility must be placed in . In that case, given the profile , the agent in group can benefit by misreporting to 1, thus moving the facility to , in contradiction to strategyproofness. ∎
3.2. Maximum Average Group Cost
In contrast to the maximum total group cost objective, we show that the median mechanism provides a much better approximation ratio for the maximum average group cost objective.
Theorem 9.
Mechanism 1 is strategyproof and has an approximation ratio of for minimizing magc.
Proof.
The strategyproofness proof is the same as that in Theorem 4. Next, we derive the approximation ratio of the mechanism. Given any profile , let be the optimal location; without loss of generality, we assume that and that is achieved by . Then by
we have
Therefore, and we also have because there is at least one group with at least half of its members on the left-hand side of . Then the approximation ratio is
∎
We can use an argument similar to the proof of Theorem 9 to show that Mechanism 2 also achieves an upper bound of , since for any profile at least half of the members of the largest group are on the left-hand side of the output of Mechanism 2.
Theorem 10.
Any deterministic strategyproof mechanism has an approximation ratio of at least for minimizing magc.
Proof.
We can reuse the proof of Theorem 8 to show this lower bound since if there is only one agent in each group, the maximum average group cost is equal to the maximum total group cost. ∎
Next, we investigate whether we can improve the approximation ratio using randomized mechanisms. Recall that the classic work Procaccia:2009aa proposes a randomized mechanism for the maximum cost objective which puts the facility at the leftmost point and the rightmost point each with probability 1/4, and at their midpoint with probability 1/2. However, this mechanism can perform poorly for the objective. Consider an example with one agent at , agents at , and one agent at , all in the same group. The classic mechanism achieves a of , while the optimal objective value is , implying that its approximation ratio is at least . In the following, we modify the above randomized mechanism to obtain a better approximation ratio.

Let be the set of median agents of all groups (choosing the left one if a group has two median agents), and let and . From the definition of , we know that for any profile the optimal solution is in , since every group’s average cost increases as the facility moves left from or right from . Hence, putting the facility outside this interval with positive probability can only hurt the mechanism’s performance, and it is better to design a randomized mechanism that only places the facility in . Based on this fact, we use the same probabilities as the randomized mechanism of Procaccia:2009aa, but with and in place of the endpoints, giving the following mechanism.
Mechanism 3.
Put the facility at with probability 1/4, at with probability 1/4, and at the midpoint of and with probability 1/2.
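A sketch of Mechanism 3 and its expected magc (our own code and names; we return the lottery explicitly as (location, probability) pairs rather than sampling):

```python
def magc(profile, y):
    """Maximum average group cost of a facility at y (as in Section 2)."""
    groups = {}
    for x, g in profile:
        groups.setdefault(g, []).append(abs(x - y))
    return max(sum(c) / len(c) for c in groups.values())

def mechanism3(profile):
    """Mechanism 3: lottery over the leftmost group median l, the rightmost
    group median r, and their midpoint, with probabilities 1/4, 1/4, 1/2."""
    groups = {}
    for x, g in profile:
        groups.setdefault(g, []).append(x)
    meds = [sorted(xs)[(len(xs) - 1) // 2] for xs in groups.values()]
    l, r = min(meds), max(meds)
    return [(l, 0.25), ((l + r) / 2, 0.5), (r, 0.25)]

def expected_magc(profile, lottery):
    return sum(p * magc(profile, y) for y, p in lottery)

profile = [(0.0, 1), (1.0, 2)]  # one agent per group, at 0 and at 1
print(expected_magc(profile, mechanism3(profile)))  # 0.75, vs. optimum 0.5
```

On this two-agent profile (the one underlying the Theorem 13 lower bound), the expected cost is 0.75 against an optimum of 0.5, a ratio of exactly 3/2.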
Theorem 11.
Mechanism 3 is strategyproof and has an approximation ratio of 2 for minimizing magc.
Proof.
By definition, we have . First, consider the agents whose locations are outside of . Such an agent can only change the facility location by misreporting its location to the other side of its own group’s median point, which can only move one of and (together with the midpoint) farther away. Therefore, these agents have no incentive to misreport.
For agents whose locations are in the interval , again an agent can only change the facility location by misreporting to the other side of its own group’s median point, which might move either or farther away, although the midpoint of the two medians may move closer to the agent. Suppose the median point moves by after an agent misreports, yielding a new facility location distribution . Then the midpoint of the two medians moves by , and the cost of the misreporting agent satisfies . Therefore, these agents have no incentive to misreport their locations, and Mechanism 3 is strategyproof. We now show the approximation ratio. Given any profile , if , the mechanism always puts the facility at , which is the optimal location. Therefore, we only consider the case . Without loss of generality, we suppose that and let be the optimal solution.
Without loss of generality, we assume that . If is achieved by and since there are at most members of group on the right of , we have
Similarly, if is achieved by and is achieved by , we have
and
Therefore, the approximation ratio is
Furthermore, if the agent is in group , then there are at least agents of group in , since is a median agent. Therefore, we have
Thus, the approximation ratio is
∎
Example 0.
Consider the setting (see Figure 4) with agents in group at 0, agents in group at , and one agent in group at 1. Mechanism 1 puts the facility at , achieving a of . Mechanism 3 puts the facility at 0 with probability 1/4, at 1/2 with probability 1/2, and at 1 with probability 1/4, achieving a of . The optimal solution puts the facility at , achieving a of .
Theorem 13.
There does not exist any strategyproof randomized mechanism with an approximation ratio less than 3/2 for minimizing magc.
Proof.
Consider a profile with one agent in at 0 and one agent in at 1. In this case, the maximum average group cost coincides with the maximum cost, and we can use an argument similar to that of Procaccia:2009aa to show the lower bound. ∎
4. Intergroup and Intragroup Fairness
In this section, we investigate intergroup and intragroup fairness (IIF), which captures not only fairness between groups but also fairness within groups. IIF is an important characteristic in the social science literature on fairness in group dynamics (see e.g., haslam2014social).
To facilitate our discussion, let be the average cost, the maximum cost, and the minimum cost among the agents in group under profile with facility location . We define below new group-fair IIF social objectives which measure both intergroup and intragroup fairness:
Using to measure intragroup fairness is well justified, since this is the max-envy considered for a single group in Cai:2016aa. For , the intergroup fairness and intragroup fairness terms are two separate indicators and can be achieved by different groups, while for , we combine the two into one indicator for each group.
The reason we do not use the total group cost in this combined measure is that when the group size is large, the total cost is large while the maximum cost minus the minimum cost is on the order of a single agent’s cost. The total cost would then dominate and intragroup fairness would be diluted, which goes against the purpose of the combined fairness measure. Moreover, since the average group cost and the max-envy take values in the same range, we combine them directly without normalization.
Given the objectives, our goal is to minimize or .
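Since the formulas above are given symbolically, the following sketch spells out one consistent reading in code (our own interpretation and names; treat it as an assumption): the first objective adds the worst average group cost and the worst within-group spread, possibly from different groups, while the second maximizes their per-group sum.

```python
def group_stats(profile, y):
    """Per-group (average, maximum, minimum) agent cost for a facility at y."""
    groups = {}
    for x, g in profile:
        groups.setdefault(g, []).append(abs(x - y))
    return [(sum(c) / len(c), max(c), min(c)) for c in groups.values()]

def iif_separate(profile, y):
    """Intergroup term (worst average group cost) plus intragroup term
    (worst max - min spread), taken as two separate indicators."""
    stats = group_stats(profile, y)
    return max(a for a, _, _ in stats) + max(hi - lo for _, hi, lo in stats)

def iif_combined(profile, y):
    """One combined indicator per group: average cost + (max - min)."""
    return max(a + (hi - lo) for a, hi, lo in group_stats(profile, y))

profile = [(0.0, 1), (1.0, 1), (1.0, 2)]
print(iif_separate(profile, 0.0))  # 2.0: avg term from group 2, spread from group 1
print(iif_combined(profile, 0.0))  # 1.5: achieved by group 1
```

The example also illustrates the remark above: in the separate-indicator objective, the two terms are achieved by different groups.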
Theorem 14.
Mechanism 1 is strategyproof and has an approximation ratio of 4 for minimizing .
Proof.
Let be the optimal location and we assume that without loss of generality. Then we prove that the approximation ratio
As Figure 5 shows, given any profile , for each group we move all its agents to if they are on the left of , and to if they are on the right of . This yields a new profile with approximation ratio . If we show that these movements do not increase the optimal objective value and do not decrease the mechanism’s objective value, namely , and further show that , then we obtain the approximation ratio .
After the movements, all and . Without loss of generality, suppose that is achieved by group . Then we can obtain
which implies that the optimal objective value does not increase after the movements, since . Moreover, because is the median point and there exist agents on the left side of , we have .
For the mechanism’s value, neither nor decreases when changes to , since no agent moves closer to .
Then we have the approximation ratio is at most
∎
Theorem 15.
Mechanism 1 is strategyproof and has an approximation ratio of 4 for minimizing .
We can use an argument similar to the proof of Theorem 14 to show the approximation ratio, since that proof only focuses on group and all of its inequalities remain valid under this objective.
In the following proofs, we use the definition below.
Definition 0.
Lu:2010aa A mechanism is partial group strategyproof if and only if a group of agents at the same location cannot benefit even if they misreport their locations simultaneously. More formally, given any profile , let be a set of agents with the same location and let be a profile in which the agents in report the false location . We have where is the profile reported by all agents except those in .
From the definition, any partial group strategyproof mechanism is also strategyproof. Furthermore, it has been shown that any strategyproof mechanism for facility location problems is also partial group strategyproof Lu:2010aa. Thus, we will not distinguish between the two notions in the following analysis.
Theorem 17.
Any deterministic strategyproof mechanism has an approximation ratio of at least for minimizing .
Proof.
Assume for contradiction that there exists a strategyproof mechanism whose approximation ratio is , . By the equivalence between strategyproofness and partial group strategyproofness in facility location problems, is also partial group strategyproof.
Consider a profile with one agent in and agents in at , and agents in and one agent in at . Assume without loss of generality that , . Now consider the profile in which one agent in and agents in are at , and agents in and one agent in are at . The optimum is the average of the two locations, namely , which has an of . If the mechanism is to achieve an approximation ratio of , the facility must be placed in . In that case, given the profile , the agents at can benefit by misreporting to , thus moving the solution to , in contradiction to partial group strategyproofness. We can extend this result to groups by locating agents at () and one agent at () for each group other than and in profile (). ∎
Theorem 18.
Any deterministic strategyproof mechanism has an approximation ratio of at least for minimizing .
We can reuse the example from Theorem 17 to prove the lower bound, since its profiles are such that intergroup and intragroup fairness are achieved by the same group.
It is interesting that when one considers only intragroup fairness, only an additive approximation is possible; however, when it is combined with intergroup fairness, a tight multiplicative approximation can be obtained for and .
5. Conclusion
We consider the problem of designing strategyproof mechanisms for group-fair facility location problems (FLPs), where agents belong to different groups, under several natural, well-motivated group-fair objectives. We show that not all group-fair objectives admit a bounded approximation ratio. For the main group-fair objectives (i.e., maximum total group cost and maximum average group cost), we show that it is possible to design strategyproof mechanisms with bounded approximation ratios by leveraging group information or randomization. We also introduce intergroup and intragroup fairness (IIF), which takes both fairness between groups and fairness within each group into consideration. We consider two natural IIF objectives and provide mechanisms that achieve tight approximation ratios.
Naturally, there are many potential future directions for group-fair FLPs in mechanism design. An immediate direction is to tighten the gaps between the lower and upper bounds of our results. While we consider a randomized mechanism for one group-fair objective, it would be interesting to consider randomized mechanisms for the other group-fair objectives as well. Moreover, one could consider alternative group-fair objectives appropriate for specific application domains.