# Strategyproof Mechanisms For Group-Fair Facility Location Problems

Ensuring group fairness among groups of individuals in our society is desirable and crucial for many application domains. A social planner's typical medium of achieving group fair outcomes is through solving an optimization problem under a given objective for a particular domain. When the input is provided by strategic agents, the planner is facing a difficult situation of achieving fair outcomes while ensuring agent truthfulness without using incentive payment. To address this challenge, we consider the approximate mechanism design without money paradigm with group-fair objectives. We first consider the group-fair facility location problems where agents are divided into groups. The agents are located on a real line, modeling agents' private ideal preferences/points for the facility's location. Our aim is to locate a facility to approximately minimize the costs of groups of agents to the facility fairly while eliciting the agents' private locations truthfully. We consider various group-fair objectives and show that many objectives have an unbounded approximation ratio. We then consider the objectives of minimizing the maximum total group cost and the average group cost. For the first objective, we show that the approximation ratio of the median mechanism depends on the number of groups and provide a new group-based mechanism with an approximation ratio of 3. For the second objective, the median mechanism obtains a ratio of 3, and we propose a randomized mechanism that obtains a better approximation ratio. We also provide lower bounds for both objectives. We then study the notion of intergroup and intragroup fairness that measures fairness between groups and within each group. We consider various objectives and provide mechanisms with tight approximation ratios.


## 1. Introduction

Legal definitions of discrimination, such as disparate treatment Barocas:2016aa; Zimmer:1996aa and disparate impact Barocas:2016aa; Rutherglen:1987aa (disparate treatment and disparate impact are the two main legal definitions of discrimination, describing, respectively, situations where an individual is intentionally treated differently based on the individual's membership in a protected group, and policies that indirectly disadvantage individuals in a protected group compared to other individuals), and recent social movements (e.g., Black Lives Matter) have demonstrated the importance of group fairness in various settings. Ensuring group fairness in terms of equality or equity among groups of individuals in our society is desirable for many domains such as voting (see e.g., Endriss:2017aa; Bredereck:2018aa), fair division (see e.g., Conitzer:2019aa; Fain:2018aa; Segal-Halevi:2018aa; Todo:2011aa; Barman:2018aa; Suksompong:2018aa), matching (see e.g., Ahmed:2017aa; Benabbou:2019aa), and scarce resource allocation (e.g., kidneys and homeless youth housing Bertsimas:2013aa; Azizi:2018aa). This is especially crucial in settings where the social planner is interested in achieving socially desirable outcomes for optimization problems with predefined social objectives that provide fair opportunities and accessibility for various groups of agents. For instance, determining the best location of a public facility, such as a park or a library (see Figure 1),

to serve a subset of agents so as to provide fair access to different groups (e.g., based on race, gender, or age), or determining the best candidate(s) to select in an election within an organization so as to ensure fair representation for groups of agents, are typical examples the social planner encounters frequently. The common feature that arises from many of the social planner's problems is that the optimization problems require input (e.g., agents' locations or agents' candidate preferences) from the strategic agents of various groups. As a result, the social planner faces the difficult challenge of generating group-fair outcomes while taking into account that the self-reported information from the strategic agents may not be truthful. To address this challenge, we propose to consider the approximate mechanism design without money paradigm with group-fair objectives.

In particular, we are interested in ensuring group fairness for groups of agents in settings of approximate mechanism design without money Procaccia:2009aa; Dekel:2008aa; Meir:2008aa; Meir:2009aa, the algorithmic mechanism design paradigm explicitly coined and advocated by Procaccia and Tennenholtz in 2009 Procaccia:2009aa. The paradigm studies the design of strategyproof and approximately optimal mechanisms without payment for game-theoretic optimization problems in which the problems' inputs are (possibly untruthfully) provided by strategic agents and the problems' optimal solutions cannot induce strategyproofness; strategyproofness is achieved by trading off solution optimality. In such a setting, the social planner aims to ensure that the mechanisms' solutions are fair to groups of agents, where payment is not allowed by regulation or design in specific structured domains (i.e., to avoid the classical social choice impossibility results Gibbard:1973aa; Satterthwaite:1975aa; Barbera:1990aa).

As our main case study, we consider the facility location problems (FLPs), the initial case study in the seminal approximate mechanism design without money work Procaccia:2009aa, where the set of agents in the problems have the well-known and studied single-peaked preferences (see e.g., Moulin:1980aa; Barbera:2001aa; Sprumont:1995aa; Schummer:2002aa; Border:1983aa; Barbera:1998aa). In the most basic FLP, each agent has a privately known location on the real line, and the optimization problem is to locate a facility (e.g., a public library, park, or candidate) on the real line to minimize some (e.g., total or max) cost objective of agents to the facility, where an agent's cost is its distance to the facility. Naturally, the goal is to derive a mechanism (without payment) that elicits true location information from the agents and locates a facility to (approximately) optimize the given objective. Such a basic setting is well-understood by now – some mechanisms (e.g., median and endpoint) are strategyproof and approximately optimal for various cost objectives (e.g., total cost and maximum cost), and subsequent works have explored settings with more than one facility Procaccia:2009aa; Lu:2010aa; Fotakis:2014aa; paolo2015heterogeneous; DBLP:conf/aaai/ChenL020; DBLP:conf/aaim/ChenFLD20; DBLP:conf/atal/ZouL15; DBLP:conf/ijcai/Li0Z20, various objectives and settings Sui:2013aa; Sui:2015aa; Filos-Ratsikas:2017aa; Cai:2016aa; feldman2013strategyproof; limei2016newobjective, capacity constraints Aziz:2019aa; Aziz:2020aa, and automated mechanism design Narasimhan:2016aa; Golowich:2018aa. See also survey for a survey on approximate mechanism design without money for facility location problems.

In this paper, we consider the previously unexplored group-fair facility location problems in the approximate mechanism design without money setting. In such a setting, the social planner aims to design strategyproof mechanisms to locate a facility to ensure the costs of groups are fair subject to a given objective. Moreover, we address the following key questions.

(1) How should one define group-fair objectives?
(2) How should one design strategyproof mechanisms to (approximately) optimize a given group-fair objective?

Beyond the initial case study on facility location problems, approximate mechanism design without money has also been applied to settings such as generalized assignment Dughmi:2010aa, voting Alon:2011aa; Feldman:2016aa, fair division Cole:2013aa, classification Meir:2012aa, and scheduling Koutsoupias:2014aa.

### 1.1. Our Contribution

Motivated by the importance of ensuring group fairness among groups of agents, we consider the group-fair FLPs where the set of agents is partitioned into $m$ groups, $G_1,\dots,G_m$, based on some criteria (e.g., gender, race, or age), and aim to design mechanisms that elicit true location information from the agents and locate the facility to serve groups of agents so as to ensure desired forms of group fairness under appropriate group-fair objectives. Although previous works in this area have not considered the notion of group fairness (i.e., with $m \ge 2$ groups), a few works have considered some form of individual fairness (i.e., the maximum cost objective, corresponding to the case where each agent forms its own group) and envy in general (e.g., see Cai:2016aa). We note that there is a line of works (see e.g., mcallister1976equity; marsh1994equity; Mulligan:1991ug) on the optimization version of FLPs that considers various group fairness objectives, which we will consider and elaborate on later in the paper (see Sections 2.2 and 3.1).

We first consider several natural group-fair (cost) objectives that are motivated by existing algorithmic fairness and FLP studies (see e.g., marsh1994equity). We then introduce the objectives that capture intergroup and intragroup fairness (IIF) for agents that are within each group, which is an important characteristic to be considered in the social science domain when studying fairness in group dynamics (see e.g., haslam2014social). Our results are summarized in Table 1.

• We show that not every group-fair objective has a bounded approximation ratio.

• For the group-fair objective that aims to minimize the maximum total group cost, we show that the classic median mechanism Procaccia:2009aa is strategyproof with an approximation ratio of $m$, the number of groups; this contrasts with the classical setting of $m=1$, where the median mechanism is optimal. We then propose a new strategyproof mechanism that leverages group information and obtains an improved approximation ratio of 3. We complement our result by providing a lower bound of 2 for this objective.

• For the group-fair objective of minimizing the maximum average group cost, we show that the median mechanism is strategyproof and has an approximation ratio of 3. We also provide a lower bound of 2 for this objective. We further design a randomized mechanism with an approximation ratio of 2 and provide a randomized lower bound of 1.5.

• We introduce the notion of intergroup and intragroup fairness (IIF) that considers group fairness both among the groups and within each group. We consider two objectives based on the IIF concept: the first treats these two fairness measures as two separate indicators, while the second combines them into one indicator for each group. For both of them, we establish tight upper bounds (from the median mechanism) and matching lower bounds. It is interesting to see that when one considers only intragroup fairness, only an additive approximation is obtainable Cai:2016aa; when we combine it with intergroup fairness, we can obtain a multiplicative approximation.

We note that while our proposed mechanisms are simple, rigorous non-trivial proofs are necessary to derive the approximation ratios. As our bounds (Table 1) have small gaps for objectives under the simple mechanisms, we did not explore more complicated mechanisms beyond the randomized one. In fact, the results are quite positive as we demonstrate the efficiency of simple mechanisms.

### 1.2. Related Work

To our best knowledge, group-fair facility location problems, where groups of agents are explicitly modeled with group-fair objectives, have not been studied before in the context of approximate mechanism design without money. In this section, we highlight works that consider notions of fairness in mechanism design settings, group fairness in fair division settings, and general diversity constraints in optimization context and applications related to our work.

#### Fair Division.

In fair division, one wishes to allocate a set of (indivisible or divisible) resources/items to a set of agents. Each agent has a utility function specifying the value for each subset of items. The goal is to ensure (1) the allocation is fair for each agent or (2) each agent does not “envy” other agents. There are measures for both (1) (e.g., proportionality and maximin share) and (2) (e.g., envy-free and its relaxations) Walsh:2020aa; Klamler:2010aa. The group envy-free concepts (see e.g., Berliant:1992aa; Varian:1974aa; Schmeidler:1972aa; Husseinov:2011aa; Conitzer:2019aa; Suksompong:2018aa; Segal-Halevi:2018aa) seek to ensure each possible pair of groups of agents (resp. each group of agents) do not envy in terms of redistribution of items from other groups (resp. other groups of agents without redistribution). These works do not consider the mechanism design settings. From the mechanism design perspective, previous works have considered fair allocation without group fairness optimization objectives for divisible and indivisible items under general welfare objectives or individual fairness objectives (see e.g., Chen:2013aa; Bei:2020aa; Bei:2017aa; Barman:2019aa; Cole:2013aa; Maya:2012aa; Amanatidis:2016aa; Aumann:2016aa; Mossel:2010aa; Sinha:2015aa).

#### Applications with Diversity Objectives or Constraints.

Optimization problems with diversity objectives or constraints have been considered in various applications such as influence maximization (see e.g., Ali:2019aa; Tsang:2019aa; Farnad:2020aa), voting (see e.g., Endriss:2017aa; Bredereck:2018aa), fair division (see e.g., Conitzer:2019aa; Fain:2018aa; Segal-Halevi:2018aa; Todo:2011aa; Barman:2018aa; Suksompong:2018aa), matching (see e.g., Ahmed:2017aa; Benabbou:2019aa), and scarce resource allocations (e.g., kidneys and homeless youth housing Bertsimas:2013aa; Azizi:2018aa; Patel:2020aa). We note that there are fairness notions for the classic stable matching problems/algorithms Gale:1962aa; Iwama:2008aa; Dubins:1981aa; Huang:2006aa (e.g., Rawlsian justice - the algorithmic ordering of the agents MASARANI:1989aa; Pini:2011aa; Romero-Medina:2001aa, procedural justice - via uniform lottery Klaus:2006aa, and sex-fair Nakamura:1995aa). Other stable matching work considers upper and/or lower quotas on each type/group and other diversity and ratio constraints in the matching settings (see e.g., Abdulkadiroglu:2005aa; Bo:2016aa; Ehlers:2014aa; Agoston:2018aa; Fragiadakis:2017aa; Hafalir:2013aa; Kominers:2013aa; Gonczarowski:2019aa; Yahiro:2020aa; Nguyen:2017aa) without group fairness optimization objectives.

#### Facility Location Problems.

We will elaborate below on the facility location problems (FLPs) most related to ours that consider some forms of individual fairness and envy (i.e., a special case of group fairness). From the optimization perspective, early works (see e.g., mcallister1976equity; marsh1994equity; Mulligan:1991ug) in facility location problems have examined objectives that quantify various equity notions. For instance, marsh1994equity considers a group-fairness objective (i.e., the Center objective in marsh1994equity) that is equivalent to our mtgc objective and similar to our magc objective. However, these works do not consider FLPs from the mechanism design without money perspective.

The seminal work of approximate mechanism design without money in FLPs Procaccia:2009aa considers the design of strategyproof mechanisms that approximately minimize certain cost objectives such as total cost or maximum cost. An individual fair objective considered in Procaccia:2009aa is the maximum cost objective, which aims to minimize the cost of the agent farthest from the facility (i.e., $\max_i c(y, x_i)$). For the maximum cost objective, Procaccia:2009aa establishes tight upper and lower bounds of $2$ for deterministic mechanisms and $3/2$ for randomized mechanisms. However, applying these mechanisms directly to some of our objectives, such as mtgc and the randomized setting of magc, would yield worse approximation ratios.

In terms of envy, there are envy notions such as minimax envy Cai:2016aa; DBLP:conf/aaim/ChenFLD20, which aims to minimize the (normalized) maximum difference between any two agents’ costs, and envy ratio DBLP:conf/aaim/LiuDCFN20; ding2020facility, which aims to minimize the maximum over the ratios between any two agents’ utilities. Other works and variations on facility location problems can be found in a recent survey survey.

## 2. Preliminary

In this section, we define group-fair facility location problems and consider several group-fair social objectives. We then show that some of these objectives have unbounded approximation ratios.

### 2.1. Group-Fair Facility Location Problems

Let $N=\{1,\dots,n\}$ be a set of agents on the real line and $\{G_1,\dots,G_m\}$ be the set of (disjoint) groups of the agents. Each agent $i$ has the profile $(x_i, g_i)$, where $x_i \in \mathbb{R}$ is the location reported by the agent and $g_i \in \{1,\dots,m\}$ is the group membership of agent $i$. We use $|G_j|$ to denote the number of the agents in group $G_j$. Without loss of generality, we assume that $x_1 \le x_2 \le \dots \le x_n$. A profile $r$ is a collection of the location and group information. A deterministic mechanism is a function $f$ which maps a profile $r$ to a facility location $y \in \mathbb{R}$. A randomized mechanism is a function $f$ which maps a profile $r$ to a distribution $P \in \Delta(\mathbb{R})$ over facility locations, where $\Delta(\mathbb{R})$ is the set of probability distributions over $\mathbb{R}$. Let $d(a,b)=|a-b|$ be the distance between two points $a,b \in \mathbb{R}$. Naturally, given a deterministic (or randomized) mechanism $f$ and the profile $r$, the cost of agent $i$ is defined as $c(f(r),x_i)=d(f(r),x_i)$ (or the expected distance $\mathbb{E}_{y \sim f(r)}[d(y,x_i)]$).

Our goal is to design mechanisms that enforce truthfulness while (approximately) optimizing an objective function.

###### Definition 1.

A mechanism $f$ is strategyproof (SP) if and only if an agent can never benefit by reporting a false location. More formally, given any profile $r$, let $(x_i', r_{-i})$ be the profile with a false location $x_i'$ reported by agent $i$, where $r_{-i}$ is the profile reported by all agents except agent $i$. We have $c(f(r), x_i) \le c(f(x_i', r_{-i}), x_i)$.

In the following, we discuss several group-fair social objectives that model some form of equity. We invoke the legal notions of disparate treatment Barocas:2016aa; Zimmer:1996aa and disparate impact Barocas:2016aa; Rutherglen:1987aa, the optimization version of FLPs mcallister1976equity; marsh1994equity; Mulligan:1991ug, and recent studies in optimization problems Tsang:2019aa; Celis:2018aa to derive and motivate the following fairness objectives.

#### Group-fair Cost Objectives.

We consider defining the group cost from two perspectives. One is the total group cost, which is the sum of the costs of all the group members. Through this objective, we hope to ensure that each group as a whole is not too far from the facility. Hence, our first group-fair social objective is to minimize the maximum total group cost (mtgc). More specifically, given a true profile $r$ and a facility location $y$,

$$\mathrm{mtgc}(y,r)=\max_{1\le j\le m}\Big\{\sum_{i\in G_j}c(y,x_i)\Big\}.$$

Our second group-fair objective is to minimize the maximum average group cost (magc). Therefore, we have

$$\mathrm{magc}(y,r)=\max_{1\le j\le m}\Big\{\frac{\sum_{i\in G_j}c(y,x_i)}{|G_j|}\Big\},$$

and we hope to ensure that each group, on average, is not too far from the facility. We measure the performance of a mechanism $f$ by comparing the objective value that $f$ achieves with the objective value achieved by the optimal solution. If there exists a number $\rho \ge 1$ such that, for any profile $r$, the objective value of the output of $f$ is within $\rho$ times the objective value achieved by the optimal solution, then we say the approximation ratio of $f$ is $\rho$.
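To make the two objectives concrete, the following sketch (our illustration, not part of the original formulation) evaluates mtgc and magc for a profile given as per-group location lists, and brute-forces a near-optimal facility location on a grid as a stand-in for the closed-form optima used later:

```python
def mtgc(y, groups):
    """Maximum total group cost: max over groups of the summed distances to y."""
    return max(sum(abs(y - x) for x in g) for g in groups)

def magc(y, groups):
    """Maximum average group cost: max over groups of the mean distance to y."""
    return max(sum(abs(y - x) for x in g) / len(g) for g in groups)

def approx_optimum(obj, groups, steps=10**5):
    """Grid search for the facility minimizing obj; the optimum of either
    objective always lies between the extreme agent locations."""
    lo = min(x for g in groups for x in g)
    hi = max(x for g in groups for x in g)
    pts = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(pts, key=lambda y: obj(y, groups))

groups = [[0.0], [0.0, 1.0]]   # G1 = {0}, G2 = {0, 1}
print(mtgc(0.5, groups))       # G2's total 0.5 + 0.5 dominates -> 1.0
print(magc(0.5, groups))       # both group averages equal 0.5 -> 0.5
```

The grid search is only for illustration; on the line, candidate optima can be restricted to agent locations and balance points.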

### 2.2. Alternative Group-Fair Social Objectives

In addition to the objectives defined earlier, we can also consider the following natural objectives for group-fair facility location problems:

$$\text{(a)}\quad\max_{1\le j\le m}\{h_j\}-\min_{1\le j\le m}\{h_j\}\qquad\text{and}\qquad\text{(b)}\quad\frac{\max_{1\le j\le m}\{h_j\}}{\min_{1\le j\le m}\{h_j\}},$$

where $h_j$ is a function of the facility location $y$ and group $G_j$ that can be (i) the total group cost $\sum_{i\in G_j}c(y,x_i)$, (ii) the average group cost $\sum_{i\in G_j}c(y,x_i)/|G_j|$, or (iii) the maximum group cost $\max_{i\in G_j}c(y,x_i)$, which implies that each of (a) and (b) has three different group-fair objectives. In general, (a) and (b) capture the disparity between groups in terms of difference and ratio, respectively, under the desirable $h_j$. Objectives of type (a) can be seen as group envy extended from individual envy works Cai:2016aa; DBLP:conf/aaim/ChenFLD20, and type (a) with function (i) is exactly measure (7) in marsh1994equity. Objectives of type (b) can be seen as a group envy ratio extended from previous individual envy ratio studies DBLP:conf/aaim/LiuDCFN20; ding2020facility.

Surprisingly, we show that any deterministic strategyproof mechanism for these objectives has an unbounded approximation ratio.

###### Theorem 2.

Any deterministic strategyproof mechanism does not have a finite approximation ratio for minimizing the three different objectives (i), (ii), and (iii) under (a).

###### Proof.

We prove this theorem by contradiction. Assume that there exists a deterministic strategyproof mechanism $f$ with a finite approximation ratio for those objective functions. Consider a profile $r$ with one agent in group $G_1$ at $0$ and one agent in group $G_2$ at $1$. The optimal location is $1/2$ for all of (i), (ii), (iii) under (a), and all of their objective values are $0$. Therefore, $f$ has to output $1/2$; otherwise the objective value for $f$ is not equal to $0$, and then the approximation ratio is a non-zero number divided by zero, which is unbounded.

Then consider another profile $r'$ with one agent in group $G_1$ at $0$ and one agent in group $G_2$ at $1/2$. The optimal location is $1/4$ for all of (i), (ii), (iii) under (a), and all of their objective values are $0$, so $f$ has to output $1/4$. In that case, given the profile $r'$, the agent at $1/2$ can benefit by misreporting to $1$, thus moving the facility location from $1/4$ to $1/2$. This is a contradiction to the strategyproofness of $f$. ∎

###### Theorem 3.

Any deterministic strategyproof mechanism does not have a finite approximation ratio for minimizing the three different objectives (i), (ii), and (iii) under (b).

###### Proof.

We reuse the construction from the proof of Theorem 2 and prove this theorem by contradiction. Assume that there exists a deterministic strategyproof mechanism $f$ with a finite approximation ratio. Consider a profile $r$ with one agent in group $G_1$ at $0$ and one agent in group $G_2$ at $1$, and let $f(r)=c$. Note that $c\notin\{0,1\}$, since otherwise one group's cost is $0$, making all of (i), (ii), (iii) under (b) unbounded while the optimal objective value (at $1/2$) is $1$.

Then consider another profile $r'$ with one agent in group $G_1$ at $0$ and one agent in group $G_2$ at $c$. Then $f$ can output any location except $0$ and $c$; otherwise all of (i), (ii), (iii) under (b) are unbounded while the optimal objective value is finite, and then the approximation ratio is unbounded. In that case, given the profile $r'$, the agent in group $G_2$ can benefit by misreporting to $1$, which moves the facility to exactly $c$, its true location, in contradiction to strategyproofness. ∎

Notice that marsh1994equity considers 20 group-fair objectives in total. However, using a similar technique as in the proof of Theorem 2, we can show that any deterministic strategyproof mechanism does not have a finite approximation ratio for all of the objectives mentioned in marsh1994equity except measure (1), which is our mtgc objective (we will explore this objective in Section 3.1). The main reason is that for the other objectives in marsh1994equity containing a difference form, such as one group cost minus another group cost (e.g., objective type (a) with function (i) mentioned earlier), we can easily construct profiles similar to those in the proof of Theorem 2 where the optimal objective value is $0$.

## 3. Total and Average Group Cost

In this section, we consider two social objectives, maximum total group cost (mtgc) and maximum average group cost (magc).

### 3.1. Maximum Total Group Cost

We begin with the standard median mechanism considered by Procaccia:2009aa.

###### Mechanism 1.

Given a profile $r$, put the facility at the median of the reported locations $x_1 \le \dots \le x_n$ (choosing the left median when $n$ is even).

###### Theorem 4.

Mechanism 1 is strategyproof and has an approximation ratio of $m$ for minimizing mtgc.

###### Proof.

We can use a similar argument as in the classic facility location problems Procaccia:2009aa to show that Mechanism 1 is strategyproof. Below, we derive its approximation ratio for the mtgc objective.

Let $y$ be the output of Mechanism 1. We observe that

$$\mathrm{mtgc}(y,r)=\max_{1\le j\le m}\Big\{\sum_{i\in G_j}c(y,x_i)\Big\}\le\sum_{j=1}^{m}\sum_{i\in G_j}c(y,x_i)=\sum_{i=1}^{n}c(y,x_i).$$

Let $y^*$ be the optimal facility location. We have that

$$\sum_{j=1}^{m}\sum_{i\in G_j}c(y^*,x_i)=\sum_{i=1}^{n}c(y^*,x_i)\ge\sum_{i=1}^{n}c(y,x_i),$$

since $y$ is the median point, which is optimal for the total cost. Then we can obtain

$$\mathrm{mtgc}(y^*,r)=\max_{1\le j\le m}\Big\{\sum_{i\in G_j}c(y^*,x_i)\Big\}\ge\frac{\sum_{i=1}^{n}c(y^*,x_i)}{m}\ge\frac{\sum_{i=1}^{n}c(y,x_i)}{m}.$$

Thus, the ratio is

$$\frac{\mathrm{mtgc}(y,r)}{\mathrm{mtgc}(y^*,r)}\le\frac{\sum_{i=1}^{n}c(y,x_i)}{\sum_{i=1}^{n}c(y,x_i)/m}=m.$$

The following example shows that the bound given in Theorem 4 is tight.

###### Example 5.

Consider the setting (see Figure 2) with $m-1$ agents, all from different groups, at $0$, and $m-1$ agents in group $G_m$ at $1$. Mechanism 1 puts the facility at $0$, achieving an mtgc of $m-1$, but the optimal solution puts the facility at $(m-1)/m$, achieving an mtgc of $(m-1)/m$.
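The linear dependence on the number of groups can be checked numerically. The sketch below (our illustration) implements the median mechanism with left tie-breaking and evaluates it on the instance family with $m-1$ singleton groups at $0$ and one group of $m-1$ agents at $1$:

```python
def median_mechanism(groups):
    """Mechanism 1: place the facility at the (left) median of all reports."""
    xs = sorted(x for g in groups for x in g)
    return xs[(len(xs) - 1) // 2]   # left median for an even number of agents

def mtgc(y, groups):
    """Maximum total group cost at facility location y."""
    return max(sum(abs(y - x) for x in g) for g in groups)

m = 5
# m-1 singleton groups at 0, and one group G_m with m-1 agents at 1.
groups = [[0.0] for _ in range(m - 1)] + [[1.0] * (m - 1)]

y = median_mechanism(groups)                      # zeros win the left median
ratio = mtgc(y, groups) / mtgc((m - 1) / m, groups)
print(y, ratio)                                   # 0.0 5.0
```

Here the comparison point $(m-1)/m$ is the optimal location for this family, so the printed ratio equals $m$.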

If there are many groups, Mechanism 1 has a large approximation ratio. In order to derive a strategyproof mechanism with a better performance/approximation ratio, we can leverage group information and favor the group with more members. Therefore, we propose the following new group-based mechanism which has not been considered before.

###### Mechanism 2.

Let $g=\arg\max_{j}|G_j|$ (break ties by choosing the smallest index). Put the facility at the median of group $G_g$ (choosing the left median if the group has an even number of agents).
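Mechanism 2 can be transcribed directly; in this sketch (ours), ties in group size are broken toward the smallest index, and the left median is used within the chosen group:

```python
def mechanism2(groups):
    """Mechanism 2: facility at the left median of a largest group,
    with ties in group size broken toward the smallest group index."""
    g = max(range(len(groups)), key=lambda j: (len(groups[j]), -j))
    members = sorted(groups[g])
    return members[(len(members) - 1) // 2]   # left median

# The largest group is G_2 = [1, 2, 3]; its median is 2.
print(mechanism2([[5.0], [1.0, 2.0, 3.0]]))   # 2.0
```

Note that the mechanism ignores the reports of every group except the largest one, which is exactly what makes the strategyproofness argument go through.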

###### Theorem 6.

Mechanism 2 is strategyproof and has an approximation ratio of 3 for minimizing mtgc.

###### Proof.

Given any profile $r$, let $y$ be the output of Mechanism 2 and $y^*$ be the optimal location. Without loss of generality, we assume that $y \le y^*$ and that $\mathrm{mtgc}(y,r)$ is achieved by group $G_{g'}$. Then from

$$\mathrm{mtgc}(y^*,r)=\max_{j}\Big\{\sum_{i\in G_j}c(y^*,x_i)\Big\}\ge\sum_{i\in G_{g'}}c(y^*,x_i),$$

we have

$$\mathrm{mtgc}(y,r)-\mathrm{mtgc}(y^*,r)\le\sum_{i\in G_{g'}}c(y,x_i)-\sum_{i\in G_{g'}}c(y^*,x_i)=\sum_{i\in G_{g'}}\big(|y-x_i|-|y^*-x_i|\big)\le|G_{g'}|(y^*-y),$$

where the last inequality follows from the triangle inequality. Therefore, we have

$$\mathrm{mtgc}(y,r)\le\mathrm{mtgc}(y^*,r)+|G_{g'}|(y^*-y)\le\mathrm{mtgc}(y^*,r)+|G_g|(y^*-y),$$

since $G_g$ is a largest group. Moreover, $\mathrm{mtgc}(y^*,r)\ge\frac{1}{2}|G_g|(y^*-y)$, because $y$ is the median of $G_g$ and hence at least $|G_g|/2$ agents of group $G_g$ are on the left side of $y$, each of which has cost at least $y^*-y$ at $y^*$. Then we have the approximation ratio

$$\rho=\frac{\mathrm{mtgc}(y,r)}{\mathrm{mtgc}(y^*,r)}\le\frac{\mathrm{mtgc}(y^*,r)+|G_g|(y^*-y)}{\mathrm{mtgc}(y^*,r)}\le\frac{\frac{1}{2}|G_g|(y^*-y)+|G_g|(y^*-y)}{\frac{1}{2}|G_g|(y^*-y)}=3.$$

The following example shows that the bound given in Theorem 6 is tight.

###### Example 7.

Consider the setting (see Figure 3) with one agent in group $G_1$ at $0$, one agent in group $G_1$ at $2/3$, and two agents in group $G_2$ at $1$. The groups have equal size, so Mechanism 2 breaks the tie toward $G_1$ and puts the facility at $0$ (the left median of $G_1$), achieving an mtgc of $2$, but the optimal solution puts the facility at $2/3$, achieving an mtgc of $2/3$.

###### Theorem 8.

Any deterministic strategyproof mechanism has an approximation ratio of at least $2$ for minimizing mtgc.

###### Proof.

Assume that there exists a strategyproof mechanism $f$ whose approximation ratio is smaller than $2$. Consider a profile $r$ with one agent in group $G_1$ at $0$ and one agent in group $G_2$ at $1$, and let $f(r)=c$. Note that $c\in(0,1)$; otherwise some group's cost is at least $1$ while the optimal maximum total group cost is $1/2$, giving a ratio of at least $2$. Now, consider the profile $r'$ with one agent in group $G_1$ at $0$ and one agent in group $G_2$ at $c$. The optimal solution puts the facility at $c/2$, which has a maximum total group cost of $c/2$. If the mechanism is to achieve an approximation ratio better than 2, the facility must be placed in $(0,c)$. In that case, given the profile $r'$, the agent in group $G_2$ can benefit by misreporting to $1$, thus moving the facility to $c$, its true location, in contradiction to strategyproofness. ∎

### 3.2. Maximum Average Group Cost

In contrast to the maximum total group cost objective, we show that the median mechanism provides a much better approximation ratio for the maximum average group cost objective.

###### Theorem 9.

Mechanism 1 is strategyproof and has an approximation ratio of $3$ for minimizing magc.

###### Proof.

The strategyproofness proof is the same as that in Theorem 4. Next, we derive the approximation ratio of the mechanism. Given any profile $r$, let $y$ be the output of Mechanism 1 and $y^*$ be the optimal location. Without loss of generality, we assume that $y \le y^*$ and that $\mathrm{magc}(y,r)$ is achieved by group $G_{g'}$. Then by

$$\mathrm{magc}(y^*,r)\ge\frac{\sum_{i\in G_{g'}}c(y^*,x_i)}{|G_{g'}|},$$

we have

$$\mathrm{magc}(y,r)-\mathrm{magc}(y^*,r)\le\frac{\sum_{i\in G_{g'}}c(y,x_i)}{|G_{g'}|}-\frac{\sum_{i\in G_{g'}}c(y^*,x_i)}{|G_{g'}|}=\frac{\sum_{i\in G_{g'}}\big(|y-x_i|-|y^*-x_i|\big)}{|G_{g'}|}\le y^*-y.$$

Therefore, $\mathrm{magc}(y,r)\le\mathrm{magc}(y^*,r)+(y^*-y)$, and we also have $\mathrm{magc}(y^*,r)\ge\frac{1}{2}(y^*-y)$, because $y$ is the median of all the agents, so there is at least one group with at least half of its members on the left-hand side of $y$. Then we have the approximation ratio

$$\rho=\frac{\mathrm{magc}(y,r)}{\mathrm{magc}(y^*,r)}\le\frac{\mathrm{magc}(y^*,r)+(y^*-y)}{\mathrm{magc}(y^*,r)}\le\frac{\frac{1}{2}(y^*-y)+(y^*-y)}{\frac{1}{2}(y^*-y)}=3.$$

We can also use a similar argument as in the proof of Theorem 9 to show that Mechanism 2 achieves an upper bound of $3$ for magc, since for any profile $r$, at least half of the members of the largest group are on the left-hand side of the output of Mechanism 2.

###### Theorem 10.

Any deterministic strategyproof mechanism has an approximation ratio of at least $2$ for minimizing magc.

###### Proof.

We can reuse the proof of Theorem 8 to show this lower bound since if there is only one agent in each group, the maximum average group cost is equal to the maximum total group cost. ∎

Next, we investigate whether we can improve the approximation ratio using randomized mechanisms. Recall that the classic work of Procaccia:2009aa proposes a randomized mechanism for the maximum cost objective that puts the facility at the leftmost and rightmost agent locations, each with probability $1/4$, and at their midpoint with probability $1/2$. However, this mechanism can perform poorly for the magc objective. Consider an example with one agent at $0$, $n-2$ agents at $1/2$, and one agent at $1$, with all agents in the same group. The classic mechanism achieves an expected magc of $\frac{n+2}{4n}$ while the optimal objective value is $\frac{1}{n}$, implying the approximation ratio is at least $\frac{n+2}{4}$. In the following, we consider a randomized mechanism that modifies the abovementioned randomized mechanism to obtain a better approximation ratio.

Let $M$ be the set of the median agents of all groups (choosing the left one if a group has more than one median agent), and let $x_{ml}$ and $x_{mr}$ be the leftmost and rightmost locations of the agents in $M$. From the definition of $M$, we know that for any profile, the optimal solution is in $[x_{ml},x_{mr}]$, since all the group average costs increase when moving from $x_{ml}$ to the left and from $x_{mr}$ to the right. Then putting the facility outside the interval $[x_{ml},x_{mr}]$ with any positive probability will only hurt the mechanism's performance. Therefore, it would be better to design a randomized mechanism that only puts the facility in $[x_{ml},x_{mr}]$. Based on this fact, we use the same probabilities as in the randomized mechanism given in Procaccia:2009aa, but use $x_{ml}$ and $x_{mr}$ instead of the endpoints, to give the following mechanism.

###### Mechanism 3.

Put the facility at $x_{ml}$ with probability $1/4$, at $x_{mr}$ with probability $1/4$, and at $\frac{x_{ml}+x_{mr}}{2}$ with probability $1/2$.
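Mechanism 3 is a distribution over at most three points. The sketch below (our illustration) builds that distribution from the group medians and computes the resulting expected magc; coinciding support points are merged so that the probabilities always sum to one:

```python
def group_median(g):
    """Left median of a single group's locations."""
    s = sorted(g)
    return s[(len(s) - 1) // 2]

def magc(y, groups):
    """Maximum average group cost at facility location y."""
    return max(sum(abs(y - x) for x in g) / len(g) for g in groups)

def mechanism3(groups):
    """Mechanism 3: 1/4 on the leftmost group median, 1/4 on the rightmost,
    and 1/2 on their midpoint, returned as {location: probability}."""
    medians = [group_median(g) for g in groups]
    ml, mr = min(medians), max(medians)
    dist = {}
    for y, p in [(ml, 0.25), (mr, 0.25), ((ml + mr) / 2, 0.5)]:
        dist[y] = dist.get(y, 0.0) + p   # merge coinciding support points
    return dist

def expected_magc(dist, groups):
    return sum(p * magc(y, groups) for y, p in dist.items())

groups = [[0.0, 0.0], [1.0, 1.0]]        # group medians 0 and 1
dist = mechanism3(groups)
print(expected_magc(dist, groups))        # 0.25*1 + 0.25*1 + 0.5*0.5 = 0.75
```

When all group medians coincide, the distribution degenerates to a single point, matching the case analysis in the proof of Theorem 11.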

###### Theorem 11.

Mechanism 3 is strategyproof and has an approximation ratio of 2 for minimizing magc.

###### Proof.

By definition we have . First, we consider the agents whose locations are outside of . They can only change the facility location by misreporting their locations to the other side of their own group’s median point, which might further make one of and (together with the midpoint) move farther away. Therefore, they have no incentive to misreport.

For agents whose locations are in the interval , they can only change the facility location by misreporting their locations to the other side of their own group’s median point, which might further make either or move farther away. But the midpoint of two medians may be closer to them. Suppose that the median point moves by after an agent misreports and we get a new facility location distribution . Then the midpoint of two medians will move by and the cost of the misreporting agent satisfies Therefore, they have no incentive to misreport their locations. Hence, Mechanism 3 is strategyproof. Then we show the approximation ratio. Given any profile , if , our mechanism only puts the facility at , which is the optimal location. Therefore, we only consider the case where . Without loss of generality, we suppose that and let be the optimal solution.

Recall that $y^*\in[x_{m_l},x_{m_r}]$. If $\mathrm{magc}(x_{m_l},r)$ is achieved by group $G_p$, then since $c(x_i,x_{m_l})\le c(x_i,y^*)+(y^*-x_{m_l})$ for every $i\in G_p$, we have

$$\mathrm{magc}(x_{m_l},r)\le\frac{\sum_{i\in G_p}c(x_i,y^*)}{|G_p|}+(y^*-x_{m_l})\le\mathrm{magc}(y^*,r)+(y^*-x_{m_l}).$$

Similarly, if $\mathrm{magc}\big(\frac{x_{m_l}+x_{m_r}}{2},r\big)$ is achieved by group $G_q$ and $\mathrm{magc}(x_{m_r},r)$ is achieved by group $G_s$, we have

$$\mathrm{magc}\Big(\frac{x_{m_l}+x_{m_r}}{2},r\Big)\le\frac{\sum_{i\in G_q}c(x_i,y^*)}{|G_q|}+\Big(\frac{x_{m_l}+x_{m_r}}{2}-y^*\Big)\le\mathrm{magc}(y^*,r)+\Big(\frac{x_{m_l}+x_{m_r}}{2}-y^*\Big),$$

and

$$\mathrm{magc}(x_{m_r},r)\le\frac{\sum_{i\in G_s}c(x_i,y^*)}{|G_s|}+(x_{m_r}-y^*)\le\mathrm{magc}(y^*,r)+(x_{m_r}-y^*).$$

Therefore, the approximation ratio is

$$\begin{aligned}\rho&=\frac{\frac14\,\mathrm{magc}(x_{m_l},r)+\frac12\,\mathrm{magc}\big(\frac{x_{m_l}+x_{m_r}}{2},r\big)+\frac14\,\mathrm{magc}(x_{m_r},r)}{\mathrm{magc}(y^*,r)}\\[2pt]&\le\frac{\mathrm{magc}(y^*,r)+\frac14(y^*-x_{m_l})+\frac12\big(\frac{x_{m_l}+x_{m_r}}{2}-y^*\big)+\frac14(x_{m_r}-y^*)}{\mathrm{magc}(y^*,r)}=1+\frac{\frac12(x_{m_r}-y^*)}{\mathrm{magc}(y^*,r)}.\end{aligned}$$

Furthermore, if the median agent at $x_{m_r}$ belongs to group $G_R$, then at least $|G_R|/2$ agents of group $G_R$ lie in $[x_{m_r},+\infty)$, since $x_{m_r}$ is $G_R$'s median agent. Therefore, we have

$$\mathrm{magc}(y^*,r)\ge\sum_{i\in G_R,\,x_i\ge x_{m_r}}\frac{x_{m_r}-y^*}{|G_R|}\ge\frac{x_{m_r}-y^*}{2}.$$

Thus, the approximation ratio is

$$\rho=1+\frac{\frac12(x_{m_r}-y^*)}{\mathrm{magc}(y^*,r)}\le1+\frac{\frac12(x_{m_r}-y^*)}{\frac12(x_{m_r}-y^*)}=2.$$
∎
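As an informal sanity check of Theorem 11 (our own experiment, with an assumed random instance distribution), one can compare Mechanism 3's expected magc against a grid approximation of the optimum; since the grid minimum is at least the true optimum, the theorem implies every observed ratio is at most 2:

```python
import random

def left_median(points):
    s = sorted(points)
    return s[(len(s) - 1) // 2]

def magc(groups, y):
    return max(sum(abs(x - y) for x in g) / len(g) for g in groups)

def mechanism3_expected_magc(groups):
    medians = [left_median(g) for g in groups]
    x_ml, x_mr = min(medians), max(medians)
    return (0.25 * magc(groups, x_ml) + 0.25 * magc(groups, x_mr)
            + 0.5 * magc(groups, (x_ml + x_mr) / 2))

def empirical_ratios(trials=100, seed=1):
    """Ratio of Mechanism 3's expected magc to a grid approximation of the
    optimum on random instances with agents in [0, 1]."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        groups = [[rng.uniform(0, 1) for _ in range(rng.randint(1, 6))]
                  for _ in range(rng.randint(2, 4))]
        # magc increases outside the agents' range, so the optimum is in [0, 1];
        # the grid minimum is >= the true optimum, hence each ratio <= 2.
        grid_opt = min(magc(groups, i / 1000) for i in range(1001))
        ratios.append(mechanism3_expected_magc(groups) / grid_opt)
    return ratios
```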

The following example shows that the bounds given in Theorem 9 and Theorem 11 are tight.

###### Example 12.

Consider the setting (see Figure 4) with agents in group $G_1$ at $0$, agents in group $G_2$ at an interior point of $[0,1]$, and one agent in group $G_3$ at $1$. Mechanism 1 puts the facility at the median location, while Mechanism 3 puts the facility at $0$ with probability $1/4$, at $1/2$ with probability $1/2$, and at $1$ with probability $1/4$; the optimal solution places the facility strictly between these points, and comparing the resulting magc values shows that the ratios of Theorem 9 and Theorem 11 are attained as the group sizes grow.
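To see how the factor 2 of Theorem 11 materializes concretely, the following is a two-group instance of our own construction (assumed parameters, not necessarily the exact profile of Figure 4) whose ratio under Mechanism 3 approaches 2 as $k$ grows:

```python
def magc(groups, y):
    """Maximum average group cost of placing the facility at y."""
    return max(sum(abs(x - y) for x in g) / len(g) for g in groups)

k = 50
G1 = [-1.0] * k + [0.0] * (k + 1)   # left median of G1 is 0
G2 = [0.0] * k + [1.0] * (k + 1)    # left median of G2 is 1
groups = [G1, G2]

# Mechanism 3 support: x_ml = 0, x_mr = 1, midpoint 1/2.
expected = 0.25 * magc(groups, 0.0) + 0.5 * magc(groups, 0.5) + 0.25 * magc(groups, 1.0)

# The optimum lies in [0, 1] (magc increases outside the agents' range);
# approximate it with a fine grid, whose minimum is >= the true optimum.
grid_opt = min(magc(groups, i / 10000) for i in range(10001))

ratio = expected / grid_opt  # about 1.976 for k = 50, approaching 2
```

Here the optimum sits just right of $0$ (around $y^*=1/102$), where both group averages are close to $1/2$, while the mechanism's support points at $1/2$ and $1$ are expensive for $G_1$.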

###### Theorem 13.

There does not exist any strategyproof randomized mechanism with an approximation ratio less than 3/2 for minimizing magc.

###### Proof.

Consider a profile with one agent in $G_1$ at $0$ and one agent in $G_2$ at $1$. In this case, the maximum average group cost is equivalent to the maximum cost, and we can use a similar argument to that of Procaccia:2009aa to show the lower bound. ∎

## 4. Intergroup and Intragroup Fairness

In this section, we investigate Intergroup and Intragroup Fairness (IIF), which captures not only fairness between groups but also fairness within each group. IIF is an important characteristic to consider in the social science domain when studying fairness in group dynamics (see, e.g., haslam2014social).

To facilitate our discussion, let $\mathrm{avgc}(r,G_j,y)$ be the average cost of agents, $\mathrm{maxc}(r,G_j,y)$ be the maximum cost among agents, and $\mathrm{minc}(r,G_j,y)$ be the minimum cost among agents in group $G_j$ under profile $r$ with facility location $y$. We define below new group-fair IIF social objectives which measure both intergroup and intragroup fairness:

$$\mathrm{IIF}_1(y,r)=\max_{1\le j\le m}\{\mathrm{avgc}(r,G_j,y)\}+\max_{1\le j\le m}\{\mathrm{maxc}(r,G_j,y)-\mathrm{minc}(r,G_j,y)\},$$
$$\mathrm{IIF}_2(y,r)=\max_{1\le j\le m}\{\mathrm{avgc}(r,G_j,y)+\mathrm{maxc}(r,G_j,y)-\mathrm{minc}(r,G_j,y)\}.$$

Using $\mathrm{maxc}(r,G_j,y)-\mathrm{minc}(r,G_j,y)$ to measure intragroup fairness is well justified, since this is the max-envy considered for a single group in Cai:2016aa. For $\mathrm{IIF}_1$, the intergroup fairness and the intragroup fairness are two separate indicators that may be achieved by different groups, while for $\mathrm{IIF}_2$, we combine these two as one indicator for each group.

The reason we do not use the total group cost in this combined measure is that when the group size is large, the total cost is large, whereas the maximum cost minus the minimum cost remains on the scale of a single agent's cost. The total cost would then play the major role and intragroup fairness would be diluted, which goes against the purpose of the combined fairness measure. Moreover, since the average group cost and the max-envy take values in the same range, we combine them directly without normalization.
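A direct transcription of the two objectives (names ours) also makes the difference explicit: $\mathrm{IIF}_1$ takes the two maxima separately, while $\mathrm{IIF}_2$ takes one maximum of the combined per-group indicator, so $\mathrm{IIF}_2(y,r)\le\mathrm{IIF}_1(y,r)$ always holds:

```python
def avgc(group, y):
    """Average cost of a group's agents for a facility at y."""
    return sum(abs(x - y) for x in group) / len(group)

def max_envy(group, y):
    """Intragroup term: maximum cost minus minimum cost within the group."""
    d = [abs(x - y) for x in group]
    return max(d) - min(d)

def iif1(groups, y):
    """The two maxima may be attained by different groups."""
    return (max(avgc(g, y) for g in groups)
            + max(max_envy(g, y) for g in groups))

def iif2(groups, y):
    """One combined indicator per group, then the worst group."""
    return max(avgc(g, y) + max_envy(g, y) for g in groups)
```

For example, with groups $\{0,1\}$ and $\{2\}$ and $y=0$, we get $\mathrm{IIF}_1=2+1=3$ (the two maxima come from different groups), while $\mathrm{IIF}_2=2$.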

Given the objectives, our goal is to minimize $\mathrm{IIF}_1$ or $\mathrm{IIF}_2$.

###### Theorem 14.

Mechanism 1 is strategyproof and has an approximation ratio of 4 for minimizing $\mathrm{IIF}_1$.

###### Proof.

Let $y^*$ be the optimal location and let $y$ be the location output by Mechanism 1; we assume that $y\le y^*$ without loss of generality. Then we prove that the approximation ratio satisfies

$$\rho\le\frac{\max_j\{\mathrm{avgc}(r,G_j,y)\}+\max_j\{\mathrm{maxc}(r,G_j,y)\}}{\mathrm{IIF}_1(y^*,r)}\le4.$$

As Figure 5 shows, given any profile $r$ and for each group $G_j$, we move all the agents of $G_j$ that are on the left of $y^*$ to $y^*-\mathrm{maxc}(r,G_j,y^*)$, and all the agents of $G_j$ that are on the right of $y^*$ to $y^*+\mathrm{maxc}(r,G_j,y^*)$. Then we obtain a new profile $r'$ with approximation ratio $\rho'$. If we prove that these movements do not make the optimal solution increase and do not make the mechanism solution decrease, namely $\mathrm{IIF}_1(y^*,r')\le\mathrm{IIF}_1(y^*,r)$ and $\mathrm{IIF}_1(y,r')\ge\mathrm{IIF}_1(y,r)$, and further show that $\rho'\le4$, then we can obtain the approximation ratio of Mechanism 1.

After the movements, every agent of $G_j$ is at distance exactly $\mathrm{maxc}(r,G_j,y^*)$ from $y^*$, so $\mathrm{avgc}(r',G_j,y^*)=\mathrm{maxc}(r,G_j,y^*)$ and $\mathrm{maxc}(r',G_j,y^*)-\mathrm{minc}(r',G_j,y^*)=0$ for all $j$. Without loss of generality, suppose that $\max_j\{\mathrm{avgc}(r',G_j,y^*)\}$ is achieved by group $G_p$. Then we can obtain

$$\begin{aligned}\max_j\{\mathrm{avgc}(r,G_j,y^*)\}+\max_j\{\mathrm{maxc}(r,G_j,y^*)-\mathrm{minc}(r,G_j,y^*)\}&\ge\mathrm{avgc}(r,G_p,y^*)+\mathrm{maxc}(r,G_p,y^*)-\mathrm{minc}(r,G_p,y^*)\\&\ge\mathrm{minc}(r,G_p,y^*)+\mathrm{maxc}(r,G_p,y^*)-\mathrm{minc}(r,G_p,y^*)\\&=\mathrm{maxc}(r,G_p,y^*)=\mathrm{avgc}(r',G_p,y^*),\end{aligned}$$

which implies the optimal solution does not increase after the movements, since $\mathrm{IIF}_1(y^*,r')=\mathrm{avgc}(r',G_p,y^*)$. Moreover, because $y$ is the median point and there exist agents on the left side of $y$, we have $\mathrm{maxc}(r',G_p,y^*)=\max_j\{\mathrm{maxc}(r',G_j,y^*)\}\ge y^*-y$.

For the mechanism solution, neither $\max_j\{\mathrm{avgc}(r,G_j,y)\}$ nor $\max_j\{\mathrm{maxc}(r,G_j,y)-\mathrm{minc}(r,G_j,y)\}$ decreases if $r$ changes to $r'$, since no agent moves closer to $y$.

Then the approximation ratio is at most

$$\frac{\big(y^*+\mathrm{maxc}(r',G_p,y^*)-y\big)+\big(y^*+\mathrm{maxc}(r',G_p,y^*)-y\big)}{y^*+\mathrm{maxc}(r',G_p,y^*)-y^*}=\frac{2(y^*-y)+2\,\mathrm{maxc}(r',G_p,y^*)}{\mathrm{maxc}(r',G_p,y^*)}\le4,$$
where the last inequality uses $y^*-y\le\mathrm{maxc}(r',G_p,y^*)$. ∎

###### Theorem 15.

Mechanism 1 is strategyproof and has an approximation ratio of 4 for minimizing $\mathrm{IIF}_2$.

We can use a similar argument as in the proof of Theorem 14 to prove the approximation ratio, since that proof focuses on a single group $G_p$ and all of its inequalities remain valid under this objective.
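Both bounds can be sanity-checked numerically. The sketch below assumes that Mechanism 1 places the facility at the (left) median of all reported locations, as the proof's use of the median point suggests; the grid minimum is at least the true optimum, so Theorems 14 and 15 imply every observed ratio is at most 4:

```python
import random

def avgc(g, y):
    return sum(abs(x - y) for x in g) / len(g)

def max_envy(g, y):
    d = [abs(x - y) for x in g]
    return max(d) - min(d)

def iif1(groups, y):
    return max(avgc(g, y) for g in groups) + max(max_envy(g, y) for g in groups)

def iif2(groups, y):
    return max(avgc(g, y) + max_envy(g, y) for g in groups)

def left_median(points):
    s = sorted(points)
    return s[(len(s) - 1) // 2]

def worst_ratios(trials=100, seed=7):
    """Worst observed ratio of each objective at the overall median
    (assumed Mechanism 1) versus a grid approximation of the optimum."""
    rng = random.Random(seed)
    worst1 = worst2 = 0.0
    for _ in range(trials):
        groups = [[rng.uniform(0, 1) for _ in range(rng.randint(1, 5))]
                  for _ in range(rng.randint(2, 4))]
        y_med = left_median([x for g in groups for x in g])
        # Both objectives increase outside [0, 1], so a grid over [0, 1]
        # suffices; its minimum is >= the true optimum.
        opt1 = min(iif1(groups, i / 1000) for i in range(1001))
        opt2 = min(iif2(groups, i / 1000) for i in range(1001))
        worst1 = max(worst1, iif1(groups, y_med) / opt1)
        worst2 = max(worst2, iif2(groups, y_med) / opt2)
    return worst1, worst2
```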

In the following proofs, we use the definition below.

###### Definition 0.

Lu:2010aa A mechanism $f$ is partial group strategyproof if and only if a group of agents at the same location cannot benefit even if they misreport their locations simultaneously. More formally, given any profile $r$, let $S$ be a set of agents that all have the same location $x_S$, and let $r'_S$ be a profile of false locations reported by the agents in $S$. We have $c(x_S,f(r'_S,r_{-S}))\ge c(x_S,f(r))$, where $r_{-S}$ is the profile reported by all agents except the agents in $S$.

From the definition, we know that any partial group strategyproof mechanism is also strategyproof. Conversely, it has been shown that any strategyproof mechanism is also partial group strategyproof in facility location problems Lu:2010aa. Thus, we will not distinguish between strategyproof and partial group strategyproof in the following analysis.

###### Theorem 17.

Any deterministic strategyproof mechanism has an approximation ratio of at least $4$ for minimizing $\mathrm{IIF}_1$.

###### Proof.

Assume for contradiction that there exists a strategyproof mechanism $f$ whose approximation ratio is $4-\epsilon$ for some $\epsilon>0$. According to the equivalence between strategyproofness and partial group strategyproofness in the facility location problem, $f$ is also partial group strategyproof.

Consider a profile $r_1$ with one agent in $G_1$ and $k$ agents in $G_2$ at $0$, and $k$ agents in $G_1$ and one agent in $G_2$ at $1$. Assume without loss of generality that $f(r_1)=y_1$ with $y_1\le1/2$; note that $y_1>0$, since placing the facility at $0$ yields a ratio approaching $4$ as $k$ grows. Now, consider the profile $r_2$ where one agent in $G_1$ and $k$ agents in $G_2$ are at $0$, and $k$ agents in $G_1$ and one agent in $G_2$ are at $y_1$. The optimum is the average of the two locations, namely $y_1/2$, which has an $\mathrm{IIF}_1$ of $y_1/2$. If the mechanism is to achieve an approximation ratio of $4-\epsilon$, the facility must be placed in $[0,y_1)$ for sufficiently large $k$, since any location $y\ge y_1$ has an $\mathrm{IIF}_1$ of at least $\frac{2k+1}{k+1}y_1$, which approaches $4$ times the optimum. In that case, given the profile $r_2$, the agents at $y_1$ can benefit by misreporting to $1$, thus moving the solution to $y_1$, in contradiction to partial group strategyproofness. We can extend this result to $m$ groups by locating $k$ agents at $0$ and one agent at $1$ ($y_1$) for each group except $G_1$ and $G_2$ in profile $r_1$ ($r_2$). ∎

###### Theorem 18.

Any deterministic strategyproof mechanism has an approximation ratio of at least $4$ for minimizing $\mathrm{IIF}_2$.

We can reuse the example in Theorem 17 to prove the lower bound, since its profiles achieve intergroup and intragroup fairness in the same group.

It is interesting to see that when one considers only intragroup fairness, only an additive approximation is possible. However, when we combine it with intergroup fairness, a tight multiplicative approximation can be obtained for $\mathrm{IIF}_1$ and $\mathrm{IIF}_2$.

## 5. Conclusion

We consider the problem of designing strategyproof mechanisms in group-fair facility location problems (FLPs), where agents are in different groups, under several natural well-motivated group-fair objectives. We show that not all group-fair objectives have a bounded approximation ratio. For the main group-fair objectives (i.e., maximum total group cost and maximum average group cost), we show that it is possible to design strategyproof mechanisms with bounded approximation ratios that leverage group information or randomization. We also introduce Intergroup and Intragroup Fairness (IIF), which takes both fairness between groups and within each group into consideration. We consider two natural IIF objectives and provide mechanisms that achieve tight approximation ratios.

Naturally, there are many potential future directions for the group-fair FLPs in mechanism design. For the group-fair FLPs under the considered group-fair objectives, an immediate direction is to tighten the gaps between the lower and upper bounds of our results. While we consider randomized mechanisms for a group-fair objective, it would be reasonable to consider randomized mechanisms for other group-fair objectives. Moreover, one can consider alternative group-fair objectives that are appropriate for specific application domains.