Stable Matchings with Flexible Quotas

01/12/2021 · Girija Limaye, et al. · Indian Institute of Technology, Madras

We consider the problem of assigning agents to programs in the presence of two-sided preferences. Each agent has to be assigned to at most one program and each program can accommodate multiple agents. However, unlike the standard setting of school choice, we are not given fixed upper quotas for the programs as input; instead we abstract them as a cost associated with every program. This setting enables the programs to control the number of agents assigned to them and is also applicable in the two-round setting (Gajulapalli et al., FSTTCS 2020) where a largest stable extension is computed. In this setting, our goal is to compute a min-cost stable matching that matches all the agents. We show that the problem is NP-hard even under very severe restrictions on the instance. We complement our negative results by presenting approximation algorithms for the general case and a fast exponential algorithm for a special case.


1 Introduction

In this paper we consider the problem of assigning agents to programs where each agent needs to be assigned to at most one program and each program can accommodate multiple agents. Both agents and programs rank a subset of elements from the other set. Typically, each program specifies an upper quota denoting the maximum number of agents that can be assigned to it. This setting models important real-world applications like assigning students to schools [1], residents (medical interns) to hospitals [9], undergraduate students to university programs [3] and students to elective courses [16], to name a few. In many scenarios the procedure of assigning agents to programs operates as follows: agents and programs submit their preferences and quotas to a central administrative authority and the central authority outputs the assignment. An assignment or matching is stable if no agent-program pair has an incentive to deviate from it.

In the above setting, the task of assigning agents to programs by the central authority is complicated by the presence of high-demand programs with limited quotas. Consider the instance in Fig. 1 with five agents and two programs . The upper quota of  is  and that of  is . Preferences are interpreted as follows: agent  prefers  over , and so on. We note that  is a popular program since four agents rank it as their top choice. In this instance, any stable matching with the given quotas leaves agents  and  unmatched. However, in many applications like school choice [1] and elective allocation to students, it is undesirable to leave agents unmatched. Furthermore, the quotas specified by the programs are typically derived from practical considerations like class size and resources and need not be rigid. Thus, in order to enable a larger set of agents to be matched, programs may be willing to increase the quotas, depending on the resources. To model such scenarios, we introduce and study the notion of stable matchings with flexible quotas. If quotas were completely flexible, a simple solution would be to match every agent to its top-choice program. In the instance in Fig. 1, this leads to the skewed matching  in which  is matched to  and the rest of the agents are matched to . This is unacceptable in practice as well. Thus, in our model, we let costs control the quotas.

Figure 1: An example instance with sets , , upper-quotas and preferences

Controlling quotas via costs. Suppose that instead of fixed quotas, a cost is associated with every program. The cost specifies the effective cost of matching a certain number of agents to the program. Since the quotas are controlled by the costs and are not rigid, every agent can be matched. In the instance in Fig. 1, instead of providing quotas, assume that the programs specify the following information: the cost of matching a single agent to  is  and the cost of matching a single agent to  is . In practice, high-demand or popular courses may have a higher cost per matched agent. Our goal is to compute a stable matching in the instance that matches every agent and has minimum cost. We call this the problem of computing stable matchings with flexible quotas (SMFQ). We remark that our problem is significantly different from the well-studied minimum-weight or maximum-weight stable matching problem [13] since we have flexible quotas. In the instance in Fig. 1, it is easy to verify that when there are no initial quotas, the matching  is stable, has a cost of , and is indeed a min-cost stable matching given the costs for the programs.

Formally, in the SMFQ problem we are given a set of agents and a set of programs , their preference lists and the cost associated with each program. Our goal is to compute an -perfect stable matching (one in which all agents are matched) with minimum total cost. We note that in our model, since costs control the quotas of programs, in some cases programs may even be closed, that is, no agent is assigned to the program in the output matching. However, our output matching is guaranteed to be -perfect and stable.
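To make the objective concrete, here is a minimal Python sketch of one possible way to represent an SMFQ instance and evaluate a matching; the dictionary layout, the example values and the helper names are our own illustration, not the paper's notation (the figure's actual costs and preferences are not reproduced).

```python
# Illustrative representation (not the paper's notation):
# preference lists are ordered Python lists, cost[p] is the cost
# incurred per agent matched to program p, quotas are flexible.

agent_prefs = {
    "a1": ["p1", "p2"],
    "a2": ["p1", "p2"],
    "a3": ["p2", "p1"],
}
program_prefs = {
    "p1": ["a1", "a2", "a3"],
    "p2": ["a3", "a1", "a2"],
}
cost = {"p1": 3, "p2": 1}

def total_cost(matching, cost):
    """Total cost = sum over programs of cost(p) times the number of agents matched to p."""
    return sum(cost[p] for p in matching.values())

def is_agent_perfect(matching, agent_prefs):
    """Every agent must be matched to some program on her own preference list."""
    return all(a in matching and matching[a] in agent_prefs[a] for a in agent_prefs)

matching = {"a1": "p2", "a2": "p2", "a3": "p2"}   # allowed, since quotas are flexible
print(total_cost(matching, cost), is_agent_perfect(matching, agent_prefs))
```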

Length of the preference lists in the SMFQ setting. We note that the SMFQ problem is trivially solvable in polynomial time when all preference lists are of unit length. The SMFQ problem is also polynomial-time solvable when the preference lists are complete (all agents list all programs) using the following idea: pick a program with the least cost and match every agent to it. It is clear that this computes an -perfect, stable matching, and it is easy to see that the matching is optimal. Since we guarantee that every agent is matched, a natural strategy for agents is to submit short preferences; in fact, agents may simply submit only their true top-choice program. However, since -perfectness is promised, the central authority may impose a lower bound on the number of preferences submitted by an agent [12]. We show that SMFQ is NP-hard even in this case. We also consider whether the problem is tractable when the number of distinct costs in the instance is small; however, we prove that the hardness holds even in this case. SMFQ is NP-hard even when every agent has a preference list of length exactly  for some constant , and this hardness result holds even when there are  distinct costs in the instance. We also show that SMFQ is hard to approximate within a constant factor, unless P = NP: SMFQ cannot be approximated within a factor  unless P = NP, and this hardness of approximation holds even when there are  distinct costs in the instance. We present a fast exponential algorithm for instances where the number of distinct costs is small. Note that the number of distinct costs appearing in an agent's preference list is upper-bounded both by the number of distinct costs in the instance and by the length of the agent's preference list. SMFQ can be solved in  time, where  is the maximum number of distinct costs that appear in an agent's preference list. For the general case, the SMFQ problem admits the following two approximation algorithms:

  1. -approximation algorithm

  2. a linear time -approximation algorithm, where  denotes the maximum length of the preference list of any program.

We also present a better approximation guarantee for instances in which agents have short preference lists: SMFQ admits a -approximation algorithm when agents have two programs in their preference list.

Table 1 summarizes our results.

Problem: SMFQ
Complexity: NP-hard when agents' list length is exactly ; inapproximable within  unless P = NP
Algorithmic results:  algorithm; -approximation and -approximation for arbitrary instances; -approximation for agent list length 
Table 1: Summary of results

Stable extensions. A recent work by Gajulapalli et al. [8] studies a two-round mechanism for school choice where the student-optimal stable matching  is computed with the initial parameters in the first round. In the second round, some parameters of the problem change; for instance, schools may add more seats (increase the quotas), new students may arrive, new schools may open, and so on. In the Type A1 setting [8], the goal in the second round is to compute a largest stable extension of  by appropriately increasing the quotas of schools. Gajulapalli et al. [8] present a polynomial-time algorithm for this setting. For the instance in Fig. 1, matching  is the student-optimal matching. The algorithm in [8] computes  as the stable extension of .

An instance may admit multiple stable matchings and by the Rural Hospital Theorem [18], the set of agents unmatched in the first round is independent of the stable matching. However, the subset of these agents that can be matched in the second round to obtain a stable extension is dependent on the specific matching computed in the first round. In this work, we show that every largest stable extension of a fixed stable matching matches the same set of agents and that among all the stable matchings computed in the first round, the largest stable extension of the student-optimal matching has maximum size.

Largest min-cost stable extension. We observe that in the second round, quota is added to the programs that are fully subscribed in the first round. Thus, the additional agents assigned in the second round may impose an overhead on these programs. We note that the stable extension computed in [8] matches every agent that can be matched in the second round to her top choice. This may result in a skewed matching if a subset of programs is in high demand. In this work, we extend the Type A1 setting [8] using the SMFQ model. That is, in addition to the initial quotas, each program specifies a cost which is used to control the increments in its quota in the second round, and the goal is to compute a largest stable extension at minimum cost. In the instance in Fig. 1, if 's cost is  and 's cost is , then the largest min-cost stable extension is . Note that the assignment of unmatched agents in  is less skewed in  than in .

We note that Gajulapalli et al. [8] consider a variant of the SMFQ problem (Problem 33, Section 7) and state that their problem is NP-hard. However, it is not a central problem studied in their work and they do not investigate its computational complexity in detail.

1.1 Related Work

The size of a matching is an important criterion in many applications, and in this direction relaxations of stability, like the notion of popularity [11] and maximum matchings with the least number of blocking pairs [5], have been studied. We have already mentioned the work by Gajulapalli et al. [8] in which the instance in the second round has flexible quotas. Flexible quotas in the college admission setting are studied in [17]. In their setting, students have strict preferences but colleges may have ties, that is, colleges may have more than one student at the same rank. They consider colleges with fixed initial quotas, and flexible quotas are used for tie-breaking at the last matched rank. In their work, no costs are involved. In a different setting of college admissions, [4] study the problem of assigning students to colleges where colleges have a lower quota and a college either fulfills the lower quota or is closed. In their setting, the stability notion also considers closed schools. Under the modified notion of stability, a stable matching may fail to exist, and they show that deciding if a stable matching exists is NP-hard.

Budget and funding constraints are also studied in [14, 2]. A setting where courses make monetary transfers to students and have a budget constraint is studied in [14]. Unlike the standard setting, stable matchings may not exist in their setting; they present approximately stable matchings using matching with contracts. Funding constraints are also studied in [2] in the context of allocating student interns to projects funded by supervisors. They introduce the notions of strong stability and weak stability and present an algorithm for computing a weakly stable matching. The course allocation problem with high-demand courses is studied in [16, 10, 19]. A setting involving high-demand courses and scheduling constraints is studied in [16], which assumes a fixed quota at courses. Course allocation involving efficient assignment of students to highly popular courses is treated with an AI approach in [10]. Course bidding mechanisms for allocating seats efficiently at high-demand courses are investigated in [19].

Organization of the paper: In section 2, we give an overview of stable matchings and stable extensions and define the problem setup. We present our algorithmic results for the SMFQ problem in section 3. In section 4, we present NP-hardness and inapproximability results for the SMFQ problem. We conclude in section 5.

2 Preliminaries and Background

In this section, we first describe stable matchings in the classical single-round setting and define the notation used in this paper. We then formally define the SMFQ problem and discuss some properties of stable extensions [8] in the two-round setting.

2.1 Classical stable matchings

We are given a set of agents (students, applicants, residents and so on)  and a set of programs (schools, courses, colleges, posts, hospitals, and so on) . Each agent and program ranks an arbitrary subset (also called the acceptable subset) of the other side in a strict order. This ranking is called the preference list of the element. An agent  is acceptable to program  if and only if  is acceptable to . A program  has an associated upper quota denoted by . If  prefers  over , we denote it by .

Such an instance can be modelled as a bipartite graph  such that  if and only if  and  are mutually acceptable to each other. A matching  in  is an assignment of agents to programs such that each agent is matched to at most one program and a program  is matched to at most  agents. Let  denote the program that agent  is matched to in , and let  denote the set of agents matched to program  in matching . If  is unmatched in , we denote it by . We consider being unmatched as a less-preferred choice for any agent when compared to any program in her acceptable subset. A program  is called under-subscribed in  if  and fully subscribed in  if . In this setting, stability is defined as follows.

Definition (Classical stable matchings). A pair  is a blocking pair w.r.t. the matching  if  and  is either under-subscribed in  or there exists at least one agent  such that . A matching  is stable if there is no blocking pair w.r.t. .

It is well known that every instance of the stable matching problem admits a stable matching, and one can be computed in linear time by the well-known Gale–Shapley algorithm [9].

2.2 SMFQ problem setup

In the SMFQ problem, we are given a set of agents  and a set of programs , their preference lists and the cost  associated with each program. We assume that the costs are integral. Our goal is to compute an -perfect stable matching that achieves the minimum cost. In our model there are no fixed quotas; hence no program is under-subscribed. In fact, some programs may have no agents assigned to them; we call such programs closed. We modify the definition of stability in our setting as follows, and throughout the paper we refer to this notion of stability.

Definition (Stable matchings with flexible quotas). A pair  is a blocking pair w.r.t. the matching  if  and there exists at least one agent  such that . A matching  is stable if there is no blocking pair w.r.t. .

In the literature, such a blocking pair is also called an envy pair and the matching is called envy-free. In the standard setting, an envy-free matching need not be stable, but in our setting of flexible quotas, envy-free matchings are indeed stable. A stable matching in our setting is Pareto cost-optimal if no agent can be promoted (that is, matched to a higher-preferred program) without increasing the cost of the matching.
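Since quotas are flexible, stability here coincides with envy-freeness, which is straightforward to check directly. The following naive check is a sketch under the same illustrative dictionary representation used earlier (mutual acceptability of preference lists is assumed); it is not code from the paper.

```python
def rank(pref_list, x):
    """Position of x in a preference list; smaller means more preferred."""
    return pref_list.index(x)

def is_stable_flexible_quotas(matching, agent_prefs, program_prefs):
    """A pair (a, p) blocks M if a prefers p over M(a) and some agent a2
    matched to p is less preferred by p than a (i.e., a envies a2)."""
    matched_to = {}
    for a, p in matching.items():
        matched_to.setdefault(p, []).append(a)
    for a, prefs in agent_prefs.items():
        p_a = matching.get(a)
        for p in prefs:
            # Stop once we reach a's own program: p is no longer preferred over M(a).
            if p_a is not None and rank(prefs, p) >= rank(prefs, p_a):
                break
            for a2 in matched_to.get(p, []):
                if rank(program_prefs[p], a) < rank(program_prefs[p], a2):
                    return False   # (a, p) is a blocking (envy) pair
    return True
```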

2.3 Stable extensions

In this section, we define the notion of stable extensions [8] and present some properties of largest stable extensions. Let  denote the classical stable matching instance, that is, the bipartite graph along with the preference lists and quotas. Let  be a stable matching in . A stable extension of  is defined as follows.

Definition (Stable extension). A matching  is an extension of  if no agent matched in  changes her matched program, that is, . An extension  of  is a stable extension if  is stable.

We note that in  no unmatched agent forms a blocking pair with an under-subscribed program. If agent  is unmatched in  but matched in  to program , then  must be fully subscribed in  and the number of agents matched to  in  is larger than its initial quota.

Properties of largest stable extensions. Let  be the set of agents that can be matched in a largest stable extension of . For the instance given in Fig. 1, if  then . Gajulapalli et al. [8] present a polynomial-time algorithm to compute  (denoted as the set  in line 3, Fig. 1 [8]) and then match every agent in . Below, we present an equivalent algorithm (Algorithm 1) to compute the set . We use the notion of barrier as defined in [8], which is similar to the notion of threshold resident defined in [15]. Let  be the set of agents unmatched in the given matching . Then . For every program , we prune the preference list of  by deleting the edges of the form  such that  appears after Barrier() in 's list. We denote the pruned graph after the for loop in line 5 as . We return the set  as the set of agents who are not isolated in . We note the following properties of the set .

1:Input: Stable matching instance and a stable matching in
2:Output: Largest stable extension set
3:Let be the set of agents unmatched in
4:Let
5:for every  do
6:     for every  do
7:         if  appears after Barrier() in 's list then
8:              Delete edge in               
9:Let be the set of agents in with degree at least one in
10:return
Algorithm 1 Algorithm to compute
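A Python rendering of Algorithm 1 might look as follows. The paper defers the exact definition of Barrier to [8], so the sketch takes `barrier` as a caller-supplied function; the data layout (ordered preference lists, mutual acceptability) is the same illustrative one used earlier.

```python
def largest_stable_extension_set(matching, agent_prefs, program_prefs, barrier):
    """Sketch of Algorithm 1: prune the edges of unmatched agents using Barrier,
    then return the unmatched agents that are not isolated in the pruned graph.

    `barrier(p)` must return the barrier agent of program p as defined in [8]
    (treated as a black box here), or None if p imposes no restriction."""
    unmatched = [a for a in agent_prefs if a not in matching]
    # Edges of the pruned graph, restricted to unmatched agents.
    edges = {a: set(agent_prefs[a]) for a in unmatched}
    for p, plist in program_prefs.items():
        b = barrier(p)
        if b is None:
            continue
        b_rank = plist.index(b)
        for a in unmatched:
            # Delete edge (a, p) if a appears after Barrier(p) in p's preference list
            # (mutual acceptability: p on a's list implies a on p's list).
            if p in edges[a] and plist.index(a) > b_rank:
                edges[a].discard(p)
    return {a for a in unmatched if edges[a]}
```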

P1. For the stable matching , the set is unique. This is clear by observing that Barrier() for each is unique with respect to the stable matching .∎

Next we note that the set is independent of the stable matching but the set depends on a specific stable matching . In the instance in Fig. 1, we saw that if then but if then . We show the following property about the largest stable extensions of different stable matchings.

P2. In the Type A1 setting [8], among all the stable matchings in the first round, the agent-optimal matching achieves the largest stable extension.∎

To prove P2, it is enough to show that the agent-optimal stable matching has the largest set . Let be the agent-optimal stable matching. Then for any stable matching , .

Proof.

Suppose for contradiction that . Let  and  be the pruned graphs after the for loop in line 5 of Algorithm 1 w.r.t.  and , respectively. Thus there exists an agent ; that is, agent  has degree at least one in  but degree  in . Let . Thus, there exists an agent  such that . But since , we have . But the agent-optimality of  implies that  for every agent . This contradicts that . Hence, . ∎

We also note the following stronger property about .

P3. The set of edges deleted in  is a superset of the set of edges deleted in the pruned graph of any other stable matching .∎

To see P3, observe that for agent , . Thus, if Barrier() in then Barrier() in . Thus, . We can generalize P2 as follows.

P4. If the underlying sets are and and if quotas for elements in set are increased in the second round, then -optimal stable matching achieves the largest stable extension.∎

Computing a largest min-cost stable extension. If, in addition to the initial quotas, the programs also specify costs, then in the second round of the Type A1 setting [8] a largest min-cost stable extension is given by an optimal solution to the following SMFQ instance: the set of programs remains the same, the set of agents is the same as the set , and the preference lists are restricted to the subset of edges in .

2.4 Our techniques

We note that a simple lower bound on an optimal solution of SMFQ exists, obtained by summing up the cost of a least-cost program appearing in every agent's preference list. We use this lower bound for the -approximation algorithms and show that the analysis of our algorithms is tight. We also present a weaker lower bound for SMFQ using an auxiliary problem and derive a -approximation algorithm using it. We present a linear program for the SMFQ problem and use LP rounding for the restricted case when agents have short preference lists.

3 Algorithmic results

In section 3.1, we present a fast exponential algorithm for the SMFQ problem when the number of distinct costs in the instance is small. In section 3.2, we first present an approximation algorithm with guarantee  using a new auxiliary problem. Then we present two simple approximation algorithms that have the same approximation guarantee of , where  is the length of the longest preference list of a program. These algorithms work on arbitrary instances. In section 3.3, we present a -approximation algorithm for restricted instances where every agent's preference list has  programs.

3.1 Exact exponential algorithm for SMFQ

In this section we present an exact exponential algorithm for SMFQ with running time , where  is the maximum number of distinct costs that appear in an agent's preference list. Let  be the number of distinct costs in the given instance and let  be the set of distinct costs. Then . Also , where  is the maximum length of an agent's preference list. Our algorithm (Algorithm 2) considers every possible -tuple of costs such that each  for some . Thus, there are  tuples. For each tuple, the algorithm constructs a sub-graph  such that every agent  has edges incident only to programs with cost exactly . For any agent , if  is the highest-preferred program neighbouring  in , then any program  in the graph cannot be matched with any agent . The algorithm prunes the graph to remove such edges repeatedly. If an agent is isolated after the pruning, the current tuple is discarded. Otherwise, the algorithm matches every agent to her top-choice program in this pruned graph. The algorithm picks a least-cost matching  among the matchings computed for the non-discarded tuples and returns it.

1:
2:for every tuple  do
3:      has cost
4:     let change = 1
5:     while every agent has degree and change = 1 do
6:         let change = 0
7:         for every  do
8:              let be the top-preferred program such that
9:              for every  do
10:                  Delete for every
11:                  let change = 1                             
12:     if every agent has degree  then
13:          is the top-preferred program such that
14:         if cost of  then
15:               cost of               
16:return
Algorithm 2 algorithm for
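A direct, unoptimised Python sketch of Algorithm 2 is given below. The pruning rule is our reading of pseudocode lines 7-11: if agent a will be matched to her top remaining program p, then every agent that p prefers over a must not end up at a program she likes less than p. Mutual acceptability is assumed and all names are illustrative.

```python
from itertools import product

def smfq_exact(agent_prefs, program_prefs, cost):
    """Sketch of Algorithm 2: try one cost value per agent, prune the induced
    subgraph for envy-freeness, and keep the cheapest matching that matches all agents."""
    agents = list(agent_prefs)

    def agent_rank(a, p):            # smaller index = more preferred by agent a
        return agent_prefs[a].index(p)

    def prog_rank(p, a):             # smaller index = more preferred by program p
        return program_prefs[p].index(a)

    best_matching, best_cost = None, float("inf")
    # Distinct costs appearing on each agent's preference list.
    choices = [sorted({cost[p] for p in agent_prefs[a]}) for a in agents]
    for tuple_t in product(*choices):
        # H_t: agent a keeps only edges to programs of cost exactly t_a.
        edges = {a: [p for p in agent_prefs[a] if cost[p] == t]
                 for a, t in zip(agents, tuple_t)}
        changed = True
        while changed and all(edges[a] for a in agents):
            changed = False
            for a in agents:
                if not edges[a]:
                    break
                p = min(edges[a], key=lambda q: agent_rank(a, q))   # a's top choice in H_t
                for a2 in program_prefs[p]:
                    if a2 == a or a2 not in edges:
                        continue
                    if prog_rank(p, a2) < prog_rank(p, a):          # p prefers a2 over a
                        # a2 must not end up at a program worse (for her) than p.
                        keep = [q for q in edges[a2]
                                if q == p or agent_rank(a2, q) < agent_rank(a2, p)]
                        if len(keep) != len(edges[a2]):
                            edges[a2] = keep
                            changed = True
        if not all(edges[a] for a in agents):
            continue                         # invalid tuple: some agent became isolated
        matching = {a: min(edges[a], key=lambda q: agent_rank(a, q)) for a in agents}
        total = sum(cost[p] for p in matching.values())
        if total < best_cost:
            best_matching, best_cost = matching, total
    return best_matching, best_cost
```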

Correctness. First note that for a tuple , if an agent has degree  after the pruning of  (line 12) then no matching is computed for ; in this case we say that  is invalid, otherwise  is valid. The algorithm computes a matching  for each valid tuple and the min-cost matching among these is returned. Consider the cost tuple  where each agent's cost corresponds to the cost of the top-choice program in her list in . When the algorithm processes this tuple, the resulting graph  is such that no deletions happen in line 10. Thus, there is at least one valid tuple and hence Algorithm 2 computes at least one -perfect matching. The matching  computed by Algorithm 2 is stable.

Proof.

The matching  is the matching  computed for some valid tuple. Thus, it is enough to show that the matching  computed for an arbitrary valid tuple is stable.

Suppose  is not stable. Then there exist agents  such that  and  and . If  at line 13 then , a contradiction. Hence . If  after line 3, then note that edge  is deleted at line 10. Otherwise,  was deleted in some iteration of the edge-deletion loop. This implies that there exists  that triggered this deletion. But then we have that , implying that  is also deleted. Thus, in both cases , and hence . This contradicts the assumption that , and hence completes the proof. ∎

Thus we showed that the algorithm computes at least one -perfect, stable matching. We now show that the algorithm computes the min-cost (optimal) matching.

Let be an -perfect stable matching and be the tuple corresponding to . Then no edge in is deleted when Algorithm 2 processes .

Proof.

Suppose for the sake of contradiction that an edge in is deleted when Algorithm 2 processes the tuple . Let be the first edge in that gets deleted during the course of the algorithm. The edge is in after line 3 since has the same cost as given by the tuple . This implies that the edge is deleted while pruning the instance. Suppose agent caused the deletion of edge at time . Let . Then it is clear that either or , otherwise is not stable. But since triggered the deletion of , the top choice program adjacent to in at time is less preferred than . Again note that after line 3 hence must have been deleted at time earlier than . This contradicts the assumption that is the first edge in that gets deleted. This completes the proof. ∎

Thus, when the algorithm processes the tuple corresponding to an -perfect stable matching, every agent has a degree at least at line 12, that is, that tuple is valid. Thus, the tuple corresponding to an optimal matching is also valid and hence Algorithm 2 computes matching for it. Since Algorithm 2 returns the matching with cost at most the cost of , it implies that it returns a min-cost -perfect stable matching.

Running Time. Algorithm 2 processes  tuples. For each tuple it computes the graph  in  time, where  is the number of edges in . The while loop can be efficiently implemented by keeping track of the most-preferred program considered so far for every agent; it thus takes  time because it deletes  edges. The matching  can be computed in time . Hence Algorithm 2 runs in time .

This establishes Theorem 1.

3.2 Approximation algorithms for arbitrary instances

In this section, we present approximation algorithms for arbitrary instances. The first algorithm has an approximation guarantee of . The next two algorithms have a guarantee of , where  is the length of the longest preference list of a program.

3.2.1 -approximation

In this section we present an approximation algorithm for with ratio . We define an auxiliary optimization problem and present a polynomial time algorithm for . We then claim a lower bound and an approximation guarantee using the optimal solution of .

The auxiliary problem. We are given an instance of SMFQ. In the auxiliary problem, our goal is to compute an -perfect stable matching that minimizes the maximum cost spent at a program. Recall that in the SMFQ problem, our goal is to compute an -perfect stable matching with minimum total cost.

Now we show that the auxiliary problem is polynomial-time solvable; see Algorithm 3. We start with an empty matching . Since costs can be , the minimum cost spent at a program can be . If  is the maximum cost in , then any -perfect matching has cost at most . We start with the range  where  and . Let . We set upper quotas at every program such that the maximum cost spent at a program is . We then compute a stable matching  using the Gale–Shapley algorithm [9]. If  is not -perfect then we search for the optimal cost value in the range . Otherwise, we set upper quotas at every program such that the maximum cost spent at a program is at most  and compute a stable matching . If  is not -perfect then we return , otherwise we search for the optimal cost value in the range . We prove the correctness of the algorithm below.

1:
2:Let where is maximum cost of a program
3:while true do
4:     
5:     For every program, set the maximum quota that can be accommodated in cost .
6:      = stable matching on the above instance
7:     if  is -perfect then
8:         For every program, set the maximum quota that can be accommodated in cost .
9:          = stable matching on the above instance
10:         if  is not -perfect then
11:              return
12:         else
13:                        
14:     else
15:               
Algorithm 3 Algorithm to compute an optimal solution of
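Algorithm 3 can be sketched as a binary search over the per-program budget, using a standard agent-proposing Gale–Shapley routine for programs with quotas. Setting a program's quota to the largest number of agents it can accommodate within budget k (i.e., floor(k / cost)) is our reading of lines 5 and 8; the simplified control flow below (a plain binary search for the smallest feasible budget) is equivalent in effect but not identical to the pseudocode, and all names are illustrative.

```python
def gale_shapley(agent_prefs, program_prefs, quota):
    """Agent-proposing Gale-Shapley for programs with upper quotas (HR setting)."""
    prank = {p: {a: i for i, a in enumerate(plist)} for p, plist in program_prefs.items()}
    nxt = {a: 0 for a in agent_prefs}          # next position on a's list to propose to
    assigned = {p: [] for p in program_prefs}
    match, free = {}, list(agent_prefs)
    while free:
        a = free.pop()
        if nxt[a] >= len(agent_prefs[a]):
            continue                            # a exhausted her list; stays unmatched
        p = agent_prefs[a][nxt[a]]
        nxt[a] += 1
        if a not in prank[p]:
            free.append(a)
            continue
        assigned[p].append(a)
        match[a] = p
        if len(assigned[p]) > quota[p]:         # reject p's least-preferred assignee
            worst = max(assigned[p], key=lambda x: prank[p][x])
            assigned[p].remove(worst)
            del match[worst]
            free.append(worst)
    return match

def min_max_stable_matching(agent_prefs, program_prefs, cost):
    """Sketch of Algorithm 3: binary-search the smallest per-program budget k for
    which the stable matching matches all agents, and return that matching."""
    n = len(agent_prefs)

    def stable_for_budget(k):
        # Quota of p = maximum number of agents p can accommodate within cost k.
        quota = {p: (n if cost[p] == 0 else k // cost[p]) for p in program_prefs}
        m = gale_shapley(agent_prefs, program_prefs, quota)
        return m if len(m) == n else None

    lo, hi = 0, n * max(cost.values())
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        m = stable_for_budget(mid)
        if m is not None:
            best, hi = m, mid - 1
        else:
            lo = mid + 1
    return best
```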

The matching computed by Algorithm 3 is -perfect and stable.

Proof.

By line 6 and line 7, the claim follows. ∎

Suppose is the maximum cost spent at a program in an optimal solution of . Let be the instance in which every program has maximum upper-quota that can be fulfilled in cost at most , that is, the maximum cost spent at a program is . Then, for are infeasible instances of and are feasible instances of .

The matching computed by Algorithm 3 is an optimal solution.

Proof.

The claim follows from Remark 3.2.1 since Algorithm 3 computes the matching by binary searching over the range . ∎

Running time. Since Algorithm 3 uses binary search, it takes  iterations. In each iteration it sets the quotas in  time, computes at most two stable matchings in  time and performs a constant number of other operations; thus every iteration takes  time, where  is the number of edges in the underlying bipartite graph of the instance. Thus, the algorithm runs in time .

-approximation for SMFQ. Suppose  is the cost of an optimal solution of the auxiliary problem, that is, there exists an -perfect stable matching  in  such that  is the maximum cost spent at a program in , and for every , there does not exist a stable -perfect matching where the cost spent at every program is at most . Then we show that  is a lower bound on the cost spent in an optimal solution of SMFQ (Claim 3.2.1). Using this lower bound, we show the approximation guarantee of  for the output of Algorithm 3 for SMFQ (Claim 3.2.1).

The cost of an optimal solution of is at least .

Proof.

Suppose for the sake of contradiction that the cost of the optimal solution  of SMFQ is . Note that  is an -perfect stable matching, implying that  is a feasible solution for the auxiliary problem. Also note that  has total cost , which is the sum of the costs spent at the programs. Thus, the cost spent at any program in  is at most , implying that the maximum cost spent at a program in  is at most . This implies that  itself is an optimal solution for the auxiliary problem, contradicting that  is the maximum cost spent at a program in an optimal solution of the auxiliary problem. ∎

Let be the instance of . Matching computed by Algorithm 3 on is an -approximation of on .

Proof.

Let  be an optimal solution of SMFQ. The matching  computed by Algorithm 3 is -perfect and stable. Let  be the maximum cost spent at a program in . Thus the total cost of matching  is at most . By Claim 3.2.1, , thus  is an -approximation for SMFQ. ∎

This establishes Theorem 1 a.

Remarks about Algorithm 3. We note that the actual total cost of the optimal matching  of the auxiliary problem is at most , where  is the number of programs that are open. Since , the analysis is not tight with respect to the factor . However, consider the instance in Example 3.2.1 where the total cost of the optimal matching of the auxiliary problem is exactly . Suppose there are 3 agents  and programs . The cost of  is  and that of  is . Preference lists are shown below. It is clear that the optimal solution for  is  with cost . The optimal solution of  is  with cost . The number of open programs in  is  and the total cost of  is .

3.2.2 -approximation

In this section we present two linear-time algorithms, denoted ALG1 and ALG2, for the SMFQ problem. We show that both algorithms have an approximation guarantee of , where  denotes the length of the longest preference list of any program. We show that there exist simple examples where one of them is better than the other. Hence, in practice, we run both algorithms and output the matching with minimum cost amongst the two. For our algorithms we need the following definition. Let  denote the least-cost program in the preference list of agent . If there is more than one program with the same minimum cost, we let  be the most-preferred amongst these programs.

Description of ALG1: Given an instance , we construct a subset of such that iff for some agent . Our algorithm now matches every agent to the most-preferred program in . The pseudo-code can be found in Algorithm 4.

1:let and for some }
2:let is the most-preferred program in ’s preference list
3:return
Algorithm 4 ALG1 for SMFQ
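For concreteness, ALG1 can be sketched in a few lines of Python; the dictionary-based representation and the helper names are illustrative assumptions carried over from the earlier sketches.

```python
def alg1(agent_prefs, cost):
    """Sketch of ALG1: collect each agent's least-cost program (most-preferred one
    in case of ties) into a set P*, then match every agent to her most-preferred
    program that lies in P*."""
    def least_cost_program(a):
        best = min(cost[p] for p in agent_prefs[a])
        # most-preferred among the programs of minimum cost on a's list
        return next(p for p in agent_prefs[a] if cost[p] == best)

    p_star = {least_cost_program(a) for a in agent_prefs}
    # Each agent's own least-cost program is in P*, so a match always exists.
    return {a: next(p for p in agent_prefs[a] if p in p_star) for a in agent_prefs}
```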

Analysis of ALG1: It is clear that the matching computed by ALG1 is -perfect. Let  be the output of ALG1 and  be an optimal matching. Let  and  be the costs of matchings  and , respectively. It is easy to see that the cost of an optimal matching is at least the sum, over all agents, of the cost of the agent's least-cost program.

We show the correctness and the approximation guarantee of ALG1 via Lemma 3.2.2 and Lemma 3.2.2.

The output of ALG1 is stable.

Proof.

We show that no agent participates in a blocking pair w.r.t. . Recall that no program is under-subscribed w.r.t. . Thus if blocks , it implies that and there exists an agent such that . Since ALG1 assigns agents to programs in only, it implies that . However, is the most-preferred program in and hence . Thus, the claimed blocking pair does not exist. ∎

The output of ALG1 is an -approximation.

Proof.

In the matching , agent is either matched to or for some other agent . This is determined by the relative ordering of and in the preference list of . We partition the agents as , where is the set of agents matched to their own least cost program, that is, iff . We define . We can write the cost of as follows:

We now observe that any program that is a least cost program for some agent in can be matched to at most many agents from . Thus, the cost of is upper bounded as follows:

This proves the approximation guarantee. ∎

We now present our second algorithm.

Description of ALG2: Given an instance , ALG2 starts by matching every agent to . Note that such a matching is -perfect and min-cost but not necessarily stable. Now the algorithm considers programs in an arbitrary order. For program , we consider agents in the reverse preference list ordering of . Note that if there exists agent such that and there exists such that , then envies and forms a blocking pair. We resolve this by promoting from to . The algorithm stops when we have considered every program. The pseudo-code can be found in Algorithm 5.

1:let and
2:for every program  do
3:     for  in reverse preference list ordering of  do
4:         if there exists such that envies  then
5:                             
6:return
Algorithm 5 ALG2 for SMFQ
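A sketch of ALG2 under the same illustrative representation follows; the envy test mirrors the flexible-quotas blocking-pair definition, and mutual acceptability of preference lists is assumed.

```python
def alg2(agent_prefs, program_prefs, cost):
    """Sketch of ALG2: start from the least-cost assignment and repair envy
    program by program, scanning each program's list from least preferred to
    most preferred and promoting envious agents to that program."""
    def least_cost_program(a):
        best = min(cost[p] for p in agent_prefs[a])
        return next(p for p in agent_prefs[a] if cost[p] == best)

    def arank(a, p):
        return agent_prefs[a].index(p)

    match = {a: least_cost_program(a) for a in agent_prefs}
    for p, plist in program_prefs.items():           # programs in arbitrary order
        prank = {a: i for i, a in enumerate(plist)}
        for a in reversed(plist):                     # reverse preference order of p
            if p not in agent_prefs[a] or arank(a, p) >= arank(a, match[a]):
                continue                              # a does not prefer p over her match
            # Does a envy some agent currently at p whom p prefers less than a?
            if any(match[a2] == p and prank[a2] > prank[a] for a2 in plist):
                match[a] = p                          # promote a to p
    return match
```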

We observe the following about ALG2. An agent only gets promoted in the loop at line 5 of the algorithm. Further, a program is assigned agents only when it is being considered in the for loop at line 2. Finally, if a program is assigned at least one agent in the final output matching, then for some agent .

Before presenting the correctness and approximation guarantee of ALG2, we present the following instances, which illustrate that neither of the two algorithms is strictly better than the other.

Let , , where  is some large positive constant. The agents  have the same preference list:  followed by . Agent  has only  in its preference list. The preferences of the programs are as given below.

Here, ALG1 outputs of cost where . In contrast, ALG2 outputs whose cost is . Clearly, ALG2 outperforms ALG1 in this case and in fact is optimal for the instance.

Let , , where is some large positive constant. The preferences of agents are followed by followed by . The preference list of contains only and the preference list of contains only . The preferences of programs are as shown below.

Here, ALG1 outputs whose cost is . In contrast, ALG2 outputs of cost where . In this instance ALG1 outperforms ALG2 and it can be verified that is the optimal matching.

Analysis of ALG2: It is clear that the matching computed by ALG2 is -perfect. We show the correctness and the approximation guarantee of ALG2 via Lemma 3.2.2 and Lemma 3.2.2. The matching output by ALG2 is stable.

Proof.

Let be the output of ALG2. We show that no agent participates in a blocking pair w.r.t. . Assume for contradiction, that blocks . Then and there exists an agent such that . Consider the iteration of the for loop in line 2 when was considered. Either was already matched to (before this iteration) or is assigned to in this iteration. Note that prefers over and the agents are considered in reverse order of ’s preferences. Thus if was matched in that iteration to a lower-preferred program than , then must be promoted to . Otherwise, was already matched in that iteration to a better-preferred program. Since agents can only get promoted in subsequent iterations, it contradicts that at the end of the algorithm, agent prefers to . This completes the proof of stability. ∎

The matching output by ALG2 is an -approximation.

Proof.

The proof is similar to the proof of Lemma 3.2.2. Let  be the cost of the matching  output by Algorithm 5. The lower bound on  is exactly the same. In the matching , some agents are matched to their least-cost program (call them ), whereas some agents get promoted (call them ). However, as noted earlier, if a program  is assigned agents in  then it must be  for some agent . Thus, for an agent  who is not matched to her own least-cost program, we charge the cost of some other least-cost program . Since a least-cost program can be charged at most  times by agents in , by a similar argument as in Lemma 3.2.2 we get the approximation guarantee of . ∎

This establishes Theorem 1 b.

Remarks about Algorithms ALG1 and ALG2. We note that our analysis of ALG1 and ALG2 is tight, as shown by the following example on which both algorithms compute an exact -approximation. Suppose there are  agents  and programs . The cost of  is  and the cost of  and  is . Preference lists are shown below.

Algorithms ALG1 and ALG2 both compute a matching  that has a cost of , whereas the optimal matching  has cost . Note that . We also note that both our algorithms compute Pareto cost-optimal matchings.

3.3 Approximation algorithms for short preference lists of agents

We consider instances of SMFQ where agents have preference lists of length exactly two. This case is NP-hard, as can be seen from Theorem 1. We give a simple deterministic LP rounding algorithm for this case that achieves a -approximation. We have an LP variable for every edge in the underlying bipartite graph. Consider the following linear program for computing a min-cost -perfect stable matching. Since the upper quotas are not fixed, for stability it is enough to have the following: if a program  is matched to an agent , then every agent  must be matched to a program at least as preferred as  by . This is captured by the first constraint (Eq. 2). The second constraint (Eq. 3) captures that the desired matching is -perfect, and the third constraint (Eq. 4) is a non-negativity constraint on the LP variables. The objective (Eq. 1) is to minimize the cost of the matching, computed as a sum, over all programs , of the cost  multiplied by the number of agents matched to .

minimize (1)
subject to (2), (3), (4)
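The displayed LP did not survive extraction. A plausible reconstruction, consistent with the prose description above (one variable x_{a,p} per edge, an envy-freeness constraint, an all-agents-matched constraint and non-negativity), is sketched in LaTeX below; the paper's exact notation may differ.

```latex
\begin{align}
\text{minimize}\quad & \sum_{p}\; c(p) \sum_{a:\,(a,p)\in E} x_{a,p} \tag{1}\\
\text{subject to}\quad
& x_{a,p} \;\le\; \sum_{p':\, p' \succeq_{a'} p} x_{a',p'}
  \qquad \forall (a,p)\in E,\ \forall a' \text{ with } a' \succ_{p} a \tag{2}\\
& \sum_{p:\,(a,p)\in E} x_{a,p} \;\ge\; 1 \qquad \text{for every agent } a \tag{3}\\
& x_{a,p} \;\ge\; 0 \qquad \forall (a,p)\in E \tag{4}
\end{align}
```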

Deterministic rounding. Suppose  is an LP optimal solution. We construct a matching  as follows. Initially . For every agent , we do the following: let  and  be the two programs in the preference list of agent , in that order. If  then we add  to , otherwise we add  to .
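The rounding condition is lost in the extracted text; a natural reading, consistent with the approximation claim for lists of length two, is to keep the first-choice edge whenever its LP value is at least 1/2 and the second-choice edge otherwise. A sketch under that assumption (with an LP solution `x` indexed by (agent, program) pairs) is given below.

```python
def round_lp_solution(x, agent_prefs):
    """Deterministic rounding for agents with exactly two programs on their list.
    Assumes x[(a, p)] is an optimal LP value; keeps a's first choice if its
    value is at least 1/2, otherwise her second choice."""
    matching = {}
    for a, prefs in agent_prefs.items():
        p1, p2 = prefs                     # first and second choice of agent a
        matching[a] = p1 if x.get((a, p1), 0.0) >= 0.5 else p2
    return matching
```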

Matching is -perfect and stable.

Proof.

For every agent , it is guaranteed (by Eq. 3) that either or . Thus, every agent is matched in .

Now we show that is stable. For contradiction, assume that blocks . This implies that there exists an agent such that and . Let . Since