1. Introduction
Fairness is one of the most fundamental requirements in many multiagent systems. Fair division, in particular, deals with the fair allocation of resources and alternatives, cutting across a variety of fields including computer science, economics, and artificial intelligence. Traditionally, fair division has been concerned with the allocation of goods that are positively valued by agents, leading to a plethora of fairness notions, axiomatic results, and computational studies (see Brandt et al. (2016) and Moulin (2019) for detailed discussions). However, many practical problems require the distribution of a set of negatively valued items (aka chores). These problems range from assigning household chores or distributing cumbersome tasks to those involving collective ownership responsibility (Risse, 2008) for human-induced problems such as climate change (Traxler, 2002), nuclear waste management, or controlling gas emissions (Caney, 2009). The problem of allocating chores is crucially different from allocating goods, both from axiomatic and computational perspectives. For instance, while goods are freely disposable, chores must be completely allocated. These fundamental differences have motivated a large number of recent works on fair division of divisible (Bogomolnaia et al., 2019; Chaudhury et al., 2020) and indivisible chores (Aziz et al., 2019a; Freeman et al., 2020; Aziz et al., 2019c, 2017).
When dealing with indivisible items, a compelling fairness notion is the Maximin Share (MMS) guarantee—proposed by Budish (2011)—which is a generalization of the cut-and-choose protocol to indivisible items (Brams and Taylor, 1996). An agent’s out-of maximin share value is the value that it can guarantee by partitioning the items into bundles and receiving the least valued bundle. Unfortunately, out-of MMS allocations may not exist, neither for goods (Kurokawa et al., 2018; Feige et al., 2021) nor for chores (Aziz et al., 2017).
These non-existence results, along with the computational intractability of computing such allocations, have motivated multiplicative approximations of MMS, wherein each agent receives a fraction of its out-of MMS value when dealing with goods (Ghodsi et al., 2018; Garg and Taki, 2020; Garg et al., 2018), or an approximation of its out-of MMS value when dealing with chores (Aziz et al., 2017; Barman and Krishna Murthy, 2017; Huang and Lu, 2021).
In this paper, we initiate the study of ordinal MMS approximations for allocating chores. The goal is to find an integer for which an out-of MMS allocation exists and can be computed efficiently. Recently, ordinal approximations of MMS for allocating ‘goods’ have received particular attention as natural guarantees that provide a simple conceptual framework for justifying approximate decisions to participating agents: partition the items in a counterfactual world where there are agents available (Babaioff et al., 2019, 2021; Segal-Halevi, 2020; Hosseini and Searns, 2021; Elkind et al., 2021c). Since these approximations rely on ordinal rankings of bundles, they are generally robust against slight changes in agents’ valuation profiles compared to their multiplicative counterparts (see Appendix A for an example and a detailed discussion). Focusing on ordinal approximations, we discuss key technical differences between allocating goods and chores, and highlight practical computational contrasts between ordinal and multiplicative approximations of MMS.
1.1. Contributions
We make the following theoretical and algorithmic contributions.
An algorithm for out-of MMS
We show that heuristic techniques for allocating goods do not carry over to chores instances (Section 3), and develop other techniques to upper-bound the number of large chores (Lemma 3.8). Using these techniques, we develop a greedy algorithm that achieves an out-of MMS approximation for chores (Theorem 4.1). The algorithm runs in strongly-polynomial time: the number of operations required is polynomial in the number of agents and chores.
Existence of out-of MMS
We show the existence of an out-of MMS allocation of chores (Theorem 5.1). The main technical challenge is dealing with large chores, which requires exact computation of MMS values, rendering our algorithmic approach intractable. While our technique gives the best known ordinal approximation of MMS, it only provides a tight bound for small instances (Example 5.4) but not necessarily for larger instances (Proposition 5.3).
Efficient approximation algorithm
We develop a practical algorithm for approximating the out-of MMS bound for chores. More specifically, our algorithm guarantees out-of MMS for (Theorem 6.2) and runs in time polynomial in the binary representation of the input.
1.2. Related Work
MMS for allocating goods
The notion of maximin share originated in the economics literature. Budish (2011) presented a mechanism that guarantees out-of MMS to all agents by adding a small number of excess goods. Whether out-of MMS can be guaranteed without adding excess goods remains an open problem to date.
In the standard fair division setting, in which adding goods is impossible, the first nontrivial ordinal approximation was out-of MMS (Aigner-Horev and Segal-Halevi, 2022). Hosseini and Searns (2021) studied the connection between guaranteeing 1-out-of MMS for of the agents and the ordinal approximations for all agents. The implication of their results is the existence of out-of() MMS allocations and a polynomial-time algorithm for . Recently, a new algorithmic method has been proposed that achieves this bound for any number of agents (Hosseini et al., 2021). The ordinal approximations have been extended to out-of MMS, guaranteeing that each agent receives at least as much as its worst bundles when the goods are partitioned into bundles (Segal-Halevi, 2019; Babaioff et al., 2019). The maximin share and its ordinal approximations have also been applied to some variants of the cake-cutting problem (Bogomolnaia et al., 2020; Elkind et al., 2021a, b, c).
The multiplicative approximation of MMS originated in the computer science literature (Procaccia and Wang, 2014). These algorithms guarantee that each agent receives at least a fraction of its maximin share threshold (Kurokawa et al., 2018; Amanatidis et al., 2017; Garg et al., 2018; Ghodsi et al., 2018). For goods, the best known existence result is , and the best known polynomial-time algorithm guarantees (Garg and Taki, 2020). The MMS bound was improved for special cases with only three agents (Amanatidis et al., 2017), and the best known approximation is (Gourvès and Monnot, 2019).
There are also MMS approximation algorithms for settings with constraints, such as when the goods are allocated on a cycle and each agent must get a connected bundle (Truszczynski and Lonc, 2020). McGlaughlin and Garg (2020) showed an algorithm for approximating the maximum Nash welfare (the product of agents’ utilities), which attains a fraction of the MMS. Recently, Nguyen et al. (2017) gave a Polynomial Time Approximation Scheme (PTAS) for a notion defined as optimal-MMS: the largest value for which each agent receives a bundle worth at least that value. Since the number of possible partitions is finite, an optimal-MMS allocation always exists, and it is an MMS allocation if . However, an optimal-MMS allocation may provide an arbitrarily bad ordinal MMS guarantee (Searns and Hosseini, 2020; Hosseini and Searns, 2021).
MMS for allocating chores
Aziz et al. (2017) initiated the study of MMS fairness for allocating indivisible chores. They proved that—similar to allocating goods—an out-of MMS allocation may not always exist, and that computing the MMS value for a single agent is NP-hard.
In the maximin share allocation of chores, the multiplicative approximation factor is larger than 1 (each agent might get a larger set of chores than its MMS value). The multiplicative factors in the literature have been improved from 2 (Aziz et al., 2017) to 4/3 (Barman and Krishna Murthy, 2017) to 11/9 (Huang and Lu, 2021). The best known polynomial-time algorithm guarantees a 5/4 factor (Huang and Lu, 2021). Aigner-Horev and Segal-Halevi (2022) prove the existence of an out-of MMS allocation for chores, but their algorithm requires an exact computation of the MMS values, so it does not run in polynomial time. Note that multiplicative and ordinal approximations do not imply one another—each of them might be better in some instances, as we illustrate in the next example.
Example 1.1.
Consider an instance with agents and identical chores of value . Then:

If there are chores, then the out-of MMS is , which is better than of the out-of MMS.

If there are chores, then the out-of MMS is , which is worse than of the out-of MMS.
In Appendix B we generalize this example to any number of agents. Additionally, we study the relationships between the ordinal maximin share and other common fairness notions such as approximate proportionality and approximate envy-freeness. The bottom line is that all these notions are independent: none of them implies a meaningful approximation of the others.
The notion of maximin share fairness has been extended to asymmetric agents, i.e., agents with different entitlements over chores (Aziz et al., 2019b, c). Recently, a variation of MMS has also been studied in conjunction with strategyproofness, eliciting only ordinal preferences as opposed to cardinal valuations (Aziz et al., 2019d, 2020a). In parallel, there are works studying other fairness notions for chores, or for combinations of goods and chores. Examples are approximate proportionality (Aziz et al., 2020b), approximate envy-freeness (Aziz et al., 2019a), approximate equitability (Freeman et al., 2020), and leximin (Chen and Liu, 2020). In the context of mixed items, however, no multiplicative approximation of MMS is guaranteed to exist (Kulkarni et al., 2021). In Appendix C we show that, similarly, no ordinal MMS approximation is guaranteed to exist for mixed items.
2. Preliminaries
Problem instance.
An instance of a fair division problem is denoted by where is a set of agents, is a set of indivisible chores, and is a valuation profile of the agents. Agent ’s preferences over chores are specified by a valuation function . We assume that the valuation functions are additive; that is, for any agent , for each subset , where . We assume items are chores for all agents, i.e., for each , for every we have . For a single chore , we write instead of . Without loss of generality, we assume that , since otherwise we can add dummy chores that are valued by all agents.
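As a concrete illustration of these definitions, a chores instance with additive valuations can be represented directly as a value matrix. The sketch below is ours (the numbers are illustrative only, not from the paper): it checks that every item is a chore for every agent and evaluates a bundle additively.

```python
# A toy chores instance: values[i][j] is agent i's value for chore j.
# All values are non-positive, matching the assumption that every item
# is a chore for every agent. (The numbers are illustrative only.)
values = [
    [-3, -6, -8, -6],   # agent 1
    [-8, -9, -12, -6],  # agent 2
]

def bundle_value(agent_values, bundle):
    """Additive valuation: the value of a bundle is the sum of its chores."""
    return sum(agent_values[j] for j in bundle)

# Sanity check: every item is a chore (non-positive value) for every agent.
assert all(v <= 0 for row in values for v in row)

print(bundle_value(values[0], {0, 1}))  # -9
```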
Allocation.
An allocation is a partition of the set of chores, , where a bundle of chores , possibly empty, is allocated to each agent . An allocation must be complete: .
Maximin share.
Let be an integer and denote the set of partitions of . For each agent , the out-of Maximin Share of on , denoted , is defined as
where . Intuitively, this is the maximum value that can be guaranteed if agent partitions the items into bundles and chooses the least valued bundle. When it is clear from the context, we write or out-of MMS to refer to .
Given an instance, we say that an out-of MMS allocation exists if there exists an allocation such that for every agent , . Note that and it is a weakly increasing function of : a larger value means that there are more agents to share the burden, so each agent potentially has fewer chores to do. Clearly, when the chores can be partitioned into bundles of equal value. Moreover, is agent ’s proportional share.
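Since the definition maximizes over all partitions into a given number of bundles, it can be made concrete with a brute-force sketch (ours, not the paper's; exponential in the number of chores, so for tiny instances only). It also illustrates the weak monotonicity noted above.

```python
from itertools import product

def mms_value(values, d):
    """1-out-of-d maximin share of a single agent with additive chore
    values (all non-positive): the best attainable value of the least
    valued bundle, over all ways of splitting the chores into d bundles.
    Brute force, exponential in len(values) -- for illustration only."""
    best = float("-inf")
    for assignment in product(range(d), repeat=len(values)):
        bundles = [0] * d
        for value, bundle in zip(values, assignment):
            bundles[bundle] += value
        best = max(best, min(bundles))
    return best

# Four identical chores of value -1: with d = 2 the best partition is two
# pairs, so the MMS value is -2; with d = 4 each bundle holds one chore,
# so the MMS value rises to -1 (weakly increasing in d).
print(mms_value([-1, -1, -1, -1], 2), mms_value([-1, -1, -1, -1], 4))  # -2 -1
```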
Ordered instance.
An instance is ordered when all agents agree on the linear ordering of the items, irrespective of their valuations. Formally, is an ordered instance if there exists an ordering such that for all agents we have . Throughout this paper, we often refer to this as an ordering from the largest chores (least preferred) to the smallest chores (most preferred).
In the context of allocating goods, Bouveret and Lemaître (2016) introduced ordered instances as the ‘most challenging’ instances in achieving MMS, and showed that given an unordered instance, it is always possible to generate a corresponding ordered instance in polynomial time (Bouveret and Lemaître (2016) called these ‘same-order preferences’). More importantly, if an ordered instance admits an MMS allocation, the original instance also admits an MMS allocation which can be computed in polynomial time (see Example 2.2).
Lemma 2.1 (Barman and Krishna Murthy (2017)).
Let be an ordered instance constructed from the original instance . Given allocation on , a corresponding allocation on can be computed in polynomial time such that for all .
The above result holds for any MMS approximation without loss of generality, and has been adopted extensively to simplify MMS approximations for chores (Huang and Lu, 2021). Therefore, throughout the paper we focus only on ordered instances.
Example 2.2 (Ordering an instance).
Consider the following unordered instance with four chores and two agents:
agent 1:  3  6  8  6
agent 2:  8  9  12  6
An ordered instance is obtained by sorting the values in descending order of absolute values. It admits two allocations that satisfy MMS; in one of them, agent 1 receives the two chores of absolute value 6 and agent 2 receives those of absolute values 12 and 6:
agent 1:  8  6  6  3
agent 2:  12  9  8  6
Any of the MMS allocations in the ordered instance corresponds to a picking sequence that results in an MMS allocation in the original instance. A picking sequence lets agents select items from the ‘best chores’ (most preferred) to the ‘worst chores’ (least preferred).
For instance, applying the picking sequence 2, 1, 1, 2 (obtained from one MMS allocation in the ordered instance) to the original instance results in an allocation that guarantees MMS. Specifically, agent 2 picks first and takes its highest valued chore (the one it values −6). Agent 1 picks next; since its best chore (valued −3) is available, it takes it. The next pick also belongs to agent 1, but one of its second-best chores (valued −6) is already allocated to agent 2. Thus, agent 1 picks its next-best available chore (the other chore it values −6), and agent 2 is left with the chore it values −12.
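The picking-sequence lift in Example 2.2 can be sketched as follows (our own rendering, with the function name ours and values written as negative numbers; at each turn the named agent greedily takes its best remaining chore):

```python
def pick_sequence(values, sequence):
    """Apply a picking sequence to a chores instance.
    values[i][j] is agent i's (negative) value for chore j; at each turn
    the named agent takes its best remaining chore, i.e. the one with the
    highest (least negative) value."""
    remaining = set(range(len(values[0])))
    bundles = [set() for _ in values]
    for agent in sequence:
        best = max(remaining, key=lambda j: values[agent][j])
        bundles[agent].add(best)
        remaining.remove(best)
    return bundles

# The unordered instance of Example 2.2, written with negative values.
v = [[-3, -6, -8, -6],   # agent 1
     [-8, -9, -12, -6]]  # agent 2
# The picking sequence 2, 1, 1, 2 (agents 0-indexed here: 1, 0, 0, 1).
print(pick_sequence(v, [1, 0, 0, 1]))
```

The resulting bundles give agent 1 the chores it values −3 and −6 (total −9, above its MMS of −12) and agent 2 the chores it values −6 and −12 (total −18, exactly its MMS).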
3. Valid Reductions for Chores
In this section, we first show that the valid-reduction techniques that are typically used for allocating goods cannot be applied to chores instances. We then argue that, while typical goods reductions fail, some of the core ideas translate to chores allocation through careful adaptations. These techniques are of independent interest, as they can be utilized in other heuristic algorithms (e.g., multiplicative MMS approximations).
3.1. Reductions for goods
Several algorithms developed to provide multiplicative MMS approximations rely on structural properties of MMS and heuristic techniques to avoid the computational barriers of computing MMS thresholds. To understand common reduction techniques, we first take a detour to recall techniques that are valid when allocating goods. For ease of exposition, we present this section with the standard definition of out-of MMS.
Definition 3.1 (Valid Reduction for Goods).
Given an instance, and a positive integer , allocating a set of goods to an agent is a valid reduction if
(i) , and
(ii) .
Intuitively, a valid reduction ensures that the MMS values of the remaining agents in the reduced instance do not strictly decrease; otherwise, solving the reduced instance may violate the initial MMS guarantees of the agents.
Since computing MMS values is NP-hard (Bouveret and Lemaître, 2016), one can instead utilize proportionality as a (loose) upper bound on MMS values. Given the proportionality bound, it is easy to see that for each agent , . Therefore, any good with a value for agent can be assigned to agent , satisfying ’s MMS value, without violating the conditions of valid reductions. The next lemma (due to Garg et al. (2018)) formalizes this observation and provides two simple reduction techniques.
Lemma 3.2 (Garg et al. (2018)).
Given an ordered goods instance with , if , then allocating to agent (and removing them from the instance) forms a valid reduction. Similarly, allocating to agent forms a valid reduction if .
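On small goods instances, Definition 3.1 can be checked directly by brute force. The sketch below is ours (function names and numbers are assumptions, not the paper's), and it is exponential, so it only serves to make the two conditions of a valid reduction concrete.

```python
from itertools import product

def mms(values, d):
    """Brute-force 1-out-of-d maximin share of one agent (goods, additive):
    the best attainable value of the least valued bundle over all
    d-partitions. Exponential -- for illustration only."""
    best = float("-inf")
    for assignment in product(range(d), repeat=len(values)):
        bundles = [0] * d
        for value, bundle in zip(values, assignment):
            bundles[bundle] += value
        best = max(best, min(bundles))
    return best

def is_valid_reduction(values, bundle, receiver):
    """Check Definition 3.1 directly: (i) the receiver's bundle meets its
    MMS, and (ii) no remaining agent's MMS (recomputed with one fewer
    agent after removing the bundle) strictly decreases."""
    n = len(values)
    rest = [j for j in range(len(values[0])) if j not in bundle]
    for i, vals in enumerate(values):
        if i == receiver:
            if sum(vals[j] for j in bundle) < mms(vals, n):
                return False
        elif mms([vals[j] for j in rest], n - 1) < mms(vals, n):
            return False
    return True

# Two agents, three goods. Giving good 0 (worth 5, exactly agent 0's MMS)
# to agent 0 is a valid reduction; giving only good 2 (worth 2) is not.
v = [[5, 3, 2], [4, 4, 2]]
print(is_valid_reduction(v, {0}, 0), is_valid_reduction(v, {2}, 0))  # True False
```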
The following example illustrates how valid reductions can be iteratively applied to reduce an ordered instance.
Example 3.3 (Valid reductions for goods).
Consider five goods and three agents with valuations as shown in the table below.
9  6  1  7  
8  6  2  8  
8  5  3  1  8 
The MMS values of all three agents are shown in the table. Suppose is allocated to agent . This allocation is a valid reduction because . After this reduction, the MMS values for the remaining agents are and respectively. At this point, the set can be given to agent as a valid reduction since and are precisely th and th highest valued goods according to in the reduced instance (note that after the removal of ).
Remark 3.4.
When allocating goods, valid reduction techniques are often used together with scaling of an instance to simplify the approximation algorithms (Garg and Taki, 2020; Garg et al., 2018). The scale invariance property of MMS (Ghodsi et al., 2018) states that if an agent’s valuations are scaled by a factor, then its MMS value scales by the same factor. Formally, given an instance , for every agent with a proportionality bound we can construct a new instance such that and for every , . Using the proportionality bound for scaling an instance implies that allocating any set such that to agent forms a valid reduction.
The scale invariance property of MMS and reduction techniques circumvent the exact computation of MMS thresholds, which enables greedy approximation algorithms for allocating goods. Garg et al. (2018) developed a simple greedy algorithm that guarantees to each agent of its MMS value; later algorithms improved this approximation to (Garg and Taki, 2020; Ghodsi et al., 2018).
3.2. Failure of Goods Reductions
We briefly discuss why the valid reductions for goods do not translate to instances with chores. The reason is that the reductions for goods rely on the fact that redistributing items from one bundle of a partition to the other bundles weakly increases the value of those bundles. In the context of chores, however, this assumption does not hold, as we illustrate next.
Example 3.5.
Consider three agents and six chores. Agents’ valuations are identical such that each agent values each chore at . The 1-out-of MMS of all agents is , i.e., for every . A reduction that allocates a single chore (e.g., the largest chore), say , satisfies agent 1, since . However, this reduction is not valid, since the MMS value of the remaining agents decreases, that is, for .
To illustrate why reductions of larger bundles such as fail, the following example generalizes this reduction to bundles of larger sizes.
Example 3.6.
Consider an instance with three agents and chores that are each valued . Each agent’s MMS value is . Take any bundle of chores. Any agent would agree to receive , as . However, allocating the bundle to agent is not a valid reduction. This is because the remaining chores must be allocated among the remaining two agents, but which is less than .
Notice that smaller bundles of also satisfy agent , but still result in a decrease in the MMS values of the other agents. For example, when , if are allocated to an agent, the MMS values of the remaining agents decrease from to .
3.3. Estimating the Number of Large Chores
One of the key distinctions between allocating goods and chores is the tolerance of bounds used for approximating MMS values. As we discussed previously, proportionality provides a reasonable upper bound in allocating goods through reductions: as soon as the value of a bundle reaches an agent’s proportionality threshold, a reduction can be applied without including any additional item.
In contrast, when allocating chores, proportionality may be a loose bound: when selecting a set of chores that satisfies proportionality for an agent, it may still be necessary to include additional chores to ensure that no chore remains unallocated.
Example 3.7.
Consider an instance with 10 chores and 10 agents with identical valuations: three small chores valued at , six medium chores valued at , and one large chore valued at . The proportionality threshold is but the MMS is . Once an agent reaches the proportionality threshold, say by receiving a single medium chore, it could still receive an additional medium or small chore.
The main challenge is how to pack as many chores as possible within a bundle without violating the maximin share threshold.
We start by making a simple assumption on the size of the instance. For any instance, without loss of generality, we can always add dummy chores with value and assume that . (In Appendix D we show that this assumption is valid without adding dummy chores.)
Our first lemma will be used to bound the number of large chores in each bundle. It states that, in an ordered chores instance, the most preferred chores from the set of the least preferred chores are valued at least as much as the out-of maximin share.
Lemma 3.8.
Let be an ordered chores instance, and and be nonnegative integers such that . Then, for each agent ,
Proof.
Consider the subset of chores . By definition, for every chore , , thus we have . By the pigeonhole principle, since , any partition of into bundles must contain at least one bundle, say , which contains at least chores. By definition, we have .
Let the set contain the last (most preferred) chores of . Since chores are ordered from the least to the most preferred, is weakly preferred to . Thus, where . By transitivity, . ∎
Lemma 3.8 links the number of chores to their values, and enables us to identify the number of large (least preferred) chores.
Corollary 3.9.
Given an ordered chores instance , and an integer , the following statements hold (if , we may add dummy chores with value to all agents):

, for all ;

;

.
4. Out-of Maximin Share for Chores in Polynomial Time
In this section, we present a polynomial-time algorithm for allocating chores that achieves out-of MMS. The algorithm takes a chores instance along with a set of thresholds for the agents as input, and utilizes a greedy “bag-filling” procedure to assign bundles of chores to agents. The high-level idea behind the algorithm is to allocate the large (least desirable) chores first and pack as many chores as possible into a bundle up to the given threshold. The algorithmic idea is simple; the key to achieving the out-of MMS approximation is selecting appropriate threshold values.
Algorithm description.
The underlying structure of Algorithm 1 is similar to the First-Fit-Decreasing algorithm for bin-packing (Johnson, 1973). (The same algorithm is used by Huang and Lu (2021) for achieving multiplicative approximations of MMS. They prove that, with appropriate thresholds, Algorithm 1 guarantees every agent at least of its MMS value. This does not directly imply any result for ordinal approximation, as shown in Example 1.1.) The algorithm starts by selecting an empty bundle and adding a large (lowest value) chore to the bag. While the value of the bag is above a threshold for at least one agent, it adds an additional chore—in order from largest to smallest—to the bundle. If a chore cannot be added, the algorithm skips it and considers the next-smallest (more preferred) chore. Each agent has a different threshold, , and assesses the bundle based on this threshold. When no more chores can be added, the bundle is allocated to an arbitrary agent who still finds it acceptable. The algorithm then repeats with the remaining agents and chores.
For any selection of non-positive thresholds , Algorithm 1 guarantees that 1) every bundle is allocated to an agent who values it at least , and 2) every agent receives a bundle (possibly an empty bundle). However, if the thresholds are too optimistic (too close to zero), the algorithm may result in a partial allocation, i.e., some chores might remain unallocated. The main challenge is to carefully choose the threshold values such that the algorithm provably terminates with a complete allocation. (In contrast, when allocating goods, all goods are always allocated, and the challenge is showing that every agent receives a bundle of a certain threshold value.)
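The bag-filling loop just described can be sketched as follows. This is our own Python rendering of the structure of Algorithm 1, not the paper's pseudocode: the function name and the example thresholds are assumptions, chores are taken as pre-sorted from largest (most negative) to smallest, and the caller is responsible for choosing thresholds under which every single chore is acceptable.

```python
def bag_filling(values, thresholds):
    """Greedy bag-filling for chores (a sketch of Algorithm 1's structure).
    values[i][j] <= 0 lists chores pre-sorted from largest (most negative)
    to smallest; thresholds[i] <= 0 is agent i's acceptance bound.
    Returns the assigned bundles and any chores left unallocated
    (which can happen when the thresholds are too close to zero)."""
    active = list(range(len(values)))
    unallocated = list(range(len(values[0])))
    bundles = {}
    while active and unallocated:
        bag = []
        for j in list(unallocated):
            # Add chore j if some active agent still accepts the grown bag.
            if any(sum(values[i][k] for k in bag) + values[i][j] >= thresholds[i]
                   for i in active):
                bag.append(j)
                unallocated.remove(j)
        if not bag:  # no active agent accepts even a single chore
            break
        # Allocate the bag to an arbitrary agent who finds it acceptable.
        taker = next(i for i in active
                     if sum(values[i][k] for k in bag) >= thresholds[i])
        bundles[taker] = bag
        active.remove(taker)
    return bundles, unallocated

# Two agents with identical valuations and threshold -3 each: the first bag
# packs chores 0 and 2 (value -3), the second packs chores 1 and 3.
b, left = bag_filling([[-2, -2, -1, -1]] * 2, [-3, -3])
print(b, left)  # {0: [0, 2], 1: [1, 3]} []
```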
Theorem 4.1.
Given an additive chores instance, an out-of MMS allocation exists and can be computed in polynomial time.
Proof.
Let be an ordered instance and . Without loss of generality, we can assume that by adding dummy chores with value 0 for all agents.
For each agent, let the thresholds be selected as follows:
Corollary 3.9 and the inequality imply that all agents receive their out-of MMS, that is, .
In order to show that all chores are allocated, we split the chores into three categories: large (), medium (), and small chores.
Since for all , , every single chore can be added to an empty bag. Consider the first bundles. Since these bundles contain at least one chore each, and , the large chores are allocated within the first iterations.
Similarly, since , the medium chores may be bundled in pairs from largest to smallest and form the next bundles. This implies that, within the first allocated bundles, all large and medium chores are allocated. Importantly,
Thus, we conclude that all large and medium chores are allocated upon the termination of the algorithm.
The last step is to prove that all small chores are allocated too. These chores are added to bundles whenever there is an additional gap between and . Consider the last agent, , who receives a bundle before Algorithm 1 terminates. If no small chores remain before agent receives a bundle, then we are done.
Suppose that there is some remaining small chore before agent receives a bundle. For each other bundle already allocated, necessarily , because otherwise agent would have accepted and chore would have been added to . Now, since and the instance is ordered, we have that . In turn, this implies that for each .
By the way we selected the thresholds, we have that . We use this fact to upper bound the amount of value in each previously allocated bundle:
which implies that
By replacing the value of , we have
Therefore,
This inequality implies that before the last bundle is initialized, agent values the remaining items at least . Thus, agent can take all the remaining chores. ∎
Remark 4.2.
Interestingly, for goods, out-of MMS approximations exist (Hosseini and Searns, 2021) and can be computed in polynomial time (Hosseini et al., 2021). However, the techniques used for proving the existence results, as well as for developing a tractable algorithm, are substantially different, due to the reductions available for goods (as discussed in Section 3) and the challenge of packing bundles as much as possible to ensure a complete allocation of chores. Moreover, in the case of goods, even a slight error in computing MMS values may result in wasted value and insufficient goods to satisfy some agents (see Hosseini and Searns (2021) for an example), whereas for chores we can tolerate an estimate of the MMS values as long as all chores are allocated.
5. Out-of MMS Allocations Exist for Chores
In this section, we show that a careful selection of the threshold values in Algorithm 1 in fact guarantees an out-of MMS approximation. Achieving this result requires a precise computation of the MMS values of each agent, which in turn is intractable (Bouveret and Lemaître, 2016). Nonetheless, we prove the existence of out-of MMS, and later, in Section 6, provide a polynomial-time algorithm that achieves an approximation of this bound.
Theorem 5.1.
Given an additive chores instance, an out-of MMS allocation is guaranteed to exist.
Theorem 5.1 is an immediate corollary of Lemma 5.2 below. For ease of exposition, we first provide the proof of the theorem.
Proof.
By construction, Algorithm 1 terminates and every agent receives a bundle (possibly empty) with value at least . By Lemma 5.2, we can pick for each agent the threshold where , and all chores will be allocated. Thus, we have a complete allocation in which each agent’s value is at least its out-of MMS, which proves Theorem 5.1. ∎
Lemma 5.2.
Suppose Algorithm 1 is executed with threshold values for all . Then all chores are allocated upon termination of the algorithm.
Proof.
Let be an ordered chores instance. For simplicity, we start by scaling the valuations such that for each agent , . (This scaling step is only used to simplify the proof. An identical result can be achieved without scaling the valuations, by setting all thresholds to where and updating the rest of the values in the proof accordingly.) This implies that
(1) 
and for each agent .
Let agent be the last agent who received a bundle (in the th iteration). The proof proceeds by considering two types of remaining chores according to their value: 1) small chores with value , and 2) large chores with value .
Case 1: small chores. Suppose for contradiction that there is some chore with that remains unallocated at the end of the algorithm. By assumption, agent could not add to any allocated bundle, including ’s own bundle. Since is the last agent, we infer that for each agent with bundle , . By additivity, because , we can write for all . Summing over all assigned bundles gives , which contradicts (1). Therefore, no such small chore remains at the end of the algorithm.
Case 2: large chores. Suppose that there is some chore with that remains unallocated at the end of the algorithm. We define the following sets of bundles.

are MMS bundles — bundles that form an MMS partition of agent .

are algorithm bundles — bundles allocated by Algorithm 1. denotes the bundle allocated at iteration .
For each MMS bundle , let denote the th largest (least valued) chore of . Whenever , we define . Without loss of generality, we assume that the MMS bundles are sorted such that . Since the valuations are scaled so that , there are at most large chores (with value less than ) in each MMS bundle.
For the sake of the proof, we maintain a vector of shadow-bundles , which is initialized as follows:
For each , the set of large chores (with value less than ) in .

For each , .
At each iteration of the algorithm, we edit the vector of shadow-bundles by moving some chores between bundles. We do so such that, at the start of iteration , the following invariants hold:

for all . That is, each chore in the shadow-bundles is allocated.

and for . That is, each remaining shadow-bundle has value at least .
Both invariants hold before the first iteration (): invariant (1) holds vacuously, and invariant (2) holds since each bundle is contained in one of ’s MMS bundles.
Suppose the invariants hold before iteration . We show how to edit the shadow-bundles such that the invariants still hold before iteration .
We reorder the shadow-bundles so that is the largest remaining chore. Hence, in iteration , Algorithm 1 selects this chore first to add to the bag. That is, . We split into cases based on the size of , which must be in by invariant (2).
If , then both invariants hold at , since , and the shadow-bundles do not change.
If , then we have to handle . By invariant (2) we have . This means that can potentially be inserted as the second chore in . If indeed , then we are done — both invariants hold at , since , and the shadow-bundles do not change. If , this means that Algorithm 1 processed chore before chore . Since the algorithm processes chores in ascending order of value (descending order of absolute value), this implies that . Now, we find the chore in some shadow-bundle for some , and swap it with . We claim that both invariants still hold:

, since after the swap and , and .

The remaining shadow-bundles remain as before, except for the shadow-bundle , in which a single chore was swapped. But because , the value of weakly increases, so it is still at least .
Finally, suppose . We handle as in the previous case, so that now and . It remains to handle . Because is the smallest chore in , and , by the pigeonhole principle we must have . We move chore to a bundle which was initially empty and which contains fewer than chores (all of which were moved to the bundle this way and thus have value at least ). Such a bundle can always be found because at most one chore is moved this way in each iteration, and there are at most bundles which were initially nonempty. Thus an upper bound on the number of bundles filled this way is: . Since each chore moved this way has value at least , we preserve invariant (2) and . After the move, contains only two chores, both of which are in , so invariant (1) holds too.
We note that if the first chore is selected from one of these growing bundles, then because this chore has value at least and because chores are only moved if , no more chores will be moved in later iterations.
The final step in proving the lemma is to move all chores from to . This step is necessary in order to guarantee that the largest remaining chore in later steps is not from (and thus ). (For example, consider and . It is possible that , which means that but .) We may do this because it preserves . Notice that for the agent who received bundle ; however, we do not require that , since agent is not allocated the bundle . Observe that the chores correspond to additional large chores which could be added to the bundle , and thus, in moving these chores, the value of each bundle for can only weakly increase and will remain at least .
Lastly, invariant (2) implies that after iteration , has value at least for agent . All remaining large chores lie in this bundle. Thus agent may take all such large chores. This implies that and that no large chores remain when the algorithm terminates. ∎
We do not know whether the factor is tight in general. The following proposition shows a non-tight upper bound on the performance of Algorithm 1 for large values of .
Proposition 5.3 (Upper bound for Algorithm 1).
For any integer , there is an instance with agents in which Algorithm 1 cannot guarantee to each agent its out-of MMS.
Proof.
When all agents have the same valuation and the same threshold, Algorithm 1 reduces to an algorithm for bin packing known as First Fit Decreasing (FFD) (Johnson, 1973; Baker, 1985). FFD sorts the chores by descending value and allocates each chore to the first (smallest-index) agent who can take it without going over the threshold. Algorithm 1 (with identical valuations and thresholds) does exactly the same, only in a different order: instead of making a single pass over all the chores and filling all bins simultaneously, it makes passes over the chores, and fills each bin in turn with the chores that would be inserted into it in that single pass.
Dósa (2007) and Dósa et al. (2013) have shown that, for every integer , there is a bin-packing instance in which the optimal packing needs bins but FFD needs bins. We construct a chore allocation instance with agents who have identical valuations, taken from that bin-packing instance. Assume that the agents’ thresholds are at least their out-of MMS. Then, after Algorithm 1 allocates bundles to all agents, some chores may remain unallocated. ∎
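To make the reduction concrete, the following is a minimal sketch of FFD as a bin-packing routine (the function name and list-of-bins representation are our own). It also illustrates, on a small hand-made instance, the suboptimality of FFD that the proof relies on.

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into bins of a fixed capacity using First Fit Decreasing:
    sort items by descending size, then put each item into the first
    (smallest-index) bin where it still fits, opening a new bin otherwise.
    """
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no existing bin can take the item
            bins.append([size])
    return bins

# FFD can be suboptimal: these items fit into 3 bins of capacity 10
# ([5, 5], [4, 3, 3], [4, 3, 3]), but FFD opens a 4th bin.
print(len(first_fit_decreasing([5, 5, 4, 4, 3, 3, 3, 3], 10)))  # prints 4
```

In the chore-allocation view, each bin is an agent's bundle and the capacity is the common threshold; the unallocated chores in the proof correspond to the bins beyond the first ones.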
Consider Proposition 5.3 with and . By Theorem 5.1, our algorithm achieves ordinal approximation. This bound is tight, since we cannot guarantee to all agents their 1-out-of MMS. We present this tight example below.
Example 5.0 (A tight example for Algorithm 1).
Consider an instance with agents and chores valued as follows for all agents: four chores valued at , four chores valued at , four chores valued at , and eight chores valued at . For each agent, the out-of MMS partition contains the following bundles with the MMS value of :

bundles of chores with values ;

bundles of chores with values .
With the threshold values set to 400, Algorithm 1 generates the following bundles:

4 bundles with chores ;

1 bundle with chores ;

1 bundle with chores ;

1 bundle with chores .
After allocating these 7 bundles, a chore with the value of remains unallocated and cannot be added to any of the above bundles since it would violate the threshold of .
6. Polynomial-time Approximations
In this section, we develop an efficient approximation algorithm that achieves out-of MMS for any chores instance. We rely on Algorithm 1 while utilizing an efficient approximation algorithm to find reasonable threshold values.
This result provides an interesting computational contrast between multiplicative and ordinal approximations of MMS for allocating chores: multiplicative approximations require exact MMS values, whose computation can be seen as a job scheduling problem where the goal is to minimize the makespan (the maximum completion time of any machine). Ordinal MMS approximation on chore instances, however, can be modeled as the combinatorial problem of bin packing (see Korte and Vygen (2018) for a detailed survey), where the goal is to minimize the number of bins subject to an upper bound on the total size of items in each bin.
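To make the scheduling view concrete, the following brute-force sketch (exponential, for illustration only; all names are ours) computes an agent's MMS cost for chores as a min-makespan problem: split the chores' disutilities into the required number of bundles so as to minimize the largest bundle total.

```python
from itertools import product

def mms_cost(costs, d):
    """Minimum possible makespan when splitting chores into d bundles:
    minimize, over all partitions into d bundles, the maximum total
    disutility of any bundle. The agent's MMS value is the negation of
    this. Brute force over all d**len(costs) assignments.
    """
    best = float("inf")
    for assignment in product(range(d), repeat=len(costs)):
        loads = [0] * d
        for cost, bundle in zip(costs, assignment):
            loads[bundle] += cost
        best = min(best, max(loads))
    return best

# Eight chores with disutilities summing to 30 can be split into
# 3 balanced bundles of total 10 each, so the min makespan is 10.
print(mms_cost([5, 5, 4, 4, 3, 3, 3, 3], 3))  # prints 10
```

The bin-packing view asks the converse question: given an upper bound on each bundle's total, how few bundles suffice?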
While both problems are NP-hard, they differ in the approximation algorithms available for them. The job scheduling problem admits polynomial-time approximation schemes (PTAS) (Woeginger, 1997), but their runtime is exponential in the approximation accuracy . The bin packing problem underlying our ordinal MMS approximation, on the other hand, admits additive approximation algorithms.
In particular, we use an algorithm by Hoberg and Rothvoss (2017), which we call Algorithm HR. Algorithm HR takes as input a bin-packing instance , and returns a packing with at most bins (for some fixed constant ) in time polynomial in (the number of input numbers in ), where denotes the smallest possible number of bins for . We combine Algorithm HR with binary search on the bin size. (Similar search techniques have been used for MultiFit scheduling algorithms (Coffman et al., 1978) and the dual approximation scheme of Hochbaum and Shmoys (1987).)
To efficiently apply binary search, we assume in this section that the values of chores are negative integers with a bounded binary representation. The runtime of our algorithm will be polynomial in the size of the binary representation of the input.
Lemma 6.0.
Given an additive chores instance with integer values, for any integer and agent , it is possible to compute a number for which
in time polynomial in the size of the binary representation of the input.
Proof.
We start by applying Algorithm 2. The algorithm converts the chores allocation instance to a bin-packing instance, where each chore is converted to an input of size . Then it applies binary search with lower bound and upper bound . Throughout the search, the following invariants are maintained:

;

Algorithm HR with bin size needs at most bins;

Algorithm HR with bin size needs more than bins.
The invariants are obviously true at initialization, and they are maintained by the way and are updated. Let be the returned value, that is, the value of once the algorithm terminates. By the termination condition, at this point .
Invariant (2) implies that there exists a partition of chores into bins, in which the total absolute value of each bin is at most , so the total value is at least . Therefore, .
Invariant (3) implies that there is no partition of the chores into or fewer bins in which the total absolute value of each bin is at most ; otherwise the HR algorithm would have found a packing with at most bins of size . Therefore, . Since we assumed that all chores’ values are integers, this implies .
The binary search uses iterations, which is polynomial in the size of the binary representation of the input. Each iteration runs the HR algorithm, whose runtime is polynomial in . This concludes the proof of the lemma. ∎
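The binary search in the proof can be sketched as follows. Since the Hoberg–Rothvoss algorithm is involved, we use FFD as a hypothetical stand-in for the packing oracle; the search logic and its two invariants are the same, and all function names are ours.

```python
def ffd_bin_count(sizes, capacity):
    """Bins used by First Fit Decreasing; infinity if some item alone
    exceeds the capacity. Stand-in for Algorithm HR's packing oracle."""
    if max(sizes) > capacity:
        return float("inf")
    loads = []  # current total of each open bin
    for s in sorted(sizes, reverse=True):
        for i, load in enumerate(loads):
            if load + s <= capacity:
                loads[i] += s
                break
        else:
            loads.append(s)
    return len(loads)

def smallest_feasible_bin_size(sizes, n, packs=ffd_bin_count):
    """Smallest integer bin size t such that the oracle packs all items
    into at most n bins of size t, found by binary search.

    Invariants (as in the proof): packs(sizes, hi) <= n and
    packs(sizes, lo) > n; both hold initially and after every update.
    """
    lo, hi = max(sizes) - 1, sum(sizes)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if packs(sizes, mid) <= n:
            hi = mid
        else:
            lo = mid
    return hi
```

With sizes [5, 5, 4, 4, 3, 3, 3, 3] and n = 3 bins, the search returns 11: FFD needs 4 bins at size 10 but only 3 at size 11, while an optimal packer would return 10. This slack is exactly what the additive guarantee of Algorithm HR bounds.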
Theorem 6.2.
Given an additive chores instance with integer values, it is possible to find in polynomial time, for some fixed positive constant , a out-of