1 Introduction
Formally, the Santa Claus problem takes as input a set $C$ of children, a set $G$ of gifts, and values $v_{ij} \ge 0$ for all $i \in G$ and $j \in C$. In the restricted assignment version considered here, a child is only interested in a particular subset of the gifts, and for those gifts the value depends only on the gift itself; that is, $v_{ij} \in \{0, v_i\}$. The goal is to find an assignment $\sigma : G \to C$ of gifts to children so that $\min_{j \in C} \sum_{i \in \sigma^{-1}(j)} v_{ij}$ is maximized.
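As a toy illustration (our own example, not from the original text): suppose there are two children and three gifts with values $v_1 = 3$ and $v_2 = v_3 = 2$, where child 1 is only interested in gift 1 while child 2 is interested in all three. Assigning

```latex
\sigma(1) = 1, \qquad \sigma(2) = \sigma(3) = 2
```

gives child 1 a value of $3$ and child 2 a value of $2+2 = 4$, so the objective is $\min\{3, 4\} = 3$; this is optimal, since child 1 can never receive value more than $3$.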
The first major progress on this problem is due to Bansal and Sviridenko [BS06], who showed an $O(\log\log m / \log\log\log m)$-approximation based on rounding a configuration LP. The authors of [BS06] also realized that in order to obtain a constant factor approximation, it suffices to answer a purely combinatorial question: show that in a uniform, regular bipartite hypergraph there is a left-perfect matching that selects a constant fraction of the nodes from each original hyperedge. This question was affirmatively answered by Feige [Fei08], who proved it with a large unspecified constant by applying the Lovász Local Lemma repeatedly. Then Asadpour, Feige and Saberi [AFS08] showed that one can answer the question of [BS06] by using a beautiful theorem on hypergraph matchings due to Haxell [Hax95]; their bound of 4 (the conference version of [AFS08] proves a factor of 5, which the journal version [AFS12] improves to 4) has been slightly improved to 3.84 by Jansen and Rohwedder [JR18c] and Cheng and Mao [CM18a]. Recently, Jansen and Rohwedder [JR18a] also showed (still nonconstructively) that it suffices to compare to a linear program with only polynomially many variables and constraints, in contrast to the exponential size configuration LP.

A hypergraph $H = (X \,\dot\cup\, Y, E)$ is called bipartite if $|e \cap X| = 1$ for all hyperedges $e \in E$. A (left-)perfect matching is a set of hyperedges that are pairwise disjoint but cover each node in $X$. In general, finding perfect matchings even in bipartite hypergraphs is hard, but there is an intriguing sufficient condition:
Theorem 1 (Haxell [Hax95]).
Let $H = (X \,\dot\cup\, Y, E)$ be a bipartite hypergraph with $|e \cap Y| \le r - 1$ for all $e \in E$. Then either $H$ contains a left-perfect matching, or there is a subset $C \subseteq X$ and a subset $D \subseteq Y$ so that all hyperedges incident to $C$ intersect $D$ and $|D| \le (2r-3) \cdot (|C| - 1)$.
It is instructive to consider a “standard” bipartite graph, i.e. the case $r = 2$. In this case, if there is no perfect matching, then there is a set $C \subseteq X$ with at most $|C| - 1$ many neighbors; hence Haxell’s condition generalizes Hall’s Theorem. Unlike Hall’s Theorem, Haxell’s proof is nonconstructive and based on a possibly exponential time augmentation argument. Only very recently and with a lot of care, Annamalai [Ann16] managed to make the argument polynomial. This was accomplished by introducing some slack into the condition and assuming the parameter $r$ is a constant. Preceding [Ann16], Annamalai, Kalaitzis and Svensson [AKS15] gave a nontrivially modified version of Haxell’s argument for Santa Claus, which runs in polynomial time and gives a constant factor approximation (to be precise, they obtain a $(6 + 2\sqrt{10} + \varepsilon)$-approximation in time polynomial for any fixed $\varepsilon > 0$). Recently, Cheng and Mao altered their algorithm to improve the approximation factor to $6 + \delta$, for any constant $\delta > 0$ [CM18b]. Our algorithm will also borrow a lot from [AKS15]. However, through a much cleaner argument we obtain a result that works in a more general matroid setting and implies a better approximation factor of $4 + \varepsilon$ for Santa Claus.
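For the $r = 2$ case, Hall's condition can be checked directly on small instances. A minimal brute-force sketch (our own illustration, not part of the paper's algorithm):

```python
from itertools import combinations

def hall_violator(left, neighbors):
    """Return a set S of left nodes with |N(S)| < |S|, or None.

    By Hall's theorem, None means a perfect matching of `left` exists.
    `neighbors[u]` is the set of right-side neighbors of left node u.
    """
    for k in range(1, len(left) + 1):
        for subset in combinations(left, k):
            joint = set().union(*(neighbors[u] for u in subset))
            if len(joint) < len(subset):
                return set(subset)
    return None

# Left nodes 1 and 2 compete for the single right node "b".
nbrs = {0: {"a", "b"}, 1: {"b"}, 2: {"b"}}
print(hall_violator([0, 1, 2], nbrs))  # {1, 2}
```

Haxell's theorem replaces the deficient set $C$ and its neighborhood by the pair $(C, D)$ above, with the size bound degrading by the factor $2r-3$.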
It should not go without mention that the version of the Santa Claus problem with arbitrary values $v_{ij}$ has also been studied before under the name MaxMin Fair Allocation. Interestingly, the integrality gap of the configuration LP is at least $\Omega(\sqrt{m})$ in this setting [BS06]. Still, Chakrabarty, Chuzhoy and Khanna [CCK09] found a (rather complicated) $O(n^{\varepsilon})$-approximation algorithm running in time $n^{O(1/\varepsilon)}$, where $\varepsilon > 0$ is arbitrary but fixed (a slightly weaker factor is achieved if only polynomial time is allowed).
Santa Claus has a very well studied “dual” min-max problem, usually phrased as Makespan Scheduling with machines $M$ and jobs $J$. Here job $j$ has a running time of $p_{ij}$ on machine $i$, and the goal is to assign jobs to machines so that the maximum load of any machine is minimized. In this general setting, the seminal algorithm of Lenstra, Shmoys and Tardos [LST87] gives a 2-approximation, with no further improvement since then. In fact, a better than $3/2$-approximation is NP-hard [LST87], and the configuration LP has an integrality gap of 2 [VW11]. In the restricted assignment setting with $p_{ij} \in \{p_j, \infty\}$, the breakthrough of Svensson [Sve11] provides a nonconstructive bound of $33/17$ on the integrality gap of the configuration LP using a custom-tailored Haxell-type search method. Recently, this was improved by Jansen and Rohwedder [JR17] to $11/6$. In an even more restricted variant called Graph Balancing, each job is admissible on exactly 2 machines. In this setting Ebenlendr, Krcál and Sgall [EKS08] gave a 1.75-approximation based on an LP rounding approach, which has again been improved by Jansen and Rohwedder [JR18b] to 1.749 using a local search argument.
1.1 Our contributions
Let $\mathcal{M} = (X, \mathcal{I})$ be a matroid with ground set $X$ and a family of independent sets $\mathcal{I} \subseteq 2^X$. Recall that a matroid is characterized by three properties:
Nonemptiness: $\emptyset \in \mathcal{I}$;
Monotonicity: For $I \in \mathcal{I}$ and $J \subseteq I$ one has $J \in \mathcal{I}$;
Exchange property: For all $I, J \in \mathcal{I}$ with $|J| < |I|$ there is an element $x \in I \setminus J$ so that $J \cup \{x\} \in \mathcal{I}$. The bases of the matroid are all inclusion-wise maximal independent sets. The cardinalities of all bases are identical, with size denoted as $\mathrm{rank}(\mathcal{M})$. The convex hull of all bases is called the base polytope, that is $P_B = \mathrm{conv}\{\chi_S : S \text{ is a basis of } \mathcal{M}\}$, where $\chi_S \in \{0,1\}^X$ is the characteristic vector of $S$.

Now consider a bipartite graph $G = (X \cup R, E)$ with the ground set $X$ on one side and a set of resources $R$ on the other side; each resource $i \in R$ has a size $v_i \ge 0$. In a problem that we call Matroid MaxMin Allocation, the goal is to find a basis $S$ and an assignment $\sigma : R \to S$ with $\sigma(i) \in N(i)$ so that $\min_{j \in S} \sum_{i \in \sigma^{-1}(j)} v_i$ is maximized. To the best of our knowledge, this problem has not been studied before. In particular, if $T$ is the target objective function value, then we can define a linear programming relaxation $Q(T)$ as the set of vectors $(x, y)$ satisfying the constraints
\[
x \in P_B, \qquad \sum_{i \in N(j)} v_i\, y_{ij} \ge T \cdot x_j \;\; \forall j \in X, \qquad \sum_{j \in N(i)} y_{ij} \le 1 \;\; \forall i \in R, \qquad y \ge 0.
\]
Here, the decision variable $x_j$ expresses whether element $j \in X$ should be part of the basis, and $y_{ij}$ expresses whether resource $i$ should be assigned to element $j$. We abbreviate $N(j) = \{i \in R : \{i, j\} \in E\}$ as the neighborhood of $j$, and $N(U) = \bigcup_{j \in U} N(j)$ is shorthand for the neighborhood of a set $U$. Then our main technical result is:
Theorem 2.
Suppose $Q(T) \neq \emptyset$. Then for any fixed $\varepsilon > 0$ one can find $(x, y) \in Q(T / (3 + \varepsilon))$
with both $x$ and $y$ integral, in time polynomial in the input size (with the exponent depending on $\varepsilon$). This assumes that membership in the matroid can be tested in time polynomial in $|X|$.
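On tiny instances, Matroid MaxMin Allocation can be solved by brute force, which is useful for sanity checks. A sketch for the special case of a uniform matroid of rank $k$ (all names here are our own, not from the paper):

```python
from itertools import combinations, product

def maxmin_uniform(values, neighbors, k):
    """Brute-force Matroid MaxMin Allocation for a uniform matroid.

    Bases are exactly the k-subsets of the ground set; we try every
    basis and every assignment of resources to adjacent basis elements.
    `neighbors[i]` lists ground elements adjacent to resource i.
    """
    ground = sorted({j for ns in neighbors for j in ns})
    best = 0
    for basis in combinations(ground, k):
        # Each resource goes to an adjacent basis element, or is unused.
        choices = [[j for j in neighbors[i] if j in basis] + [None]
                   for i in range(len(values))]
        for assignment in product(*choices):
            load = {j: 0 for j in basis}
            for i, j in enumerate(assignment):
                if j is not None:
                    load[j] += values[i]
            best = max(best, min(load.values()))
    return best

vals = [3, 2, 2]                        # resource values
nbrs = [["a"], ["a", "b"], ["a", "b"]]  # adjacency to ground elements
print(maxmin_uniform(vals, nbrs, 2))    # 3
```

Theorem 2 replaces this exponential enumeration by a local search whose guarantee degrades only by the factor $3 + \varepsilon$, for an arbitrary matroid.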
Previously this result was not even known with nonconstructive methods. We see that Matroid MaxMin Allocation is a useful framework by applying it to the Santa Claus problem:
Theorem 3.
The Santa Claus problem admits a $(4 + \varepsilon)$-approximation algorithm running in time polynomial in the input size, for any fixed $\varepsilon > 0$.
For a suitable threshold $\beta T$ with $\beta \in (0,1)$, call a gift $i$ small if $v_i < \beta T$ and large otherwise. Then the family of sets of children that can simultaneously be assigned distinct large gifts forms a matchable set matroid. We apply Theorem 2 to the co-matroid of the matchable set matroid. Then we obtain a basis $S$, which contains the children not receiving a large gift. These children can receive small gifts of large total value via Theorem 2, while the remaining children each receive a large gift with value at least $\beta T$. Setting $\beta$ to balance the two guarantees implies the claim. Note the approximation factor will be with respect to a natural, compact linear program with $O(|C| \cdot |G|)$ many variables and constraints. The smallest LP that was previously known to have a constant integrality gap was the polynomial size LP of [JR18a].
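The way the two guarantees are balanced can be sketched as follows (a schematic calculation; the constants mirror Theorem 2, but the paper's exact bookkeeping may differ slightly):

```latex
\underbrace{\beta T}_{\text{large-gift child}}
\;=\;
\underbrace{\frac{(1-\beta)\,T}{3+\varepsilon}}_{\text{small-gift child}}
\quad\Longrightarrow\quad
\beta = \frac{1}{4+\varepsilon},
```

so that under this choice every child is guaranteed value at least $T / (4 + \varepsilon)$.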
2 An algorithm for Matroid MaxMin Allocation
In this section we provide an algorithm that proves Theorem 2.
2.1 Intuition for the algorithm
We provide some insight by starting with an informal overview of our algorithm. Let $G = (X \cup R, E)$ be the bipartite graph defined in Section 1.1. If $j \in X$ and $W \subseteq N(j)$, we can consider the pair $(j, W)$ to be a hyperedge. Then for $\theta > 0$ and $\mathrm{val}(j, W) = \sum_{i \in W} v_i$ the function summing the value of a hyperedge’s resources, we say that $(j, W)$ is a $\theta$-edge if it is a hyperedge with minimal (inclusion-wise) resources such that $\mathrm{val}(j, W) \ge \theta$. By $E_\theta$ we denote the set of $\theta$-edges.
Fix constants $\alpha$ and $\mu$, to be chosen later. The goal of the algorithm is to find a basis $S$ and a hypergraph matching $M$ covering $S$. The algorithm is initialized with $S = \{j_0\}$ for a node $j_0$ with $\{j_0\} \in \mathcal{I}$, and $M = \emptyset$. We perform $\mathrm{rank}(\mathcal{M})$ many phases, where in each phase we find a larger matching, and the set it covers in $X$ is independent with respect to the matroid. In an intermediate phase, we begin with an independent set $S$ and a hypergraph matching $M$ covering $S$ with one exposed node. At the end of a phase, the algorithm produces an updated matching $M'$ covering an independent set $S'$, with $|S'| = |S|$. For each $j \in S'$, there exists an edge of $M'$ covering $j$. Repeating this $\mathrm{rank}(\mathcal{M})$ times, we end with a basis which is well-covered by edges.
The algorithm generalizes the notion of an augmenting path used to find a maximum matching in bipartite graphs to an augmenting tree. But instead of swapping every other edge in an augmenting path, as is the case for a bipartite graph, the algorithm swaps sets of edges in the augmenting tree to find more space in the hypergraph. During a phase, the edges are swapped in such a way that the underlying set in $X$ covered by the matching is always independent with respect to the matroid. The edges which are candidates for being swapped into the matching are called adding edges and denoted by $A$, while those which are candidates for being swapped out of the matching are called blocking edges and denoted by $B$. It is helpful to discuss the nodes covered by adding and blocking edges in each part, and so for a set of hyperedges $F$ we define $F_X$ and $F_R$ as the nodes covered by $F$ in $X$ and $R$, respectively. The algorithm gives some slack by allowing the adding edges to be slightly larger than the blocking edges.
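For contrast, here is the classical graph case that the augmenting tree generalizes: a standard augmenting path search for bipartite matching, where one alternately swaps matched and unmatched edges along a path (our own illustrative sketch):

```python
def max_bipartite_matching(neighbors, num_left):
    """Augmenting-path search for maximum bipartite matching.

    `neighbors[u]` lists right nodes adjacent to left node u.
    Returns a dict mapping each matched right node to its left node.
    """
    match = {}  # right node -> left node

    def augment(u, seen):
        for v in neighbors[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or the left node matched to v can be rerouted.
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False

    for u in range(num_left):
        augment(u, set())
    return match

m = max_bipartite_matching({0: ["a"], 1: ["a", "b"], 2: ["b", "c"]}, 3)
print(len(m))  # 3
```

In the hypergraph setting there is no alternating path; entire sets of adding edges replace sets of blocking edges, and the matroid constraint on the covered nodes must be preserved at every swap.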
The parameters $\alpha$ and $\mu$ determine the value of the adding and blocking edges, respectively, so the adding edges are a subset of the larger edge class while the blocking edges are a subset of the smaller one. We also cap resource values so that all elements in the basis receive resources of suitably bounded value. The following observations follow from the minimality of the hyperedges:

By minimality, an edge has value less than its threshold plus the value of any single resource it contains. This yields corresponding upper bounds on the value of an add edge and of a blocking edge.

Every blocking edge has value at most not covered by an add edge.
To build the augmenting tree, the algorithm starts from the exposed node in $S$ uncovered by $M$ and chooses an add edge covering it, which is added to $A$. If there is a large enough hyperedge covering the exposed node whose resource set is disjoint from $M$, then there are enough available resources and we simply update $M$ by adding this edge to it. Otherwise, the neighborhood of the exposed node does not contain a set of resources of sufficient total value free from $M$. The edges of $M$ intersecting the add edge are added to the set of blocking edges $B$. Nodes covered by blocking edges are called discovered nodes, as they are the nodes covered by the hypermatching which appear in the augmenting tree.
Continuing to build the augmenting tree in later iterations, the algorithm uses an Expansion Lemma to find a large set of disjoint hyperedges that cover a subset of $X$ which can be swapped into the independent set in place of some subset of discovered nodes while maintaining independence in the matroid. This set of hyperedges either intersects many edges of the matching $M$, or a constant fraction of its edges contain a hyperedge that is disjoint from $M$.
In the first case, the new hyperedges intersecting the matching are added to the adding edges, and the edges of $M$ they intersect are added to the blocking edges, indexed by the current iteration. Note we naturally obtain layers which partition the adding and blocking edges in our augmenting tree. The layers for the adding and blocking edges are denoted $A_\ell$ and $B_\ell$, respectively.
The layer indices are tracked because they are useful in proving the algorithm’s runtime. In the second case, considering the edges that contain a hyperedge disjoint from $M$, the algorithm finds a layer which has a large number of discovered nodes that can be swapped out for a subset of the nodes these hyperedges cover.
2.2 A detailed procedure
Recall that we fixed $\varepsilon > 0$; the thresholds for adding and blocking edges are then set as functions of $\varepsilon$. Here lies the subtle but crucial difference to previous work. In [AKS15] the authors have to use adding edges that are a large constant factor bigger than the blocking edges. In our setup we can allow adding edges that are only marginally larger than the blocking edges. This results in an improved approximation factor of $4 + \varepsilon$ for Santa Claus, compared to the significantly larger constant factor of [AKS15].
The algorithm is described in Figure 1. For later reference, we also fix the constants from Lemma 7, Lemma 8, and Lemma 9. Our bounds on these constants do not rely on a specific choice of $\varepsilon$; they only use that $\varepsilon$ is a fixed positive constant. Both cases in the algorithm are visualized in Figures 2 and 3.
2.3 Correctness of the algorithm
Here, we prove several lemmas used in the algorithm which together imply Theorem 2. We begin by building up to our Expansion Lemma, Lemma 7. Our algorithm takes a fixed independent set and swaps a subset of it for a set of new nodes in order to construct a new independent set of the same size. This is possible by Lemma 7.
Recall a variant of the so-called Exchange Lemma. For independent sets $I, J \in \mathcal{I}$, let $G(I, J)$ denote the bipartite graph on parts $I$ and $J$ (if $I \cap J \neq \emptyset$, then we have one copy of the intersection on the left and one on the right). For $x \in I \setminus J$ and $y \in J \setminus I$ we insert an edge in $G(I, J)$ if $(I \setminus \{x\}) \cup \{y\} \in \mathcal{I}$. Otherwise, for $x \in I \cap J$, there is an edge between the left and right copies of $x$, and this is the only edge for both copies of $x$.
Lemma 4 (Exchange Lemma).
For any matroid $\mathcal{M} = (X, \mathcal{I})$ and independent sets $I, J \in \mathcal{I}$ with $|I| \le |J|$, the exchange graph $G(I, J)$ contains a left-perfect matching.
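Lemma 4 is easy to check empirically on small matroids. A sketch on a partition matroid, with independence tested by brute force and the left-perfect matching found by augmenting paths (our own illustration; the names are hypothetical):

```python
# Partition matroid on {0..5}: at most one element from each block.
blocks = [{0, 1}, {2, 3}, {4, 5}]

def independent(s):
    return all(len(s & b) <= 1 for b in blocks)

def exchange_graph(I, J):
    """Edges x -> y with (I - x + y) independent; copies of common
    elements are connected only to each other."""
    edges = {}
    for x in I:
        if x in J:
            edges[x] = {x}
        else:
            edges[x] = {y for y in J - I
                        if independent((I - {x}) | {y})}
    return edges

def has_left_perfect_matching(edges, left):
    match = {}
    def augment(u, seen):
        for v in edges[u] - seen:
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    return all(augment(u, set()) for u in left)

I, J = {0, 2, 4}, {1, 3, 5}
print(has_left_perfect_matching(exchange_graph(I, J), I))  # True
```

Here every $x \in I$ can only be exchanged for the other element of its block, so the matching is forced, matching the lemma's guarantee.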
Next, we prove several lemmas about vectors in the base polytope with respect to sets containing swappable elements. Lemma 7 relies on a Swapping Lemma, Lemma 6, for which the next lemma serves as a stepping stone.
Lemma 5 (Weak Swapping Lemma).
Let be a matroid with an independent set . For , define
Then for any vector in the base polytope one has .
Proof.
Note that in particular . Moreover, an equivalent definition of is
Due to the integrality of the base polytope, there is a basis with , where is the characteristic vector of . As and are independent sets with , from Lemma 4 there is a leftperfect matching in the exchange graph . The neighborhood of in is . As there is a leftperfect matching, is least and hence . ∎
Next, we derive a more general form of the Swapping Lemma (which coincides with the previous Lemma 5 if ):
Lemma 6 (Strong Swapping Lemma).
Let be a matroid with an independent set . Let and with and . Define
Then for any vector in the base polytope one has .
Proof.
Having proved our swapping lemma, we are equipped to prove the Expansion Lemma. Note that in our algorithm, layers are built so as to ensure the cardinality condition of the lemma; due to this and the minimality of the adding and blocking edges, its hypotheses are met.
Lemma 7 (Expansion Lemma).
Let , with . Further, let and assume that there exists . Then there is a set of size covered by a matching so that and .
Proof.
Note that the new set may contain elements from the old one. Greedily choose the set and the matching one node/edge at a time. Suppose the greedy procedure gets stuck: no edge can be added without intersecting the current structure. For the sake of contradiction assume this happens before the required size is reached. First, let
be the nodes which could be added to while preserving independence. Then for our fixed , by Lemma 6 one has
Let be the right hand side resources that are being covered by the augmenting tree. Here, we let . Using the minimality of the adding and blocking edges,
By the assumption that the greedy procedure is stuck, there is no edge with and . If denotes the neighborhood of in the bipartite graph , then this means that val for all . For every fixed we can then lower bound the weight going into as
Then double counting the weight running between and with a lower and upper bound shows that
Simplifying the above,
Thus we reach a contradiction for our choice of . ∎
The algorithm relies on the fact that from the set of hyperedges, , guaranteed by the Expansion Lemma, there is either some constant fraction of to swap into the matching, or a constant fraction of is blocked by edges in the current matching. In the former, significant space is found in for . In the latter, enough edges of the matching are intersected to guarantee the next layer in the augmenting tree is large. The following lemma proves at least one of these conditions occurs.
Lemma 8.
Set . Let and both be hypergraph matchings. Further, let
be the edges in that still have value after overlap with is removed. Then either (i) or (ii) intersects at least edges of .
Proof.
Let be the right hand side nodes where the hypermatchings overlap and suppose for the sake of contradiction that neither of the two cases occur. Then double counting the value of gives
Rearranging and simplifying, the above implies . Thus we contradict our choice of . ∎
Our last lemma will show that a constant fraction of the nodes which could be swapped out of the augmenting tree come from the same layer in the tree. This allows us to swap out enough nodes from the same layer to make substantial progress with each iteration. Here and are labelled the same as in the algorithm.
Lemma 9.
Let sets and be such that . Further, suppose there exists constant such that and for . Then, there exists a layer and constant , such that has size .
Proof.
By induction, can be written in terms of lower indexed sets as
for . Therefore, the size of can be written as . As is a constant, take large enough so , namely . Then the collection of sets for contains at least half of , so one of them must contain at least of . ∎
2.4 Termination and runtime
As seen in Lemma 9,
and solving shows that the number of layers is logarithmic. Thus the total number of layers at any step in the algorithm is logarithmic in the input size. Note that after each collapse of the layers, the matching and possibly the independent set are updated. However, the fixed exposed node will remain exposed until the very last iteration, in which the algorithm finds an edge that augments the matching. Before we discuss the proof that our algorithm terminates, we need a lemma comparing the number of blocking edges after a layer is collapsed to the number of blocking edges at the beginning of the iteration.
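Schematically, the layer bound comes from geometric growth of the blocking-edge layers (a sketch with a generic growth constant $c > 0$ standing in for the specific constant fixed earlier):

```latex
|B_{\ell}| \;\ge\; (1+c)\,|B_{\ell-1}|
\quad\Longrightarrow\quad
(1+c)^{t} \;\le\; |B_t| \;\le\; |M| \;\le\; |X|,
```

hence the number of layers satisfies $t \le \log_{1+c} |X| = O(\log |X|)$.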
Lemma 10.
Let be the index of the collapsed layer and let be the updated blocking edges after a collapse step. Then, .
Proof.
To prove that the algorithm terminates in polynomial time, we consider a signature vector $s = (s_0, s_1, \ldots)$, with one coordinate per layer. The signature vector and the proof of termination are inspired by [AKS15], but they are subtly different.
Lemma 11.
The signature vector decreases lexicographically after each iterative loop in the algorithm.
Proof.
Let $s$ be the signature vector at the beginning of a step in the algorithm, and let $s'$ be the result of $s$ after one iteration of the algorithm. With $t$ denoting the newest built layer in the algorithm: if the newest set of hyperedges found intersects sufficiently many edges of the matching, then another layer in the augmenting tree is built and no layer is collapsed. Then $s'$ is lexicographically smaller than $s$.

Otherwise, some layer $\ell \le t$ is collapsed. All finite coordinates above $\ell$ are deleted from the signature vector, and all coordinates before $\ell$ are unaffected. So it suffices to check that the coordinate at $\ell$ decreases. Again, let $B'_\ell$ be the updated blocking edges after a collapse step. As $B_\ell$ is the only set of blocking edges up to layer $\ell$ affected by the collapse, Lemma 10 bounds $|B'_\ell|$ in terms of $|B_\ell|$, and comparing the corresponding coordinates gives the claim.
∎
Choose the infinite coordinate to be some integer larger than every attainable finite value. Since the size of every layer is suitably bounded, every coordinate of the signature vector is upper bounded as well. Recall the number of layers, and thus the number of coordinates in the signature vector, is also upper bounded. Together, these imply that the sum of the coordinates of the signature vector is polynomially bounded.
As the signature vector has nondecreasing coordinates, each signature vector corresponds to a partition of some integer $n \le N$, where $N$ is the above bound on the coordinate sum. On the other hand, every partition of some $n \le N$ has a corresponding signature vector. Thus we can apply a result of Hardy and Ramanujan to conclude that the total number of signature vectors is $2^{O(\sqrt{N})}$. Since each iteration of the algorithm can be done in polynomial time and the signature vector decreases lexicographically after each iteration, the algorithm terminates within the claimed total running time.
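The partition-counting bound is easy to check numerically. A small dynamic program (our own illustration) computes $p(n)$, whose growth $e^{\Theta(\sqrt{n})}$ is what the Hardy-Ramanujan asymptotics $p(n) \sim e^{\pi\sqrt{2n/3}}/(4n\sqrt{3})$ capture:

```python
import math

def partition_counts(n_max):
    """p[n] = number of integer partitions of n, via a coin-style DP."""
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):      # allow parts of size `part`
        for n in range(part, n_max + 1):
            p[n] += p[n - part]
    return p

p = partition_counts(100)
print(p[100])  # 190569292
# Hardy-Ramanujan predicts log p(n) ~ pi * sqrt(2n/3); the ratio
# below tends to 1 slowly as n grows.
print(math.log(p[100]) / (math.pi * math.sqrt(2 * 100 / 3)))
```

So with $N$ polynomial in the input size, the number of distinct signature vectors, and hence the number of iterations, is subexponential in $N$ rather than exponential.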
3 Application to Santa Claus
In this section, we show a polynomial time $(4+\varepsilon)$-approximation algorithm for the Santa Claus problem. Recall that for a given set of children $C$ and a set of presents $G$, the Santa Claus problem asks how Santa should distribute presents to children in order to maximize the minimum happiness of any child (we assume Santa to be equitable, not influenced by bribery, social status, etc.). Here, present $i$ is only wanted by some subset of children that we denote by $N(i)$, and present $i$ has value $v_i$ to every child in $N(i)$. The happiness of a child is the sum of the values of all presents assigned to it. We assume w.l.o.g. that we know the integral objective function value $T$ of the optimum solution; otherwise $T$ can be found by binary search.
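Since the values are integral and the optimum lies among polynomially many candidates, the binary search is standard. A sketch, with a hypothetical `feasible(T)` oracle standing in for the LP and rounding machinery (both names are our own):

```python
def find_optimal_target(feasible, upper_bound):
    """Largest integer T in [0, upper_bound] with feasible(T) True.

    Assumes monotonicity: feasible(T) implies feasible(T') for all
    T' <= T, which holds for the max-min objective.
    """
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy oracle: one child who can collect total value 7.
print(find_optimal_target(lambda T: T <= 7, 100))  # 7
```

Each probe of the oracle costs one LP solve plus one run of the rounding algorithm, and only logarithmically many probes are needed.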
We partition the gifts into two sets, large gifts and small gifts, using a threshold parameter so that the gifts in each set have values in the appropriate range. Consider the set of vectors satisfying the following constraints:
If , then this LP has many variables and many constraints. To see that this is indeed a relaxation, take any feasible assignment with for all . Now let be a modified assignment where we set for gifts that we decide to drop. For each child that receives at least one large gift we drop all small gifts and all but one large gift. Then a feasible solution is obtained by letting
We will show that given a feasible solution , there exists a feasible solution to . To do this, we will exploit tw