 # Efficient Splitting of Measures and Necklaces

We provide approximation algorithms for two problems, known as NECKLACE SPLITTING and ϵ-CONSENSUS SPLITTING. In the problem ϵ-CONSENSUS SPLITTING, there are n non-atomic probability measures on the interval [0, 1] and k agents. The goal is to divide the interval, via at most n (k-1) cuts, into pieces and distribute them to the k agents in an approximately equitable way, so that the discrepancy between the shares of any two agents, according to each measure, is at most 2 ϵ / k. It is known that this is possible even for ϵ = 0. NECKLACE SPLITTING is a discrete version of ϵ-CONSENSUS SPLITTING. For k = 2 and some absolute positive constant ϵ, both of these problems are PPAD-hard. We consider two types of approximation. The first provides every agent a positive amount of measure of each type under the constraint of making at most n (k - 1) cuts. The second obtains an approximately equitable split with as few cuts as possible. Apart from the offline model, we consider the online model as well, where the interval (or necklace) is presented as a stream, and decisions about cutting and distributing must be made on the spot. For the first type of approximation, we describe an efficient algorithm that gives every agent at least 1/nk of each measure and works even online. For the second type of approximation, we provide an efficient online algorithm that makes poly(n, k, ϵ) cuts and an offline algorithm making O(nk logk/ϵ) cuts. We also establish lower bounds for the number of cuts required in the online model for both problems even for k=2 agents, showing that the number of cuts in our online algorithm is optimal up to a logarithmic factor.


## 1 Introduction

### 1.1 The problems

The ϵ-Consensus Splitting problem deals with a fair partition of an interval among k agents, according to n measures. Necklace Splitting is a discrete version of the problem where the objective is to cut a necklace with beads of n colors into intervals and distribute them to k agents in an equitable way. Both problems can be solved using at most n(k − 1) cuts. The known proofs apply topological arguments and are non-constructive; two- and three-dimensional versions of the results are also known. Known hardness results, discussed in Subsection 1.2, have been proved for the original versions of these two problems. These suggest pursuing the challenge of finding efficient approximation algorithms, as well as that of proving unconditional hardness in restricted models. Before adding more on the background, we give the formal definitions of the two problems.

###### Definition 1.1.

(ϵ-Consensus Splitting) An instance of ϵ-Consensus Splitting with n measures and k agents consists of n non-atomic probability measures on the interval [0, 1], which we denote by μ_i, for 1 ≤ i ≤ n. The goal is to split the interval, via at most n(k − 1) cuts, into subintervals and distribute them to the agents so that for every two agents j ≠ j′ and every measure μ_i, we have |μ_i(A_j) − μ_i(A_{j′})| ≤ 2ϵ/k, where A_j and A_{j′} are the unions of all intervals agents j and j′ receive, respectively.

For any allocation of the interval to the agents, define the absolute discrepancy as max_i max_{j≠j′} |μ_i(A_j) − μ_i(A_{j′})|. This is the maximum difference, over all measures, between the shares of two distinct agents. A valid solution for the ϵ-Consensus Splitting problem is thus a set of at most n(k − 1) cuts on [0, 1] and a partition of the resulting intervals among the agents so that the absolute discrepancy is at most 2ϵ/k. The notion of absolute discrepancy for the necklace problem is defined analogously.
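Concretely, the absolute discrepancy is a direct function of the agents' shares. The following sketch (with made-up share values supplied as a matrix; the function name is ours) just illustrates the definition.

```python
from itertools import combinations

def absolute_discrepancy(shares):
    """shares[j][i] = measure i of the union of intervals agent j receives.

    Returns the maximum, over all measures i and all pairs of agents,
    of |shares[j][i] - shares[j'][i]|.
    """
    n = len(shares[0])  # number of measures
    return max(
        abs(a[i] - b[i])
        for a, b in combinations(shares, 2)
        for i in range(n)
    )

# Two agents, two measures: agent 0 holds (0.5, 0.6), agent 1 holds (0.5, 0.4).
print(round(absolute_discrepancy([[0.5, 0.6], [0.5, 0.4]]), 10))  # -> 0.2
```

A proper split for a given ϵ is then simply an allocation whose value under this function is at most the required bound.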

The ϵ-Consensus Splitting problem has a solution for every instance, even if ϵ = 0. The known proof is non-constructive, that is, it does not yield an efficient algorithm for producing the cuts and the partition for a given input. The ϵ-Consensus Splitting problem was first mentioned more than 70 years ago. It is also called the Consensus-1/k-division problem.

For k = 2, the problem is closely related to the Hobby-Rice Theorem. Filos-Ratsikas, Goldberg and their collaborators consider the case k = 2. They call this version of the problem the ϵ-Consensus Halving problem, a terminology that we adopt here. Some of the results we present are proved only for ϵ-Consensus Halving, but can be generalized to ϵ-Consensus Splitting, as discussed in Section 6.

###### Definition 1.2.

(Necklace Splitting) An instance of Necklace Splitting for n colors and k agents consists of a set of m beads ordered along a line, where each bead is colored by exactly one color i ∈ {1, ..., n}. The goal is to split the necklace, via at most n(k − 1) cuts made between consecutive beads, into intervals and distribute them to the agents so that for each color i, every agent gets either ⌊m_i/k⌋ or ⌈m_i/k⌉ beads of color i, where m_i is the number of beads of color i.

Note that this definition is slightly broader than the original one, where it is assumed that m_i is divisible by k for all i. However, as shown in prior work, these two forms of the Necklace Splitting problem are equivalent. As in the case of ϵ-Consensus Splitting, we call the special case of k = 2 agents the Necklace Halving problem.

The existence of a solution for the Necklace Splitting problem was proved, using topological arguments, first for k = 2 agents, and then for the general case of k agents. A more recent combinatorial proof of this existence result is also known. As in the case of ϵ-Consensus Splitting, these proofs are non-constructive. The Necklace Halving problem was discussed first, and the problem of finding an efficient algorithmic proof of Necklace Splitting has been raised explicitly in the literature.

Recently, there have been several results regarding the hardness of the -Consensus Halving and the Necklace Halving problems. These are discussed in the next subsection.

### 1.2 Hardness and Approximation

PPA and PPAD are two complexity classes introduced in the seminal paper of Papadimitriou. Both of these are contained in the class TFNP, which is the complexity class of total search problems, consisting of all problems in NP where a solution exists for every instance. A problem is PPA-complete if and only if it is polynomially equivalent to the canonical problem LEAF. Similarly, a problem is PPAD-complete if and only if it is polynomially equivalent to the problem END-OF-THE-LINE. A problem is PPA-hard or PPAD-hard if the respective canonical problem is polynomially reducible to it. A number of important problems, such as several versions of Nash Equilibrium and Market Equilibrium, have been proved to be PPAD-complete. It is known that PPAD ⊆ PPA. Hence, PPA-hardness implies PPAD-hardness.

Filos-Ratsikas and Goldberg showed that the ϵ-Consensus Halving problem, first for ϵ inversely exponential and then for ϵ inversely polynomial in the number of measures n, as well as Necklace Halving, are PPA-hard problems. Furthermore, in a subsequent paper with Frederiksen and Zhang, they showed that there exists a constant ϵ for which ϵ-Consensus Halving is PPAD-hard. Our main objective here is to find efficient approximation algorithms for these problems.

### 1.3 Our contribution

We consider approximation algorithms for two versions of the problem, which we call type 1 approximation and type 2 approximation, respectively. The first one is in the context of the ϵ-Consensus Splitting problem, with the aim of providing a strictly positive amount of each measure to each of the k agents using at most n(k − 1) cuts. In the second one, for both ϵ-Consensus Halving and Necklace Halving, we allow the algorithm to make more than n(k − 1) cuts, and expect a proper solution. A proper solution is a finite set of cuts and a distribution of the resulting intervals to the agents so that the absolute discrepancy is at most ϵ (or so that each agent gets either ⌊m_i/2⌋ or ⌈m_i/2⌉ beads of each color i, in the case of Necklace Halving). The objective is to minimize the number of cuts the algorithm makes. Type 2 approximation has been considered in earlier work as well.

In addition to approximation, we also consider hardness in a restricted model, namely the online model, discussed in detail in Section 2. In the online model, the hardness is measured by the minimum number of cuts needed to produce a proper solution. Lower bounds on the number of cuts needed in this model provide a barrier for what online algorithms can achieve.

Some of our ideas for finding deterministic type 2 approximation algorithms are inspired by papers on Discrepancy Minimization. In that line of work, the Balancer is the entity with the designated task of minimizing the absolute discrepancy between agents. We adopt the same terminology here. Thus, the Balancer has the role of an algorithm that makes cuts and assigns the resulting intervals to agents in order to achieve a proper solution for either ϵ-Consensus Splitting or Necklace Splitting.

Our main algorithmic results are summarized in the theorems below. The upper and lower bounds obtained for the online model appear in the table at the end of this subsection.

###### Theorem 3.1.

There exists an algorithm that, given I, an instance of ϵ-Consensus Splitting for k agents and n measures, and an oracle for I that for any point x ∈ [0, 1], positive quantity q and index i, finds the smallest y so that μ_i([x, y]) = q (if such a y exists), makes at most n(k − 1) cuts on the interval [0, 1], distributing the resulting intervals to the agents so that each one gets at least 1/(nk) of each measure. The algorithm makes O(n²k) oracle calls.

It is worth noting that this algorithm can be implemented online and does not require any assumption on the behavior of the measure functions μ_i. Filos-Ratsikas et al. provide an efficient algorithm for the case k = 2 that gives both agents a positive amount of each measure, under the assumption that each one of the measures is uniform on some subinterval of [0, 1] and is 0 on the rest of [0, 1]. The next result deals with online algorithms. The precise online model is discussed in the next section.

###### Theorem 4.1.

There exists an efficient, deterministic, online algorithm that, given I, an instance of ϵ-Consensus Halving, provides a proper solution, making O(n log n / ϵ²) cuts on the interval [0, 1].

###### Theorem 4.2.

There exists an efficient, deterministic, offline algorithm that, given I, an instance of ϵ-Consensus Halving, and an oracle for I as in Theorem 3.1, with the extra abilities to answer queries about the sum of all the measures μ = ∑_i μ_i and to answer queries of the form μ_i([x, y]) for any interval [x, y] ⊆ [0, 1] and measure μ_i, provides a proper solution, making at most O(n log(1/ϵ)) cuts on the interval [0, 1].

Remark: The offline algorithm in Theorem 4.2 makes far fewer cuts than the algorithm of Theorem 4.1. It is interesting to note that for constant ϵ, this algorithm makes O(n) cuts and obtains absolute discrepancy ϵ, while doing so with n cuts is a PPAD-hard problem.

These two algorithms for type 2 approximation for ϵ-Consensus Halving are adaptable to the Necklace Halving problem. Throughout the paper, for Necklace Halving, we use the notation m = ∑_{i=1}^{n} m_i, where m_i is the number of beads of color i. The results below bound the number of cuts these adapted algorithms make to reach a proper solution.

###### Theorem 4.3.

There exists an efficient, deterministic, online algorithm that, given I, an instance of Necklace Halving, provides a proper solution, making at most O(m^{2/3} · n (log n)^{1/3}) cuts.

###### Theorem 4.4.

There exists an efficient, deterministic, offline algorithm that, given I, an instance of Necklace Halving, provides a proper solution, making at most O(n log m) cuts.

In earlier work the authors describe offline algorithms for an approximation version of Necklace Halving and for ϵ-Consensus Halving. Our results here improve the bounds on the number of cuts of these algorithms significantly.

The algorithmic results in the online model, and the nearly matching lower bounds we establish, appear in the table below. Note that the algorithms, for both the ϵ-Consensus Halving and Necklace Halving problems, are optimal up to constant factors for any fixed constant number of measures n ≥ 3. In the lower bounds for Necklace Halving, we always assume that the quantities m_i are equal for all i.

| Problem | n = 2 measures | n ≥ 3, n = O(1) measures | n measures (general case) |
|---|---|---|---|
| Online ϵ-Consensus Halving, upper bound | O(1/ϵ²) | O(1/ϵ²) | O(n log n / ϵ²) |
| Online ϵ-Consensus Halving, lower bound | Ω(1/ϵ) | Ω(1/ϵ²) | Ω(n/ϵ²) |
| Online Necklace Halving, upper bound | O(m^{2/3}) | O(m^{2/3}) | O(m^{2/3} · n (log n)^{1/3}) |
| Online Necklace Halving, lower bound | Ω(√m) | Ω(m^{2/3}) | Ω(n · m^{2/3}) |

The structure of the rest of the paper is as follows: in Section 2 we describe the computational models for the offline and online versions. In Sections 3 and 4 we present the algorithms for type 1 approximation and type 2 approximation, respectively. Subsection 4.1 contains the online algorithms, while Subsection 4.2 contains the offline algorithms. Section 5 contains the lower bounds for the online model. In Subsection 5.1 we deal with Online ϵ-Consensus Halving, and in Subsection 5.2 with Online Necklace Halving. The final Section 6 contains remarks about possible extensions and open problems. To simplify the presentation we omit all floor and ceiling signs throughout the paper whenever these are not crucial. All logarithms are in base 2, unless otherwise specified.

## 2 Computational models and online versions

We first present the offline computational models, and then introduce the online versions for both problems and their corresponding computational models. In total, we have four models, one corresponding to each of the combinations ϵ-Consensus Splitting/Necklace Splitting and online/offline. Each of these four models pertains to both types of approximation.

The input for Necklace Splitting, for an instance with k agents and n colors, consists of a series of m indices, each one taking a value in {1, ..., n}, which represents the color of the respective bead. The runtime is, as usual, the number of basic operations the algorithm makes to provide a solution.

For ϵ-Consensus Halving, we have an oracle that answers two types of queries. The first type takes as input a measure index i, a positive quantity q, and a starting point x, and returns the smallest point y so that μ_i([x, y]) = q if such a point exists, or reports that no such point exists otherwise. The oracle can also take as input an ending point y and a positive quantity q and return the largest x so that μ_i([x, y]) = q. The second type of query takes as input two points x < y and an index i and returns the value μ_i([x, y]). In terms of runtime, we consider both the number of oracle queries made and the computation done besides the queries. We seek algorithms that are efficient in terms of both queries and computation.
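For intuition, both query types are easy to implement for a measure given explicitly as a weighted union of subintervals. The class below is an illustrative sketch (the class and method names are ours, not the paper's), with each measure described by disjoint `(start, end, mass)` pieces carrying uniform density.

```python
class PiecewiseUniformOracle:
    """Illustrative oracle for measures on [0, 1] given as weighted pieces.

    pieces[i] is a sorted list of disjoint (start, end, mass) triples; the
    i-th measure is uniform with density mass/(end-start) on each piece.
    """

    def __init__(self, pieces):
        self.pieces = pieces

    def mass(self, i, x, y):
        """Second query type: the value mu_i([x, y])."""
        total = 0.0
        for s, e, m in self.pieces[i]:
            lo, hi = max(s, x), min(e, y)
            if lo < hi:
                total += m * (hi - lo) / (e - s)
        return total

    def first_point(self, i, x, q):
        """First query type: smallest y with mu_i([x, y]) = q, or None."""
        acc = 0.0
        for s, e, m in self.pieces[i]:
            lo = max(s, x)
            if lo >= e:
                continue
            piece_mass = m * (e - lo) / (e - s)
            if acc + piece_mass >= q:
                density = m / (e - s)
                return lo + (q - acc) / density  # finish inside this piece
            acc += piece_mass
        return None  # less than q measure remains to the right of x

# One measure, uniform on [0, 1]: half of its mass is reached at y = 0.5.
oracle = PiecewiseUniformOracle([[(0.0, 1.0, 1.0)]])
print(oracle.first_point(0, 0.0, 0.5))  # -> 0.5
print(oracle.mass(0, 0.25, 0.75))       # -> 0.5
```

A symmetric scan from the right gives the mirrored version of the first query type.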

It is worth mentioning that this computational model is not exactly the one used in prior work, but the two models are polynomially equivalent. Next we discuss the online models, starting with Necklace Splitting. The parameters n, k and m_i for 1 ≤ i ≤ n are given in advance. The beads are revealed one by one in the following way: for integral 1 ≤ t ≤ m, at time t the Balancer receives the color of bead number t and is given the opportunity to make a cut between beads t and t + 1, where this decision is irreversible. If a cut is made, and I is the newly created interval, the Balancer also has to choose immediately the agent that gets I, before advancing to time t + 1.

The notion of time appears in the model for Online ϵ-Consensus Splitting too. Here t moves from 0 to 1. Since a continuous motion is an impractical computational model, the set of possible values of t where cuts are allowed is a discrete set of values. This set consists of a sequence of points 0 = t_0 < t_1 < ... < t_N = 1. At time t_j, the Balancer receives access to the oracle on [0, t_j], which can provide the values of μ_i([x, y]) for all x < y ≤ t_j and each of the measures μ_i, and has to make the irreversible decision of whether or not to cut at t_j. If he chooses to cut, and I is the newly created interval, the Balancer also needs to decide on the spot which agent gets I. After this decision is made, t advances to t_{j+1}. Note that this means that cuts can only be made at points t_j.

In order to avoid pathological constructions where there is too much measure in a small subinterval of [0, 1], it is needed to set an upper bound for each quantity μ_i([t_j, t_{j+1}]). We set this upper bound to be ϵ²/(8 ln(2n)) for k = 2 and a similar inverse-polynomial quantity for general k. Therefore, in the online model the values of the measures are provided on all members of a partition of [0, 1] into polynomially many subintervals, where every measure of each subinterval is not too large.

## 3 Positive measures

In this section, we present the proof of the result for type 1 approximation:

Proof of Theorem 3.1:

The algorithm traverses the interval once and makes marks on it, creating marked intervals. Then, it chooses at most n(k − 1) of these marks for the cuts. More precisely, in the first stage, the algorithm uses oracle calls to determine the points in [0, 1] where the marks need to be made, and in the second stage determines at which marks to make cuts.

Let x be the point on the interval up to which we have traversed so far. Whenever we make a mark, the interval between the previous mark and the one we just made is called a marked interval, and it receives a label corresponding to one of the measures, according to a rule specified next. For each i, call measure i active if no more than k − 1 of the marked intervals got label i. If a measure is not active at a certain point, ignore it for the rest of the traversal. When none of the measures is still active, stop the first stage of the algorithm. At the beginning, x = 0, and all the measures are active.

Suppose we are at a certain point x, either the starting point or some marked point, and there is at least one active measure. Let y be the smallest real number so that μ_i([x, y]) = 1/(nk) for some i that is an active measure. Mark the point y. The marked interval [x, y] receives label i. Keep going by updating x to be y, until either all measures become inactive, or we run out of measure and there is no y that satisfies the condition above.

We next prove that in this first phase, the algorithm does not run out of measure, and it finishes when no measure is active, hence making nk marks. Indeed, suppose that when the algorithm stops, measure i is still active. Let x_1 < x_2 < ... < x_r be the marked points until then. Since each measure gets at most k marked intervals labelled with its index, and label i got at most k − 1 of them, we have r ≤ nk − 1. Because label i has always been active and each mark was chosen minimally, it follows that μ_i([0, x_1]) ≤ 1/(nk) and for each j, μ_i([x_j, x_{j+1}]) ≤ 1/(nk). This leads to μ_i([0, x_r]) ≤ (nk − 1)/(nk), which means that μ_i([x_r, 1]) ≥ 1/(nk), contradicting the fact that we have run out of measure.
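The marking stage can be sketched concretely once the continuous measures are replaced by a grid representation (our choice, standing in for the oracle): each measure lists its mass per grid cell, marks fall on cell boundaries, and a measure turns inactive after receiving k labels.

```python
from fractions import Fraction

def marking_stage(densities, k):
    """Sketch of the first stage of the type 1 approximation algorithm.

    densities[i] lists the mu_i-mass of each of N grid cells (summing to 1);
    the grid replaces the continuous oracle, so marks are cell boundaries.
    Returns the marks and the label of each marked interval.
    """
    n = len(densities)
    N = len(densities[0])
    target = Fraction(1, n * k)   # each marked interval holds 1/(nk) of its label
    labels_used = [0] * n         # measure i is active while labels_used[i] < k
    x = 0                         # current grid position
    marks, labels = [], []
    while any(c < k for c in labels_used):
        best_y, best_i = None, None
        for i in range(n):
            if labels_used[i] >= k:
                continue          # inactive measures are ignored
            acc, y = 0, x
            while y < N and acc < target:
                acc += densities[i][y]
                y += 1
            if acc >= target and (best_y is None or y < best_y):
                best_y, best_i = y, i
        if best_y is None:
            break                 # cannot happen for probability measures (see proof)
        marks.append(best_y)
        labels.append(best_i)
        labels_used[best_i] += 1
        x = best_y
    return marks, labels

# Two uniform measures on 100 cells, k = 2 agents: the first stage makes
# nk = 4 marks, with two marked intervals per label.
uniform = [Fraction(1, 100)] * 100
marks, labels = marking_stage([uniform, uniform], 2)
print(marks)    # -> [25, 50, 75, 100]
print(labels)   # -> [0, 0, 1, 1]
```

Exact rationals are used so that the comparison `acc < target` behaves like the idealized oracle rather than accumulating floating-point error.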

Next, we present the second stage of the algorithm, which chooses which ones of the marks become cuts. To give each of the k agents 1/(nk) of each measure, it is enough to give, for each i, one of the k marked intervals of label i to each agent. In other words, it suffices to split the labeled intervals evenly among the agents. This is equivalent to the Necklace Splitting problem for n colors and k agents, where there are exactly k beads of each color. Hence, for the second stage of the algorithm, it is enough to prove the following:

###### Lemma 3.2.

There exists an efficient algorithm that solves any instance of Necklace Splitting for n colors and k agents where there are exactly k beads of each color, making at most n(k − 1) cuts.

Proof of Lemma 3.2:

Make a cut before every bead except the very first one and the first bead of each color, and assign each resulting interval to an agent that has not yet received a bead of any color appearing in it. Such an agent always exists, since each interval contains at most one bead of a color that has appeared before, together with first beads of new colors, and each color occurs in exactly k intervals. It thus follows that with this allocation rule each agent gets exactly one bead of each color. To prove the upper bound on the number of cuts, note that for each color i, we never cut right before the first bead of color i that appears on the necklace. Hence, there are exactly n − 1 beads (besides the very first one) with no cut right before them. Since there are nk − 1 points between consecutive beads, the algorithm makes exactly (nk − 1) − (n − 1) = n(k − 1) cuts.
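Since the proof is terse, the following sketch implements one allocation rule consistent with the stated cut pattern (the rule, as coded, is our reading of the argument, not necessarily the paper's exact wording) on a tiny instance.

```python
def split_necklace(beads, k):
    """Split a necklace with exactly k beads of each color among k agents.

    Sketch: cut before every bead except the very first and the first bead
    of each color; give each interval to an agent that has not yet received
    any color appearing in it.
    """
    seen = set()
    intervals = []                    # (start, end) pairs, end exclusive
    start = 0
    for pos, color in enumerate(beads):
        first_of_color = color not in seen
        seen.add(color)
        if pos > 0 and not first_of_color:
            intervals.append((start, pos))   # cut right before bead `pos`
            start = pos
    intervals.append((start, len(beads)))
    owned = [set() for _ in range(k)]        # colors each agent already holds
    assignment = []
    for s, e in intervals:
        cols = set(beads[s:e])
        # an interval has at most one previously seen color, so an agent
        # missing all of its colors always exists
        agent = next(j for j in range(k) if not (owned[j] & cols))
        owned[agent] |= cols
        assignment.append((s, e, agent))
    return assignment

# n = 2 colors, k = 2 agents, beads "abab": n(k-1) = 2 cuts, 3 intervals.
assignment = split_necklace(["a", "b", "a", "b"], 2)
print(len(assignment) - 1)   # -> 2 cuts
print(assignment)            # each agent ends with one bead of each color
```

Running it on other orderings with k beads per color (e.g. `["a", "a", "b", "b"]`) gives the same cut count n(k − 1).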

Observe that the algorithm is polynomial in n and k. The first stage makes nk marks, each of which is done by finding the smallest y so that μ_i([x, y]) = 1/(nk) for some active measure i. To find this y, we call the oracle on all of the active measures i, and take the smallest y found. Hence, by the time the first stage of the algorithm is over, we have made O(n²k) oracle calls and O(n²k) non-oracle operations. The proof of Lemma 3.2 shows that the second stage is also efficient, completing the proof.

Remarks:

• The algorithm described in the proof of Lemma 3.2 traverses the necklace only once and distributes newly created intervals right after making cuts. Hence, this algorithm works in the online version for Necklace Splitting as well (for k beads of each color).

• Since the algorithm that decides where to make cuts works in the online model, the algorithm for type 1 approximation can be adapted to the online model as well.

• Note that 1/(nk) tends to 0 as n tends to infinity; that is, even if there are only two agents, the algorithm does not ensure a constant amount bounded away from 0 of each of the measures to each of them. In the next section we describe efficient algorithms that allow more cuts and achieve better discrepancy.

## 4 Upper bounds

### 4.1 Online algorithms

#### 4.1.1 The ϵ-Consensus Halving Problem

Proof of Theorem 4.1:

We describe an online algorithm that runs in polynomial time (in n, 1/ϵ and the input describing the measures) and achieves discrepancy at most ϵ. The algorithm makes O(n log n / ϵ²) cuts. Note that by the result of Filos-Ratsikas and Goldberg, with only n cuts, the problem of obtaining discrepancy ϵ, for ϵ inversely polynomial in n, is PPA-complete. By allowing more cuts we can get a poly-time algorithm that achieves this discrepancy, and it even works online. It is worth mentioning that this algorithm is a derandomization of a simple randomized algorithm which cuts the interval into pieces of small μ_i-measure for all i and then assigns them randomly and uniformly to the two agents.

Put g = ϵ²/(4 ln(2n)). Traverse the interval, and whenever after the last cut made at a point x we reach a point y so that [x, y] is valued at least g/2 by some measure and at most g/2 by all other measures, we make a cut. Note that if we had access to the oracle on [0, 1] described in the beginning, we could have simply set μ_i([x, y]) = g/2 for the appropriate i. However, in the online model, the Balancer has no access to the oracle on all of [0, 1], and can only make cuts at the prescribed points t_j. Hence, in this model it might happen that the Balancer, having made the last cut at x, has to decide at time t_j whether to cut or not, knowing that μ_i([x, t_{j−1}]) < g/2 for every i but that μ_i([x, t_j]) > g/2 for some i. When this occurs, the Balancer simply cuts at t_j. By the assumption that μ_i([t_{j−1}, t_j]) ≤ g/2 for every measure μ_i, it follows that in this case μ_i([x, t_j]) ≤ g for every i.

To decide about the allocation of the interval created, we define, for each measure μ_i, a potential function ϕ_i, and a function ψ_i that is an upper bound on ϕ_i and is computable efficiently. The variable t here will denote, throughout the algorithm, the index of the last cut made. Define Φ(t) = ∑_i ϕ_i(t) and Ψ(t) = ∑_i ψ_i(t). After each cut at step t, the allocation of the interval created is chosen so as to minimize Ψ(t).

The functions ϕ_i are defined by considering an appropriate probabilistic process. For each i, let X_i be the random variable whose value is the difference between the i-th measure of the share of agent 1 and that of agent 2 if after each cut the interval created is assigned to a uniform random agent. Let ϵ_j be 1 if the j'th interval is assigned to agent 1 and −1 otherwise. Therefore X_i = ∑_{j=1}^{T} ϵ_j a_j, where T is the total number of cuts made and a_j = μ_i(I_j), where I_j is the j'th created interval. The distribution defining X_i is the one where each ϵ_j is 1 or −1 randomly, uniformly and independently. The function ϕ_i is defined as follows

 ϕ_i(t) = E[ (e^{λX_i} + e^{−λX_i})/2 | ϵ_1, ϵ_2, ..., ϵ_t ]

This is a conditional expectation, where the conditioning is on the allocation of the first t intervals, represented by ϵ_1, ϵ_2, ..., ϵ_t, and where λ = ϵ/g. (The specific choice of λ will become clear later.) Since X_i = ∑_j ϵ_j a_j, where a_j is the valuation of the j'th interval by measure i, we have that

 ϕ_i(t) = E[ (e^{λ∑_j ϵ_j a_j} + e^{−λ∑_j ϵ_j a_j})/2 | ϵ_1, ϵ_2, ..., ϵ_t ].

The function ψ_i is defined in a way ensuring it upper bounds the function ϕ_i. It is convenient to split each ϕ_i(t) into

 (1/2) E[ e^{λ∑_j ϵ_j a_j} | ϵ_1, ..., ϵ_t ] + (1/2) E[ e^{−λ∑_j ϵ_j a_j} | ϵ_1, ..., ϵ_t ].

For simplicity, denote the first term ϕ′_i(t) and the second term ϕ″_i(t). Therefore

 ϕ′_i(t) = (1/2) E[ e^{λ∑_j ϵ_j a_j} | ϵ_1, ..., ϵ_t ] = (1/2) e^{λ∑_{j=1}^{t} ϵ_j a_j} · ∏_{j≥t+1} (e^{λa_j} + e^{−λa_j})/2
 = (1/2) e^{λ∑_{j=1}^{t} ϵ_j a_j} · ∏_{j≥t+1} cosh(λa_j)

A similar expression exists for ϕ″_i(t). Define u_t = ∑_{j=1}^{t} ϵ_j a_j and s_t = ∑_{j=1}^{t} a_j. By the discussion above

 ϕ_i(t) = ((e^{λu_t} + e^{−λu_t})/2) ∏_{j≥t+1} cosh(λa_j).

Using the well-known inequality cosh(x) ≤ e^{x²/2}, it follows that

 ϕ_i(t) ≤ ((e^{λu_t} + e^{−λu_t})/2) e^{λ²∑_{j≥t+1} a_j²/2}.

By the way the cuts are produced, a_j ≤ g for all j, and hence

 ∑_{j≥t+1} a_j² ≤ max_{j≥t+1}(|a_j|) · (∑_{j≥t+1} a_j) ≤ g · (∑_{j≥t+1} a_j) = g(1 − s_t).

Therefore

 ϕ_i(t) ≤ ((e^{λu_t} + e^{−λu_t})/2) e^{λ²g(1−s_t)/2}.

Define ψ_i(t) to be the above upper bound for ϕ_i(t), that is

 ψ_i(t) = ((e^{λu_t} + e^{−λu_t})/2) e^{λ²g(1−s_t)/2}.

Note that ψ_i(t) can be easily computed efficiently at time t, since u_t and s_t (as well as λ and g) are known at this point.

Recall that Φ(t) = ∑_i ϕ_i(t) and Ψ(t) = ∑_i ψ_i(t). At time t + 1, the online algorithm chooses ϵ_{t+1} in order to minimize Ψ(t + 1). We next show that this ensures that Ψ is (weakly) decreasing in the variable t. To do so, it is enough to prove that (Ψ(t+1 | ϵ_{t+1} = 1) + Ψ(t+1 | ϵ_{t+1} = −1))/2 ≤ Ψ(t), where Ψ(t+1 | ϵ_{t+1} = δ) denotes the value of Ψ(t + 1) if we choose ϵ_{t+1} = δ. It suffices to show that for every measure i, (ψ_i(t+1 | ϵ_{t+1} = 1) + ψ_i(t+1 | ϵ_{t+1} = −1))/2 ≤ ψ_i(t).

We proceed with the proof of this inequality. To do so, note that

 ψ_i(t+1 | ϵ_{t+1} = 1) = ((e^{λ(u_t+a_{t+1})} + e^{−λ(u_t+a_{t+1})})/2) e^{λ²g(1−s_t−a_{t+1})/2},

and

 ψ_i(t+1 | ϵ_{t+1} = −1) = ((e^{λ(u_t−a_{t+1})} + e^{−λ(u_t−a_{t+1})})/2) e^{λ²g(1−s_t−a_{t+1})/2}.

Therefore

 (ψ_i(t+1 | ϵ_{t+1} = 1) + ψ_i(t+1 | ϵ_{t+1} = −1))/2 = ((e^{λu_t} + e^{−λu_t})/2) · ((e^{λa_{t+1}} + e^{−λa_{t+1}})/2) e^{λ²g(1−s_t−a_{t+1})/2}
 ≤ ((e^{λu_t} + e^{−λu_t})/2) · e^{λ²g a_{t+1}/2} e^{λ²g(1−s_t−a_{t+1})/2} = ((e^{λu_t} + e^{−λu_t})/2) · e^{λ²g(1−s_t)/2} = ψ_i(t),

as needed.

The process of selecting ϵ_{t+1} so that Ψ(t + 1) is minimized is performed for every new cut. Note that the total number of cuts is bounded by 2n/g = O(n log n / ϵ²), since each created interval (except possibly the last) has μ_i-measure at least g/2 for some i, and the total mass of all n measures is n. Hence, this process takes polynomial time in the quantities mentioned, and it splits the interval among the two agents.

Next, we prove that the absolute discrepancy is smaller than ϵ. Let T be the number of cuts made. Note that at t = T, Ψ(T) is not an expectation, but a realization of the random process determined by following the algorithm. At the end X_i = u_T, where X_i is the difference between the amount of measure i of agent 1 and the amount of measure i of agent 2 at the end of the algorithm. If for some measure i the absolute value of the discrepancy is at least ϵ, then |u_T| ≥ ϵ, which means that Ψ(T) ≥ ψ_i(T) ≥ e^{λϵ}/2. Since Ψ is decreasing in t, it is enough to prove that Ψ(0) < e^{λϵ}/2. Note that Ψ(0) = n e^{λ²g/2}, hence it suffices to prove that n e^{λ²g/2} < e^{λϵ}/2, which is equivalent to λϵ − λ²g/2 > ln(2n). Recall that λ = ϵ/g and g = ϵ²/(4 ln(2n)). The left-hand side becomes ϵ²/(2g) = 2 ln(2n), so the inequality clearly holds. Therefore, the algorithm obtains discrepancy at most ϵ, it is poly-time, online, and makes O(n log n / ϵ²) cuts. This completes the proof.
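The whole online procedure can be sketched as follows. The stream is modeled as a sequence of grid cells with per-cell measure values (our representation of the discretized online model), a cut is made once some measure accumulates g/2 since the last cut, and each created interval goes to the agent minimizing Ψ.

```python
import math

def online_consensus_halving(cells, eps):
    """Sketch of the online ϵ-Consensus Halving algorithm.

    cells[j][i] is the mu_i-mass of the j-th grid cell of the stream; each
    cell is assumed to carry at most g/2 mass per measure.  Returns the
    chosen signs (+1 = agent 1, -1 = agent 2) and the final per-measure
    discrepancy vector u.
    """
    n = len(cells[0])
    g = eps * eps / (4 * math.log(2 * n))
    lam = eps / g
    u = [0.0] * n        # u[i]: signed mu_i-difference between the agents
    s = [0.0] * n        # s[i]: mu_i-mass allocated so far
    cur = [0.0] * n      # mu_i-mass accumulated since the last cut
    signs = []

    def psi_total(sign):
        # Psi if the pending interval is assigned sign `sign`.
        total = 0.0
        for i in range(n):
            ui = u[i] + sign * cur[i]
            rest = 1.0 - s[i] - cur[i]
            total += 0.5 * (math.exp(lam * ui) + math.exp(-lam * ui)) \
                     * math.exp(lam * lam * g * rest / 2)
        return total

    def commit():
        sign = 1 if psi_total(1) <= psi_total(-1) else -1
        signs.append(sign)
        for i in range(n):
            u[i] += sign * cur[i]
            s[i] += cur[i]
            cur[i] = 0.0

    for cell in cells:
        for i in range(n):
            cur[i] += cell[i]
        if max(cur) >= g / 2:   # some measure reached g/2: cut here
            commit()
    if max(cur) > 0:            # allocate the final leftover interval
        commit()
    return signs, u

# One measure, uniform over 1000 cells, eps = 0.2: the Psi-minimizing rule
# keeps the final discrepancy far below eps.
cells = [[0.001] for _ in range(1000)]
signs, u = online_consensus_halving(cells, 0.2)
print(abs(u[0]) < 0.2)   # -> True
```

With a single uniform measure the rule degenerates to alternating the agents, so the running discrepancy never exceeds one interval's mass.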

#### 4.1.2 Necklace Halving

In this subsection we use the previous algorithm to prove Theorem 4.3 about online Necklace Halving. Note first that if, say, m ≤ n³ log n, the result is trivial, as fewer than m cuts suffice to split the necklace into single beads; hence we may and will assume that m > n³ log n.

Proof of Theorem 4.3: Given a necklace with m_i beads of color i for 1 ≤ i ≤ n, where ∑_i m_i = m, construct an instance of ϵ-Consensus Halving as follows. Replace each bead of color i by an interval of μ_i-measure 1/m_i and μ_j-measure 0 for all j ≠ i. These intervals are placed next to each other according to the order in the necklace, and their lengths are chosen so that altogether they cover [0, 1]. During the online algorithm, the beads of the necklace are revealed one by one. Throughout the algorithm we call the beads that have not yet been revealed the remaining beads.

Without trying to optimize the absolute constants, define β = m^{2/3}(log n)^{1/3} (and put ϵ = β/m). Our algorithm follows the one used in the proof of Theorem 4.1, but when the number of remaining beads of color i becomes smaller than β the algorithm makes a cut before and after each arriving bead of color i, allocating it to the agent with a smaller number of beads of this type, where ties are broken arbitrarily. During the algorithm we call a color i critical if the number of remaining beads of this color is smaller than β, otherwise it is normal. Although the input is now considered as the interval [0, 1] with continuous measures on it, we allow only cuts between intervals corresponding to consecutive beads, and do not allow any cuts in the interiors of intervals corresponding to beads. Note that if m_i ≥ β, that is, 1/m_i ≤ 1/β, this is consistent with our definition of the online model for the ϵ-Consensus Halving problem. Otherwise, color i is critical from the beginning, and in this case we cut before and after every bead of color i.

Starting at t = 0, as in the proof of Theorem 4.1, define, for each color i, the upper bound functions for the potential, ψ_i(t), and put Ψ(t) = ∑_i ψ_i(t), where the sum at every point is only over the normal colors i. If the last cut is at point x, the next cut is made before the last bead whose corresponding position y in the ϵ-Consensus Halving instance has the property that μ_i([x, y]) ≤ g/2 for all i. The newly created interval is then allocated on the spot according to the choice that minimizes Ψ.

This is done until some color becomes critical. Note that until this stage, since Ψ is (weakly) decreasing, the absolute discrepancy does not exceed ϵ, implying that the discrepancy in terms of beads on the necklace, for each color i, does not exceed ϵ m_i ≤ β. When a color i becomes critical, it stops contributing to the potential function (which, as a result, becomes smaller). From this time on color i is handled separately: the algorithm makes a cut before and after any occurrence of it and allocates it to the agent with a smaller number of beads of this color. As before, the newly created intervals of the beads of the other colors are allocated in order to minimize the potential Ψ.

It is clear that the potential here is a decreasing function of t, as in the previous proof. Therefore, whenever a color becomes critical, the discrepancy in it in terms of beads is at most β, and as the number of remaining beads of this color is essentially as large, the process will end with a balanced partition of the beads of each color between the agents, allocating to each of them either ⌊m_i/2⌋ or ⌈m_i/2⌉ of these beads.

To bound the number of cuts the algorithm makes, call a cut forced if it is made before or after a bead of color i when i is critical. The number of non-forced cuts is clearly at most O(n log n / ϵ²) = O(n · m^{2/3}(log n)^{1/3}). The number of forced cuts cannot exceed 2nβ = 2n · m^{2/3}(log n)^{1/3}. Hence, the total number of cuts made is O(m^{2/3} · n (log n)^{1/3}).

Since the algorithm clearly works online, this completes the proof of the theorem.

### 4.2 Offline algorithms

#### 4.2.1 The ϵ-Consensus Halving Problem

Proof of Theorem 4.2: Given n non-atomic measures μ_1, ..., μ_n on the interval [0, 1], we describe an efficient algorithm that cuts the interval in at most O(n log(1/ϵ)) places and splits the resulting intervals into two collections A_1 and A_2 so that |μ_i(A_1) − μ_i(A_2)| ≤ ϵ for all i. Note, first, that if the collection A_1 has the right amount according to each of the measures μ_i, so does the collection A_2, hence it is convenient to only keep track of the intervals assigned to A_1. For each interval I denote μ(I) = ∑_{i=1}^{n} μ_i(I). Thus μ([0, 1]) = n. Using 2n − 1 cuts, split [0, 1] into 2n intervals I so that μ(I) = 1/2 for all of them. For each interval I let v_I denote the n-dimensional vector (μ_1(I), μ_2(I), ..., μ_n(I)).

By a simple linear algebra argument, which is a standard fact about the properties of basic solutions for Linear Programming problems, one can write the vector (1/2, 1/2, ..., 1/2) as a linear combination of the vectors v_I with coefficients in [0, 1], where at most n of them are not in {0, 1}. For completeness, we include the proof, which also shows that one can find coefficients as above efficiently. Start with all coefficients being 1/2. Call a coefficient which is not in {0, 1} floating and one in {0, 1} fixed. Thus at the beginning all coefficients are floating. As long as there are more than n floating coefficients, find a nontrivial linear dependence among the corresponding vectors and subtract a scalar multiple of it which keeps all floating coefficients in the closed interval [0, 1], shifting at least one of them to the boundary {0, 1}, thus fixing it.

This process clearly ends with at most n floating coefficients. The intervals with fixed coefficients of value 1 are now assigned to the collection A_1 and those with coefficient 0 to A_2. The rest of the intervals remain. Split each of the remaining intervals into two intervals, each with half the μ-value of the original. We get a collection of at most 2n intervals, where each of them inherits the coefficient of its original interval. Each such interval defines an n-vector as before, and the sum of these vectors with the corresponding coefficients (in (0, 1)) is exactly what the collection A_1 should still get to have its total vector of measures being (1/2, 1/2, ..., 1/2).

As before, we can shift the coefficients until at most n of them are floating, assign the intervals with coefficients in {0, 1} to the collections, and keep at most n intervals with floating coefficients. Split each of those into two intervals of half the μ-value each and proceed as before, until we get at most n intervals with floating coefficients, where the μ-value of each of them is at most ϵ/2. This happens after at most O(log(1/ϵ)) rounds. In the first one, we have made 2n − 1 cuts and in each additional round at most n cuts. Thus the total number of cuts is at most O(n log(1/ϵ)).
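The coefficient-shifting step (finding a dependence among more than n vectors in Rⁿ and pushing at least one coefficient to {0, 1} without changing the combination) can be sketched with exact rational arithmetic; the instance at the bottom is a made-up example, and the function names are ours.

```python
from fractions import Fraction

def kernel_vector(cols):
    """A nontrivial rational solution c of sum_j c_j * cols[j] = 0
    (one exists whenever there are more columns than rows)."""
    rows, m = len(cols[0]), len(cols)
    A = [[cols[j][r] for j in range(m)] for r in range(rows)]
    pivots = {}                     # pivot column -> its row
    r = 0
    for c in range(m):
        pr = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pr is None:
            continue
        A[r], A[pr] = A[pr], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [x - A[i][c] * y for x, y in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
        if r == rows:
            break
    free = next(c for c in range(m) if c not in pivots)
    sol = [Fraction(0)] * m
    sol[free] = Fraction(1)
    for c, row in pivots.items():
        sol[c] = -A[row][free]      # back-substitute the free variable
    return sol

def fix_one_coefficient(vectors, coeffs):
    """Shift coeffs along a dependence of the floating columns so that
    sum_j coeffs_j * vectors_j is preserved while at least one floating
    coefficient reaches the boundary {0, 1}."""
    floating = [j for j, x in enumerate(coeffs) if 0 < x < 1]
    d = kernel_vector([vectors[j] for j in floating])
    # largest step t > 0 keeping every floating coefficient inside [0, 1]
    t = min(
        ((1 - coeffs[j]) / dj if dj > 0 else -coeffs[j] / dj)
        for j, dj in zip(floating, d) if dj != 0
    )
    for j, dj in zip(floating, d):
        coeffs[j] += t * dj

F = Fraction
vectors = [(F(1), F(0)), (F(0), F(1)), (F(1), F(1))]   # 3 vectors in R^2
coeffs = [F(1, 2), F(1, 2), F(1, 2)]
target = [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(2)]
fix_one_coefficient(vectors, coeffs)
print(sum(1 for x in coeffs if x in (0, 1)) >= 1)  # -> True: a coefficient is fixed
print([sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(2)] == target)  # -> True
```

Iterating this step, splitting the still-floating intervals in half each round, reproduces the halving schedule described above.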

From now on we add no additional cuts, and show how to allocate the remaining intervals to A_1 and A_2. Let 𝓘 denote the collection of intervals with floating coefficients. Then |𝓘| ≤ n and μ(I) ≤ ϵ/2 for each I ∈ 𝓘. This means that

 n∑i=1∑I∈Iμi(I)≤nϵ/2.

It follows that there is at least one measure i so that

$$\sum_{I\in\mathcal{I}}\mu_i(I)\le \epsilon/2.$$

We can think of the remaining floating coefficients as the fraction of each corresponding interval that agent 1 owns. Observe that for any assignment of the intervals of 𝓘 to the two collections A₁, A₂, the total measure of A₁ according to μᵢ (and hence also that of A₂) lies in [1/2 − ϵ/2, 1/2 + ϵ/2], as this measure with the floating coefficients is exactly 1/2, and any allocation of the intervals with the floating coefficients changes this value by at most ϵ/2. We can thus ignore this measure; for ease of notation assume it is measure number n, and replace each measure vector of the members of 𝓘 by a vector of length n − 1 corresponding to the other measures. If |𝓘| ≥ n (that is, if |𝓘| = n), then it is now possible to shift the floating coefficients as before until at least one of them reaches the boundary, fix it, assigning its interval to A₁ or A₂ as needed, and omit the corresponding interval from 𝓘, ensuring its size is at most n − 1. This means that for the modified 𝓘 the sum

 $$\sum_{i=1}^{n-1}\sum_{I\in\mathcal{I}}\mu_i(I)\le (n-1)\epsilon/2.$$

Hence there is again a measure i, 1 ≤ i ≤ n − 1, so that

$$\sum_{I\in\mathcal{I}}\mu_i(I)\le \epsilon/2.$$

Again, we may assume that this is measure number n − 1, observe that it will stay in its desired range for any future allocation of the remaining intervals, and replace the measure vectors by ones of length n − 2. This process ends with an allocation of all intervals to A₁ and A₂, ensuring that at the end μᵢ(A₁), μᵢ(A₂) ∈ [1/2 − ϵ/2, 1/2 + ϵ/2] for all i. These are the desired collections. It is clear that the procedure for generating them is efficient, requiring only basic linear algebra operations. This completes the proof of the theorem.

Remark:  The above theorem shows that O(n log(1/ϵ)) cuts suffice for ϵ-Consensus Halving (which ensures both agents get at least (1 − ϵ)/2 of each measure). A simple different choice of parameters implies that O(n) cuts suffice in order to ensure that each of the two collections Aⱼ satisfies μᵢ(Aⱼ) ≥ 1/4 for all 1 ≤ i ≤ n and j ∈ {1, 2}.

Remark:  The argument can be extended to splitting into k nearly fair collections of intervals. One way to do it is to generate the collections one by one. See Section 6 for more details.

#### 4.2.2 Necklace Splitting

In this subsection, we present the Necklace Splitting algorithm obtained by adapting the algorithm in the proof of Theorem 4.2.

Proof of Theorem 4.4:

Convert the given necklace into an instance of ϵ-Consensus Halving as done in the proof of Theorem 4.3, and mark the places of the cuts made by the algorithm in the proof of Theorem 4.2 applied to the resulting input, with ϵ chosen small enough that a discrepancy of ϵ in measure corresponds to less than one bead of each color. The intervals separated by the marks are partitioned by the algorithm into two collections forming a solution of the continuous problem. Note that the continuous solution would give discrepancy of less than one bead in each color if we were allowed to cut at the marked points. The only subtle point is that some of the marks may be in the interior of small intervals corresponding to beads, and we wish to cut only between beads.

Call a mark between two consecutive beads fixed, and call the other marks floating. We first show how to shift the floating marks so that the absolute discrepancy does not increase and all but at most one mark for each color lie between two consecutive beads. To do so, if there exists a floating mark between two intervals assigned to the same agent, eliminate it and merge the two intervals. If there is no such mark and there are at least two floating marks in the interior of intervals corresponding to beads of the same color, we shift both of them by the same amount in the appropriate directions until at least one of them becomes fixed. If during this simultaneous shift one of the two marks arrives at a spot occupied by a different mark, we stop the shift and discard one of the duplicate marks. Note that the quantities of each color the two agents receive do not change.

This procedure reduces the number of floating marks until there is at most one floating mark for each color. If there is such a floating mark, round it to the closest boundary between beads, noting that this can increase the absolute discrepancy on that color by less than one bead. Therefore, once all marks are fixed, the absolute discrepancy on each color is less than two. Since all the cuts are between consecutive beads, this discrepancy has to be an integer, and thus it is at most one, as desired. The number of cuts made is that of the continuous algorithm for the chosen value of ϵ.

## 5 Lower bounds

In this section we present the lower bounds for ϵ-Consensus Halving and Necklace Halving in the online model.

### 5.1 The ϵ-Consensus Halving Problem

#### 5.1.1 Punishment and its application

In this subsection we prove a lower bound on the number of cuts the Balancer needs to make in the online model in order to obtain a proper solution for ϵ-Consensus Halving. The proof relies on the idea of punishing the Balancer if he allows too much discrepancy on some measure at any point. The next claim is the crux of the argument. It shows that if we keep one measure unused as a punishing threat, we cannot have, after any cut made, discrepancy exceeding 2ϵ on any other measure. Indeed, otherwise, there exists a punishment that prevents the Balancer from achieving absolute discrepancy at most ϵ in the end with any finite number of cuts. Recall that the discrepancy on measure i during the algorithm is the difference between the i-th measure of the share of agent 1 and that of agent 2.

###### Lemma 5.1.

Denote by x the discrepancy on measure j after the last cut made, and assume that measure n of the shares allocated so far is 0. If |x| > 2ϵ, then there exists an adversarial input so that no finite number of cuts can achieve an absolute discrepancy of at most ϵ at the end.

Proof of Lemma 5.1:

Assume, without loss of generality, that x is positive, thus x > 2ϵ. Let z be the point of the last cut made in [0,1]. Distribute the rest of the measures uniformly on (z,1]. Suppose that a finite number of cuts, at points t₁ < t₂ < ⋯ < tₖ, is made in (z,1) and the resulting intervals are allocated to the agents with maximum absolute discrepancy below ϵ. Denote t₀ = z, tₖ₊₁ = 1 and Jᵢ = [tᵢ, tᵢ₊₁] for 0 ≤ i ≤ k, and let ϵᵢ ∈ {+1, −1} be the sign attributed to each interval Jᵢ, defined to be +1 if it is given to agent 1 and −1 if given to agent 2. Since the discrepancy on measure n is below ϵ in absolute value, it follows that |∑ᵢ ϵᵢ l(Jᵢ)| < ϵ(1 − z), where l(Jᵢ) is the length of Jᵢ. However, the condition on measure j yields the inequality |x + μ(1 − z)⁻¹ ∑ᵢ ϵᵢ l(Jᵢ)| < ϵ, where μ is the remaining quantity of measure j after the cut at z. Since μ ≤ 1, this leads to

$$\epsilon > x+\frac{\mu}{1-z}\sum_{i=0}^{k}\epsilon_i\, l(J_i) > x-\mu\epsilon > \epsilon,$$

a contradiction, completing the proof.
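The contradiction can be checked numerically. The brute-force sketch below uses illustrative parameters (the values of ϵ, x, z, μ and the cut points are our choices, not from the paper): it enumerates all sign assignments for a fixed finite set of cuts and confirms that none brings both discrepancies below ϵ.

```python
from itertools import product

# Illustrative parameters for the punishment argument: x > 2*eps, mu <= 1.
eps, x, z, mu = 0.1, 0.25, 0.5, 0.4

def discrepancies(cuts, signs):
    """Final discrepancies (punishment measure n, measure j) when both
    measures are spread uniformly on (z, 1] and interval i gets sign signs[i]."""
    pts = [z] + sorted(cuts) + [1.0]
    lengths = [b - a for a, b in zip(pts, pts[1:])]
    s = sum(e * l for e, l in zip(signs, lengths)) / (1.0 - z)
    return s, x + mu * s

cuts = [0.55, 0.6, 0.71, 0.8, 0.92]   # an arbitrary finite set of cuts in (z, 1)
for signs in product([1, -1], repeat=len(cuts) + 1):
    d_n, d_j = discrepancies(cuts, signs)
    assert abs(d_n) >= eps or abs(d_j) >= eps
print("no sign assignment achieves discrepancy below eps on both measures")
```

Since both final discrepancies depend on the cuts only through the signed sum of interval lengths, the same check passes for any finite set of cut points.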

A simple application of this lemma implies a lower bound for n = 2 measures.

###### Theorem 5.2.

There exists an adversarial input that forces any deterministic algorithm for Online ϵ-Consensus Halving with n = 2 measures to make Ω(1/ϵ) cuts in order to obtain a proper solution.

Proof of Theorem 5.2: Keep measure 2 reserved for a potential punishment in case the discrepancy on measure 1 ever exceeds 2ϵ in absolute value. At the beginning, and after each cut, as long as measure 1 has not been exhausted, set it to be uniform with density 1 on the portion that follows, until the next cut is made. If the Balancer waits for length larger than 4ϵ before cutting, then when the next cut is made and the resulting interval allocated, the new discrepancy on measure 1 exceeds 2ϵ in absolute value. By the previous lemma, in this case the adversary can ensure that the Balancer will not be able to obtain the desired bound ϵ for the maximum discrepancy. Hence, the distance between any two consecutive cuts is at most 4ϵ, yielding the required Ω(1/ϵ) lower bound.
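The key step admits a small numeric check (the value of ϵ is an illustrative choice): whenever the current discrepancy on measure 1 is at most 2ϵ in absolute value and an interval of measure-1 mass larger than 4ϵ is allocated in one piece, the discrepancy moves above 2ϵ no matter which agent receives the interval.

```python
# Illustrative check: |d| <= 2*eps and interval mass m > 4*eps imply that
# allocating the interval to either agent yields |new discrepancy| > 2*eps,
# triggering the punishment of Lemma 5.1.  eps and m are our choices.
eps = 0.05
m = 4 * eps + 0.01                        # interval of length > 4*eps, density 1
for i in range(-20, 21):
    d = 2 * eps * i / 20                  # sweep discrepancies with |d| <= 2*eps
    best = min(abs(d + m), abs(d - m))    # Balancer picks the better agent
    assert best > 2 * eps
```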

Note that the upper bound provided by our online algorithm for n = 2 measures is larger by a polynomial factor in 1/ϵ than this lower bound. We next show that with one additional measure we can obtain a tight lower bound, up to a constant factor.

#### 5.1.2 The case n≥3

We first prove that for n = 3 measures we have a lower bound that is tight up to a constant factor. For general n this will imply a lower bound which is tight up to a logarithmic factor.

###### Theorem 5.3.

There exists an adversarial input that forces any deterministic algorithm for Online ϵ-Consensus Halving with n = 3 measures to make Ω(1/ϵ²) cuts.

Proof of Theorem 5.3: The proof applies the punishing argument given in Lemma 5.1. Measure number 3 will be kept for possible punishment. We start with some notation and definitions. After each cut at point z, let x and y denote the discrepancies (positive or negative) for measures 1 and 2, respectively. The state after each such cut is represented by the two-dimensional vector v = (x, y). After a new cut is made and a new interval is formed, we view the interval as a two-dimensional vector p = (p₁, p₂), where pᵢ is the amount of measure i contained in the interval. By Lemma 5.1, we may and will assume that v lies in the square [−2ϵ, 2ϵ]² after each cut. The adversary tries to reveal online an input that forces the Balancer to make many cuts to ensure that after each cut v lies in this square. In order to analyze the progress, we maintain during the algorithm a potential function M(x, y), which is defined in what follows.

After each cut and interval allocation made by the Balancer, the adversarial input will consist of measures 1 and 2 distributed according to the proportions 10ϵ − 4y and 10ϵ + 4x. The choice of these proportions will be made in order to ensure that both M(v + p) − M(v) and M(v − p) − M(v) are large for any future large interval. Note that if there is not enough of measure 1 or 2 left, it may be necessary to limit the length of the interval in which the measures are distributed according to the above proportions. It is convenient to define, after any cut made at point z, the feasible prefix as the interval (z, z + r], with r being the maximum real so that the adversary can still distribute the measures in (z, z + r] according to the required proportions without running out of measure.

The potential function is defined by

 $$M(x,y)=x^2+y^2+5\epsilon x-5\epsilon y.$$

Let p = (p₁, p₂) be the vector corresponding to the new interval formed by the new cut, following the cut made at a state with v = (x, y). Put α = p₁ + p₂. Assuming that (p₁, p₂) is proportional to (10ϵ − 4y, 10ϵ + 4x), which is a vector with positive coordinates as |x|, |y| ≤ 2ϵ, we get that

 $$M(v+p)-M(v)=p_1^2+p_2^2+2xp_1+2yp_2+5\epsilon p_1-5\epsilon p_2=p_1^2+p_2^2+\frac{1}{2}\big[p_1(10\epsilon+4x)-p_2(10\epsilon-4y)\big]=p_1^2+p_2^2\ge\frac{1}{2}\alpha^2,$$

and similarly,

 $$M(v-p)-M(v)=p_1^2+p_2^2+\frac{1}{2}\big[-p_1(10\epsilon+4x)+p_2(10\epsilon-4y)\big]=p_1^2+p_2^2\ge\frac{1}{2}\alpha^2.$$
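The cancellation in the two identities can be verified numerically; the states (x, y) and scaling factors below are illustrative choices inside the square [−2ϵ, 2ϵ]².

```python
# Check that for p proportional to (10*eps - 4*y, 10*eps + 4*x) the cross
# terms cancel, so M(v +/- p) - M(v) = p1^2 + p2^2 >= alpha^2 / 2.
eps = 0.05

def M(x, y):
    return x * x + y * y + 5 * eps * x - 5 * eps * y

for x, y, scale in [(0.1, -0.08, 0.02), (-0.1, 0.1, 0.01), (0.0, 0.0, 0.03)]:
    p1 = scale * (10 * eps - 4 * y)       # positive since |y| <= 2*eps
    p2 = scale * (10 * eps + 4 * x)       # positive since |x| <= 2*eps
    alpha = p1 + p2
    for s in (1, -1):                     # the interval goes to agent 1 or 2
        gain = M(x + s * p1, y + s * p2) - M(x, y)
        assert abs(gain - (p1 * p1 + p2 * p2)) < 1e-9
        assert gain >= alpha * alpha / 2 - 1e-9
```

So each allocated interval of total mass α increases the potential by at least α²/2, regardless of which agent receives it.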

With this in mind, starting at the point of the last cut, until the next cut is made, put , and define the measures