1 Introduction
The complexity classes PPA and PPAD were introduced in a seminal paper of Papadimitriou [60]
in 1994, in an attempt to classify several natural problems in the class TFNP
[58]. TFNP is the class of total search problems in NP: problems for which a solution exists for every instance, and solutions can be efficiently verified. Various important problems were subsequently proven to be complete for the class PPAD, such as the complexity of many versions of Nash equilibrium [19, 14, 26, 59, 63, 15], market equilibrium computation [18, 12, 71, 16, 65], and others [24, 43]. As evidence of computational hardness, PPA-completeness is stronger than PPAD-completeness, since PPAD ⊆ PPA. Indeed, Jeřábek [40] shows that it indicates cryptographic hardness in a strong sense: [40] gives a randomised reduction from FACTORING to PPA-complete problems. This is not known for PPAD-complete problems. For more details, and the significance of PPA-completeness, we refer the reader to the related discussion in [29]. PPA is the class of problems reducible to Leaf (Definition 1), and a PPA-complete problem is polynomial-time equivalent to Leaf.

Definition 1

An instance of the problem Leaf consists of an undirected graph G whose vertices have degree at most 2; G has vertices represented by bitstrings of length n; G is presented concisely via a circuit that takes as input a vertex and outputs its neighbour(s). We stipulate that the vertex 0^n has degree 1. The challenge is to find some other vertex having degree 1.
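The totality argument behind Leaf is just path-following: the path containing the known leaf must end at another leaf. The sketch below makes this concrete; the `neighbours`/`adj` interface is our own illustrative stand-in for the circuit of Definition 1, and on a circuit-encoded graph with exponentially many vertices this walk can take exponentially many steps, which is exactly why Leaf is not obviously easy.

```python
def find_other_leaf(neighbours, start):
    """Follow the unique path out of the known degree-1 vertex `start`
    until another degree-1 vertex is reached.  `neighbours` plays the role
    of the circuit in Definition 1: it maps a vertex to its neighbour list."""
    prev, cur = None, start
    while True:
        onward = [v for v in neighbours(cur) if v != prev]
        if not onward:        # no way forward: cur is the other leaf
            return cur
        prev, cur = cur, onward[0]

# Toy instance: a path 0-1-2-3 together with a cycle 4-5-6 (cycles contribute no leaves).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5, 6], 5: [4, 6], 6: [4, 5]}
other = find_other_leaf(lambda v: adj[v], 0)
```

Of course, this walk is only efficient when the graph is small and explicit; the whole point of the circuit representation is that no such walk is feasible in general.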
Complete problems for the class PPA seemed to be much more elusive than PPAD-complete ones, especially when one is interested in "natural" problems, where "natural" here has the very specific meaning of problems that do not explicitly contain a circuit in their definition. Besides Papadimitriou [60], other papers asking about the possible existence of natural PPA-complete problems include [36, 13, 19, 22]. In a recent precursor [29] to the present paper we identified the first example of such a problem, namely the approximate Consensus-halving problem, dispelling the suspicion that such problems might not exist. In this paper we build on that result and settle the complexity of two natural and important problems whose complexity status was raised explicitly as an open problem in Papadimitriou's paper itself, and in many other papers beginning in the 1980s. Specifically, we prove that Necklace-splitting (with two thieves, see Definition 2) and Discrete Ham Sandwich are both PPA-complete.
Definition 2 (Necklace Splitting)
In the Necklace-splitting problem with k thieves, there is an open necklace with k·a_i beads of colour i, for i ∈ [n]. An "open necklace" means that the beads form a string, not a cycle. The task is to cut the necklace in at most n(k − 1) places and partition the resulting substrings into k collections, each containing precisely a_i beads of colour i, for i ∈ [n].
In Definition 2, k is thought of as the number of thieves who desire to split the necklace in such a way that the beads of each colour are equally shared. In this paper we usually have k = 2, and we refer to this special case as Necklace-splitting.
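Solutions to Necklace-splitting are easy to verify. As a minimal illustration for k = 2 (function names are our own, not from the paper), the following sketch checks whether a proposed set of cut positions, with the resulting pieces handed to the two thieves alternately, shares every colour equally:

```python
from collections import Counter

def valid_two_thief_split(beads, cuts):
    """Cut the open necklace `beads` (a string of bead colours) at the given
    positions and hand alternate pieces to thieves A and B; return True if
    each thief ends up with exactly half the beads of every colour."""
    positions = sorted(cuts) + [len(beads)]
    shares, prev = [Counter(), Counter()], 0
    for i, c in enumerate(positions):
        shares[i % 2].update(beads[prev:c])
        prev = c
    return shares[0] == shares[1]
```

For the necklace 'aabb', the two cuts [1, 3] give pieces 'a', 'ab', 'b', which split fairly, whereas the single cut [2] gives one thief all of colour 'a'.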
Definition 3 (Discrete Ham Sandwich)
In the Discrete Ham Sandwich problem, there are n sets of points in n dimensions having integer coordinates (equivalently, one could use rationals). A solution consists of a hyperplane that splits each set of points into two subsets of equal size (if any points lie on the plane, we are allowed to place them on either side, or even split them arbitrarily).
In Definition 3, each point set represents an ingredient of the sandwich, which is to be cut by a hyperplane in such a way that all ingredients are equally split.
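Verifying a candidate Discrete Ham Sandwich solution is likewise straightforward. A hedged sketch for even-size point sets (names are ours): a hyperplane works if and only if neither open half-space contains more than half of any set, since points lying on the hyperplane may be assigned to either side.

```python
def halves_all(point_sets, normal, offset):
    """Check that the hyperplane {x : <normal, x> = offset} can split each
    point set in half.  With even-size sets this holds iff neither open
    half-space contains more than half of a set (on-plane points are free
    to be assigned to either side)."""
    for pts in point_sets:
        vals = [sum(c * x for c, x in zip(normal, p)) - offset for p in pts]
        half = len(pts) // 2              # assume even-size sets in this sketch
        if sum(v > 0 for v in vals) > half or sum(v < 0 for v in vals) > half:
            return False
    return True

# Two 2-point sets in the plane; the vertical line x = 1 bisects both,
# while the horizontal line y = 0.5 leaves one set entirely on one side.
sets = [[(0, 0), (2, 0)], [(0, 1), (2, 1)]]
```

The hard part, of course, is finding such a hyperplane, not checking it.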
The necklace-splitting problem was introduced in a 1982 paper of Bhatt and Leiserson ([8], Section 5), where it arose in the context of VLSI circuit design (the version defined in [8] is the 2-thief case proved PPA-complete in the present paper). In 1985 and 1986, the 2-thief case was shown to have guaranteed solutions (as defined in Definition 2) by Goldberg and West [34] and Alon and West [5], and then in 1987, Alon [2] proved existence of solutions for k thieves as well. Early papers that explicitly raise its complexity-theoretic status as an open problem are Goldberg and West [34] and Alon [3, 4]. Subsequently, the necklace-splitting problem was found to be closely related to "paint-shop scheduling", a line of work in which several papers such as [53, 55, 54] explicitly mention the question of the computational complexity of necklace-splitting. Meunier [53] notes that the search for a minimum number of cuts admitting a fair division (which may be smaller than the number that is guaranteed to suffice) is NP-hard, even for a subclass of instances of the 2-thief case. (That is a result of Bonsma et al. [10], for the "paint shop problem with words", equivalent to 2-thief Necklace-splitting with 2 beads of each colour.)
In [29], we showed Necklace-splitting to be computationally equivalent to Consensus-halving for inverse-polynomial precision parameter ε, but the PPA-completeness of Consensus-halving was only shown for inverse-exponential ε. [29] established PPAD-hardness of Necklace-splitting, applying the main result of [28]. In this paper, we prove that Consensus-halving is PPA-complete for ε inverse-polynomial, thus obtaining the desired PPA-completeness of Necklace-splitting. While some structural parts of our reduction are extensions of those presented in [29], obtaining the result for inverse-polynomial precision is much more challenging, as the construction needs to move to a high-dimensional space (rather than the two-dimensional space which is sufficient for the result in [29]). We highlight the main new techniques that we have developed in this paper in Section 2.1, where we provide an overview of the reduction. Our PPA-completeness result gives a convincing negative answer to Meunier and Neveu's questions [54] about possible polynomial-time solvability or membership of PPAD for Necklace-splitting; likewise it runs counter to Alon's cautious optimism at ICM 1990 ([4], Section 4) that the problem may be solvable in polynomial time.
The Ham Sandwich Theorem [68] is of enduring and widespread interest due to its colourful and intuitive statement, and its relevance and applications in topology, social choice theory, and computational geometry. Roughly, it states that given n measures in n-dimensional Euclidean space, there exists a hyperplane that cuts them all simultaneously in half. Early work on variants and applications of the theorem focused on non-constructive existence proofs and mostly did not touch on the algorithmics. A 1983 paper by Hill [37] hints at possible interest in the corresponding computational challenge, in the context of a related land division problem. The computational problem (and its complexity) was first properly studied in a line of work in computational geometry beginning in the 1980s, for example [25, 46, 47, 50]. The problem envisages input data consisting of sets of points in Euclidean space, and asks for a hyperplane that splits all point sets in half. (The problem Discrete Ham Sandwich (Definition 3) as named in [60] is essentially this, with the number of point sets equal to the dimension, to emphasise that we care about the high-dimensional case.) In this work in computational geometry, the emphasis has been on efficient algorithms for small values of the dimension; Lo et al. [47] improve the dependence on the dimension but it is still exponential, and the present paper shows for the first time that we should not expect to improve on that exponential dependence. More recently, Grandoni et al. [35] apply the "Generalized Ham Sandwich Theorem" to a problem in multi-objective optimisation and note that a constructive proof would allow a more efficient algorithm to emerge. The only computational hardness result we know of is by Knauer et al. [44], who obtain a hardness result for a constrained version of the problem; [44] points out the importance of the computational complexity of the general problem.
The PPA-completeness result of the present paper is the first hardness result of any kind for Discrete Ham Sandwich, and as we noted, is a strong notion of computational hardness. Karthik C. S. and Saha [42], who show a form of equivalence between the Ham Sandwich Theorem and Borsuk–Ulam, explicitly mention the possible PPA-completeness of Discrete Ham Sandwich as an "interesting and challenging open problem".
We prove the PPA-completeness of Discrete Ham Sandwich via a simple reduction from Necklace-splitting.
Ours is not the first paper to develop the close relationship between the two problems:
Blagojević and Soberón [9] show a generalisation,
where multiple agents may share a "sandwich", dividing it into convex pieces.
Further papers to explicitly point out their computational complexity as open problems include
Deng et al. [23] (mentioning that both problems “show promise to be complete for PPA”),
Aisenberg et al. [1], and Belovs et al. [7].
Further Related Work: The class TFNP was defined in [58] and several of its subclasses have been studied over the years, such as PPA, PPAD and PPP [60], PLS [41] and CLS [20]; here we focus on the most recent results. As we mentioned earlier, in [29] we identified the first natural complete problem for PPA, the approximate Consensus-halving problem. In a recent paper, Sotiraki et al. [67] identified the first natural complete problem for the class PPP, the class of problems whose totality is established by an argument based on the pigeonhole principle. For the class CLS, both Daskalakis et al. [21] and Fearnley et al. [27] identified complete problems (two versions of the Contraction Map problem, where a metric or a meta-metric is given as part of the input). In the latter paper, the authors define a new class, namely EOPL (for "End of Potential Line"), and show that it is a subclass of CLS. Furthermore, they show that two well-known problems in CLS, the P-Matrix Linear Complementarity Problem (P-LCP) and finding a fixpoint of a piecewise-linear contraction map (PL-Contraction), belong to this class. The End of Potential Line problem of [27] is closely related to the End of Metered Line problem of [39].
2 Problems and Results
We present and discuss our main results, and in Section 2.1 we give an overview of the proof and new techniques, in particular with respect to the precursors [28, 29] to this paper.
Definition 4 (Consensus Halving [66, 29])
An instance incorporates, for each i ∈ [n], a non-negative measure μ_i on a finite line interval A, where each μ_i integrates to 1 and n is part of the input. We assume that the μ_i are step functions represented in a standard way, in terms of the endpoints of the intervals where μ_i is constant, and the value taken in each such interval. We use the bit model (logarithmic cost model) of numbers. The instance also specifies a value ε, again using the bit model. We regard μ_i as the value function held by agent i for subintervals of A.
A solution consists firstly of a set of n cut points in A (also given in the bit model of numbers). These points partition A into (at most) n + 1 subintervals, and the second element of a solution is that each subinterval is labelled A_+ or A_−. This labelling is a correct solution provided that for each i ∈ [n], μ_i(A_+) ∈ [1/2 − ε, 1/2 + ε], i.e. each agent has a value in the range [1/2 − ε, 1/2 + ε] for the subintervals labelled A_+ (hence also values the subintervals labelled A_− in that range).
We assume without loss of generality that in a valid solution, labels A_+ and A_− alternate. We also assume that the alternating label sequence begins with label A_+ on the left-hand side of A (i.e. A_+ denotes the leftmost label in a Consensus-halving solution).
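A solution to Consensus-halving as just defined is easy to check. The sketch below (our own naming, with step-function measures given as (left, right, height) triples on a unit-length domain) computes each agent's value for the A_+ pieces, taking the leftmost piece to be labelled A_+ and labels to alternate, as assumed above:

```python
def agent_value(blocks, pieces):
    """Value of a step-function measure on a union of intervals.
    `blocks`: list of (left, right, height) triples; `pieces`: (a, b) intervals."""
    return sum(h * max(0.0, min(r, b) - max(l, a))
               for (l, r, h) in blocks for (a, b) in pieces)

def is_solution(agents, cuts, eps, right_end=1.0):
    """Check a candidate Consensus-halving solution on the domain [0, right_end]:
    the cuts partition the domain into pieces labelled +, -, +, ... alternately,
    with the leftmost piece labelled +."""
    pts = [0.0] + sorted(cuts) + [right_end]
    plus = [(pts[j], pts[j + 1]) for j in range(len(pts) - 1) if j % 2 == 0]
    return all(abs(agent_value(b, plus) - 0.5) <= eps for b in agents)

# One agent with the uniform measure on [0, 1]: a single cut at 1/2 is exact.
ok = is_solution([[(0.0, 1.0, 1.0)]], [0.5], 0.0)
```

As with the other problems, verification is the easy direction; the hardness lies in finding the cuts.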
The Consensus-halving problem of Definition 4 is a computational version of the Hobby–Rice theorem [38]. Most of the present paper is devoted to proving the following theorem.
Theorem 2.1
Consensus-halving is PPA-complete for some inverse-polynomial ε.
As mentioned in the introduction, in [29] it was proven that 2-thief Necklace-splitting and Consensus-halving with inverse-polynomial ε are computationally equivalent, i.e. they reduce to each other in polynomial time. Therefore, by [29] and the "in PPA" result proven in Section B.1, we immediately get the following corollary.
Theorem 2.2
Necklace-splitting is PPA-complete when there are two thieves.
If we knew that Necklace-splitting belonged to PPA for other values of k, we could of course make the blanket statement "Necklace-splitting is PPA-complete". Alas, the proofs that Necklace-splitting is a total search problem for general k [2, 56] do not seem to boil down to the parity argument on an undirected graph! That being said, we do manage to establish membership of PPA for k a power of 2 (essentially an insight of [2]); see Section B of the Appendix for the details and a related discussion. Of course, the k = 2 result strongly suggests that Necklace-splitting is a hard problem for other values of k as well.
As it happens, the PPA-completeness of Discrete Ham Sandwich follows straightforwardly, and we present that next. The basic idea of Theorem 2.3, of embedding the necklace in the moment curve, appears already in [62, 51] and [48], p. 48.

Theorem 2.3

Discrete Ham Sandwich is PPA-complete.
Inclusion in PPA is shown in Section B.1 of the Appendix. For PPA-hardness, we reduce from 2-thief Necklace-splitting, which is PPA-complete by Theorem 2.2.
The idea is to embed the necklace into the moment curve t ↦ (t, t², …, t^n). Assume all beads lie in the unit interval [0, 1]. A bead of colour i located at t becomes a point mass of ingredient i of the ham sandwich, located at the point (t, t², …, t^n). It is known that any hyperplane intersects the moment curve in at most n points (e.g. see [51], Lemma 5.4.2); therefore a solution to Discrete Ham Sandwich corresponds directly to a solution to Necklace-splitting, where the two thieves splitting the necklace take alternating pieces. (In the 2-thief case, we may assume without loss of generality that they do in fact take alternating pieces.)
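The moment-curve embedding in the proof sketch above can be illustrated directly. In the toy code below (names are ours), a hyperplane is given by its coefficient vector and offset; the sign of the hyperplane function along the curve changes at most n times, which is what turns a bisecting hyperplane into at most n cuts of the necklace.

```python
def embed_bead(t, n):
    """Map a bead at position t in [0, 1] to the moment curve in n dimensions."""
    return [t ** j for j in range(1, n + 1)]

def hyperplane_sign(coeffs, offset, point):
    """Which side of the hyperplane {x : <coeffs, x> = offset} `point` lies on."""
    s = sum(c * x for c, x in zip(coeffs, point)) - offset
    return 0 if s == 0 else (1 if s > 0 else -1)

# In n = 2 dimensions the hyperplane -x1 + x2 = -0.1 meets the moment curve
# where t**2 - t + 0.1 = 0, i.e. in at most n = 2 points; so the sign pattern
# along the curve changes at most twice, giving at most 2 "cuts".
n = 2
signs = [hyperplane_sign([-1.0, 1.0], -0.1, embed_bead(i / 100, n))
         for i in range(101)]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

Here the sign pattern is + − + with exactly two changes, matching the degree bound.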
A limitation to Theorem 2.3 is that the coordinates may be exponentially large numbers; they could not be written in unary. We leave it as an open problem whether a unary-coordinate version is also PPA-complete. As defined in [60], Discrete Ham Sandwich stipulated that each of the sets of points is of a prescribed size, whereas Definition 3 allows polynomial-sized sets. We can straightforwardly extend PPA-completeness to the version of [60] by adding "dummy dimensions" whose purpose is to allow larger sets of each ingredient; the new ingredients that are introduced consist of compact clusters of point masses, each cluster in general position relative to the other clusters and the subspace that contains the points of interest.
Notation:
We use the standard notation [n] to denote the set {1, …, n}, and we also use ±[n] to denote {−n, …, −1, 1, …, n}. We often refer to elements of ±[n] as "labels" or "colours". λ is usually used to denote a labelling function (so its codomain is ±[n]).
We let A denote the domain of an instance of Consensus-halving; if that instance has complexity n then A will be an interval [0, x], where x is some number bounded by a polynomial in n. Recall from Definition 4 that μ_i denotes the value function, or measure, of agent i on the domain A, in a Consensus-halving instance. We also associate each agent with its own cut (recall that the number of agents and cuts is supposed to be equal) and we let c_i denote the cut associated with agent i.
We let p(n) be a polynomial that represents the number of "circuit-encoders" that we use in our reduction (see Section 5.1); we usually denote it p, dropping the n from the notation.
Finally, denotes the cube (or “box”) .
Terminology
In an instance of Consensus-halving, a value-block of an agent i denotes a subinterval of the domain A where agent i possesses positive value, uniformly distributed on that interval. In our construction, value-blocks tend to be scattered rather sparsely along the domain.
2.1 Overview of the proof
We review the ground covered by the precursors [28, 29] to this paper, then we give an overview of the technical innovations of the present paper.
2.1.1 Ideas from [28, 29]
[28] established PPAD-hardness of Consensus-halving. An arithmetic circuit can be encoded by an instance of Consensus-halving, by letting each gate have a cut whose location (in a solution) represents the (approximate) value taken at that gate. Agents' valuation functions ensure that values taken at the gates behave according to the type of gate. A "PPAD circuit" can then be represented using an instance of Consensus-halving.
[29] noted that the search space of solutions to instances as constructed by [28] is oriented. A radically new idea was needed to encode the non-oriented feature of topological spaces representable by PPA. That was achieved by using two cuts to represent the coordinates of a point on a triangular region having two sides identified to form a Möbius strip. (These cuts are the only ones that lie in a specific subinterval of the interval A of a Consensus-halving instance, called the "coordinate-encoding (c-e) region". The two cuts are called the "coordinate-encoding cuts".) Identifying two sides in this way is done by exploiting the equivalence of taking a cut on the LHS of the c-e region, and moving it to the RHS. In order to embed a hard search problem into the surface of a standard 2-dimensional Möbius strip, it was necessary to work at exponentially-fine resolution, which immediately required inverse-exponential ε for instances of Consensus-halving. In [29] we reduced from the PPA-complete problem 2D-Tucker [1] (Definition 5 below), a search problem on an exponential-sized 2-dimensional grid.
In [29], the rest of A is called the "circuit-encoding region", and the cuts occurring within it do the job of performing computation on the location of cuts in the c-e region. The present paper retains this high-level structure (Section 4.1). As in [29], we use multiple copies of the circuit that performs the computation, each in its own subregion of the circuit-encoding region. Here we use p(n) copies, where p is a polynomial; in [29] we used 100 copies. Each copy is called a circuit-encoder. The purpose of multiple copies is to make the system robust; a small fraction of copies may be unreliable: as in [29], we have to account for the possibility that one of the c-e cuts may occur in the circuit-encoding region, rendering one of the copies unreliable. We reuse the "double negative lemma" of [29], which says that such a cut is not too harmful. We also adapt a result of [29] that when a cut is moved from one end to the other end of the c-e region, this corresponds to identifying two facets of a simplex to form a Möbius strip.
[29] uses a sequence of "sensor agents" to identify the endpoints of intervals labelled A_+ and A_− in the coordinate-encoding region, and feed this information into the above-mentioned circuit-encoders, which perform computation on those values. As in [29], we use sensor agents. We obtain a simplification with respect to [29], which is that we do not need the gadgets used there to perform "bit-extraction" (converting the position of a c-e cut into boolean values). In [29], a solution to an instance of Consensus-halving was associated with a sequence of 100 points in the Möbius-simplex (referred to there as the "simplex-domain"), and the "averaging manoeuvre" introduced in [19] was applied. In this paper, for a polynomial p, we sample a sequence of p(n) points in a more elegant manner, again exploiting the inverse-polynomial precision of solutions that we care about.
2.1.2 Technical innovations
As in [29], we reduce from the PPA-complete problem 2D-Tucker [1] (Definition 5). That computational problem uses an exponentially-fine 2D grid, and (unlike [29]), in Section 3 we apply the snake-embedding technique invented in [14] (versions of which are used in [22, 23] in the context of PPA) to convert this to a grid of constant resolution, at the expense of going from 2 to polynomially-many dimensions. The new problem, Variant high-D Tucker (Definition 7), envisages a constant-width grid in d dimensions. Here, we design the snake-embedding in such a way that PPA-completeness holds for instances of the high-dimensional problem that obey a further constraint on the way the high-dimensional grid is coloured, which we exploit subsequently. A further variant, New variant high-D Tucker (Definition 8), switches to a "dual" version where a hypercube is divided into cubelets, and points in the hypercube are coloured such that interiors of cubelets are monochromatic. A pair of points is sought having equal-and-opposite colours, at distance much less than the size of the cubelets.
We encode a point in d dimensions using a solution to an instance of Consensus-halving as follows. Instead of having just 2 cuts in the coordinate-encoding region (as in [29]), we ensure that a larger number of cuts lie there. These cuts split this interval into pieces whose total length is constant, so they represent a point in a unit simplex of correspondingly higher dimension (in [29], the unit 2-simplex). This "Möbius-simplex" (Definition 17; Figure 10) has the further property that two facets are identified with each other in a way that effectively turns the simplex into a higher-dimensional Möbius strip.
In Section 5.2 we define a crucially-important coordinate transformation (see Figure 11) with the following key properties:

- the transformation and its inverse can be computed efficiently, and distances between transformed coordinate vectors are polynomially related to distances between untransformed vectors;

- at the two facets that are identified with each other, the coordinates of corresponding points are the negations of each other; our colouring function (which respects Tucker-style boundary conditions) has the effect that antipodal points get equal and opposite colours, and no undesired solutions are introduced at these facets.
This is the “smooth embedding” referred to in the abstract.
With the aid of the above coordinate transformation, we divide up the Möbius-simplex:

- The twisted tunnel (Definition 23) is an inverse-polynomially thick strip, connecting the two facets that are identified in forming the Möbius-simplex. It contains at its centre an embedded copy of the hypercube domain of an instance of New variant high-D Tucker. Outside of this embedded copy, it is "coloured in" (using our new coordinate system) in a way that avoids introducing solutions that do not encode solutions of the embedded Tucker instance.

- The Significant Region contains the twisted tunnel and constitutes a somewhat thicker strip connecting the two facets. It serves as a buffer zone between the twisted tunnel and the rest of the Möbius-simplex. It is subdivided into subregions where each subregion has a unique set of labels, or colours, from ±[d]. (We sometimes refer to these as "colour-regions".) It is shown that any solution to an instance of Consensus-halving constructed as in our reduction represents a point in the Significant Region.

- If, alternatively, a set of cuts represents a point from outside the Significant Region, then certain agents (so-called "blanket-sensor agents") will observe severe imbalances between labels A_+ and A_−, precluding a solution.
In [29], it was relatively straightforward to integrate the subset of the 2-dimensional Möbius-simplex corresponding to the twisted tunnel with the parts of the domain where the blanket-sensor agent became active (ruling out a solution), in a way that avoided introducing solutions that fail to encode solutions of Tucker. In the present paper, that gap has to be "coloured in" in a carefully-designed way (Section 5.3, list item 3), and this is the role of the part of the Significant Region that is not the twisted tunnel. The proofs that these regions work correctly (Sections 6.2, 6.3) become correspondingly more complicated.
3 Snake embedding reduction
The purpose of this section is to establish the PPA-completeness of New variant high-D Tucker (Definition 8). The snake-embedding construction was devised in [14], in order to prove that approximate Nash equilibria are PPAD-complete to find when the approximation quality ε is inverse-polynomial; without this trick the result is only obtained for ε inverse-exponential. We apply a similar trick here. We will use as a starting-point the PPA-completeness of 2D-Tucker, from [1], which is the following problem:
Definition 5
(Aisenberg et al. [1]) An instance of 2D-Tucker consists of a labelling λ : [m] × [m] → {1, −1, 2, −2} such that λ(x) = −λ(x̄) for every point x on the boundary of the grid, where x̄ = (m + 1 − x_1, m + 1 − x_2) is the antipode of x. A solution to such an instance of 2D-Tucker is a pair of vertices u, v, with |u_1 − v_1| ≤ 1 and |u_2 − v_2| ≤ 1, such that λ(u) = −λ(v).
The hardness of the problem in Definition 5 arises when m is exponentially large, and the labelling function λ is presented by means of a boolean circuit.
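For intuition, on a small explicitly-given grid the problem can be solved by exhaustive search (hardness only arises for circuit-encoded, exponentially large m). A sketch with our own naming, using a labelling that satisfies the boundary condition by construction:

```python
def solve_2d_tucker(m, lam):
    """Brute-force search for two grid points whose coordinates differ by at
    most 1 and whose labels are equal-and-opposite; Tucker's lemma guarantees
    such a pair exists whenever the antipodal boundary condition holds."""
    for x in range(1, m + 1):
        for y in range(1, m + 1):
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    u, v = x + dx, y + dy
                    if 1 <= u <= m and 1 <= v <= m and lam(x, y) == -lam(u, v):
                        return (x, y), (u, v)
    return None

# A 4x4 instance: the antipode of a boundary point (x, y) is (5 - x, 5 - y),
# and x <= 2 iff 5 - x >= 3, so antipodal boundary points get opposite labels.
m = 4
lam = lambda x, y: 1 if x <= 2 else -1
sol = solve_2d_tucker(m, lam)
```

This exhaustive walk over the grid takes time proportional to its area, which is exponential in the input size when the grid is described by a circuit.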
We aim to prove the following problem is PPA-complete, even when the grid widths m_i are all upper-bounded by some constant (specifically, 7).
Definition 6
(Aisenberg et al. [1]) An instance of d-D Tucker consists of a labelling λ : [m_1] × ⋯ × [m_d] → ±[d] such that if a point x lies on the boundary of this grid (i.e., x_i = 1 or x_i = m_i for some i), then letting x̄ be the antipodal point of x, we have λ(x̄) = −λ(x). (Two boundary points are antipodal if they lie at opposite ends of a line segment passing through the centre of the grid.) A solution consists of two points z, z′ on this grid, having opposite labels (λ(z) = −λ(z′)), whose coordinates differ (coordinate-wise) by at most 1.
It is assumed that λ is presented in the form of a circuit, syntactically constrained to give opposite labels to antipodal grid points.
Definition 7
An instance of Variant high-D Tucker is similar to Definition 6, but obeys the following additional constraints. The grid widths m_i are upper-bounded by the constant 7. We impose the further constraint that the facets of the cube are coloured with labels from ±[d] such that all colours are used, opposite facets have opposite labels, and for each i it holds that the facet with label i (resp. −i) has no grid point on that facet with label −i (resp. i).
Theorem 3.1
Variant high-D Tucker is PPA-complete.
Informal description of snake embedding
A snake-embedding consists of a reduction from d-D Tucker to (d + 1)-D Tucker, which we describe informally as follows. See Figure 1. Let T be an instance of d-D Tucker, on the grid [m_1] × ⋯ × [m_d]. Embed T in (d + 1)-dimensional space, then sandwich it between two layers, where all points in the top layer get labelled d + 1, and points in the bottom layer get labelled −(d + 1), as in the left part of Figure 1. We now have points in the grid [m_1] × ⋯ × [m_d] × [3], and notice that this construction preserves the required property that points on the boundary have labels opposite to their antipodal points.
Then, the main idea of the snake embedding is the following. We fold this grid
into three layers, by analogy with folding a sheet of
paper along two parallel lines so that the crosssection is a zigzag line,
and one dimension of the folded paper is one-third of the unfolded version,
the other dimension being unchanged (see the right hand side of Figure 1).
In higher dimension, suppose that m_1 is the largest of the grid widths m_i.
Then, we can reduce m_1 by a factor of about 3, while causing the final coordinate
to go up from 3 to 9.
By merging layers of label d + 1 and −(d + 1), the thickness of 9 reduces to 7.
This operation preserves the labelling rule for antipodal boundary points.
However, there are two points that need extra care for the reduction to go through:

Firstly, simply folding the layers such that their cross-sections are zigzag lines may introduce diagonal adjacencies between cubelets that were not present in the original d-dimensional instance, i.e. we might end up generating adjacent cubelets with equal-and-opposite colours; see the left part of Figure 2 for an illustration. To remedy this, we will "copy" (or "duplicate") the cubelets at the folding points, essentially having three cubelets of the same colour, whose cross-sections are the short vertical sections in the right-hand side of Figure 1; see also the right-hand side of Figure 2 for an illustration. From now on, when referring to "folding", we will mean the version where we also duplicate the cubelets at the folding points, as described above.

Secondly, the folding and duplicating operation only works if m_1 is a multiple of 3, as otherwise the (d + 1)-dimensional instance may not satisfy the boundary conditions of Definition 6, i.e. we might end up with antipodal cubelets that do not have equal-and-opposite colours. To ensure that m_1 is a multiple of 3 before folding, we can add one or two additional layers of cubelets (depending on whether the remainder of dividing m_1 by 3 is 2 or 1, respectively). These layers are duplicate copies of the outer layers of cubelets at opposite ends of the x_1 direction; if there is only one additional layer to be added, we can add it on either side. Note that this operation does not generate any cubelets of equal-and-opposite labels that were not there before, and the same will be true for the instance after the folding operation. See Figure 3 for an illustration.
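The basic zigzag fold on the first coordinate can be sketched as an index map. The code below (our own naming) folds a coordinate x in {1, …, m}, with m a multiple of 3, into a coordinate of range m/3 plus a layer index; the duplication of cubelets at the two fold points is deliberately omitted from this sketch.

```python
def fold(x, m):
    """Fold coordinate x in 1..m (m a multiple of 3) into (x', layer):
    bottom layer runs left to right, middle layer right to left, top layer
    left to right, mimicking a zigzag cross-section.  (The duplication of
    cubelets at the two fold points is omitted here.)"""
    t = m // 3
    if x <= t:
        return (x, 1)              # bottom layer
    if x <= 2 * t:
        return (2 * t + 1 - x, 2)  # middle layer, traversed in reverse
    return (x - 2 * t, 3)          # top layer

m = 9
images = [fold(x, m) for x in range(1, m + 1)]
```

Note that consecutive x values map to cells that are adjacent in the folded grid (at a fold point, the same x' with consecutive layers), which is the adjacency-preservation the reduction relies on.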
Formal description of snake embedding
Let T be an instance of d-D Tucker having coordinates in ranges [m_1], …, [m_d]
and label function λ.
Select the largest m_i, breaking ties lexicographically.
Assume for simplicity in what follows that m_1 is largest.
Fixing the length to a multiple of 3. Let and let . Consider the instance of Tucker having coordinates in ranges , with , constructed from as follows. For any point in , is mapped to a point in and receives a colour such that,

if , then is mapped to and , i.e. is mapped to itself and receives its own label, since is already a multiple of .

If , then

if , is mapped to and .

if , is mapped to and .
In other words, points for which the first coordinate ranges from to , are mapped to themselves and receive their own label, and points for which the first coordinate is are mapped to the points where the first coordinate is , receiving the label of that point. This essentially “duplicates” the layer of cubelets on the right endpoint of the direction. See Figure 3 for an illustration.


If , then

if , is mapped to and .

if , is mapped to and . This is similar to the mapping and labelling in the previous case, except for the fact that we need to “shift” the labels of the points, since we essentially introduced a copy of the layer of cubelets on the left endpoint of the direction. See Figure 3 for an illustration.

Note that by the operation of adding layers as above, we do not introduce any cubelets with equal-and-opposite labels that were not present before. To avoid complicating the notation, in the following we will use m_1 to denote the maximum size of the first coordinate, and we will assume that m_1 is a multiple of 3. We will also use T to denote the resulting instance of d-D Tucker in which m_1 is a multiple of 3.
From d to d + 1 dimensions. Starting from an instance T of d-D Tucker in which m_1 is a multiple of 3, we will construct an instance T′ of (d + 1)-D Tucker as follows. Let x be a point in the grid of T, with labelling function λ. We will associate each such point with a corresponding point (or points) in the grid of T′ and a label, as follows.

If , then is mapped to , and .

If (the first “folding” point), then is mapped to the following three points in and receives the following colours (see the shaded cubelets at the righthand side of Figure 2):

(the original cubelet) and .

(the first copy) and .

(the second copy) and .


If , then x is mapped to , with .

If (the second “folding” point), then is mapped to the following three points in and receives the following colours:

(the original cubelet) and .

(the first copy) and .

(the second copy) and .


If , then is mapped to , with .

Set , along with any point x connected to it via a path of points that have not been labelled by the above procedure.

Set , along with any point x connected to it via a path of points that have not been labelled by the above procedure.
We are now ready to prove Theorem 3.1.
First, it is not hard to check that the composition of snake-embeddings is a polynomial-time reduction. Also note that, by the way the high-dimensional instance is constructed, we have not introduced any adjacencies that did not already exist, i.e. if there is a pair of adjacent cubelets with equal-and-opposite labels in the high-dimensional instance, this pair is present in the original 2-dimensional instance as well, and it is easy to recover it in polynomial time. Therefore, it suffices to show how to obey the additional constraint of Variant high-D Tucker, namely that for each label i, a facet having label i has no grid points with label −i, and similarly with i and −i exchanged.
We begin as in [29] (see Figure 1 in that paper), by taking the original instance T, of size m × m, and extending it to a larger instance as follows. The original instance is embedded in the centre of the new instance. Each region to the sides is labelled by copying the edge of T facing it along an adjacent edge of the new instance, and connecting these two edges with paths that have two straight sections and connect two points of the same label; points along such a path get that label. The outermost path then labels a side of the new instance, so these two opposite sides get opposite labels. We may assume (by switching the signs of labels if needed) that these new opposite sides get a designated pair of opposite labels.
The fold approach shown in Figure 1 (in this paper) can be checked to retain this property. When we sandwich a cuboid between two layers of opposite (new) colours (call them and ), as shown in Figure 1, we label the new facets thus formed with and respectively. We label the other facets with their original labels (each of these facets has acquired the labels and , and no other labels). The folding operation has a natural correspondence between the facets of the unfolded and folded versions of the cuboid. It can be checked that the set of colours of a facet before folding is the same as the set of colours of the corresponding facet after folding.
It is convenient to define the following problem, whose PPA-completeness follows fairly directly from the PPA-completeness of Variant high-D Tucker.
Definition 8
An instance of New variant high-D Tucker in dimensions is presented by a boolean circuit that takes as input coordinates of a point in the hypercube and outputs a label in (assume has output gates, one for each label, and is syntactically constrained such that exactly one output gate will evaluate to true), having the following constraints that may be enforced syntactically.

Dividing into cubelets of edge length using axis-aligned hyperplanes, all points in the same cubelet get the same label by ;

Interiors of antipodal boundary cubelets get opposite labels;

Points on the boundary of two or more cubelets get a label belonging to one of the adjacent cubelets;

Facets of are coloured with labels from such that all colours are used, and opposite facets have opposite labels. For it also holds that the facet with label (resp. ) does not intersect any cubelet having label (resp. ). Facets coloured are unrestricted (we call them the “panchromatic facets”).
A solution consists of a polynomial number of points that all lie within an inverse polynomial distance of each other (for concreteness, assume ). At least two of those points should receive equal and opposite labels by .
New variant high-D Tucker corresponds to the problem Variant Tucker in [29]; in that paper a solution only contained 100 points, while here we use points. Here we need more points since we are in dimensions, and our analysis needs to tolerate points receiving unreliable labels.
4 Some building-blocks and definitions
Here we set up some of the general structure of instances of Consensus-halving constructed in our reduction. We identify some basic properties of solutions to these instances. We define the Möbius-simplex and the manner in which a solution encodes a point on the Möbius-simplex. The encoding of the circuitry is covered in Section 5.
Useful quantities:
We use the following values throughout the paper.

is an inverse-polynomial quantity in , chosen to be substantially smaller than any other inverse-polynomial quantity that we use in the reduction, apart from (below).

is an inverse-polynomial quantity in , which is smaller than any other inverse-polynomial quantity apart from and is larger than by an inverse-polynomial amount. The quantity denotes the width of the so-called “twisted tunnel” (see Definition 23).

denotes a large polynomial in ; specifically we let . The quantity represents the number of sensor agents for each circuit-encoder (see Definition 13).

denotes a large polynomial in , which is however smaller than by a polynomial factor. The quantity will be used in the definition of the “blanket-sensor agents” (see Definition 14) and will quantify the extent to which the cuts in the “coordinate-encoding region” (Definition 9) are allowed to differ from being evenly spaced, before the blanket-sensor agents become active (see Section 4). The choice of controls the value of the radius of the Significant Region (see Proposition 4.4), with larger meaning larger .

is the precision parameter in the Consensus-halving solution, i.e. each agent is satisfied with a partition as long as . Henceforth, we will set .
4.1 Basic building-blocks
We consider instances of Consensus-halving that have been derived from instances of New variant high-D Tucker in dimensions. The general aim is to get any solution of such an instance to encode a point in dimensions that “localises” a solution to , by which we mean that from the solution of , we will be able to find a point on the instance that can be transformed to a solution of in polynomial time and fairly straightforwardly.
Definition 9
Coordinate-encoding region (ce region). Given an instance of Variant high-D Tucker in dimensions, the corresponding instance of Consensus-halving has a coordinate-encoding region, the interval , a (prefix) subinterval of .
The valuation functions of agents in an instance of Consensus-halving obtained by our reduction from an instance of New variant high-D Tucker in dimensions will be designed in such a way that either or cuts (typically ) must occur in the coordinate-encoding region, in any solution. Furthermore, the distance between consecutive cuts must be close to 1 (with an additive difference from 1 that is upper-bounded by an inverse polynomial), as shown in Proposition 4.4.
Definition 10
Coordinate-encoding agents (ce agents). Given an instance of New variant high-D Tucker in dimensions, the corresponding instance of Consensus-halving has coordinate-encoding agents denoted .
The ce agents have associated coordinate-encoding cuts (Definition 11). It will be seen that the ce cuts typically occur in the ce region. The ce agents do not have any value for the coordinate-encoding region; their value functions are only ever positive elsewhere. In particular, they have blocks of value whose labels are affected by the output gates of the circuitry that is encoded to the right of the ce region.
Definition 11
Coordinate-encoding cuts (ce cuts). We identify cuts as the coordinate-encoding cuts. In the instances of Consensus-halving that we construct, in any (sufficiently good approximate) solution to the Consensus-halving instance, all other cuts will be constrained to lie outside the ce region (and it will be straightforward to see that the value functions of their associated agents impose this constraint). A ce cut is not straightforwardly constrained to lie in the ce region, but it will ultimately be proved that in any approximate solution, the ce cuts do in fact lie in the ce region.
Recall that from Section 2, which implies that the ce region can be divided into intervals of length (see also Figure 4).
Definition 12
shifted version. Given a value function (or measure) on the ce region , we say that another function on the ce region is a shifted version of , when we have that .
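Since the shift amount and function symbols in Definition 12 were lost in extraction, the following minimal Python sketch illustrates the intended notion under the assumption that a shifted version takes at point t the value the original function takes at t minus the shift; the function names, the example block, and the direction of the shift are our illustrative assumptions, not the paper's notation.

```python
def shifted_version(f, delta):
    """Return the delta-shifted version of a value function f on the
    ce region: the new function takes at t the value f takes at
    t - delta.  (The direction of the shift is an assumption.)"""
    return lambda t: f(t - delta)

# Example: a unit block of value on [0, 1), shifted right by 0.25
block = lambda t: 1.0 if 0.0 <= t < 1.0 else 0.0
g = shifted_version(block, 0.25)
```

The shifted function g then carries value exactly on [0.25, 1.25), as the definition intends for a value-block moved slightly to the right.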
Recall that the circuit-encoding region (details in Section 5) contains circuit-encoders, mentioned in the following definitions.
Definition 13
Sensor agents. Each circuit-encoder , , has a set of sensor agents, where the are defined as follows. When , has value uniformly distributed over the interval
For , is a shifted version of .
Each sensor agent also has valuation outside the ce region, in non-overlapping intervals of the circuit-encoding region (see Section 5.1). This valuation consists of two valuation blocks of value each, with no other valuation block in between. These are exactly as described in [29]; see also Appendix A and Figure 16 for an illustration.
This value gadget for causes the th input gate in the circuit-encoder to be set according to the label received by ’s block of value in the ce region, i.e. jump to the left or to the right in order to indicate that the corresponding value-block of in the ce region is labelled or .
According to the definitions above, has a sequence of (a large polynomial number of) sensor agents that have blocks of value in a sequence of small intervals going from left to right of the ce region (see Figure 4).
For , has a similar sequence, shifted slightly to the right on the ce region (by ).
For , the intervals defined by the valueblocks of the sensor agents (for ) partition the interval .
Remark: Note that a ce cut may divide one of the above value-blocks held by a sensor agent in the ce region, and in that case the input being supplied (using the gadget of [29]) to its circuit-encoder is unreliable. However, only sensor agents may be affected in that way, and their circuit-encoders will get “outvoted” by the ones that receive valid boolean inputs. This is part of the reason why we use circuit-encoders in total. More details on this averaging argument are provided in Section 5.
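The out-voting idea in the remark above can be modelled in a few lines of Python. This is a deliberately simplified sketch (the real construction aggregates the encoders' outputs via value-blocks and the averaging argument of Section 5, and the function name is ours): a small minority of encoders with unreliable inputs cannot flip a clear majority among the reliable ones.

```python
def aggregate_encoder_outputs(outputs):
    """Majority vote over the boolean outputs of the circuit-encoders.
    A minority of encoders with unreliable (cut-through) inputs cannot
    flip the outcome when the reliable encoders agree."""
    return sum(outputs) * 2 > len(outputs)

# Illustrative numbers: 97 reliable encoders agree; 3 unreliable disagree
reliable = [True] * 97
unreliable = [False] * 3
verdict = aggregate_encoder_outputs(reliable + unreliable)
```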
Definition 14
Blanket-sensor agents. Each circuit-encoder shall have blanket-sensor agents .

In , for each , blanket-sensor agent has value distributed over (see Figure 4). This value consists of a sequence of value-blocks, each of length and of value .

In each , , and for each , the value function of that lies in the ce region is an shifted version of .

The remaining value of each consists of 3 value-blocks of width lying in a subinterval of the circuit-encoding region (see Section 5.1), such that:

the value-blocks have values respectively, where .

also contains value-blocks of agents for each gate that takes the value of as input (the feedback gadgetry, see Section 5.1.2).

The value of the blanket-sensor agents in is very similar to the gadget used in [29]; see Appendix A (of the present paper) and Figure 17. The structure of the blanket-sensor agents in the ce region is shown in Figure 4.
Notes on the blanket-sensors
Each blanket-sensor agent has an associated cut that lies in the subinterval . Agent “monitors” an interval of length 2, namely the interval within which the sequence of value-blocks lies. If, in this interval, the number of these value-blocks labelled exceeds the number labelled by at least (recall that is a large polynomial which is however polynomially smaller than ) then (in any approximate solution to , where, recall, ), the cut in lies in either the right-hand or the left-hand value-block; otherwise it lies in the central value-block. Note that these three possible positions may be converted to boolean values that influence circuit-encoder ; this was referred to as a “bit-detection gadget” in [29], see Appendix A for more details.
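To make the three-way behaviour of a blanket-sensor concrete, here is a small Python sketch. The threshold parameter stands in for the elided large-polynomial bound, and which outer block corresponds to which overrepresented label is our assumption for illustration only.

```python
def blanket_sensor_cut_position(plus_count, minus_count, threshold):
    """Where the blanket-sensor's cut ends up, as a function of the
    label discrepancy among the value-blocks it monitors."""
    if plus_count - minus_count >= threshold:
        return "left"    # assumption: '+' excess pushes the cut left
    if minus_count - plus_count >= threshold:
        return "right"   # assumption: '-' excess pushes the cut right
    return "centre"      # discrepancy below threshold: sensor inactive
```

The two outer outcomes are exactly the "active" cases of Definition 15; the centre outcome corresponds to an inactive sensor.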
Definition 15 (Active blanket-sensor)
We say that blanket-sensor is active if it in fact observes a sufficiently large label discrepancy in the ce region, so that its cut lies in one of the two outer positions, left or right, and not in the central position. We say that is active towards if is the overrepresented label, with similar terminology for .
When blanket-sensor agent is active, it provides input to that causes to label the value held by and controlled by , to be either or ; the choice depends on the overrepresented label in and the parity of the index of the blanket-sensor agent. The precise feedback mechanism to the ce agent by the blanket-sensor is described in Section 5.1.3.
When no blanket-sensors are active, the sequence of ce cuts encodes a point in the Significant Region (Definition 18).
In [29], we worked just in two dimensions and there was just one blanket-sensor agent for the entire ce region, for each circuit-encoder. Note also that there, the blanket-sensor agent had a single value-block of length 2; here we split it into a polynomial sequence of small value-blocks. The advantage of using a polynomial sequence of value-blocks (which could not have been done in [29] due to the exponential precision requirement) is that we can argue that in all but at most circuit-encoders, the blanket-sensor agents have value-blocks that are not cut by the ce cuts, so we can be precise about how big a disparity between blocks labelled and causes a blanket-sensor to be active; for at most circuit-encoders, we regard them as having unreliable inputs (see Definition 16 and Observation 4.2).
4.2 Features of solutions
The main result of this section is Proposition 4.4: in a solution to approximate Consensus-halving as constructed here, the cuts in the ce region are “evenly spaced”, in the sense that the gap between consecutive cuts differs from 1 by at most an inverse-polynomial amount.
Observation 4.1 (At most cuts in the ce region)
Given an instance derived by our reduction from an instance of New variant high-D Tucker in dimensions, any inverse-polynomial approximate solution of has the property that at most cuts lie in the coordinate-encoding region. This is because all other cuts are associated with agents who have at least of their value strictly to the right of the ce region; thus in a solution, those cuts cannot lie in the ce region.
Definition 16 (Reliable input)
We will say that a circuit-encoder receives reliable input if no coordinate-encoding cut passes through value-blocks of its sensor agents.
Observation 4.2
At most circuit-encoders fail to receive reliable input (by Observation 4.1 and the fact that sensors of distinct circuit-encoders have value in distinct intervals).
When a circuit-encoder receives reliable input, it is straightforward to interpret the labels allocated to its sensors as boolean values, and simulate a circuit computation on those values, ultimately passing feedback to the ce agents via value-blocks that get labelled according to the output gates of the circuit being simulated. This is done in a conceptually similar way to that described in [29] (e.g. see Sections 4.4.2 and 4.6 in [29]); see also Appendix A of the present paper.
Definition 17
The Möbius-simplex. The Möbius-simplex in dimensions consists of points in whose coordinates are nonnegative and sum to 1. We identify every point with the point , for all nonnegative summing to 1. We use the following metric on the Möbius-simplex, letting be the standard distance on vectors:
(1) 
where .
How a Consensus-halving solution encodes a point in the Möbius-simplex
Let be an instance of Consensus-halving, obtained by reduction from New variant high-D Tucker in dimensions, hence having ce region . Note that, by Observation 4.1, at most cuts may lie in the ce region. A set of cuts of the coordinate-encoding region splits it into pieces. We associate such a split with a point in as follows. The first coordinate is the distance from the LHS of the Consensus-halving domain to the first cut, divided by , the length of the ce region. For , the th coordinate of is the distance between the st and th cuts, divided by . Remaining coordinates are 0.
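As a minimal Python sketch of this encoding, under the assumptions (filling in symbols elided above) that the ce region is the interval [0, n] for n the number of dimensions and that cut positions are given as reals:

```python
def split_to_point(cuts, n):
    """Map at most n-1 cut positions in the ce region [0, n] to the
    coordinates of a point: the first coordinate is the distance from
    the LHS to the first cut divided by n, subsequent coordinates are
    gaps between consecutive cuts divided by n, remaining coords are 0."""
    cuts = sorted(cuts)
    coords = []
    prev = 0.0
    for c in cuts:
        coords.append((c - prev) / n)
        prev = c
    coords += [0.0] * (n - len(coords))  # pad to n coordinates
    return coords
```

For instance, three evenly spaced cuts at 1, 2, 3 in a ce region of length 4 encode the point (1/4, 1/4, 1/4, 0).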
If there are cuts in the ce region, suppose we add a cut at either the LHS or the RHS. These two alternative choices correspond to a pair of points that have been identified as the same point, as described in Definition 17. (Observation 5.3 makes a similar point regarding transformed coordinates.)
Observation 4.3
Each circuit-encoder reads in “input” representing a point in the Möbius-simplex. Any circuit-encoder () behaves like on a point , for which (for all ) (recall is defined in (1)). Consequently their collective output (the split between and of the value held by the ce agents) is the output of a single circuit-encoder averaged over a collection of points in the Möbius-simplex, all within of each other.
This follows by inspection of the way the circuit-encoders differ from each other: their sensor-agents are shifted but their internal circuitry is the same.
Definition 18
The Significant Region of the Möbius-simplex . The Significant Region of consists of all points in where no blanket-sensors are active (where “blanket-sensors” and “active” are defined in Definition 15).
Proposition 4.4
There is an inverse-polynomial value such that all points in the Significant Region have coordinates that for differ from by at most , if is encoded by the ce cuts of an approximate solution to one of our instances of Consensus-halving. (Recall that .)
Thus, if an instance of Consensus-halving (obtained using our reduction) has a solution , then all the ce cuts in have the property that the distance between two consecutive ce cuts differs from 1 by at most some inverse-polynomial amount.
Before we proceed with the proof of the proposition, we will state a few simple lemmas that will be used throughout the proof. We start with the following definition.
Definition 19 (Cut close to integer point)
For , we will say that a cut is close to integer point , if it lies in . We will say that cut is close to an integer point if there is some integer such that is close to integer point .
Intuitively, cuts that are close to integer points lie close (within distance at most ) to either the endpoints or the midpoint of some monitored interval .
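The following predicate sketches Definition 19 in Python; the tolerance parameter tol stands in for the elided inverse-polynomial bound in the definition, and the function names are ours.

```python
def close_to_integer_point(cut, k, tol):
    """True iff the cut lies within tol of the integer point k."""
    return abs(cut - k) <= tol

def close_to_some_integer(cut, tol):
    """True iff the cut is close to some integer point
    (the nearest integer is the only candidate)."""
    return abs(cut - round(cut)) <= tol
```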
Note that, by Definition 14, a blanket-sensor agent will be active when at least value-blocks of volume in an interval monitored by the blanket-sensor agent receive the same label. This will happen if there is a union of subintervals of some monitored subinterval , for some , which will have total length larger than , where . This means that in , there will be at least value-blocks of volume that receive the same label, since by construction, there are at least such value-blocks in any interval of length at least . In such a case, the blanket-sensor agent is active and the set of cuts is not a solution to . In the following, we will consider such that .
Definition 20 (Monochromatic interval of label )
An interval is a monochromatic interval if it is not intersected by any cuts (which means that it receives a single label). Additionally, if for , is labelled with , then we will say that is a monochromatic interval of label .
It should be clear that if any monitored interval has a large enough (larger than ) monochromatic subinterval, then the blanket-sensor agent is active.
Lemma 4.5
For some , with , consider the interval of length and assume that there are at most cuts in this interval. Then at least one of the blanket-sensors monitoring the subintervals in will be active.
In , there are at least intervals monitored by blanket-sensor agents and we only have at most cuts at our disposal. With cuts, we can partition an interval of length into at most intervals, the largest of which, call it , will have length at least . Since , the length of is actually larger than . The lemma then follows from the fact that, since the monitored intervals partition the interval , will contain a monochromatic interval of length at least , which will be entirely contained in some monitored interval, and the corresponding blanket-sensor agent will be active.
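The pigeonhole step in this proof can be checked numerically; the following is a minimal sketch (the function name, interval endpoints, and cut positions are illustrative, not the paper's notation).

```python
def largest_cut_free_gap(cuts, left, right):
    """Length of the largest monochromatic (cut-free) subinterval of
    [left, right]: k interior cuts split the interval into at most
    k + 1 pieces, so the largest has length >= (right - left) / (k + 1)."""
    pts = [left] + sorted(c for c in cuts if left < c < right) + [right]
    return max(b - a for a, b in zip(pts, pts[1:]))

# Pigeonhole check: 2 cuts in an interval of length 6
gap = largest_cut_free_gap([1.5, 2.0], 0.0, 6.0)
```

Here the two cuts leave gaps of lengths 1.5, 0.5 and 4.0, so the largest cut-free gap (4.0) indeed meets the pigeonhole bound 6/3 = 2.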
Lemma 4.6
For some , with , consider the interval of length and assume that there are cuts in this interval. Then either

each of the cuts in will be close to a different integer point and these integer points will be the midpoints of the monitored subintervals contained entirely in