The complexity classes PPA and PPAD were introduced in a seminal paper of Papadimitriou 
in 1994, in an attempt to classify several natural problems in the class TFNP. TFNP is the class of total search problems in NP: problems for which a solution exists for every instance, and solutions can be efficiently verified. Various important problems were subsequently proven to be complete for the class PPAD, such as computing many versions of Nash equilibrium [19, 14, 26, 59, 63, 15], market equilibrium computation [18, 12, 71, 16, 65], and others [24, 43]. As evidence of computational hardness, PPA-completeness is stronger than PPAD-completeness, since PPAD ⊆ PPA. Indeed, Jeřábek  shows that it indicates cryptographic hardness in a strong sense: he gives a randomised reduction from FACTORING to PPA-complete problems. This is not known for PPAD-complete problems. For more details, and the significance of PPA-completeness, we refer the reader to the related discussion in . PPA is the class of problems reducible to Leaf (Definition 1), and a PPA-complete problem is polynomial-time equivalent to Leaf.
An instance of the problem Leaf consists of an undirected graph whose vertices, represented by bitstrings of length n, have degree at most 2; the graph is presented concisely via a circuit that takes as input a vertex and outputs its neighbour(s). We stipulate that the all-zeroes vertex 0^n has degree 1. The challenge is to find some other vertex having degree 1.
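To make Definition 1 concrete, here is a minimal Python sketch (all names are our own, not from the paper): the "circuit" is modelled as an ordinary function returning a vertex's neighbour set, and the path starting at the distinguished leaf is followed until its other endpoint is found. This illustrates why Leaf is a total problem; it is not an efficient algorithm, since the walk can take exponentially many steps when the graph is presented as a circuit over n-bit vertices.

```python
def find_other_leaf(neighbours, start):
    """Walk the path beginning at `start` (a degree-1 vertex) until the
    other endpoint is reached. The parity argument guarantees that this
    endpoint exists, which is exactly the totality of Leaf."""
    assert len(neighbours(start)) == 1, "start must have degree 1"
    prev, cur = start, next(iter(neighbours(start)))
    while True:
        nbrs = neighbours(cur) - {prev}
        if not nbrs:                 # cur has degree 1: the sought leaf
            return cur
        prev, cur = cur, nbrs.pop()

# Toy instance: a path 0-1-2-3 plus an irrelevant cycle 4-5-6.
graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}, 4: {5, 6}, 5: {4, 6}, 6: {4, 5}}
print(find_other_leaf(lambda v: set(graph[v]), 0))  # -> 3
```

Note that the isolated cycle contributes no leaves: components other than the one containing 0^n always pair their leaves up, which is why the distinguished vertex is needed in the definition.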
Complete problems for the class PPA seemed to be much more elusive than PPAD-complete ones, especially when one is interested in “natural” problems, where “natural” here has the very specific meaning of problems that do not explicitly contain a circuit in their definition. Besides Papadimitriou , other papers asking about the possible existence of natural PPA-complete problems include [36, 13, 19, 22]. In a recent precursor  to the present paper we identified the first example of such a problem, namely the approximate Consensus-halving problem, dispelling the suspicion that such problems might not exist. In this paper we build on that result and settle the complexity of two natural and important problems whose complexity status was raised explicitly as an open problem in Papadimitriou’s paper itself, and in many other papers beginning in the 1980s. Specifically, we prove that Necklace-splitting (with two thieves, see Definition 2) and Discrete Ham Sandwich are both PPA-complete.
Definition 2 (Necklace Splitting)
In the k-Necklace-splitting problem there is an open necklace with k·a_i beads of colour i, for each i ∈ [n]. An “open necklace” means that the beads form a string, not a cycle. The task is to cut the necklace in (k − 1)·n places and partition the resulting substrings into k collections, each containing precisely a_i beads of colour i, for each i ∈ [n].
In Definition 2, k is thought of as the number of thieves who desire to split the necklace in such a way that the beads of each colour are equally shared. In this paper, we usually have k = 2 and we refer to this special case as Necklace-splitting.
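For small explicit necklaces, a 2-thief split can be found by exhaustive search over cut positions and alternating assignments; the following hypothetical sketch (helper names are ours, not the paper's) verifies and searches for such splits. It illustrates the statement of Definition 2, not an efficient algorithm.

```python
from itertools import combinations
from collections import Counter

def check_split(necklace, cuts, assignment, n_thieves=2):
    """Verify a split: `necklace` is a sequence of colours, `cuts` a sorted
    tuple of cut positions (between beads), `assignment` a thief for each
    resulting piece. Every thief must end up with identical colour counts."""
    bounds = [0] + list(cuts) + [len(necklace)]
    totals = {t: Counter() for t in range(n_thieves)}
    for i, thief in enumerate(assignment):
        totals[thief].update(necklace[bounds[i]:bounds[i + 1]])
    shares = {tuple(sorted(c.items())) for c in totals.values()}
    return len(shares) == 1

def solve_2_thieves(necklace, n_colours):
    """Exhaustive search over at most n cuts (n = number of colours), which
    always suffice for 2 thieves; WLOG the thieves take alternating pieces."""
    for k in range(n_colours + 1):
        for cuts in combinations(range(1, len(necklace)), k):
            for first in (0, 1):
                assignment = [(first + i) % 2 for i in range(k + 1)]
                if check_split(necklace, cuts, assignment):
                    return cuts, assignment
    return None

print(solve_2_thieves("aabb", 2))  # -> ((1, 3), [0, 1, 0])
```

The search space is exponential in the necklace length, consistent with the hardness result of this paper.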
Definition 3 (Discrete Ham Sandwich)
In the Discrete Ham Sandwich problem, there are n sets of points in n dimensions
having integer coordinates (equivalently one could use rationals).
A solution consists of a hyperplane that splits each set of points into
subsets of equal size (if any points lie on the plane, we are allowed to place
them on either side, or even split them arbitrarily).
In Definition 3, each point set represents an ingredient of the sandwich, which is to be cut by a hyperplane in such a way that all ingredients are equally split.
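A solution to Discrete Ham Sandwich is easy to verify. The sketch below (our own naming, assuming even-sized point sets) checks whether a candidate hyperplane a·x = b halves every set, with points on the plane free to go to either side, as the definition allows.

```python
def bisects(hyperplane, point_sets):
    """Check whether the hyperplane a.x = b splits every point set in half.
    Points exactly on the hyperplane may be assigned to either side, so a
    set of even size 2m is split iff at most m points lie strictly on each
    side. Hypothetical checker, assuming even-sized sets of integer points."""
    a, b = hyperplane
    for pts in point_sets:
        half = len(pts) // 2
        pos = sum(1 for p in pts if sum(ai * xi for ai, xi in zip(a, p)) > b)
        neg = sum(1 for p in pts if sum(ai * xi for ai, xi in zip(a, p)) < b)
        if pos > half or neg > half:
            return False
    return True

# Two sets in the plane; the vertical line x = 0 (a = (1, 0), b = 0) halves both.
sets = [[(-2, 0), (-1, 1), (1, 0), (2, 1)], [(-3, 5), (3, 5)]]
print(bisects(((1, 0), 0), sets))   # -> True
print(bisects(((0, 1), 10), sets))  # -> False
```

Verification is polynomial; it is finding the hyperplane that this paper shows to be PPA-complete.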
The necklace-splitting problem was introduced in a 1982 paper of Bhatt and Leiserson (, Section 5), where it arose in the context of VLSI circuit design (the version defined in  is the 2-thief case proved PPA-complete in the present paper). In 1985 and 1986, the 2-thief case was shown to have guaranteed solutions (as defined in Definition 2) by Goldberg and West  and Alon and West , and then in 1987, Alon  proved existence of solutions for any number k of thieves. Early papers that explicitly raise its complexity-theoretic status as an open problem are Goldberg and West  and Alon [3, 4]. Subsequently, the necklace-splitting problem was found to be closely related to “paint-shop scheduling”, a line of work in which several papers such as [53, 55, 54] explicitly mention the question of the computational complexity of necklace-splitting. Meunier  notes that the search for a minimum number of cuts admitting a fair division (which may be smaller than the number that is guaranteed to suffice) is NP-hard, even for a subclass of instances of the 2-thief case. (That is a result of Bonsma et al. , for the “paint shop problem with words”, equivalent to 2-thief Necklace-splitting with 2 beads of each colour.)
In , we showed Necklace-splitting to be computationally equivalent to ε-Consensus-halving for inverse-polynomial precision parameter ε, but the PPA-completeness of ε-Consensus-halving was only shown for inverse-exponential ε.  established PPAD-hardness of Necklace-splitting, applying the main result of . In this paper, we prove that ε-Consensus-halving is PPA-complete for ε inverse-polynomial, thus obtaining the desired PPA-completeness of Necklace-splitting. While some structural parts of our reduction are extensions of those presented in , obtaining the result for inverse-polynomial precision is much more challenging, as the construction needs to move to a high-dimensional space (rather than the two-dimensional space which is sufficient for the result in ). We highlight the main new techniques that we have developed in this paper in Section 2.1, where we provide an overview of the reduction. Our PPA-completeness result gives a convincing negative answer to Meunier and Neveu’s questions  about possible polynomial-time solvability or membership of PPAD for Necklace-splitting; likewise it runs counter to Alon’s cautious optimism at ICM 1990 (, Section 4) that the problem may be solvable in polynomial time.
The Ham Sandwich Theorem  is of enduring and widespread interest due to its colourful and intuitive statement, and its relevance and applications in topology, social choice theory, and computational geometry. Roughly, it states that given n measures in Euclidean n-space, there exists a hyperplane that cuts them all simultaneously in half. Early work on variants and applications of the theorem focused on non-constructive existence proofs and mostly did not touch on the algorithmics. A 1983 paper by Hill  hints at possible interest in the corresponding computational challenge, in the context of a related land division problem. The computational problem (and its complexity) was first properly studied in a line of work in computational geometry beginning in the 1980s, for example [25, 46, 47, 50]. The problem envisages input data consisting of n sets of points in Euclidean n-space, and asks for a hyperplane that splits all point sets in half. (The problem Discrete Ham Sandwich (Definition 3) as named in  is essentially this, with the number of point sets equal to the dimension, to emphasise that we care about the high-dimensional case.) In this work in computational geometry, the emphasis has been on efficient algorithms for small values of n; Lo et al.  improve the dependence on n but it is still exponential, and the present paper shows for the first time that we should not expect to improve on that exponential dependence. More recently, Grandoni et al.  apply the “Generalized Ham Sandwich Theorem” to a problem in multi-objective optimisation and note that a constructive proof would allow a more efficient algorithm to emerge. The only computational hardness result we know of is that of Knauer et al. , who obtain a W[1]-hardness result for a constrained version of the problem;  points out the importance of the computational complexity of the general problem.
The PPA-completeness result of the present paper is the first hardness result of any kind for Discrete Ham Sandwich, and as we noted, PPA-completeness is a strong notion of computational hardness. Karpic and Saha , who show a form of equivalence between the Ham Sandwich Theorem and the Borsuk-Ulam Theorem, explicitly mention the possible PPA-completeness of Discrete Ham Sandwich as an “interesting and challenging open problem”.
We prove the PPA-completeness of Discrete Ham Sandwich via a simple reduction from Necklace-splitting.
Ours is not the first paper to develop the close relationship between the two problems:
Blagojević and Soberón  shows a generalisation,
where multiple agents may share a “sandwich”, dividing it into convex pieces.
Further papers to explicitly point out their computational complexity as open problems include
Deng et al.  (mentioning that both problems “show promise to be complete for PPA”),
Aisenberg et al. , and Belovs et al. .
Further Related Work: The class TFNP was defined in  and several of its subclasses were studied over the years, such as PPA, PPAD and PPP , PLS  and CLS ; here we focus on the most recent results. As we mentioned earlier, in  we identified the first natural complete problem for PPA, the approximate Consensus-halving problem. In a recent paper, Sotiraki et al.  identified the first natural complete problem for the class PPP, the class of problems whose totality is established by an argument based on the pigeonhole principle. For the class CLS, both Daskalakis et al.  and Fearnley et al.  identified complete problems (two versions of the Contraction Map problem, where a metric or a meta-metric is given as part of the input). In the latter paper, the authors define a new class, namely EOPL (for “End of Potential Line”), and show that it is a subclass of CLS. Furthermore, they show that two well-known problems in CLS, the P-Matrix Linear Complementarity Problem (P-LCP), and finding a fixpoint of a piecewise-linear contraction map (PL-Contraction), belong to the class. The End of Potential Line problem of  is closely related to the End of Metered Line problem of .
2 Problems and Results
Definition 4 (ε-Consensus-halving)
An instance incorporates, for each i ∈ [n], a non-negative measure μ_i on a finite line interval A, where each μ_i integrates to 1 and is part of the input. We assume that the μ_i are step functions represented in a standard way, in terms of the endpoints of the intervals where μ_i is constant, and the value taken in each such interval. We use the bit model (logarithmic cost model) of numbers. The precision parameter ε is also specified using the bit model. We regard μ_i as the value function held by agent i for subintervals of A.
A solution consists firstly of a set of n cut points in A (also given in the bit model of numbers). These points partition A into (at most) n + 1 subintervals, and the second element of a solution is that each subinterval is labelled + or −. This labelling is a correct solution provided that, for each i ∈ [n], agent i has a value in the range [1/2 − ε/2, 1/2 + ε/2] for the subintervals labelled + (hence also values the subintervals labelled − in that range).
We assume without loss of generality that in a valid solution, labels + and − alternate. We also assume that the alternating label sequence begins with label + on the left-hand side of A (i.e. + denotes the leftmost label in a Consensus-halving solution).
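The setup above can be made concrete with a small checker, under our reading that a solution gives each agent a +-labelled share within ε/2 of 1/2; the step-function encoding and all names are our own illustration, not the paper's formal machinery.

```python
def measure(step_fn, lo, hi):
    """Mass that a step-function measure assigns to [lo, hi].
    `step_fn` is a list of (start, end, density) pieces."""
    return sum(d * max(0.0, min(hi, e) - max(lo, s)) for s, e, d in step_fn)

def is_consensus_halving(measures, cuts, domain_end, eps):
    """Check an alternating-label solution: the cuts partition [0, domain_end],
    with the leftmost piece labelled '+'. Each agent's '+' value must lie
    within eps/2 of 1/2."""
    bounds = [0.0] + sorted(cuts) + [domain_end]
    for mu in measures:
        plus = sum(measure(mu, bounds[j], bounds[j + 1])
                   for j in range(len(bounds) - 1) if j % 2 == 0)
        if abs(plus - 0.5) > eps / 2:
            return False
    return True

# Two agents with uniform value blocks on [0, 2]; cuts at 0.5 and 1.5 work.
agents = [[(0.0, 1.0, 1.0)],          # all value on [0, 1]
          [(1.0, 2.0, 1.0)]]          # all value on [1, 2]
print(is_consensus_halving(agents, [0.5, 1.5], 2.0, 0.01))  # -> True
```

Here each agent's + share is exactly 1/2: the pieces [0, 0.5] and [1.5, 2] are labelled +, and each agent has half of its value block inside them.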
ε-Consensus-halving is PPA-complete for some inverse-polynomial ε.
As mentioned in the introduction, in  it was proven that 2-thief Necklace-splitting and ε-Consensus-halving for inverse-polynomial ε are computationally equivalent, i.e. they reduce to each other in polynomial time. Therefore, by the above theorem and the “in PPA” result proven in Section B.1, we immediately get the following corollary.
Necklace-splitting is PPA-complete when there are two thieves.
If we knew that k-Necklace-splitting belonged to PPA for other values of k, we could of course make the blanket statement “Necklace-splitting is PPA-complete”. Alas, the proofs that k-Necklace-splitting is a total search problem for general k [2, 56] do not seem to boil down to the parity argument on an undirected graph! That being said, we do manage to establish membership of PPA for k being a power of 2 (essentially an insight of ); see Section B of the Appendix for the details and a related discussion. Of course, the k = 2 result strongly suggests that k-Necklace-splitting is a hard problem for other values of k.
As it happens, the PPA-completeness of Discrete Ham Sandwich follows straightforwardly, and we present that next. The basic idea of Theorem 2.3, of embedding the necklace in the moment curve, appears already in [62, 51] and , p. 48.
Discrete Ham Sandwich is PPA-complete.
The idea is to embed the necklace into the moment curve (t, t², …, tⁿ). Assume all beads lie in the unit interval [0, 1]. A bead having colour i located at t becomes a point mass of ingredient i of the ham sandwich located at (t, t², …, tⁿ). It is known that any hyperplane intersects the moment curve in at most n points (e.g. see , Lemma 5.4.2), therefore a solution to Discrete Ham Sandwich corresponds directly to a solution to Necklace-splitting, where the two thieves splitting the necklace take alternating pieces. (In the 2-thief case, we may assume without loss of generality that they do in fact take alternating pieces.)
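The correspondence can be made concrete: the hyperplane whose sign on the moment curve equals the sign of the polynomial p(t) = ∏_j (t − c_j) changes side exactly at the cut positions c_j. The sketch below (our own hypothetical helper names) builds that hyperplane from a set of cuts and checks that sample beads on either side of the cuts land on alternating sides.

```python
def hyperplane_from_cuts(cuts):
    """Return (a, b) such that a.x - b = prod_j (t - c_j) whenever
    x = (t, t^2, ..., t^n) lies on the moment curve, n = len(cuts).
    Illustrative helper: it expands the polynomial, constant term first."""
    coeffs = [1.0]
    for c in cuts:                                   # multiply by (t - c)
        coeffs = [(coeffs[i - 1] if i > 0 else 0.0)
                  - c * (coeffs[i] if i < len(coeffs) else 0.0)
                  for i in range(len(coeffs) + 1)]
    return coeffs[1:], -coeffs[0]                    # a = t^1..t^n coeffs, b

cuts = [0.3, 0.7]                                    # two cuts on [0, 1]
a, b = hyperplane_from_cuts(cuts)
for t in (0.1, 0.5, 0.9):                            # one bead in each piece
    x = [t ** k for k in range(1, len(cuts) + 1)]    # embed bead on the curve
    s = sum(ai * xi for ai, xi in zip(a, x)) - b
    print(t, "+" if s > 0 else "-")                  # sides alternate: + - +
```

Since a degree-n hyperplane meets the moment curve at most n times, every ham-sandwich cut of the embedded beads corresponds to a necklace split with at most n cuts, which is the content of the proof.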
A limitation to Theorem 2.3 is that the coordinates may be exponentially large numbers; they could not be written in unary. We leave it as an open problem whether a unary-coordinate version is also PPA-complete. As defined in , Discrete Ham Sandwich stipulated that each of the n sets of points is of size n, whereas Definition 3 allows polynomial-sized sets. We can straightforwardly extend PPA-completeness to the version of  by adding “dummy dimensions” whose purpose is to allow larger sets of each ingredient; the new ingredients that are introduced consist of compact clusters of point masses, each cluster in general position relative to the other clusters and the lower-dimensional subspace that contains the points of interest.
We use the standard notation [n] to denote the set {1, 2, …, n}, and we also use ±[n] to denote {±1, ±2, …, ±n}. We often refer to elements of ±[n] as “labels” or “colours”. λ is usually used to denote a labelling function (so its codomain is ±[n]).
We let A denote the domain of an instance of Consensus-halving; if that instance has complexity n then A will be the interval [0, x], where x is some number bounded by a polynomial in n. Recall by Definition 4 that μ_i denotes the value function, or measure, of agent i on the domain A in a Consensus-halving instance. We also associate each agent with its own cut (recall that the number of agents and cuts is supposed to be equal), and we speak of the cut associated with agent i.
We let p(n) be a polynomial that represents the number of “circuit-encoders” that we use in our reduction (see Section 5.1); we usually denote it p, dropping the argument n from the notation.
Finally, denotes the -cube (or “box”) .
In an instance of Consensus-halving, a value-block of an agent i denotes a sub-interval of the domain where i possesses positive value, uniformly distributed on that interval. In our construction, value-blocks tend to be scattered rather sparsely along the domain.
2.1 Overview of the proof
established PPAD-hardness of Consensus-halving. An arithmetic circuit can be encoded by an instance of ε-Consensus-halving, by letting each gate have a cut whose location (in a solution) represents the (approximate) value taken at that gate. Agents’ valuation functions ensure that values taken at the gates behave according to the type of gate. A “PPAD circuit” can then be represented using an instance of Consensus-halving.
noted that the search space of solutions to instances as constructed by  is oriented. A radical new idea was needed to encode the non-orientable feature of topological spaces representable by PPA. That was achieved by using two cuts to represent the coordinates of a point on a triangular region having two sides identified to form a Möbius strip. (These cuts are the only ones that lie in a specific subinterval of the interval A of a Consensus-halving instance, called the “coordinate-encoding (c-e) region”. The two cuts are called the “coordinate-encoding cuts”.) Identifying two sides in this way is done by exploiting the equivalence between taking a cut on the LHS of the c-e region and moving it to the RHS. In order to embed a hard search problem into the surface of a standard 2-dimensional Möbius strip, it was necessary to work at exponentially-fine resolution, which immediately required inverse-exponential ε for instances of ε-Consensus-halving. In  we reduced from the PPA-complete problem 2D-Tucker  (Definition 5 below), a search problem on an exponential-sized 2-dimensional grid.
In , the rest of the domain A is called the “circuit-encoding region”, and the cuts occurring within it do the job of performing computation on the location of cuts in the c-e region. The present paper retains this high-level structure (Section 4.1). As in , we use multiple copies of the circuit that performs the computation, each in its own subregion of the circuit-encoding region. Here we use p(n) copies, where p is a polynomial; in  we used 100 copies. Each copy is called a circuit-encoder. The purpose of multiple copies is to make the system robust; a small fraction of copies may be unreliable: as in , we have to account for the possibility that one of the c-e cuts may occur in the circuit-encoding region, rendering one of the copies unreliable. We re-use the “double negative lemma” of , showing that such a cut is not too harmful. We also adapt a result of  that when a cut is moved from one end to the other end of the c-e region, this corresponds to identifying two facets of a simplex to form a Möbius strip.
uses a sequence of “sensor agents” to identify the endpoints of intervals labelled + and − in the coordinate-encoding region, and to feed this information into the above-mentioned circuit-encoders, which perform computation on those values. As in , we use sensor agents. We obtain a simplification with respect to , which is that we do not need the gadgets used there to perform “bit-extraction” (converting the position of a c-e cut into boolean values). In , a solution to an instance of Consensus-halving was associated with a sequence of 100 points in the Möbius-simplex (referred to there as the “simplex-domain”), and the “averaging manoeuvre” introduced in  was applied. In this paper, for a polynomial p, we sample a sequence of p(n) points in a more elegant manner, again exploiting the inverse-polynomial precision of solutions that we care about.
2.1.2 Technical innovations
As in , we reduce from the PPA-complete problem 2D-Tucker  (Definition 5). That computational problem uses an exponentially-fine 2D grid, and (unlike ), in Section 3 we apply the snake-embedding technique invented in  (versions of which are used in [22, 23] in the context of PPA) to convert this to a grid of constant resolution, at the expense of going from 2 to polynomially many dimensions. The new problem, Variant high-D Tucker (Definition 7), envisages a grid of constant width in each dimension. Here, we design the snake-embedding in such a way that PPA-completeness holds for instances of the high-dimensional problem that obey a further constraint on the way the high-dimensional grid is coloured, which we exploit subsequently. A further variant, New variant high-D Tucker (Definition 8), switches to a “dual” version where a hypercube is divided into cubelets, and points in the hypercube are coloured such that interiors of cubelets are monochromatic. A pair of points is sought having equal and opposite colours, at distance much less than the size of the cubelets.
We encode a point in high dimensions using a solution to an instance of Consensus-halving as follows. Instead of having just 2 cuts in the coordinate-encoding region (as in ), suppose we ensure that up to n cuts lie there. These cuts split this interval into pieces whose total length is constant, so they represent a point in the unit n-simplex (in , the unit 2-simplex).
This “Möbius-simplex” (Definition 17; Figure 10) has the further property that two facets are identified with each other in a way that effectively turns the simplex into a higher-dimensional analogue of the Möbius strip.
We design a coordinate transformation with the following properties: the transformation and its inverse can be computed efficiently, and distances between transformed coordinate vectors are polynomially related to distances between un-transformed vectors;
at the two facets that are identified with each other, the coordinates of corresponding points are the negations of each other; our colouring function (which respects Tucker-style boundary conditions) then gives antipodal points equal and opposite colours, and no undesired solutions are introduced at these facets.
This is the “smooth embedding” referred to in the abstract.
With the aid of the above coordinate transformation, we divide up the Möbius-simplex:
The twisted tunnel (Definition 23) is an inverse-polynomially thick strip, connecting the two facets that are identified in forming the Möbius-simplex. It contains at its centre an embedded copy of the hypercube domain of an instance of New variant high-D Tucker. Outside of this embedded copy, it is “coloured in” (using our new coordinate system) in a way that avoids introducing solutions that do not encode solutions of the Tucker instance.
The Significant Region contains the twisted tunnel and constitutes a somewhat thicker strip connecting the two facets. It serves as a buffer zone between the twisted tunnel and the rest of the Möbius-simplex. It is subdivided into subregions where each subregion has a unique set of labels, or colours, from ±[n]. (We sometimes refer to these as “colour-regions”.) It is shown that any solution to an instance of Consensus-halving constructed as in our reduction represents a point in the Significant Region.
If, alternatively, a set of cuts represents a point from outside the Significant Region, then certain agents (so-called “blanket-sensor agents”) will observe severe imbalances between labels + and −, precluding a solution.
In , it was relatively straightforward to integrate the subset of the 2-dimensional Möbius-simplex that corresponds to the twisted tunnel with the parts of the domain where the blanket-sensor agent becomes active (ruling out a solution), in a way that avoided introducing solutions that fail to encode solutions of Tucker. In the present paper, that gap has to be “coloured-in” in a carefully-designed way (Section 5.3, list item 3), and this is the role of the part of the Significant Region that is not the twisted tunnel. The proofs that these parts work correctly (Sections 6.2, 6.3) become more complicated.
3 Snake embedding reduction
The purpose of this section is to establish the PPA-completeness of New variant high-D Tucker, Definition 8. The snake embedding construction was devised in , in order to prove that ε-Nash equilibria are PPAD-complete to find when ε is inverse-polynomial; without this trick the result is only obtained for ε being inverse-exponential. We apply a similar trick here. We will use as a starting-point the PPA-completeness of 2D-Tucker, from , which is the following problem:
(Aisenberg et al. ) An instance of 2D-Tucker consists of a labelling λ : [m] × [m] → {±1, ±2} such that every vertex x on the boundary of the grid satisfies λ(x̄) = −λ(x), where x̄ = (m + 1 − x₁, m + 1 − x₂) is the antipodal boundary vertex. A solution to such an instance of 2D-Tucker is a pair of vertices x, y whose coordinates differ by at most 1 in each dimension, such that λ(x) = −λ(y).
The hardness of the problem in Definition 5 arises when m is exponentially large, and the labelling function λ is presented by means of a boolean circuit.
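For intuition, 2D-Tucker is trivially solvable by brute force when the grid is small and the labelling is given explicitly; the hardness is entirely due to the exponential grid with a circuit-presented labelling. A toy sketch (all names are ours):

```python
from itertools import product

def solve_2d_tucker(m, label):
    """Brute force over the m x m grid: return a pair of vertices whose
    coordinates differ by at most 1, with equal-and-opposite labels.
    Takes time polynomial in m -- i.e. exponential time when the grid is
    exponential and `label` is given as a circuit."""
    for x in product(range(1, m + 1), repeat=2):
        for dx, dy in product((-1, 0, 1), repeat=2):
            y = (x[0] + dx, x[1] + dy)
            if 1 <= y[0] <= m and 1 <= y[1] <= m and label(x) == -label(y):
                return x, y
    return None

# Toy 2x2 instance obeying the antipodal boundary rule.
lab = {(1, 1): 1, (2, 2): -1, (1, 2): 2, (2, 1): -2}
print(solve_2d_tucker(2, lab.__getitem__))  # -> ((1, 1), (2, 2))
```

Tucker's lemma guarantees that such a pair always exists whenever the boundary condition holds, which is what places the problem in a total search class in the first place.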
We aim to prove that the following problem is PPA-complete, even when the grid side lengths are all upper-bounded by some constant (specifically, 7).
(Aisenberg et al. ) An instance of high-D Tucker consists of a labelling λ of the grid [n₁] × ⋯ × [n_d], taking values in {±1, …, ±d}, such that if a point x lies on the boundary of this grid (i.e., x_i = 1 or x_i = n_i for some i), then letting x̄ be the antipodal point of x, we have λ(x̄) = −λ(x). (Two boundary points are antipodal if they lie at opposite ends of a line segment passing through the centre of the grid.) A solution consists of two points z, z′ on this grid, having opposite labels (λ(z) = −λ(z′)), whose coordinates differ (coordinate-wise) by at most 1.
It is assumed that λ is presented in the form of a circuit, syntactically constrained to give opposite labels to antipodal grid points.
An instance of Variant high-D Tucker is similar to Definition 6 but obeys the following additional constraints. The n_i are upper-bounded by the constant 7. We impose the further constraint that the facets of the cube are coloured with labels from {±1, …, ±d} such that all colours are used, opposite facets have opposite labels, and for i < d the facet with label i (resp. −i) has no grid-point on that facet with label −i (resp. i).
Variant high-D Tucker is PPA-complete.
Informal description of snake embedding
A snake-embedding consists of a reduction from d-dimensional Tucker to (d+1)-dimensional Tucker, which we describe informally as follows. See Figure 1. Let T be an instance of d-dimensional Tucker, on the grid [n₁] × ⋯ × [n_d]. Embed T in (d+1)-dimensional space, so that it lies in the grid [n₁] × ⋯ × [n_d] × [1]. Then sandwich T between two layers, where all points in the top layer get labelled d + 1, and points in the bottom layer get labelled −(d + 1), as in the left part of Figure 1. We now have points in the grid [n₁] × ⋯ × [n_d] × [3], and notice that this construction preserves the required property that points on the boundary have labels opposite to their antipodal points.
Then, the main idea of the snake embedding is the following. We fold this grid
into three layers, by analogy with folding a sheet of
paper along two parallel lines so that the cross-section is a zigzag line,
and one dimension of the folded paper is one-third of the unfolded version,
the other dimension being unchanged (see the right hand side of Figure 1).
In higher dimension, suppose that n₁ is the largest value of any n_i. Then, we can reduce n₁ by a factor of about 3, while causing the final coordinate to go up from 3 to 9. By merging layers of label d + 1 and −(d + 1), the thickness of 9 reduces to 7.
This operation preserves the labelling rule for antipodal boundary points.
However, there are two points that need extra care for the reduction to go through:
Firstly, simply folding the layers such that their cross-sections are zigzag lines may introduce diagonal adjacencies between cubelets that were not present in the original d-dimensional instance, i.e. we might end up generating adjacent cubelets with equal-and-opposite colours; see the left part of Figure 2 for an illustration. To remedy this, we will “copy” (or “duplicate”) the cubelets at the folding points, essentially having three cubelets of the same colour, whose cross-sections are the short vertical sections in the right-hand side of Figure 1; see also the right-hand side of Figure 2 for an illustration. From now on, when referring to “folding”, we will mean the version where we also duplicate the cubelets at the folding points, as described above.
Secondly, the folding and duplicating operation only works if n₁ is a multiple of 3, as otherwise the (d+1)-dimensional instance may not satisfy the boundary conditions of Definition 6, i.e. we might end up with antipodal cubelets that do not have equal-and-opposite colours. To ensure that n₁ is a multiple of 3 before folding, we can add 1 or 2 additional layers of cubelets to T (depending on whether the remainder of the division of n₁ by 3 is 2 or 1 respectively). These layers are duplicate copies of the outer layers of cubelets at opposite ends of the length-n₁ direction; if there is only one additional layer to be added, we can add it on either side. Note that this operation does not generate any cubelets of equal-and-opposite labels that were not there before, and the same will be true for the instance after the folding operation. See Figure 3 for an illustration.
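The basic zigzag fold (without the duplication of cubelets at the fold points) can be sketched as an index map; this is our own illustration, not the paper's formal construction.

```python
def fold(x1, m):
    """Map a first coordinate x1 in [1, 3m] to (column, layer) under the
    zigzag fold: layer 1 runs left-to-right, layer 2 right-to-left, layer 3
    left-to-right again, so adjacency along the original row is preserved
    at the two fold points. (The paper's construction additionally
    duplicates the cubelets at the folds; this sketch omits that.)"""
    layer, offset = divmod(x1 - 1, m)
    col = offset + 1 if layer % 2 == 0 else m - offset
    return col, layer + 1

m = 3
print([fold(x, m) for x in range(1, 3 * m + 1)])
# -> [(1, 1), (2, 1), (3, 1), (3, 2), (2, 2), (1, 2), (1, 3), (2, 3), (3, 3)]
```

Note that consecutive x1 values always map to cells differing by at most 1 in each output coordinate, which is exactly the adjacency-preservation the reduction needs; the diagonal adjacencies discussed above arise between cells of different layers that are *not* consecutive along the row, and are handled by the duplication step.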
Formal description of snake embedding
Let T be an instance of high-D Tucker having coordinates in ranges [n₁], …, [n_d] and label function λ. Select the largest n_i, breaking ties lexicographically. Assume for simplicity in what follows that n₁ is largest.
Fixing the length to a multiple of 3. Let r be the remainder of the division of n₁ by 3, and let n₁′ = n₁ + ((3 − r) mod 3), so that n₁′ is a multiple of 3. Consider the instance T′ of high-D Tucker having coordinates in ranges [n₁′], [n₂], …, [n_d], constructed from T as follows. Any point x in T′ is mapped to a point in T and receives a colour such that:
if r = 0, then x is mapped to itself and receives its own label, since n₁ is already a multiple of 3.
If r = 2, then
if x₁ ≤ n₁, x is mapped to itself and receives its own label.
if x₁ = n₁ + 1, x is mapped to (n₁, x₂, …, x_d) and receives the label of that point.
In other words, points for which the first coordinate ranges from 1 to n₁ are mapped to themselves and receive their own label, and points for which the first coordinate is n₁ + 1 are mapped to the points where the first coordinate is n₁, receiving the label of that point. This essentially “duplicates” the layer of cubelets on the right endpoint of the x₁-direction. See Figure 3 for an illustration.
If r = 1, then
if x₁ = 1, x is mapped to (1, x₂, …, x_d) and receives the label of that point.
if x₁ ≥ 2, x is mapped to (min(x₁ − 1, n₁), x₂, …, x_d) and receives the label of that point. This is similar to the mapping and labelling in the previous case, except for the fact that we need to “shift” the labels of the points, since we essentially introduced a copy of the layer of cubelets on the left endpoint of the x₁-direction. See Figure 3 for an illustration.
Note that by the operation of adding layers as above, we do not introduce any cubelets with equal-and-opposite labels that were not present before. To avoid complicating the notation, in the following we will use n₁ to denote the maximum size of the first coordinate (instead of n₁′) and we will assume that n₁ is a multiple of 3. We will also use T to denote the instance of high-D Tucker where n₁ is a multiple of 3, instead of T′ as denoted above.
From d to d + 1 dimensions. Starting from an instance T of high-D Tucker in d dimensions, we will construct an instance T′ of high-D Tucker in d + 1 dimensions as follows. Let x be a point in T with labelling function λ. We will associate each such point with a corresponding point (or points) in T′ and a label as follows.
If , then is mapped to , and .
If (the first “folding” point), then is mapped to the following three points in and receives the following colours (see the shaded cubelets at the right-hand side of Figure 2):
(the original cubelet) and .
(the first copy) and .
(the second copy) and .
If , then x is mapped to , with .
If (the second “folding” point), then is mapped to the following three points in and receives the following colours:
(the original cubelet) and .
(the first copy) and .
(the second copy) and .
If , then is mapped to , with .
Set , along with any point x connected to it via a path of points that have not been labelled by the above procedure.
Set , along with any point x connected to it via a path of points that have not been labelled by the above procedure.
We are now ready to prove Theorem 3.1.
First, it is not hard to check that the composition of snake-embeddings is a polynomial-time reduction. Also note that, by the way the high-dimensional instance is constructed, we have not introduced any adjacencies that did not already exist, i.e. if there is a pair of adjacent cubelets with equal-and-opposite labels in the high-dimensional instance, this pair is present in the lower-dimensional instance as well, and it is easy to recover it in polynomial time. Therefore, it suffices to show how to obey the additional constraint of Variant high-D Tucker, namely that for each relevant i, a facet having label i has no grid-points with label −i, and similarly with the roles of i and −i exchanged.
We begin as in  (see Figure 1 in that paper), by taking the original instance and extending it to a larger instance as follows. The original instance is embedded in the centre of the new instance. Each region to the sides is labelled by copying the edge of the original instance facing it along an adjacent edge of the new instance, and connecting these two edges with paths that have two straight sections and connect two points of the same label; points along each such path have that label. The outermost path then labels a side of the new instance, so these two opposite sides get opposite labels. We may assume (by switching labels if needed) that these new opposite sides receive the desired pair of opposite labels.
The 3-fold approach shown in Figure 1 (in this paper) can be checked to retain this property. When we sandwich a cuboid between two layers of opposite (new) colours (call them c and −c), as shown in Figure 1, we label the new facets thus formed with c and −c respectively. We label the other facets with their original labels (each of these facets has acquired the labels c and −c, and no other new labels). The folding operation has a natural correspondence between the facets of the unfolded and folded versions of the cuboid. It can be checked that the set of colours of a facet before folding is the same as the set of colours of the corresponding facet after folding.
It is convenient to define the following problem, whose PPA-completeness follows fairly directly from the PPA-completeness of Variant high-D Tucker.
An instance of New variant high-D Tucker in dimensions is presented by a boolean circuit that takes as input coordinates of a point in the hypercube and outputs a label in (assume has output gates, one for each label, and is syntactically constrained such that exactly one output gate will evaluate to true), having the following constraints that may be enforced syntactically.
Dividing into cubelets of edge length using axis-aligned hyperplanes, all points in the same cubelet get the same label by ;
Interiors of antipodal boundary cubelets get opposite labels;
Points on the boundary of two or more cubelets get a label belonging to one of the adjacent cubelets;
Facets of are coloured with labels from such that all colours are used, and opposite facets have opposite labels. For it also holds that the facet with label (resp. ) does not intersect any cubelet having label (resp. ). Facets coloured are unrestricted (we call them the “panchromatic facets”).
A solution consists of a polynomial number of points that all lie within an inverse polynomial distance of each other (for concreteness, assume ). At least two of those points should receive equal and opposite labels by .
New variant high-D Tucker corresponds to the problem Variant Tucker in ; in that paper a solution only contained 100 points, while here we use points. Here we need more points since we are in dimensions, and our analysis needs to tolerate points receiving unreliable labels.
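Since the solution condition above is purely combinatorial, it can be sketched in a few lines of code. The following is a hedged illustration only, not the paper's construction: `label` stands in for the labelling circuit, `eps` for the inverse-polynomial distance bound, and labels are modelled as nonzero signed integers so that a label and its negation are "equal and opposite".

```python
# Hedged sketch of checking a candidate solution to New variant high-D Tucker.
# `label` stands in for the labelling circuit (a plain Python callable here),
# `eps` for the inverse-polynomial distance bound, and labels are modelled as
# nonzero signed integers, so that l and -l are "equal and opposite".
# All names are illustrative, not from the paper.

def is_solution(points, label, eps):
    """points: list of coordinate tuples; label: point -> nonzero signed int."""
    # All points must lie within distance eps of each other (sup-norm, say).
    def dist(p, q):
        return max(abs(a - b) for a, b in zip(p, q))
    if any(dist(p, q) > eps for p in points for q in points):
        return False
    # At least two points must receive equal-and-opposite labels.
    labels = {label(p) for p in points}
    return any(-l in labels for l in labels)

# Toy 1-D illustration with an artificial labelling that flips sign at 0.5:
lab = lambda p: 1 if p[0] < 0.5 else -1
```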
4 Some building-blocks and definitions
Here we set up some of the general structure of instances of Consensus-halving constructed in our reduction. We identify some basic properties of solutions to these instances. We define the Möbius-simplex and the manner in which a solution encodes a point on the Möbius-simplex. The encoding of the circuitry is covered in Section 5.
We use the following values throughout the paper.
is an inverse-polynomial quantity in , chosen to be substantially smaller than any other inverse-polynomial quantity that we use in the reduction, apart from (below).
is an inverse-polynomial quantity in , which is smaller than any other inverse-polynomial quantity apart from and is larger than by an inverse-polynomial amount. The quantity denotes the width of the so-called “twisted tunnel” (see Definition 23).
denotes a large polynomial in ; specifically we let . The quantity represents the number of sensor agents for each circuit encoder (see Definition 13).
denotes a large polynomial in , which is however smaller than by a polynomial factor. The quantity will be used in the definition of the “blanket-sensor agents” (see Definition 14) and will quantify the extent to which the cuts in the “coordinate-encoding region” (Definition 9) are allowed to differ from being evenly spaced, before the blanket-sensor agents become active (see Section 4). The choice of controls the value of the radius of the Significant Region (see Proposition 4.4), with larger meaning larger .
is the precision parameter in the Consensus-Halving solution, i.e. each agent is satisfied with a partition as long as . Henceforth, we will set .
4.1 Basic building-blocks
We consider instances of Consensus-halving that have been derived from instances of New variant high-D Tucker in dimensions. The general aim is to get any solution of such an instance to encode a point in dimensions that “localises” a solution to : from the solution of , we will be able to find a point on the instance that can be transformed, straightforwardly and in polynomial time, into a solution of .
Coordinate-encoding region (c-e region) Given an instance of New variant high-D Tucker in dimensions, the corresponding instance of Consensus-halving has a coordinate-encoding region, the interval , a (prefix) subinterval of .
The valuation functions of agents in an instance of Consensus-halving obtained by our reduction from an instance of New variant high-D Tucker in dimensions will be designed in such a way that in any solution, either or cuts (typically ) must occur in the coordinate-encoding region. Furthermore, the distance between consecutive cuts must be close to 1 (an additive difference from 1 that is upper-bounded by an inverse polynomial), as shown in Proposition 4.4.
Coordinate-encoding agents (c-e agents). Given an instance of New variant high-D Tucker in dimensions, the corresponding instance of Consensus-halving has coordinate-encoding agents denoted .
The c-e agents have associated coordinate-encoding cuts (Definition 11). It will be seen that the c-e cuts typically occur in the c-e region. The c-e agents do not have any value for the coordinate-encoding region; their value functions are only ever positive elsewhere. In particular, they have blocks of value whose labels are affected by the output gates of the circuitry that is encoded to the right of the c-e region.
Coordinate-encoding cuts (c-e cuts). We identify cuts as the coordinate-encoding cuts. In the instances of Consensus-halving that we construct, in any (sufficiently good approximate) solution to the Consensus-halving instance, all other cuts will be constrained to lie outside the c-e region (and it will be straightforward to see that the value functions of their associated agents impose this constraint). A c-e cut is not straightforwardly constrained to lie in the c-e region, but it will ultimately be proved that in any approximate solution, the c-e cuts do in fact lie in the c-e region.
-shifted version. Given a value function (or measure) on the c-e region , we say that another function on the c-e region is a -shifted version of , when we have that .
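The displayed relation did not survive extraction; a natural reading, consistent with the sensor agents of Definition 13 being shifted slightly to the right, is that the shifted function's graph is translated to the right by the shift amount. Under that assumption, the notion can be sketched as follows (names illustrative):

```python
# Hedged sketch of a "shifted version" of a value function on the c-e region.
# We assume the relation is g(x) = f(x - delta), i.e. the graph of f is
# translated to the right by delta; value functions are modelled as plain
# Python callables. Names are illustrative, not from the paper.

def shifted(f, delta):
    return lambda x: f(x - delta)

# A unit-height block of value on [1, 2], and its 0.25-shifted version,
# which is a unit-height block on [1.25, 2.25]:
block = lambda x: 1.0 if 1.0 <= x <= 2.0 else 0.0
block_shifted = shifted(block, 0.25)
```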
Recall that the circuit-encoding region (details in Section 5) contains circuit-encoders, mentioned in the following definitions.
Sensor agents. Each circuit-encoder , , has a set of sensor agents, where the are defined as follows. When , has value uniformly distributed over the interval
For , is a -shifted version of .
Each sensor agent also has valuation outside the c-e region, in non-overlapping intervals of the circuit-encoding region (see Section 5.1). This valuation consists of two valuation blocks of value each, with no other valuation block in between. These are exactly as described in , see also Appendix A and Figure 16 for an illustration.
This value gadget for causes the -th input gate in the circuit-encoder to be set according to the label received by ’s block of value in the c-e region, i.e. to jump to the left or to the right in order to indicate that the corresponding value-block of in the c-e region is labelled or .
According to the definitions above, has a sequence of (a large polynomial number of) sensor agents that have blocks of value in a sequence of small intervals going from left to right of the c-e region (see Figure 4).
For , has a similar sequence, shifted slightly to the right on the c-e region (by ).
For , the intervals defined by the value-blocks of the sensor agents (for ) partition the interval .
Remark: Note that a c-e cut may divide one of the above value-blocks held by a sensor agent in the c-e region, and in that case the input being supplied (using the gadget of ) to its circuit-encoder is unreliable. However, only sensor agents may be affected in that way, and their circuit-encoders will get “out-voted” by the ones that receive valid boolean inputs. This is part of the reason why we use circuit-encoders in total. More details on this averaging argument are provided in Section 5.
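The “out-voting” in the remark above is, at bottom, a majority vote over the circuit-encoders' outputs; the paper's mechanism works through labelled value-blocks, but the counting principle can be sketched as follows (all names illustrative):

```python
# Hedged sketch of the out-voting principle: a minority of circuit-encoders
# with corrupted (unreliable) inputs cannot change a strict majority vote
# over all encoders' boolean outputs. This illustrates only the counting
# principle, not the paper's value-block mechanism.

def majority(outputs):
    """Strict majority over a list of booleans."""
    return sum(outputs) * 2 > len(outputs)

# 97 reliable encoders agree on True; 3 corrupted ones report False:
votes = [True] * 97 + [False] * 3
```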
Blanket-sensor agents. Each circuit-encoder shall have blanket-sensor agents .
In , for each , blanket-sensor agent has value distributed over (see Figure 4). This value consists of a sequence of value-blocks, each of length and of value .
In each , , and for each , the value function of that lies in the c-e region is an -shifted version of .
The remaining value of each consists of 3 value-blocks of width lying in a subinterval of the circuit-encoding region (see Section 5.1), such that:
the value-blocks have values respectively, where .
also contains value-blocks of agents for each gate that takes the value of as input (the feedback gadgetry, see Section 5.1.2).
The value of the blanket-sensor agents in is very similar to the gadget used in , see Appendix A (of the present paper) and Figure 17. The structure of the blanket-sensor agents in the c-e region is shown in Figure 4.
Notes on the blanket-sensors
Each blanket-sensor agent has an associated cut that lies in the subinterval . Agent “monitors” an interval of length 2, namely the interval within which the sequence of value-blocks lies. If, in this interval, the number of these value-blocks labelled exceeds the number labelled by at least (recall that is a large polynomial which is nevertheless polynomially smaller than ), then (in any -approximate solution to , where, recall, ) the cut in lies in either the right-hand or the left-hand value-block; otherwise it lies in the central value-block. Note that these three possible positions may be converted to boolean values that influence circuit-encoder ; this was referred to as a “bit-detection gadget” in , see Appendix A for more details.
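The three-position behaviour just described can be summarised by a small decision function. This is a hedged illustration of the bit-detection idea only: the threshold `t`, the position names, and the orientation of the two outer positions are illustrative, not the paper's precise gadget.

```python
# Hedged sketch of the bit-detection behaviour of a blanket-sensor: when the
# discrepancy between the two label counts in the monitored interval reaches
# the threshold t, the associated cut is forced into one of the two outer
# value-blocks; otherwise it sits in the central one. The left/right
# orientation is an arbitrary illustrative choice.

def cut_position(count_plus, count_minus, t):
    if count_plus - count_minus >= t:
        return "left"       # active towards the first label
    if count_minus - count_plus >= t:
        return "right"      # active towards the second label
    return "centre"         # blanket-sensor not active
```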
Definition 15 (Active blanket-sensor)
We say that blanket-sensor is active if in fact observes a sufficiently large label discrepancy in the c-e region, so that lies in one of the two outer positions, left or right, and not in the central position. We say that is active towards if is the over-represented label, with similar terminology for .
When blanket-sensor agent is active, it provides input to that causes to label the value held by and controlled by to be either or ; the choice depends on the over-represented label in and the parity of the index of the blanket-sensor agent. The precise feedback mechanism from the blanket-sensor to the c-e agent is described in Section 5.1.3.
When no blanket-sensors are active, the sequence of c-e cuts encodes a point in the Significant Region (Definition 18).
In , we worked in just two dimensions and there was just one blanket-sensor agent for the entire c-e region, for each circuit-encoder. Note also that there, the blanket-sensor agent had a single value-block of length 2; here we split it into a polynomial sequence of small value-blocks. The advantage of using a polynomial sequence of value-blocks (which could not have been done in  due to the exponential precision requirement) is that we can argue that in all but at most circuit-encoders, the blanket-sensor agents have value-blocks that are not cut by the c-e cuts. We can therefore be precise about how big a disparity between blocks labelled and causes a blanket-sensor to be active; the at most circuit-encoders whose value-blocks are cut are regarded as having unreliable inputs (see Definition 16 and Observation 4.2).
4.2 Features of solutions
The main result of this section is Proposition 4.4, that in a solution to approximate Consensus-halving as constructed here, the sequence of cuts in the c-e region is “evenly spaced” in the sense that the gap between consecutive cuts differs from 1 by at most an inverse-polynomial amount.
Observation 4.1 (At most cuts in the c-e region)
Given an instance derived by our reduction from an instance of New variant high-D Tucker in dimensions, any inverse-polynomial approximate solution of has the property that at most cuts lie in the coordinate-encoding region. This is because all other cuts are associated with agents who have at least of their value strictly to the right of the c-e region, thus in a solution, those cuts cannot lie in the c-e region.
Definition 16 (Reliable input)
We will say that a circuit-encoder receives reliable input if no coordinate-encoding cut passes through value-blocks of its sensor agents.
At most circuit-encoders fail to receive reliable input (by Observation 4.1 and the fact that sensors of distinct circuit-encoders have value in distinct intervals).
When a circuit-encoder receives reliable input, it is straightforward to interpret the labels allocated to its sensors, as boolean values, and simulate a circuit computation on those values, ultimately passing feedback to the c-e agents via value-blocks that get labelled according to the output gates of the circuit being simulated. This is done in a conceptually similar way to that described in  (e.g. see Sections 4.4.2 and 4.6 in ), see also Appendix A of the present paper.
The Möbius-simplex. The Möbius-simplex in dimensions consists of points in whose coordinates are non-negative and sum to 1. We identify every point with the point , for all non-negative summing to 1. We use the following metric on the Möbius-simplex, letting be the standard distance on vectors:
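The displayed metric did not survive extraction. Given the identification of each point with its reversal, it is presumably of the following form (a reconstruction, offered under that assumption, with $d$ the standard distance and $y^{R}$ the reversal of $y$):

```latex
d_{\mathcal{M}}(x,y) \;=\; \min\bigl( d(x,y),\; d(x,y^{R}) \bigr),
\qquad \text{where } y^{R} := (y_{n}, \dots, y_{1}).
```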
How a consensus-halving solution encodes a point in the Möbius-simplex
Let be an instance of Consensus-halving, obtained by reduction from New variant high-D Tucker in dimensions, hence having c-e region . Note that, by Observation 4.1, at most cuts may lie in the c-e region. A set of cuts of the coordinate-encoding region splits it into pieces. We associate such a split with a point in as follows. The first coordinate is the distance from the LHS of the consensus-halving domain to the first cut, divided by , the length of the c-e region. For , the -th coordinate of is the distance between the -st and -th cuts, divided by . Remaining coordinates are 0.
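As a hedged sketch of this association (with `L` an illustrative stand-in for the length of the c-e region and `n + 1` for the number of coordinates), the point is obtained from the sorted cut positions by taking normalised gaps and padding with zeros:

```python
# Hedged sketch: mapping a set of c-e cuts to a point of the simplex, as
# described above. The first coordinate is the distance from the left-hand
# side to the first cut divided by L; subsequent coordinates are gaps between
# consecutive cuts divided by L; remaining coordinates are 0.
# L and n are illustrative stand-ins for the paper's quantities.

def encode_point(cuts, L, n):
    cuts = sorted(cuts)
    gaps = [cuts[0]] + [b - a for a, b in zip(cuts, cuts[1:])]
    coords = [g / L for g in gaps]
    return coords + [0.0] * (n + 1 - len(coords))

# Three evenly spaced cuts in a c-e region of length 4, with n = 3:
point = encode_point([1.0, 2.0, 3.0], 4.0, 3)  # [0.25, 0.25, 0.25, 0.0]
```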
If there are cuts in the c-e region, suppose we add a cut at either the LHS or the RHS. These two alternative choices correspond to a pair of points that have been identified as the same point, as described in Definition 17. (Observation 5.3 makes a similar point regarding transformed coordinates.)
Each circuit-encoder reads in “input” representing a point in the Möbius-simplex. Any circuit-encoder () behaves like on a point , for which (for all ) (recall is defined in (1)). Consequently their collective output (the split between and of the value held by the c-e agents) is the output of a single circuit-encoder averaged over a collection of points in the Möbius-simplex, all within of each other.
This follows by inspection of the way the circuit-encoders differ from each other: their sensor-agents are shifted but their internal circuitry is the same.
The Significant Region of the Möbius-simplex . The Significant Region of consists of all points in where no blanket-sensors are active (where “blanket-sensors” and “active” are defined in Definition 15).
There is an inverse-polynomial value such that all points in the Significant Region have coordinates that for differ from by at most , if is encoded by the c-e cuts of an -approximate solution to one of our instances of Consensus-halving. (Recall that ).
Thus, if an instance of Consensus-halving (obtained using our reduction) has a solution , then all the c-e cuts in have the property that the distance between two consecutive c-e cuts differs from 1 by at most some inverse-polynomial amount.
Before we proceed with the proof of the proposition, we will state a few simple lemmas that will be used throughout the proof. We start with the following definition.
Definition 19 (Cut -close to integer point)
For , we will say that a cut is -close to integer point , if it lies in . We will say that cut is -close to an integer point if there is some integer such that is -close to integer point .
Intuitively, cuts that are -close to integer points lie close (within distance at most ) to either the endpoints or the midpoint of some monitored interval .
Note that, by Definition 14, a blanket-sensor agent will be active when at least value-blocks of volume in an interval monitored by the blanket-sensor agent receive the same label. This will happen if there is a union of subintervals of some monitored subinterval , for some , which has total length larger than , where . This means that in , there will be at least value-blocks of volume that receive the same label, since by construction, there are at least such value-blocks in any interval of length at least . In such a case, the blanket-sensor agent is active and the set of cuts is not a solution to . In the following, we will consider such that .
Definition 20 (Monochromatic interval of label )
An interval is a monochromatic interval if it is not intersected by any cuts (which means that it receives a single label). Additionally, if for , is labelled with , then we will say that is a monochromatic interval of label .
It should be clear that if any monitored interval has a large enough (larger than ) monochromatic subinterval, then the blanket-sensor agent is active.
For some , with , consider the interval of length and assume that there are at most cuts in this interval. Then at least one of the blanket-sensors monitoring the subintervals in will be active.
In , there are at least intervals monitored by blanket-sensor agents and we only have at most cuts at our disposal. With cuts, we can partition an interval of length into at most intervals, the largest of which, call it , will have length at least . Since , the length of is actually larger than . The lemma then follows from the fact that, since the monitored intervals partition the interval , will contain a monochromatic interval of length at least , which will be entirely contained in some monitored interval, whose corresponding blanket-sensor agent will then be active.
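The pigeonhole step in the proof above is elementary, and can be sketched numerically (names illustrative):

```python
# Hedged sketch of the pigeonhole step used above: k cuts split an interval
# [lo, hi] into at most k + 1 pieces, so the largest piece has length at
# least (hi - lo) / (k + 1). Names are illustrative, not from the paper.

def largest_gap(cuts, lo, hi):
    pts = [lo] + sorted(cuts) + [hi]
    return max(b - a for a, b in zip(pts, pts[1:]))

# 3 cuts in [0, 10]: the largest of the 4 pieces has length >= 10/4 = 2.5.
gap = largest_gap([1.0, 2.0, 4.0], 0.0, 10.0)  # 6.0
```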
For some , with , consider the interval of length and assume that there are cuts in this interval. Then either
each of the cuts in will be -close to a different integer point and these integer points will be the midpoints of the monitored subintervals contained entirely in