This work addresses the following question: Which binary codes arise from the regions cut out by a collection of convex open sets in some Euclidean space? One such code is
which arises from the following convex open sets U_1, U_2, U_3, and U_4:
Here, some of the regions are labeled by the corresponding codewords in the code. We can view each codeword either as a 0/1 vector or as its support set, here a subset of {1, 2, 3, 4}.
A closely related question is: Which intersection patterns arise from a collection of convex sets? This question asks only which sets intersect, and not whether, for instance, two of the sets intersect outside a third. This problem has been studied extensively (see  for an overview), but the first question we posed has caught attention only recently [6, 8, 4, 5, 7, 9, 13, 15, 3, 11, 29, 19].
The recent interest in this area is motivated by neuroscience, specifically by the study of neurons called place cells. The discovery of place cells by O’Keefe et al. in 1971 was a major breakthrough that led to a shared 2014 Nobel Prize in Physiology or Medicine . A place cell encodes spatial information about an organism’s surroundings by firing precisely when the organism is in the corresponding place field. In this context, a codeword represents the neural firing pattern that occurs when the organism is in the corresponding region of its environment: the i-th coordinate is 1 if and only if the organism is in the place field of neuron i. The resulting set of codewords is called a neural code.
Place fields can be modeled by convex open sets, so we are interested in the following restatement of the question that opened this work: Which neural codes can arise from a collection of convex open sets? To address this problem, Giusti and Itskov identified a local obstruction, defined via the topology of a code’s simplicial complex, and proved that convex neural codes have no local obstructions . Codes without local obstructions are called locally good, because the obstruction prevents the code from arising from an arrangement of open sets that form a good cover (as is the case, for instance, when the sets are convex). If such a good cover exists, then the code is a good-cover code. Thus, we have:
C is convex ⟹ C is a good-cover code ⟹ C is locally good.
The converse of the first implication is false . The second implication is the starting point of our work. We prove that the implication is in fact an equivalence: every locally good code is a good-cover code (Theorem 3.12). We also prove that the good-cover decision problem is undecidable (Theorem 4.3).
Next, we discover a new, stronger type of local obstruction that precludes a code from being convex (Theorem 5.10). Like the prior obstruction, the new obstruction is defined in terms of a code’s simplicial complex, but in this case the link of a “missing” codeword must be collapsible (a condition that implies contractibility, the condition in the original type of obstruction). We call codes without the new obstruction locally great, and examine the corresponding decision problem. We prove that the locally-great decision problem is decidable, and in fact NP-hard (Theorem 5.18).
Thus, our results refine the implications we saw earlier, as follows:
C is convex ⟹ C is locally great ⟹ C is a good-cover code ⟺ C is locally good,
where the locally-great decision problem is NP-hard and the good-cover decision problem is undecidable.
Finally, we add another implication to the end of those listed above, by noting that every locally good code can be realized by connected open sets, but not vice-versa (Proposition 2.19). Taken together, our results resolve fundamental questions in the theory of convex neural codes.
The outline of our work is as follows. Section 2 provides background on neural codes, local obstructions, and criteria for convexity. In Sections 3–5, we prove the results listed above, using classical tools from topology and combinatorics. Finally, our discussion in Section 6 lists open questions arising from our work.
Here we introduce notation as well as basic definitions in the theory of neural codes.
We define [n] := {1, 2, …, n}. We will reserve lowercase Greek letters (e.g., σ and τ) to denote subsets of [n] (for some n). Such a subset usually refers to a codeword in a neural code (Definition 2.1) or a face in a simplicial complex (Definition 2.6). For shorthand, we will omit the braces and commas; e.g., if σ = {1, 3} and τ = {2}, we write σ = 13 and τ = 2.
2.1. Codes, simplicial complexes, and the nerve theorem
Given a collection of sets U = {U_1, …, U_n} (place fields) in some stimulus space X, and some σ ⊆ [n], let U_σ := ⋂_{i ∈ σ} U_i, where U_∅ := X.
A neural code C on n neurons is a subset of 2^{[n]}, and each σ ∈ C is a codeword. Any codeword that is maximal in C with respect to set inclusion is a maximal codeword.
A code C is realized by a collection of sets U = {U_1, …, U_n} in a stimulus space X if

C = { σ ⊆ [n] : U_σ ∖ ⋃_{j ∈ [n] ∖ σ} U_j ≠ ∅ }.   (2.1)
Conversely, given a collection of sets U, let code(U) denote the unique code realized by U, via (2.1).
A neural code is often referred to as a hypergraph in the literature. Also, given a collection of subsets U_1, …, U_n ⊆ X, the neural code that is realized by these subsets can be defined as the collection of sets { i ∈ [n] : p ∈ U_i }, where p varies over X.
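On a finite stimulus space, this description of the realized code is directly computable. The following is a minimal sketch; the sets, point names, and helper `code_of_cover` are hypothetical, not from the paper:

```python
# Sketch: the code realized by subsets of a finite stimulus space.
# The sets U_1, U_2 and the points are illustrative only.

def code_of_cover(X, U):
    """For each point p in X, record the set of indices i with p in U[i]."""
    return {frozenset(i for i, Ui in U.items() if p in Ui) for p in X}

X = {"p", "q", "r", "s"}
U = {1: {"p", "q"}, 2: {"q", "r"}}  # U_1 and U_2 overlap at q

C = code_of_cover(X, U)
# Codewords: p -> {1}, q -> {1,2}, r -> {2}, s -> {} (the empty codeword)
assert C == {frozenset({1}), frozenset({1, 2}), frozenset({2}), frozenset()}
```

Note that the point s, which lies in no set, contributes the empty codeword, matching the convention U_∅ = X.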
Every neural code can be realized by open sets  and by convex sets . We are interested, however, in realizing neural codes by sets that are both open and convex. This is because (biological) place fields are approximately convex and have positive measure, which are properties captured by convex open sets.
A neural code C is:
convex if C can be realized by a collection of convex open sets.
a good-cover code if C can be realized by a collection U of contractible open sets such that every nonempty intersection of sets in U is also contractible. Such a U is called a good cover.
connected if C can be realized by a collection of open sets that are connected.
We revisit the code from the Introduction, where we saw that is convex. Hence, is a good-cover code and also a connected code.
Connected codes were classified recently by Mulas and Tran (not every code is connected). Good-cover codes are connected, but not vice-versa (see Proposition 2.19 later in this section).
Remark 2.5 (Codes and the empty set).
A code C is convex (respectively, a good-cover code) if and only if C ∪ {∅} is convex (respectively, a good-cover code) . Indeed, if ∅ ∉ C and C is realized by convex open sets (respectively, a good cover) U = {U_1, …, U_n}, then C ∪ {∅} is realized by U with respect to a stimulus space B, where B is an open ball that contains a point from each region cut out by U as well as a point outside U_1 ∪ ⋯ ∪ U_n. Conversely, if ∅ ∈ C and C is realized by U (with respect to some stimulus space), then the code C ∖ {∅} is realized by U with respect to the stimulus space U_1 ∪ ⋯ ∪ U_n.
An abstract simplicial complex Δ on [n] is a subset of 2^{[n]} that is closed under taking subsets. Each σ ∈ Δ is a face of Δ. The facets of Δ are the faces that are maximal with respect to inclusion. The dimension of a face σ is |σ| − 1, and the dimension of a simplicial complex Δ, denoted by dim Δ, is the maximum dimension of the faces of Δ.
Every simplicial complex Δ can be realized geometrically in a Euclidean space of sufficiently high dimension, and we let |Δ| denote such a geometric realization (which is unique up to homeomorphism). Note that the dimension of a simplicial complex matches the dimension of its realization: dim Δ = dim |Δ|.
For a face σ of a simplicial complex Δ, the restriction of Δ to σ is the simplicial complex Δ|_σ := { τ ∈ Δ : τ ⊆ σ }.
The link of σ in Δ is the simplicial complex Lk_σ(Δ) := { τ ∈ Δ : τ ∩ σ = ∅ and τ ∪ σ ∈ Δ }.
Links are usually written as Lk_Δ(σ), instead of Lk_σ(Δ), but, following , we prefer to have σ in the subscript, because we often consider the link of the same face in several simplicial complexes. Note that τ ∈ Lk_σ(Δ) is the same as saying that σ ∗ τ ∈ Δ, where ∗ is the topological join.
Let Δ be a simplicial complex on [n]. The cone over Δ on a new vertex v is the following simplicial complex on [n] ∪ {v}:

Δ ∗ v := Δ ∪ { σ ∪ {v} : σ ∈ Δ }.
By construction, Lk_{v}(Δ ∗ v) = Δ. We will use this fact throughout our work.
For a code C on n neurons, the simplicial complex of C, denoted by Δ(C), is the smallest simplicial complex on [n] that contains C.
Note that for σ ⊆ [n] and a code C = code(U), we have σ ∈ Δ(C) if and only if U_σ ≠ ∅ (cf. (2.1)). Also, the facets of Δ(C) are the maximal codewords of C.
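The passage from a code to its simplicial complex, and the correspondence between facets and maximal codewords, can be checked computationally. A small sketch with a hypothetical code (the helper names are ours, not the paper's):

```python
from itertools import combinations

def simplicial_complex_of_code(C):
    """Smallest simplicial complex containing the code C: all subsets of codewords."""
    faces = set()
    for sigma in C:
        for k in range(len(sigma) + 1):
            faces.update(frozenset(t) for t in combinations(sorted(sigma), k))
    return faces

def maximal_sets(family):
    """Sets in the family that are maximal with respect to inclusion."""
    return {s for s in family if not any(s < t for t in family)}

# A hypothetical code, with codewords written as sets of neurons.
C = {frozenset(s) for s in [{1, 2, 3}, {1, 2}, {3, 4}, {1}, set()]}
delta = simplicial_complex_of_code(C)

# Facets of Δ(C) are exactly the maximal codewords of C.
assert maximal_sets(delta) == maximal_sets(C)
# σ = {2, 3} is a face of Δ(C) even though it is not a codeword.
assert frozenset({2, 3}) in delta and frozenset({2, 3}) not in C
```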
The code is convex and realized here:
The simplicial complex of the code, , is realized here:
A notion related to Δ(C) is the nerve of a cover (see Remark 2.12), which we define now.
Given a collection of sets U = {U_1, …, U_n}, the nerve of U, denoted by nerve(U), is the simplicial complex on [n] defined by:

nerve(U) := { σ ⊆ [n] : U_σ ≠ ∅ }.
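For concreteness, the nerve of a finite collection of sets can be computed directly from the definition. The following sketch uses three hypothetical interval-like sets sampled at integer points:

```python
from itertools import combinations

def nerve(U):
    """Nerve of a collection of sets (finite sets standing in for regions)."""
    n = len(U)
    faces = {frozenset()}  # U_∅ is the whole stimulus space, assumed nonempty
    for k in range(1, n + 1):
        for sigma in combinations(range(1, n + 1), k):
            if set.intersection(*(U[i] for i in sigma)):
                faces.add(frozenset(sigma))
    return faces

# Hypothetical cover: three "intervals" on the integer line.
U = {1: set(range(0, 5)), 2: set(range(3, 8)), 3: set(range(6, 11))}
N = nerve(U)
assert frozenset({1, 2}) in N       # U_1 ∩ U_2 ≠ ∅
assert frozenset({1, 3}) not in N   # U_1 ∩ U_3 = ∅
assert frozenset({1, 2, 3}) not in N
```

Here the nerve is a path on three vertices, as expected for three consecutively overlapping intervals.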
Proposition 2.13 (Nerve theorem).
If U = {U_1, …, U_n} is a finite collection of nonempty, contractible open sets that cover a paracompact space X, such that every intersection of sets in U is either empty or contractible, then X is homotopy equivalent to |nerve(U)|.
2.2. Local obstructions and criteria for convexity
One way to detect non-convexity of a neural code is to find what is known as a local obstruction.
Given a code C, a local obstruction is a nonempty face σ ∈ Δ(C) for which
σ ∉ C and the link Lk_σ(Δ(C)) is not contractible. A code with no local obstructions is locally good.
The name “local obstruction” is due to the following result, which states that if a code has a local obstruction, then it is not a good-cover code, and therefore not convex.
As for the first implication, the converse is in general false (but true for codes in which all codewords have size at most two  and codes on up to four neurons ). The first counterexample, which is a code on five neurons, was found by Lienkaemper, Shiu, and Woodstock :
Proposition 2.16 (Counterexample code ).
The following neural code is a good-cover code, but non-convex:
This code, it turns out, is realizable by closed convex sets instead of open convex sets .
Returning to the topic of detecting local obstructions, the next result gives a way to do so that is more efficient than simply applying the definition (Definition 2.14). As it turns out, we need to check only the links of faces that are intersections of facets of Δ(C).
The set of mandatory codewords of a simplicial complex Δ is M(Δ) := { σ ∈ Δ : σ ≠ ∅ and Lk_σ(Δ) is not contractible }. The set of mandatory codewords of a code C is M(C) := M(Δ(C)).
Proposition 2.18 (Curto et al. ).
A code C is locally good if and only if it contains all its mandatory codewords (i.e., M(C) ⊆ C). Also, every mandatory codeword is an intersection of maximal codewords.
As a corollary, max-intersection complete codes, those that are closed under taking intersections of maximal codewords, are locally good. In fact, these codes are convex . Note that the (non-convex) counterexample code from Proposition 2.16 is not max-intersection complete, because some intersection of maximal codewords is missing from the code.
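Proposition 2.18 suggests a concrete procedure: enumerate the intersections of maximal codewords (the only candidates for mandatory codewords) and flag those missing from the code. The sketch below uses a hypothetical code; whether a flagged candidate is truly mandatory still requires checking contractibility of its link, which this sketch does not do:

```python
from itertools import combinations

def maximal_codewords(C):
    return [s for s in C if not any(s < t for t in C)]

def facet_intersections(C):
    """All intersections of nonempty collections of maximal codewords."""
    maxes = maximal_codewords(C)
    out = set()
    for k in range(1, len(maxes) + 1):
        for combo in combinations(maxes, k):
            out.add(frozenset.intersection(*combo))
    return out

# A hypothetical code (not the counterexample from Proposition 2.16):
C = {frozenset(s) for s in [{1, 2, 3}, {1, 4}, {1, 2}, {4}]}
candidates = facet_intersections(C)

# {1} = {1,2,3} ∩ {1,4} is an intersection of maximal codewords missing
# from C, so it is a *candidate* mandatory codeword; whether it is truly
# mandatory depends on the contractibility of its link.
missing = candidates - C
assert missing == {frozenset({1})}
```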
Another result pertaining to convexity, due to Cruz et al. , is as follows: for codes with the same simplicial complex, convexity is a monotone property with respect to inclusion. That is, if C is convex and C ⊆ C′ ⊆ Δ(C), then C′ is convex.
We end this section by showing that good-cover codes are realizable by connected open sets, but not vice-versa:
Every good-cover code is connected, but not every connected code is a good-cover code.
Contractible sets are connected, so, by definition, good-cover codes are connected. As for the converse, consider the following code: . The codeword σ is a mandatory codeword, as σ is an intersection of maximal codewords, while the link Lk_σ(Δ(C)) is the following non-contractible simplicial complex:
Thus, by definition, C is not locally good, and so, by Proposition 2.18, C is not a good-cover code. We complete the proof by displaying the following realization of C by connected open sets:
More precisely, for each i, consider (the closures of) the regions above that are labeled by some codeword σ for which i ∈ σ. Now let U_i be the interior of the union of all such regions. ∎
3. Locally good codes are good-cover codes
In this section we will prove that being locally good is equivalent to being a good-cover code (Theorem 3.12). We accomplish this by constructing a good-cover realization of any locally good code. Our construction has two steps. We first realize any code via (not necessarily open) subsets of a geometric realization of its simplicial complex (Proposition 3.2), and then do a “reverse deformation retract” to obtain a realization by open sets (Proposition 3.9).
3.1. Code-complex realizations
The idea behind the following construction, which realizes any code C by subsets of |Δ(C)|, is to realize the codewords in C by the corresponding faces of Δ(C). Thus, we will simply delete faces corresponding to codewords that are not in C. Accordingly, for any simplicial complex Δ and any nonempty σ ∈ Δ, let relint(σ) denote the relative interior of the realization of the face σ within |Δ| (if σ is a vertex, then relint(σ) is that vertex). See the left-hand side of Figure 1. It follows that |Δ| is the disjoint union of the relint(σ)’s. Thus, |Δ| is a CW-complex built from the relint(σ)’s.
Let C be a code on n neurons. For each i ∈ [n], consider the following subset of |Δ(C)|:

U_i := ⋃_{σ ∈ C : i ∈ σ} relint(σ).

Then U^C := {U_1, …, U_n} is the code-complex realization of C.
Let C be a code on n neurons. Then the code-complex realization U^C of C given in Definition 3.1 is a realization of C. Moreover, if C is locally good, then every nonempty intersection of sets from U^C is contractible.
We postpone the proof of Proposition 3.2 to the end of this subsection.
Consider the (locally good) code . The code-complex realization of given in Definition 3.1 is depicted here, along with :
For the (not locally good) code , the code-complex realization is:
Comparing Examples 3.5 and 3.6, note that, consistent with Proposition 3.2, the sets in the first example are contractible and have contractible intersections, but not in the second example (one of the sets is not connected).
The proof of Proposition 3.2 relies on the following lemma, which may be of independent interest.
Let Δ be a simplicial complex, and let S be a subset of the non-mandatory codewords of Δ. If |Δ| is contractible, then so is the code-complex realization of the code Δ ∖ S.
We prove Lemma 3.7 in Appendix A.
Proof of Proposition 3.2.
By construction, we have
and the subset realizes the codeword . If , define the stimulus space ; otherwise, define to be the ambient Euclidean space. Thus, realizes with respect to .
Now assume that is locally good. We must show that every intersection of the ’s is empty or contractible. To this end, let , and assume that is nonempty. Note that
We consider two cases. If , then . Now consider the deformation retract of to the face , arising from orthogonal projection to that face. It is straightforward to check that this restricts to a deformation retraction of to , which is contractible. Thus, is contractible.
Consider the remaining case, when . Then . We will show that is contractible by proving first that is homotopy equivalent to the code-complex realization of the following link of a code [3, Definition 2.3] or, for short, “code-link”:
and then proving that this code-complex realization, which we denote by , is contractible.
To see that , we compute:
To see that is contractible, we will appeal to Lemma 3.7 (where and is the code-link in equation (3.1)). To apply Lemma 3.7, we must show that every in but not in the code-link (3.1) is not a mandatory codeword of . To this end, note that such a satisfies and . Thus, because is locally good, the link is contractible, and thus is not a mandatory codeword of .
We now apply Lemma 3.7: the link is contractible (because is locally good and ), so also is contractible. Hence, is contractible. ∎
3.2. Main result
To this end, for any n, let Δ be the complete simplicial complex on [n], and consider an embedding of |Δ| in R^n. The facet-defining hyperplanes of |Δ| form a hyperplane arrangement whose open chambers correspond to nonempty subsets of [n] (see Figure 1). Let R_σ denote the closure of the chamber that corresponds to σ. Choose an ordering of the nonempty faces of Δ so that they are nondecreasing in dimension: σ_1, σ_2, …, σ_m. We define recursively the following sets (see the right-hand side of Figure 1):
We claim that the order of the ’s does not matter. Indeed, if (with ), then the intersection is the (possibly empty) face of . This face is indexed by some , and so this face, regardless of whether or , is neither in nor .
We make several observations. First, the ’s are disjoint, and each is convex and full-dimensional. Also, deformation retracts to via a deformation retract of to . Finally, the interior of is the open chamber of the hyperplane arrangement of ’s facet-defining hyperplanes.
In analogy to how we built the sets from the sets (in Definition 3.1), we now build sets from the sets (which, we noted, deformation retract to the sets ).
Let be a code on neurons. For each , consider the following subset of :
where is as in (3.4). Then is a realization of .
By construction, we have
and the subset realizes the codeword . Define the stimulus space . Then the collection realizes with respect to . ∎
Let be a code on neurons. For each , consider the following subset of :
where is as in (3.5). Then is a realization of . Moreover, if is locally good, then is a good-cover realization of .
Before proving Proposition 3.9, we give an example of the realization given in the proposition.
Proof of Proposition 3.9.
By construction, the ’s are open. So, we must show that (1) the ’s realize , that is, , and (2) if is locally good, then is a good cover.
Proof of (1). We begin by proving the containment , where we define the stimulus space to be (so, ). Take . We must show that . Let be in the interior of . Then, by construction, , so .
To prove the remaining containment, , let . As explained above, . Let . We consider two cases. If is not on a facet-defining hyperplane of , then is in the interior of some . It is straightforward to check that , so , and thus .
In the second case, is on exactly facet-defining hyperplanes of (for some ). Crossing exactly one such hyperplane means going from some region to some region (for some ), or vice-versa. Thus, a small neighborhood of intersects exactly regions : precisely those with
for some with .
Recall that is in the open set , so , and thus the interiors of the regions are contained in . So, all sets given in (3.6) are in the code . Thus, to show that , it suffices to show that . The containment follows from the fact that (as explained above). We prove by contradiction. Assume that there exists . Then we have:
Thus, , where , which contradicts the choice of . So, holds.
Proof of (2). Assume that is locally good. Let , and assume that is nonempty. We must show that is contractible. By Proposition 3.2, is contractible, so it is enough to show that . We will prove this by showing that the set deformation retracts to and also weak deformation retracts  to . It is straightforward to check that , and is a (full-dimensional) set that deformation retracts to . Thus, deformation retracts to . Finally, we obtain a weak deformation retract of to via the following homotopy: we simultaneously translate each facet-defining hyperplane of the simplex a small distance away from the simplex, so that the subset of that is “swept up” is pushed into . ∎
It is natural to ask whether a closed-set version of Proposition 3.9 holds:
Is every locally good code a closed-good-cover code?
We could try to resolve this question by “closing” our open-good-cover realizations; we ask, For a locally good code , is a closed-good-cover realization of , where is as in Proposition 3.9? However, this is not true. Indeed, it is easy to check that for the locally good code , the open realization , when we take closures, yields a code with the “extra” codeword 123.
A code is locally good if and only if it is a good-cover code.
Finally, note that the good-cover realizations from Proposition 3.9 are embedded in an n-dimensional Euclidean space, where n is the number of neurons. We ask, Is this embedding dimension, in general, tight?
4. Undecidability of the good-cover decision problem
Having shown that being locally good is equivalent to being a good-cover code (Theorem 3.12), we now prove that the corresponding decision problem is undecidable (Theorem 4.3). The proof hinges on the undecidability of determining whether a homology ball is contractible (Lemma 4.2) .
We say that a code is k-sparse if its simplicial complex has dimension at most k − 1:
A code C is k-sparse if |σ| ≤ k for all σ ∈ C.
Lemma 4.2 (Tancer ).
The problem of deciding whether a given 4-dimensional simplicial complex is contractible is undecidable.
Theorem 4.3 (Undecidability of the good-cover decision problem).
The problem of deciding whether a 5-sparse code is locally good (or, equivalently, has a good cover) is undecidable.
Given any 4-dimensional simplicial complex Δ, consider the cone over Δ on a new vertex v. This cone is itself a simplicial complex, which we denote by Δ ∗ v. Let C_Δ denote the neural code (Δ ∗ v) ∖ {{v}}, which is 5-sparse. Note that Δ(C_Δ) = Δ ∗ v, so the only codeword in Δ(C_Δ) that is missing from C_Δ is {v}. So, by Proposition 2.18, the code C_Δ is locally good if and only if Lk_{v}(Δ ∗ v) = Δ is contractible. Thus, any algorithm that could decide whether C_Δ is locally good would also decide whether Δ is contractible, which is impossible by Lemma 4.2. ∎
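The combinatorial heart of this reduction, coning off a complex and deleting the codeword for the cone point so that its link recovers the original complex, can be sketched as follows. A small filled triangle stands in for the 4-dimensional input complex; the function names are ours:

```python
def cone(delta, v):
    """Cone over a simplicial complex: all faces σ together with σ ∪ {v}."""
    return delta | {s | {v} for s in delta}

def link(delta, sigma):
    """Lk_σ(Δ) = { τ ∈ Δ : τ ∩ σ = ∅ and τ ∪ σ ∈ Δ }."""
    return {t for t in delta if not (t & sigma) and (t | sigma) in delta}

# Toy stand-in for the 4-dimensional input complex: a filled triangle.
delta = {frozenset(s) for s in
         [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]}
v = 4
cone_delta = cone(delta, v)

# The code drops only the codeword {v}; the link of {v} recovers Δ.
C = cone_delta - {frozenset({v})}
assert link(cone_delta, frozenset({v})) == delta
assert frozenset({v}) not in C
```

So deciding whether such a code is locally good amounts to deciding whether the original complex is contractible, which is the point of the reduction.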
4.1. Decidability for 3-sparse and 4-sparse codes
Can the condition of 5-sparsity in Theorem 4.3 be extended to 4-sparsity or even 3-sparsity? For 3-sparsity, the answer is “no”:
The problem of determining whether a 3-sparse code is locally good (or, equivalently, has a good cover) is decidable.
It is straightforward to write an algorithm that, for a given code , enumerates the nonempty intersections of maximal codewords. By Proposition 2.18, is locally good if and only if the links of these intersections are all contractible. Hence, we are interested in the decision problem for determining whether these links are contractible. When is 3-sparse, these links are 2-sparse, i.e., (undirected) graphs. Contractible graphs are precisely trees (connected graphs without cycles), and the problem of determining whether a graph is a tree is decidable. ∎
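The decision procedure for the 3-sparse case thus reduces to testing whether each relevant link, viewed as a graph, is a tree. A sketch of the tree test (the two example links are hypothetical):

```python
def is_tree(graph_complex):
    """A graph (2-sparse complex) is contractible iff it is a tree:
    connected with #edges = #vertices - 1."""
    vertices = {next(iter(s)) for s in graph_complex if len(s) == 1}
    edges = [s for s in graph_complex if len(s) == 2]
    if not vertices:
        return False
    # Connectivity via union-find with path halving.
    parent = {x: x for x in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in edges:
        a, b = sorted(e)
        parent[find(a)] = find(b)
    connected = len({find(x) for x in vertices}) == 1
    return connected and len(edges) == len(vertices) - 1

# A path 1-2-3 (a tree) versus a 3-cycle, as links might appear.
path = ({frozenset()} | {frozenset({i}) for i in (1, 2, 3)}
        | {frozenset({1, 2}), frozenset({2, 3})})
cycle = path | {frozenset({1, 3})}
assert is_tree(path) is True
assert is_tree(cycle) is False
```

In practice one would apply this test to the link of each nonempty intersection of maximal codewords, per Proposition 2.18.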
4.2. Relation to the convexity decision problem
For codes without restriction on the sparsity, we revisit, from the proof of Theorem 4.3, codes of the form C_Δ = (Δ ∗ v) ∖ {{v}}, where Δ ∗ v is the cone over a simplicial complex Δ on a new vertex v. Consider the case when Δ is contractible. Does it follow that C_Δ is convex? If it were, then by an argument analogous to the proof of Theorem 4.3, the convexity decision problem would be undecidable. However, we will see in the next section that there exist such codes that are non-convex (Example 5.14). Indeed, the convexity decision problem is unresolved, so we pose it here.
Is the problem of determining whether a code is convex decidable?
In the next section, we will introduce a superset of all convex codes, the “locally great” codes, and show that the corresponding decision problem is decidable (Theorem 5.18).
5. A new, stronger local obstruction to convexity
Recall that a code has no local obstructions if and only if it contains all its mandatory codewords (Proposition 2.18), and these mandatory codewords are precisely the faces of the simplicial complex whose link is non-contractible. We prove in this section that by replacing “non-contractible” by “non-collapsible” (Definition 5.1) we obtain a stronger type of local obstruction to convexity (Theorem 5.10).
This result yields a new family of codes that, like the earlier counterexample code (Proposition 2.16), are locally good, but not convex. This new family comprises codes of the form C_Δ = (Δ ∗ v) ∖ {{v}}, where Δ ∗ v is the cone, on a new vertex v, over a contractible but non-collapsible simplicial complex Δ (see Example 5.14). Such a code (as we saw in the proof of Theorem 4.3) is missing only one codeword (namely, {v}) from its simplicial complex (because Δ(C_Δ) = Δ ∗ v). We will use this several times in this section.
5.1. Background on collapses
First we make note of a somewhat overloaded definition in the literature. The notion of a collapse of a simplicial complex was introduced by Whitehead in 1938 . The more general concept of d-collapse was introduced by Wegner in 1975 .
Let Δ be a simplicial complex, and let facets(Δ) denote the set of its facets.
(1) For any face σ of Δ such that there is a unique facet τ ∈ facets(Δ) for which σ ⊆ τ, we define
Δ′ := Δ ∖ { γ ∈ Δ : σ ⊆ γ }
and say that Δ′ is an elementary d-collapse of Δ induced by σ (or by the pair (σ, τ)). This elementary d-collapse is denoted by Δ ↘ Δ′. (Here d refers to the constraint dim σ ≤ d − 1, but in this work we will let d be arbitrarily high when we use the term “d-collapse”.) A sequence of elementary d-collapses
Δ = Δ_0 ↘ Δ_1 ↘ ⋯ ↘ Δ_k = Δ′
is a d-collapse of Δ to Δ′.
(2) An elementary collapse is an elementary d-collapse induced by a face σ that is not a facet (i.e., σ ⊊ τ). A sequence of elementary collapses starting with Δ and ending with Δ′ is a collapse of Δ to Δ′. Finally, a simplicial complex is collapsible if it collapses to a point (via some sequence of elementary collapses).
Following , our Definition 5.1(2) characterizes “collapsible” in terms of elementary collapses induced by pairs (σ, τ) with σ ⊊ τ. An equivalent definition, also used in the literature [1, 2], is via elementary collapses under a stronger condition: dim σ = dim τ − 1. For completeness, we prove in Appendix B that these two definitions of collapsible are equivalent (Proposition B.2).
Example 5.3 (A d-collapse).
The following is an elementary d-collapse, induced by the indicated face:
Example 5.4 (A collapse).
The following is a collapse to a point:
This collapse arises from the elementary collapses induced by, respectively, , , and .
Comparing Examples 5.3 and 5.4, note that the homotopy type was not preserved through the d-collapse, but was preserved through the collapse. This is explained in the next two results, the first of which is well known (see, e.g., [2, Ch. 1.2] or ).
Elementary collapses preserve homotopy type. Thus, collapsible implies contractible.
However, not every contractible simplicial complex is collapsible. One example is any triangulation of the 2-dimensional topological space known as Bing’s house with two rooms [20, Ex. 1.6.1]. Another example is the dunce hat .
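A greedy search for free faces gives a simple one-sided collapsibility test: if it reaches a point, the complex is certifiably collapsible; if it gets stuck, the answer is inconclusive in general, since the order of collapses can matter (consistent with the NP-completeness result quoted later as Lemma 5.17). The following sketch, with helper names of our choosing, illustrates the idea:

```python
def collapse_step(delta):
    """Find a free face (non-facet face contained in a unique facet) and
    remove the interval [sigma, tau]; return None if none applies."""
    facets = [f for f in delta if not any(f < g for g in delta)]
    for sigma in sorted(delta, key=len, reverse=True):
        containing = [f for f in facets if sigma < f]
        if len(containing) == 1:
            tau = containing[0]
            return {g for g in delta if not (sigma <= g <= tau)}
    return None

def greedily_collapsible(delta):
    """Try to collapse to a point. Success certifies collapsibility;
    failure is inconclusive in general."""
    while len(delta) > 2:              # more than {∅, point} remaining
        nxt = collapse_step(delta)
        if nxt is None:
            return False               # stuck: no free face
        delta = nxt
    return True

# A single filled triangle collapses to a point.
triangle = {frozenset(s) for s in
            [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]}
assert greedily_collapsible(triangle) is True

# The hollow triangle (a circle) has no free face at all.
hollow = triangle - {frozenset({1, 2, 3})}
assert greedily_collapsible(hollow) is False
```

For the hollow triangle the failure is genuine (a circle is not even contractible), but for complexes such as Bing's house a greedy search fails even though no certificate of non-collapsibility is produced.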
Lemma 5.5 extends as follows:
Let Δ be a contractible simplicial complex, and let Δ ↘ Δ′ be an elementary d-collapse induced by a face σ. Assume that Δ′ contains a nonempty face. Then Δ′ is contractible if and only if σ is not a facet (i.e., Δ ↘ Δ′ is an elementary collapse).
The backward direction is immediate from Lemma 5.5. We prove the contrapositive of the forward direction. Suppose that σ is a facet of Δ. Then |Δ| = |Δ′| ∪ |σ|, where |σ| denotes the topological realization of σ as a facet of Δ. With an eye toward using the Mayer–Vietoris sequence, we note that the intersection |Δ′| ∩ |σ| is the boundary of the simplex |σ| and thus is a sphere and has a non-vanishing (reduced) homology group. Applying the Mayer–Vietoris sequence for homology to |Δ| = |Δ′| ∪ |σ|, and using the hypothesis that |Δ| is contractible, we conclude that |Δ′| has a non-vanishing (reduced) homology group, and therefore is not contractible. ∎
Lemma 5.7 (Wegner ).
Let U = {U_1, …, U_n} be a collection of nonempty convex (not necessarily open) sets in R^d. Let Δ be the nerve of U. Then there exists an open halfspace H in R^d such that, letting Δ′ denote the nerve of the collection {U_1 ∩ H, …, U_n ∩ H}, the following is an elementary d-collapse: Δ ↘ Δ′.
Informally speaking, Wegner proved Lemma 5.7 by sweeping a hyperplane from infinity across R^d, deleting everything in its path, until an intersection region corresponding to a facet of Δ has been removed. This yields an elementary d-collapse induced by some face contained in that facet.
Recent work of Itskov, Kunin, and Rosen is similar in spirit to ours: they show that the collapsibility of a certain simplicial complex associated to a code (the “polar complex”) ensures that the code avoids certain obstructions to being a certain type of convex code (namely, a code arising from a “non-degenerate” hyperplane arrangement) [11, §6.5].
5.2. A key lemma
This subsection contains the key result (Lemma 5.9) that allows us to establish, via Theorem 5.10, our new local obstruction. Lemma 5.9 states that for an open cover by convex sets of a set that is itself convex, the corresponding nerve is collapsible. The original local obstruction (Proposition 2.15) relied on a weaker version of this result, which states only that such a nerve is contractible. Accordingly, the way we use Lemma 5.9 to prove Theorem 5.10 is analogous to how the authors of [8, 4] used the “contractible” version of the lemma to establish the original notion of local obstruction.
Let U = {U_1, …, U_n} be a collection of convex open sets in R^d such that their union U_1 ∪ ⋯ ∪ U_n is nonempty and convex. Then the nerve of U is collapsible.
Let Δ denote the nerve of U. Let m denote the number of nonempty faces of Δ. Note that m ≥ 1, as the union of the U_i’s is nonempty. We proceed by induction on m.
Base case: m = 1. Then |Δ| is a point, and thus Δ is collapsible.
Inductive step: m ≥ 2. Assume that the lemma is true for all nerves with at most m − 1 nonempty faces. First consider the case when Δ has only one facet. Then Δ is a simplex, and every simplex is collapsible.
Now consider the remaining case, when Δ has at least two facets. Without loss of generality, each U_i is nonempty (deleting U_i’s that are empty does not affect the union or the nerve). So, by Lemma 5.7, there exists an open halfspace H such that Δ ↘ Δ′ is an elementary d-collapse, where Δ′ is the nerve of U′ := {U_1 ∩ H, …, U_n ∩ H}. We see that U′ is a collection of convex open sets whose union, (U_1 ∪ ⋯ ∪ U_n) ∩ H, is convex (because both the original union and H are convex) and nonempty (indeed, the nerve Δ′ is nonempty, because Δ has at least two facets and so at least one was unaffected by the elementary d-collapse Δ ↘ Δ′). Thus, by the induction hypothesis, Δ′ is collapsible.
Thus, by definition of collapsible, we need only show that the elementary d-collapse Δ ↘ Δ′ was in fact an elementary collapse. To see this, note that the nerve theorem (Proposition 2.13) implies that the nerve Δ′ is homotopy equivalent to (U_1 ∪ ⋯ ∪ U_n) ∩ H, which we saw above is convex (and nonempty) and thus contractible. Hence, by Lemma 5.6, Δ ↘ Δ′ is an elementary collapse. This completes the proof. ∎
5.3. Locally great codes
The following result gives a new class of local obstructions.
Let C be a convex code. Then for any nonempty σ ∈ Δ(C) ∖ C, the link Lk_σ(Δ(C)) is collapsible.
Let U = {U_1, …, U_n} be convex open sets (in some stimulus space) that realize C. Let σ ∈ Δ(C) ∖ C be nonempty. Then U_σ ⊆ ⋃_{j ∉ σ} U_j. Thus, {U_j ∩ U_σ : j ∉ σ} is a collection of convex open sets such that their union equals U_σ (and thus this union is nonempty and convex). So, by Lemma 5.9, the nerve of this collection is collapsible. It is straightforward to check that this nerve equals Lk_σ(Δ(C)). So, as desired, the link is collapsible. ∎
A code C has a local obstruction of the second kind if there exists a nonempty σ ∈ Δ(C) ∖ C such that the link Lk_σ(Δ(C)) is not collapsible. If C has no local obstructions of the second kind, then C is locally great.
C is convex ⟹ C is locally great ⟹ C is locally good.
Neither implication in Corollary 5.12 is an equivalence, as we see in the following examples.
Example 5.13 (Locally great does not imply convex).
The counterexample code C from Proposition 2.16 is non-convex , but, we claim, locally great. To verify this, we must check that for every missing codeword σ ∈ Δ(C) ∖ C, the corresponding link Lk_σ(Δ(C)) is collapsible. We accomplish this as follows:
Example 5.14 (Locally good does not imply locally great).
Let B be a triangulation of Bing’s house, so B is a 2-dimensional simplicial complex that is contractible but not collapsible [20, Ex. 1.6.1]. Let B ∗ v be the cone over B on a new vertex v, and consider the code C := (B ∗ v) ∖ {{v}}.
We claim that C is locally good, but not locally great. Indeed, the only “missing” codeword of C is {v}, and its link is Lk_{v}(B ∗ v) = B, which is contractible but not collapsible. Thus, C is locally good (by Proposition 2.18), but not locally great (by definition).
We now consider the question of whether there might be an even stronger local obstruction beyond “locally great” (perhaps “locally excellent”?). That is, can the conclusion of Lemma 5.9 be strengthened from “collapsible” to some more general property of simplicial complexes? Or, on the contrary, does the converse of Lemma 5.9 hold? Accordingly, we pose the following question:
Is every collapsible simplicial complex the nerve of a convex open cover of some convex set?
An answer to Question 5.15 in the affirmative would mean that there is no notion of “locally excellent” beyond “locally great”.
Some decision problems related to Question 5.15 have been resolved. The problem of deciding whether a simplicial complex is the nerve of a convex open cover of some (not necessarily convex) subset of is decidable [25, 22]. On the other hand, the problem of deciding whether a simplicial complex is the nerve of a good cover of some subset of , when , is undecidable .
5.4. The locally-great decision problem
In this subsection, we prove that the locally-great decision problem is decidable (and, moreover, NP-hard). This result relies on the following lemma.
Lemma 5.17 (Tancer ).
The problem of deciding whether a given simplicial complex is collapsible is NP-complete.
Theorem 5.18 (Decidability of the locally-great decision problem).
The problem of deciding whether a code is locally great is decidable; moreover, it is NP-hard.
By Definition 5.11 and Lemma 5.17, the following steps form an algorithm that determines whether a code C is locally great: enumerate the nonempty faces in Δ(C) ∖ C and the corresponding links, and then check whether each of these links is collapsible.
To show that this problem is NP-hard, it suffices to reduce (in polynomial time) the problem of deciding whether a simplicial complex is collapsible (which is NP-complete by Lemma 5.17) to the locally-great decision problem. To this end, we proceed as in the proof of Theorem 4.3: given any simplicial complex Δ, consider the cone over Δ on a new vertex v. This cone is itself a simplicial complex, which we denote by Δ ∗ v. Let C := (Δ ∗ v) ∖ {{v}}. Then {v} is the only codeword in Δ(C) = Δ ∗ v that is missing from C. So, by Definition 5.11, the original simplicial complex Δ = Lk_{v}(Δ ∗ v) is collapsible if and only if the code C is locally great. ∎
We return to the question that opened this work, Which neural codes are convex? There is a growing literature tackling this question, and here we resolved some foundational problems in this developing theory. In summary, we now know the following:
C is convex ⟹ C is locally great