Neural codes, decidability, and a new local obstruction to convexity

03/30/2018 ∙ by Aaron Chen, et al. ∙ The University of Chicago ∙ Cornell University ∙ Texas A&M University

Given an intersection pattern of arbitrary sets in Euclidean space, is there an arrangement of convex open sets in Euclidean space that exhibits the same intersections? This question is combinatorial and topological in nature, but is motivated by neuroscience. Specifically, we are interested in a type of neuron called a place cell, which fires precisely when an organism is in a certain region, usually convex, called a place field. The earlier question, therefore, can be rephrased as follows: Which neural codes, that is, patterns of neural activity, can arise from a collection of convex open sets? To address this question, Giusti and Itskov proved that convex neural codes have no "local obstructions," which are defined via the topology of a code's simplicial complex. Codes without local obstructions are called locally good, because the obstruction precludes the code from arising from open sets that form a good cover. In other words, every good-cover code is locally good. Here we prove the converse: Every locally good code is a good-cover code. We also prove that the good-cover decision problem is undecidable. Finally, we reveal a stronger type of local obstruction that prevents a code from being convex, and prove that the corresponding decision problem is NP-hard. Our proofs use combinatorial and topological methods.


1. Introduction

This work addresses the following question: Which binary codes arise from the regions cut out by a collection of convex open sets in some Euclidean space? One such code arises from an arrangement of four convex open sets (figure omitted), in which some of the regions are labeled by the corresponding codewords, for instance 0011 = 34, 1110 = 123, and 0001 = 4. We can view each codeword as a vector in {0,1}^4 or as its support set, here a subset of {1, 2, 3, 4}.
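The two views of a codeword, as a 0/1 vector and as its support set, are easy to convert between. A minimal sketch in Python (helper names are ours, not from the paper):

```python
# Hypothetical helpers: convert a codeword between its 0/1-vector form
# and its support-set form (1-indexed neurons).

def support(word):
    """Support set of a binary codeword, e.g. '0011' -> {3, 4}."""
    return {i + 1 for i, bit in enumerate(word) if bit == "1"}

def to_vector(supp, n):
    """Inverse map: {1, 2, 3} with n = 4 -> '1110'."""
    return "".join("1" if i in supp else "0" for i in range(1, n + 1))

print(support("0011"))          # {3, 4}
print(to_vector({1, 2, 3}, 4))  # 1110
```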

A closely related question is: Which intersection patterns arise from a collection of convex sets? This question asks only which sets intersect, and not whether, for instance, two of the sets intersect somewhere outside a third. This problem has been studied extensively (see [22] for an overview), but the first question we posed has caught attention only recently [6, 8, 4, 5, 7, 9, 13, 15, 3, 11, 29, 19].

The recent interest in this area is motivated by neuroscience, specifically from the study of neurons called place cells. The discovery of place cells by O’Keefe et al. in 1971 was a major breakthrough that led to a shared 2014 Nobel Prize in Physiology or Medicine [18]. A place cell encodes spatial information about an organism’s surroundings by firing precisely when the organism is in the corresponding place field. In this context, a codeword represents the neural firing pattern that occurs when the organism is in the corresponding region of its environment: the i-th coordinate is 1 if and only if the organism is in the place field of neuron i. The resulting set of codewords is called a neural code.

Place fields can be modeled by convex open sets [5], so we are interested in the following restatement of the question that opened this work: Which neural codes can arise from a collection of convex open sets? To address this problem, Giusti and Itskov identified a local obstruction, defined via the topology of a code’s simplicial complex, and proved that convex neural codes have no local obstructions [8]. Codes without local obstructions are called locally good, as the obstruction prevents the code from arising from an arrangement of open sets that form a good cover. If such a good cover exists (for instance, a suitable collection of convex open sets), then the code is a good-cover code. Thus, we have:

C is convex ⟹ C is a good-cover code ⟹ C is locally good.

The converse of the first implication is false [15]. The second implication is the starting point of our work. We prove that the implication is in fact an equivalence: every locally good code is a good-cover code (Theorem 3.12). We also prove that the good-cover decision problem is undecidable (Theorem 4.3).

Next, we discover a new, stronger type of local obstruction that precludes a code from being convex (Theorem 5.10). Like the prior obstruction, the new obstruction is defined in terms of a code’s simplicial complex, but in this case the link of “missing” codewords must be “collapsible” (a condition that implies “contractible”, the condition in the original type of obstruction). We call codes without the new obstruction locally great, and examine the corresponding decision problem. We prove that the locally-great decision problem is decidable, and in fact NP-hard (Theorem 5.18).

Thus, our results refine the implications we saw earlier, as follows:

C is convex ⟹ C is locally great ⟹ C is a good-cover code ⟺ C is locally good.

(NP-hard problem)       (undecidable)

Finally, we add another implication to the end of those listed above, by noting that every locally good code can be realized by connected open sets, but not vice-versa (Proposition 2.19). Taken together, our results resolve fundamental questions in the theory of convex neural codes.

The outline of our work is as follows. Section 2 provides background on neural codes, local obstructions, and criteria for convexity. In Sections 3–5, we prove the results listed above, using classical tools from topology and combinatorics. Finally, our discussion in Section 6 lists open questions arising from our work.

2. Background

Here we introduce notation as well as basic definitions in the theory of neural codes.

We define [n] := {1, 2, …, n}. We will reserve lowercase Greek letters (e.g., σ and τ) to denote subsets of [n] (for some n). Such a subset usually refers to a codeword in a neural code (Definition 2.1) or a face in a simplicial complex (Definition 2.6). For shorthand, we will omit the braces and commas; e.g., we write 134 for the subset {1, 3, 4}, and 2 for {2}.

2.1. Codes, simplicial complexes, and the nerve theorem

Given a collection U = {U_1, …, U_n} of sets (place fields) in some stimulus space X and some σ ⊆ [n], let U_σ := ∩_{i ∈ σ} U_i, where U_∅ := X.

Definition 2.1.
  1. A neural code C on n neurons is a subset of 2^[n], and each c ∈ C is a codeword. Any codeword that is maximal in C with respect to set inclusion is a maximal codeword.

  2. A code C is realized by a collection of sets U = {U_1, …, U_n} in a stimulus space X if

    (2.1)    C = { σ ⊆ [n] : U_σ ∖ ∪_{j ∈ [n] ∖ σ} U_j ≠ ∅ }.

    Conversely, given a collection of sets U, let code(U) denote the unique code realized by U, via (2.1).

Remark 2.2.

A neural code is often referred to as a hypergraph in the literature. Also, given a collection of subsets U_1, …, U_n of a stimulus space X, the neural code that is realized by these subsets can be defined as the collection of sets { i ∈ [n] : p ∈ U_i }, where p varies over X.
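The point-based description in Remark 2.2 is directly computable once the stimulus space is discretized. A small sketch (the helper name and the toy sets are ours), with each U_i given as a finite set of sample points:

```python
def code_from_points(sets, points):
    """Code realized by the sets: for each point p, record which sets contain p."""
    return {frozenset(i for i, U in sets.items() if p in U) for p in points}

# Toy 1-dimensional example: three "place fields" sampled at integer points.
U = {1: {0, 1, 2}, 2: {2, 3, 4}, 3: {4, 5}}
X = set().union(*U.values())      # stimulus space = union of the fields
C = code_from_points(U, X)
print(sorted(map(sorted, C)))     # [[1], [1, 2], [2], [2, 3], [3]]
```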

Every neural code can be realized by open sets [5] and by convex sets [6]. We are interested, however, in realizing neural codes by sets that are both open and convex. This is because (biological) place fields are approximately convex and have positive measure, which are properties captured by convex open sets.

Definition 2.3.

A neural code C is:

  1. convex if C can be realized by a collection of convex open sets.

  2. a good-cover code if C can be realized by a collection U of contractible open sets such that every nonempty intersection of sets in U is also contractible. Such a U is called a good cover.

  3. connected if C can be realized by a collection of open sets that are connected.

Example 2.4.

We revisit the code C from the Introduction, where we saw that C is convex. Hence, C is a good-cover code and also a connected code.

Connected codes were classified recently by Mulas and Tran (not every code is connected) [17]. Good-cover codes are connected, but not vice-versa (see Proposition 2.19 later in this section).

Remark 2.5 (Codes and the empty set).

A code C is convex (respectively, a good-cover code) if and only if C ∪ {∅} is convex (respectively, a good-cover code) [4]. Indeed, if ∅ ∉ C and C is realized by convex open sets (respectively, a good cover) U, then C ∪ {∅} is realized by {U_i ∩ B}_{i ∈ [n]}, where B is an open ball that contains a point from each region cut out by U. Conversely, if ∅ ∈ C and C is realized by U (with respect to some stimulus space), then the code C ∖ {∅} is realized by U with respect to the stimulus space ∪_{i ∈ [n]} U_i.

Convex codes on up to four neurons have been classified [4]. This classification was enabled by analyzing codes according to the simplicial complexes they generate (see Definition 2.9).

Definition 2.6.

An abstract simplicial complex Δ on [n] is a subset of 2^[n] that is closed under taking subsets. Each σ ∈ Δ is a face of Δ. The facets of Δ are the faces that are maximal with respect to inclusion. The dimension of a face σ is |σ| − 1, and the dimension of a simplicial complex Δ, denoted by dim Δ, is the maximum dimension of the faces of Δ.

Every simplicial complex Δ can be realized geometrically in a Euclidean space of sufficiently high dimension, and we let |Δ| denote such a geometric realization (which is unique up to homeomorphism). Note that the dimension of a simplicial complex matches the dimension of its realization: dim Δ = dim |Δ|.

Definition 2.7.

For a face σ of a simplicial complex Δ, the restriction of Δ to σ is the simplicial complex

Δ|_σ := { τ ∈ Δ : τ ⊆ σ }.

The link of σ in Δ is the simplicial complex:

Lk_σ(Δ) := { τ ∈ Δ : τ ∩ σ = ∅ and τ ∪ σ ∈ Δ }.

Links are usually written as Lk_Δ(σ), instead of Lk_σ(Δ), but, following [4], we prefer to have σ in the subscript, because we often consider the link of the same face in several simplicial complexes. Note that τ ∈ Lk_σ(Δ) is the same as saying that σ ∗ τ is a face of Δ, where ∗ is the topological join (for disjoint faces σ and τ, the join σ ∗ τ is the face σ ∪ τ).

Definition 2.8.

Let Δ be a simplicial complex on [n]. The cone over Δ on a new vertex v is the following simplicial complex on [n] ∪ {v}:

cone_v(Δ) := Δ ∪ { σ ∪ {v} : σ ∈ Δ }.

By construction, Lk_v(cone_v(Δ)) = Δ. We will use this fact throughout our work.
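The link and cone operations of Definitions 2.7 and 2.8, and the identity Lk_v(cone_v(Δ)) = Δ, can be checked mechanically on small complexes. A sketch (helper names are ours), representing faces as frozensets:

```python
from itertools import chain, combinations

def simplicial_complex(faces):
    """Close a set of faces under taking subsets (including the empty face)."""
    closure = set()
    for f in faces:
        closure |= {frozenset(s) for s in chain.from_iterable(
            combinations(sorted(f), r) for r in range(len(f) + 1))}
    return closure

def link(sigma, delta):
    """Lk_sigma(delta) = { tau : tau ∩ sigma = ∅ and tau ∪ sigma ∈ delta }."""
    s = frozenset(sigma)
    return {t for t in delta if not (t & s) and (t | s) in delta}

def cone(delta, v):
    """Cone over delta on a new vertex v."""
    return delta | {f | {v} for f in delta}

D = simplicial_complex([{1, 2}, {2, 3}])  # a path on vertices 1-2-3
CD = cone(D, 4)
assert link({4}, CD) == D                 # Lk_v(cone_v(D)) = D
```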

Definition 2.9.

For a code C on n neurons, the simplicial complex of C, denoted by Δ(C), is the smallest simplicial complex on [n] that contains C.

Note that for σ ⊆ [n] and any realization U of C, we have σ ∈ Δ(C) if and only if U_σ ≠ ∅ (cf. (2.1)). Also, the facets of Δ(C) are the maximal codewords of C.

Example 2.10.

The code of this example is convex and is realized by convex open sets in the original figure; the simplicial complex of the code, Δ(C), is realized in a second figure (figures omitted).

A notion related to Δ(C) is the nerve of a cover (see Remark 2.12), which we define now.

Definition 2.11.

Given a collection of sets U = {U_1, …, U_n}, the nerve of U, denoted by nerve(U), is the simplicial complex on [n] defined by:

nerve(U) := { σ ⊆ [n] : U_σ ≠ ∅ }.

Remark 2.12.

Δ(code(U)) = nerve(U).
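On a discretized stimulus space, the identity of Remark 2.12 can be verified directly. A sketch (names and toy data are ours):

```python
from itertools import combinations

def nerve(sets):
    """nerve(U): index sets sigma whose sets U_i, i in sigma, share a point."""
    idx = sorted(sets)
    faces = set()
    for r in range(len(idx) + 1):
        for sigma in combinations(idx, r):
            common = set.intersection(*(sets[i] for i in sigma)) if sigma else {None}
            if common:
                faces.add(frozenset(sigma))
    return faces

def code_of(sets, points):
    """Code realized by the sets over the sample points."""
    return {frozenset(i for i in sets if p in sets[i]) for p in points}

def delta(code):
    """Smallest simplicial complex containing the code."""
    return {frozenset(s) for c in code for r in range(len(c) + 1)
            for s in combinations(sorted(c), r)}

U = {1: {0, 1, 2}, 2: {2, 3}, 3: {3, 4}}
X = {0, 1, 2, 3, 4}
assert delta(code_of(U, X)) == nerve(U)   # Remark 2.12
```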

Next, we recall the classical result called the nerve theorem or nerve lemma [27]. The version we state is [10, Corollary 4G.3]:

Proposition 2.13 (Nerve theorem).

If U is a finite collection of nonempty, contractible open sets that cover a paracompact space X, such that every intersection of sets in U is either empty or contractible, then X is homotopy equivalent to |nerve(U)|.

Metric spaces are paracompact [21], so good-cover realizations of codes satisfy the hypotheses of Proposition 2.13. Thus, if we determine that a code does not satisfy the conclusion of this proposition, then we conclude it is not convex. We turn to this topic now.

2.2. Local obstructions and criteria for convexity

One way to detect non-convexity of a neural code is to find what is known as a local obstruction.

Definition 2.14.

Given a code C that is realized by open sets U = {U_1, …, U_n}, a local obstruction is a pair (σ, τ) of nonempty disjoint subsets of [n] for which

∅ ≠ U_σ ⊆ ∪_{j ∈ τ} U_j,

where the link Lk_σ(Δ(C)|_{σ ∪ τ}) is not contractible. A code with no local obstructions is locally good.

Definition 2.14 does not depend on the choice of open sets U realizing C [4].

The name “local obstruction” is due to the following result, which states that if a code has a local obstruction, then it is not a good-cover code, and therefore not convex.

Proposition 2.15 (Giusti and Itskov [8]; Curto et al. [4]).

(2.2)    C is convex ⟹ C is a good-cover code ⟹ C is locally good.

A natural question is: Do the converses of the implications in (2.2) hold? For the second implication, the converse is true; we will prove this in the next section (Theorem 3.12).

As for the first implication, the converse is in general false (but true for codes in which all codewords have size at most two [13] and codes on up to four neurons [4]). The first counterexample, which is a code on five neurons, was found by Lienkaemper, Shiu, and Woodstock [15]:

Proposition 2.16 (Counterexample code [15]).

The following neural code is a good-cover code, but non-convex: C = {2345, 123, 134, 145, 13, 14, 23, 34, 45, 3, 4}.

This code, it turns out, is realizable by closed convex sets instead of open convex sets [3].

Returning to the topic of detecting local obstructions, the next result gives a way to do so that is more efficient than simply applying the definition (Definition 2.14). As it turns out, we need to check only the links of faces that are intersections of facets of .

Definition 2.17.

The set of mandatory codewords of a simplicial complex Δ is M(Δ) := { σ ∈ Δ : Lk_σ(Δ) is not contractible }. The set of mandatory codewords of a code C is M(C) := M(Δ(C)).

Proposition 2.18 (Curto et al. [4]).

A code C is locally good if and only if it contains all its mandatory codewords (i.e., M(Δ(C)) ⊆ C). Also, every mandatory codeword is an intersection of maximal codewords.

As a corollary, max-intersection complete codes, those that are closed under taking intersections of maximal codewords, are locally good. In fact, these codes are convex [3]. Note that the (non-convex) counterexample code C from Proposition 2.16 is not max-intersection complete, because the intersection 123 ∩ 145 = 1 of maximal codewords is missing from C.
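The max-intersection-completeness test just described is a finite computation. A sketch (helper names are ours), run on the counterexample code of Proposition 2.16, assuming the codewords listed there; it recovers the missing intersection 123 ∩ 145 = 1:

```python
from itertools import combinations

def maximal_codewords(code):
    """Codewords maximal with respect to set inclusion."""
    return [c for c in code if not any(c < d for d in code)]

def missing_max_intersections(code):
    """Nonempty pairwise intersections of maximal codewords absent from the code.
    (Pairwise intersections suffice for this small example.)"""
    M = maximal_codewords(code)
    inters = {a & b for a, b in combinations(M, 2)} - {frozenset()}
    return {s for s in inters if s not in code}

# Counterexample code of Lienkaemper-Shiu-Woodstock (Proposition 2.16),
# assuming the codewords listed there:
C = {frozenset(s) for s in
     [{2, 3, 4, 5}, {1, 2, 3}, {1, 3, 4}, {1, 4, 5}, {1, 3}, {1, 4},
      {2, 3}, {3, 4}, {4, 5}, {3}, {4}]}
print(missing_max_intersections(C))   # {frozenset({1})}
```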

Another result pertaining to convexity, due to Cruz et al. [3], is as follows: for codes with the same simplicial complex, convexity is a monotone property with respect to inclusion. That is, if C is convex, and C ⊆ C′ where Δ(C′) = Δ(C), then C′ is convex.

We end this section by showing that good-cover codes are realizable by connected open sets, but not vice-versa:

Proposition 2.19.

Every good-cover code is connected, but not every connected code is a good-cover code.

Proof.

Contractible sets are connected, so, by definition, good-cover codes are connected. As for the converse, consider a code C having a mandatory codeword σ ∉ C: here σ is an intersection of maximal codewords, while Lk_σ(Δ(C)) is a non-contractible simplicial complex (the code and the link are shown in figures omitted here). Thus, by Proposition 2.18, C is not locally good, and so, by Proposition 2.15, C is not a good-cover code. We complete the proof by displaying a realization of C by connected, open sets (figure omitted). More precisely, for each i, consider (the closures of) the regions in that figure that are labeled by some codeword c for which i ∈ c. Now let U_i be the interior of the union of all such regions. ∎

3. Locally good codes are good-cover codes

In this section we will prove that being locally good is equivalent to being a good-cover code (Theorem 3.12). We accomplish this by constructing a good-cover realization of any locally good code. Our construction has two steps. We first realize any code via (not necessarily open) subsets of a geometric realization of its simplicial complex (Proposition 3.2), and then do a “reverse deformation retract” to obtain a realization by open sets (Proposition 3.9).

3.1. Code-complex realizations

The idea behind the following construction, which realizes any code C by subsets of |Δ(C)|, is to realize the codewords of C by the corresponding faces of Δ(C). Thus, we will simply delete faces corresponding to codewords that are not in C. Accordingly, for any simplicial complex Δ and any nonempty face σ, let ⟨σ⟩ denote the relative interior of the realization of the face σ within |Δ| (if σ is a vertex, then ⟨σ⟩ is that vertex itself). See the left-hand side of Figure 1. It follows that the ⟨σ⟩’s partition |Δ|. Thus, |Δ| is a CW-complex built from the ⟨σ⟩’s.

Definition 3.1.

Let C be a code on n neurons. For each i ∈ [n], consider the following subset of |Δ(C)|: the union of the relative interiors ⟨σ⟩ over all codewords σ ∈ C with i ∈ σ. Then this collection of subsets is the code-complex realization of C.

Figure 1. The correspondence between the face-interiors that make up the simplex and the regions that deformation retract to them (see equation (3.4)). Each region is defined to contain the face-interior it is labeled by, together with the dashed lines of its boundary and the indicated vertex (so it is closed), which is why in Proposition 3.9 we must pass from the unions of regions to their relative interiors.
Proposition 3.2.

Let C be a code on n neurons. Then the code-complex realization of C given in Definition 3.1 is a realization of C. Moreover, if C is locally good, then every nonempty intersection of sets from the code-complex realization is contractible.

We postpone the proof of Proposition 3.2 to the end of this subsection.

Remark 3.3.

Code-complex realizations (Definition 3.1) are in general not simplicial complexes or even CW-complexes. See Examples 3.5 and 3.6, where the sets and their unions are not even closed.

Remark 3.4.

Code-complex realizations (Definition 3.1) are somewhat similar to prior constructions that also specify regions that correspond to codewords [4, 3] or intersection patterns [22].

Example 3.5.

Consider the (locally good) code of this example. Its code-complex realization given in Definition 3.1 is depicted in the original figure, along with |Δ(C)|; there, the regions of the realization are labeled by codewords such as 1, 2, 12, 23, and 123, and the faces of Δ(C) are 1, 2, 3, 12, 13, 23, and 123 (figures omitted).

Example 3.6.

For the (not locally good) code of this example, the code-complex realization is depicted in the original figure; its regions are labeled by the codewords 1, 13, and 23 (figure omitted).

Comparing Examples 3.5 and 3.6, note that, consistent with Proposition 3.2, the sets in the first example are contractible and have contractible intersections, but not in the second example (where one of the sets is not connected).

The proof of Proposition 3.2 relies on the following lemma, which may be of independent interest.

Lemma 3.7.

Let Δ be a simplicial complex, and let N be a subset of the non-mandatory codewords of Δ. If |Δ| is contractible, then so is the code-complex realization of the code Δ ∖ N.

We prove Lemma 3.7 in Appendix A.

Proof of Proposition 3.2.

By construction, we have

and the subset corresponding to each codeword realizes that codeword. If ∅ ∉ C, define the stimulus space to be the union of the sets in the realization; otherwise, define the stimulus space to be the ambient Euclidean space. Thus, the code-complex realization realizes C with respect to this stimulus space.

Now assume that is locally good. We must show that every intersection of the ’s is empty or contractible. To this end, let , and assume that is nonempty. Note that

We consider two cases. If , then . Now consider the deformation retract of to the face , arising from orthogonal projection to that face. It is straightforward to check that this restricts to a deformation retraction of to , which is contractible. Thus, is contractible.

Consider the remaining case, when . Then . We will show that is contractible by proving first that is homotopy equivalent to the code-complex realization of the following link of a code [3, Definition 2.3] or, for short, “code-link”:

(3.1)

and then proving that this code-complex realization, which we denote by , is contractible.

To see that , we compute:

(3.2)

and

(3.3)

It is straightforward to see, from (3.2) and (3.3), that deformation retracts to a copy of .

To see that is contractible, we will appeal to Lemma 3.7 (where and is the code-link in equation (3.1)). To apply Lemma 3.7, we must show that every in but not in the code-link (3.1) is not a mandatory codeword of . To this end, note that such a satisfies and . Thus, because is locally good, the link is contractible, and thus is not a mandatory codeword of .

We now apply Lemma 3.7: the link is contractible (because is locally good and ), so also is contractible. Hence, is contractible. ∎

3.2. Main result

Our next step is to modify, via a “reverse deformation retract”, the realization from Proposition 3.2 so that the sets are open (Proposition 3.9).

To this end, for any n, let Δ_n be the complete simplicial complex on [n], and consider an embedding of |Δ_n| in Euclidean space. The facet-defining hyperplanes of the embedded simplex form a hyperplane arrangement whose open chambers correspond to nonempty subsets of [n] (see Figure 1). For each nonempty σ ⊆ [n], let R_σ denote the closure of the chamber that corresponds to σ. Choose an ordering σ_1, σ_2, … of the nonempty faces of Δ_n so that they are nondecreasing in dimension. We define recursively the following sets (see the right-hand side of Figure 1):

(3.4)

We claim that the order of the faces does not matter. Indeed, if σ_i ≠ σ_j, then the intersection σ_i ∩ σ_j is a (possibly empty) proper face of each. This face is indexed by some σ_k with k < min(i, j), and so, regardless of whether i < j or j < i, it lies in neither of the two corresponding regions.

We make several observations. First, the regions are disjoint, and each is convex and full-dimensional. Also, each region deformation retracts to the corresponding face-interior. Finally, the interior of each region is the corresponding open chamber of the arrangement of facet-defining hyperplanes.

In analogy to how we built sets from the face-interiors (in Definition 3.1), we now build sets from the regions (which, we noted, deformation retract to the face-interiors).

Proposition 3.8.

Let C be a code on n neurons. For each i ∈ [n], consider the following subset of Euclidean space:

(3.5)

where the regions are as in (3.4). Then the resulting collection of sets is a realization of C.

Proof.

By construction, we have

and the subset corresponding to each codeword realizes that codeword. Define the stimulus space to be the union of these subsets. Then the collection realizes C with respect to this stimulus space. ∎

The sets in Proposition 3.8 are in general not open sets (see Figure 1), but they are unions of full-dimensional convex sets, so we now consider the interiors.

Proposition 3.9.

Let C be a code on n neurons. For each i ∈ [n], consider the interior of the corresponding set from (3.5). Then the resulting collection of open sets is a realization of C. Moreover, if C is locally good, then this collection is a good-cover realization of C.

Before proving Proposition 3.9, we give an example of the realization given in the proposition.

Example 3.10.

We return to the locally good code from Example 3.5. The good-cover realization of the code from Proposition 3.9, along with |Δ(C)|, is depicted in the original figure (omitted here).

Proof of Proposition 3.9.

By construction, the sets are open. So, we must show that (1) the sets realize C, and (2) if C is locally good, then they form a good cover.

Proof of (1). We begin by proving the containment , where we define the stimulus space to be (so, ). Take . We must show that . Let be in the interior of . Then, by construction, , so .

To prove the remaining containment, , let . As explained above, . Let . We consider two cases. If is not on a facet-defining hyperplane of , then is in the interior of some . It is straightforward to check that , so , and thus .

In the second case, is on exactly facet-defining hyperplanes of (for some ). Crossing exactly one such hyperplane means going from some region to some region (for some ), or vice-versa. Thus, a small neighborhood of intersects exactly regions : precisely those with

(3.6)

for some with .

Recall that is in the open set , so , and thus the interiors of the regions are contained in . So, all sets given in (3.6) are in the code . Thus, to show that , it suffices to show that . The containment follows from the fact that (as explained above). We prove by contradiction. Assume that there exists . Then we have:

Thus, , where , which contradicts the choice of . So, holds.

Proof of (2). Assume that is locally good. Let , and assume that is nonempty. We must show that is contractible. By Proposition 3.2, is contractible, so it is enough to show that . We will prove this by showing that the set deformation retracts to and also weak deformation retracts [10] to . It is straightforward to check that , and is a (full-dimensional) set that deformation retracts to . Thus, deformation retracts to . Finally, we obtain a weak deformation retract of to via the following homotopy: we simultaneously translate each facet-defining hyperplane of the simplex a small distance away from the simplex, so that the subset of that is “swept up” is pushed into . ∎

It is natural to ask whether a closed-set version of Proposition 3.9 holds:

Question 3.11.

Is every locally good code a closed-good-cover code?

We could try to resolve this question by “closing” our open-good-cover realizations; we ask: for a locally good code C, do the closures of the open sets from Proposition 3.9 form a closed-good-cover realization of C? However, this is not true in general. Indeed, it is easy to check that for a certain locally good code on three neurons, taking closures of its open realization from Proposition 3.9 yields a code with the “extra” codeword 123.

Theorem 3.12.

A code is locally good if and only if it is a good-cover code.

Proof.

The backward direction is in Proposition 2.15. The forward direction follows from Proposition 3.9 and the fact that if C ∖ {∅} is a good-cover code, then so is C (Remark 2.5). ∎

Finally, note that the good-cover realizations from Proposition 3.9 are embedded in an n-dimensional Euclidean space, where n is the number of neurons. We ask: is this embedding dimension, in general, necessary?

4. Undecidability of the good-cover decision problem

Having shown that being locally good is equivalent to being a good-cover code (Theorem 3.12), we now prove that the corresponding decision problem is undecidable (Theorem 4.3). The proof hinges on the undecidability of determining whether a homology ball is contractible (Lemma 4.2) [23].

We say that a code is d-sparse if every codeword has size at most d; equivalently, its simplicial complex has dimension at most d − 1:

Definition 4.1.

A code C is d-sparse if |c| ≤ d for all c ∈ C.

Lemma 4.2 (Tancer [23]).

The problem of deciding whether a given 4-dimensional simplicial complex is contractible is undecidable.

Theorem 4.3 (Undecidability of the good-cover decision problem).

The problem of deciding whether a 5-sparse code is locally good (or, equivalently, has a good cover) is undecidable.

Proof.

Given any 4-dimensional simplicial complex Δ, consider the cone over Δ on a new vertex v. This cone is itself a simplicial complex, which we denote by Δ̃. Let C denote the neural code Δ̃ ∖ {{v}}, which is 5-sparse. Note that Lk_v(Δ̃) = Δ, so the only codeword in Δ(C) = Δ̃ that is missing from C is {v}. So, by Proposition 2.18, the code C is locally good if and only if Δ is contractible. Thus, any algorithm that could decide whether C is locally good would also decide whether Δ is contractible, which is impossible by Lemma 4.2. ∎

4.1. Decidability for 3-sparse and 4-sparse codes

Can the condition of 5-sparsity in Theorem 4.3 be extended to 4-sparsity or even 3-sparsity? For 3-sparsity, the answer is “no”:

Proposition 4.4.

The problem of determining whether a 3-sparse code is locally good (or, equivalently, has a good cover) is decidable.

Proof.

It is straightforward to write an algorithm that, for a given code C, enumerates the nonempty intersections of maximal codewords that are missing from C. By Proposition 2.18, C is locally good if and only if the links of these intersections are all contractible. Hence, we are interested in the decision problem of determining whether these links are contractible. When C is 3-sparse, these links are 2-sparse, i.e., (undirected) graphs. Contractible graphs are precisely trees (connected graphs without cycles), and the problem of determining whether a graph is a tree is decidable. ∎
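The last step of this proof, deciding whether a graph is a tree, is elementary to implement. A sketch (names are ours):

```python
def is_tree(vertices, edges):
    """A graph is a tree iff it is connected and |E| = |V| - 1."""
    if len(edges) != len(vertices) - 1:
        return False
    # breadth-first search from an arbitrary vertex
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(vertices))
    seen, frontier = {start}, [start]
    while frontier:
        v = frontier.pop()
        for w in adj[v] - seen:
            seen.add(w)
            frontier.append(w)
    return seen == set(vertices)

print(is_tree({1, 2, 3, 4}, [(1, 2), (2, 3), (2, 4)]))  # True
print(is_tree({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))     # False (a cycle)
```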

As for extending Theorem 4.3 to 4-sparsity, this problem is open. This is because it is unknown whether the dimension in Lemma 4.2 can be lowered from 4 to 3 [23, Appendix A].

4.2. Relation to the convexity decision problem

For 2-sparse codes, being convex is equivalent to being locally good [13]. So, by Proposition 4.4, the convexity decision problem for these codes is decidable.

For codes without restriction on the sparsity, we revisit, from the proof of Theorem 4.3, codes of the form C = Δ̃ ∖ {{v}}, where Δ̃ is the cone, on a new vertex v, over a simplicial complex Δ. Consider the case when Δ is contractible. Does it follow that C is convex? If it did, then by an argument analogous to the proof of Theorem 4.3, the convexity decision problem would be undecidable. However, we will see in the next section that there exist such codes that are non-convex (Example 5.14). In fact, the convexity decision problem remains unresolved, so we pose it here.

Question 4.5.

Is the problem of determining whether a code is convex decidable?

In the next section, we will introduce a superset of all convex codes, the “locally great” codes, and show that the corresponding decision problem is decidable (Theorem 5.18).

5. A new, stronger local obstruction to convexity

Recall that a code has no local obstructions if and only if it contains all its mandatory codewords (Proposition 2.18), and these mandatory codewords are precisely the faces of the simplicial complex whose link is non-contractible. We prove in this section that by replacing “non-contractible” by “non-collapsible” (Definition 5.1) we obtain a stronger type of local obstruction to convexity (Theorem 5.10).

This result yields a new family of codes that, like the earlier counterexample code (Proposition 2.16), are locally good, but not convex. This new family comprises codes of the form C = Δ̃ ∖ {{v}}, where Δ̃ is a cone, on a new vertex v, over a contractible but non-collapsible simplicial complex (see Example 5.14). Such a code (as we saw in the proof of Theorem 4.3) is missing only one codeword (namely, {v}) from its simplicial complex (because Δ(C) = Δ̃). We will use this fact several times in this section.

5.1. Background on collapses

First we make note of a somewhat overloaded definition in the literature. The notion of a collapse of a simplicial complex was introduced by Whitehead in 1938 [28]. The more general concept of d-collapse was introduced by Wegner in 1975 [26].

Definition 5.1.

Let Δ be a simplicial complex, and let F be the set of its facets.

  1. For any face σ of Δ such that there is a unique τ ∈ F for which σ ⊆ τ, we define

    Δ′ := Δ ∖ { ω ∈ Δ : σ ⊆ ω },

    and say that Δ′ is an elementary d-collapse of Δ induced by σ (or by the pair (σ, τ)). This elementary d-collapse is denoted by Δ ↘ Δ′. (Here d refers to the constraint dim σ ≤ d − 1, but in this work we will let d be arbitrarily high when we use the term “d-collapse”.) A sequence of elementary d-collapses

    Δ = Δ_0 ↘ Δ_1 ↘ ⋯ ↘ Δ_m

    is a d-collapse of Δ to Δ_m.

  2. An elementary collapse is an elementary d-collapse induced by a face σ that is not a facet (i.e., σ ⊊ τ). A sequence of elementary collapses starting with Δ and ending with Δ′ is a collapse of Δ to Δ′. Finally, a simplicial complex is collapsible if it collapses to a point (via some sequence).

Remark 5.2.

Following [23], our Definition 5.1(2) characterizes “collapsible” in terms of elementary collapses induced by pairs (σ, τ) with σ ⊊ τ. An equivalent definition, also used in the literature [1, 2], is via elementary collapses under a stronger condition: dim τ = dim σ + 1. For completeness, we prove in Appendix B that these two definitions of collapsible are equivalent (Proposition B.2).
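Elementary collapses as in Definition 5.1(2) can be performed greedily on small complexes. A sketch (names are ours); note that greedy collapsing is only a heuristic in general, since the order of collapses can matter (cf. [23]), though it suffices for these examples:

```python
def facets(delta):
    """Faces maximal with respect to inclusion."""
    return [f for f in delta if not any(f < g for g in delta)]

def greedy_collapse(delta):
    """Repeatedly perform elementary collapses (Definition 5.1(2)) while possible.
    Faces are nonempty frozensets; the empty face is omitted by convention."""
    delta = set(delta)
    changed = True
    while changed:
        changed = False
        for sigma in sorted(delta, key=len):
            containing = [f for f in facets(delta) if sigma <= f]
            if len(containing) == 1 and sigma != containing[0]:
                # sigma is a free face: remove every face containing it
                delta -= {w for w in delta if sigma <= w}
                changed = True
                break
    return delta

# The hollow triangle (boundary of a 2-simplex) has no free face,
# but filling it in gives a collapsible complex.
hollow = {frozenset(s) for s in [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]}
filled = hollow | {frozenset({1, 2, 3})}
print(len(greedy_collapse(filled)))   # 1  (a single vertex: collapsible)
print(len(greedy_collapse(hollow)))   # 6  (nothing collapses)
```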

Example 5.3 (A d-collapse).

The following is an elementary d-collapse defined by the indicated face (figure omitted):

Example 5.4 (A collapse).

The following is a collapse to a point (figure omitted). It arises from three successive elementary collapses, each induced by a face indicated in the figure.

Comparing Examples 5.3 and 5.4, note that the homotopy type was not preserved throughout the d-collapse, but was preserved through the collapse. This is explained in the next two results, the first of which is well known (see, e.g., [2, Ch. 1.2] or [1]).

Lemma 5.5.

Elementary collapses preserve homotopy type. Thus, collapsible implies contractible.

However, not every contractible simplicial complex is collapsible. One example is any triangulation of the 2-dimensional topological space known as Bing’s house with two rooms [20, Ex. 1.6.1]. Another example is the dunce hat [30].

Lemma 5.5 extends as follows:

Lemma 5.6.

Let Δ be a contractible simplicial complex, and let Δ ↘ Δ′ be an elementary d-collapse induced by a face σ. Assume that Δ′ contains a nonempty face. Then Δ′ is contractible if and only if σ is not a facet (i.e., Δ ↘ Δ′ is an elementary collapse).

Proof.

The backward direction is immediate from Lemma 5.5. We prove the contrapositive of the forward direction. Suppose that σ is a facet of Δ. Then |Δ| = |Δ′| ∪ |σ|, where |σ| denotes the topological realization of σ as a facet of Δ. With an eye toward using the Mayer–Vietoris sequence, we note that the intersection |Δ′| ∩ |σ| is the boundary of the simplex |σ| and thus is a sphere and has a non-vanishing homology group. Applying the Mayer–Vietoris sequence for homology to |Δ| = |Δ′| ∪ |σ|, and using the hypothesis that Δ is contractible, we conclude that |Δ′| has a non-vanishing homology group, and therefore is not contractible. ∎

Finally, we state the following result of Wegner [26]; see also Tancer’s description in [22, §2].

Lemma 5.7 (Wegner [26]).

Let U = {U_1, …, U_n} be a collection of nonempty convex (not necessarily open) sets in R^d. Let Δ be the nerve of U. Then there exists an open halfspace H in R^d such that, letting Δ′ denote the nerve of the collection {U_i ∩ H}_{i ∈ [n]}, the following is an elementary d-collapse: Δ ↘ Δ′.

Informally speaking, Wegner proved Lemma 5.7 by sweeping a hyperplane from infinity across R^d, deleting everything in its path, until an intersection region corresponding to a facet of Δ has been removed. This yields the elementary d-collapse, induced by some face contained in that facet.

Remark 5.8.

Recent work of Itskov, Kunin, and Rosen is similar in spirit to ours: they show that the collapsibility of a certain simplicial complex associated to a code (the “polar complex”) ensures that the code avoids certain obstructions to being a certain type of convex code (namely, a code arising from a “non-degenerate” hyperplane arrangement) [11, §6.5].

5.2. A key lemma

This subsection contains the key result (Lemma 5.9) that allows us to establish, via Theorem 5.10, our new local obstruction. Lemma 5.9 states that for an open cover by convex sets of a set that is itself convex, the corresponding nerve is collapsible. The original local obstruction (Proposition 2.15) relied on a weaker version of this result, which states only that such a nerve is contractible. Accordingly, the way we use Lemma 5.9 to prove Theorem 5.10 is analogous to how the authors of [8, 4] used the “contractible” version of the lemma to establish the original notion of local obstruction.

Lemma 5.9.

Let U = {U_1, …, U_n} be a collection of convex open sets in ℝ^d such that their union U_1 ∪ ⋯ ∪ U_n is nonempty and convex. Then the nerve of U is collapsible.

Proof.

Let Δ denote the nerve of U = {U_1, …, U_n}. Let f denote the number of nonempty faces of Δ. Note that f ≥ 1, as the union of the U_i's is nonempty. We proceed by induction on f.

Base case: f = 1. Then Δ is a point, and thus is collapsible.

Inductive step: f ≥ 2. Assume that the lemma is true for all nerves with fewer than f nonempty faces. First consider the case when Δ has only one facet. Then Δ is a simplex, and every simplex is collapsible.

Now consider the remaining case, when Δ has at least two facets. Without loss of generality, each U_i is nonempty (deleting U_i's that are empty affects neither the union nor the nerve). So, by Lemma 5.7, there exists an open halfspace H such that Δ ↘ Δ′ is an elementary d-collapse, where Δ′ is the nerve of {U_1 ∩ H, …, U_n ∩ H}. We see that {U_1 ∩ H, …, U_n ∩ H} is a collection of convex open sets whose union, (U_1 ∪ ⋯ ∪ U_n) ∩ H, is convex (because both U_1 ∪ ⋯ ∪ U_n and H are convex) and nonempty (indeed, the nerve Δ′ is nonempty, because Δ has at least two facets and so at least one was unaffected by the elementary d-collapse Δ ↘ Δ′). Thus, by the induction hypothesis, Δ′ is collapsible.

Thus, by the definition of collapsible, we need only show that the elementary d-collapse Δ ↘ Δ′ was in fact an elementary collapse. To see this, note that the nerve theorem (Proposition 2.13) implies that the nerve Δ is homotopy equivalent to U_1 ∪ ⋯ ∪ U_n, which we saw above is convex (and nonempty) and thus contractible; also, Δ′ is contractible, being collapsible. Hence, by Lemma 5.6, Δ ↘ Δ′ is an elementary collapse. This completes the proof. ∎
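The inductive argument above suggests a simple computational test. A minimal Python sketch (function names ours): greedily remove free pairs; reaching a single vertex certifies collapsibility, whereas getting stuck proves nothing, since greedy collapsing can dead-end on a collapsible complex, consistent with the NP-completeness of the decision problem (Lemma 5.17).

```python
from itertools import combinations

def downward_closure(facets):
    """All nonempty faces spanned by the given facets."""
    faces = set()
    for f in facets:
        f = frozenset(f)
        for k in range(1, len(f) + 1):
            faces.update(map(frozenset, combinations(f, k)))
    return faces

def greedy_collapse(faces):
    """Repeatedly remove some free pair (s, t); return what remains."""
    faces = set(faces)
    progress = True
    while progress:
        progress = False
        for s in sorted(faces, key=len):       # try small faces first
            cofaces = [t for t in faces if s < t]
            if len(cofaces) == 1:              # s is a free face
                faces -= {s, cofaces[0]}
                progress = True
                break
    return faces

def is_collapsible_greedy(facets):
    """True if greedy collapsing reaches a single vertex.
    A positive answer is a certificate of collapsibility; a negative
    answer is inconclusive in general."""
    return len(greedy_collapse(downward_closure(facets))) == 1
```

For example, a full triangle collapses to a point, while the hollow triangle (a circle) has no free face at all and the procedure halts immediately.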

5.3. Locally great codes

The following result gives a new class of local obstructions.

Theorem 5.10.

Let C be a convex code. Then for every nonempty σ ∈ Δ(C) ∖ C, the link Lk_σ(Δ(C)) is collapsible.

Proof.

Let U_1, …, U_n be convex open sets (in some stimulus space X ⊆ ℝ^d) that realize C. Let σ ∈ Δ(C) ∖ C be nonempty, and set U_σ := ∩_{i∈σ} U_i. Then U_σ ⊆ ∪_{j∉σ} U_j (otherwise σ would be a codeword of C). Thus, {U_j ∩ U_σ : j ∉ σ} is a collection of convex open sets such that their union equals U_σ (and thus this union is nonempty and convex). So, by Lemma 5.9, the nerve of this collection is collapsible. It is straightforward to check that this nerve equals Lk_σ(Δ(C)). So, as desired, the link is collapsible. ∎

Definition 5.11.

A code C has a local obstruction of the second kind if there exists a nonempty σ ∈ Δ(C) ∖ C such that the link Lk_σ(Δ(C)) is not collapsible. If C has no local obstructions of the second kind, then C is locally great.
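The objects in this definition are easy to compute directly. A minimal Python sketch (the representation and function names are ours, not the paper's): build Δ(C) as the downward closure of the codewords, list the missing nonempty faces, and form each link via Lk_σ(Δ) = {τ ∈ Δ : τ ∩ σ = ∅ and τ ∪ σ ∈ Δ}.

```python
from itertools import combinations

def complex_of_code(code):
    """Delta(C): all nonempty subsets of the codewords of C."""
    faces = set()
    for c in code:
        c = frozenset(c)
        for k in range(1, len(c) + 1):
            faces.update(map(frozenset, combinations(c, k)))
    return faces

def missing_codewords(code):
    """Nonempty faces of Delta(C) that are not codewords of C."""
    codewords = {frozenset(c) for c in code}
    return {s for s in complex_of_code(code) if s not in codewords}

def link(faces, sigma):
    """Lk_sigma(Delta) = { tau : tau ∩ sigma = ∅ and tau ∪ sigma ∈ Delta }."""
    sigma = frozenset(sigma)
    return {t for t in faces if not (t & sigma) and (t | sigma) in faces}
```

For instance, for the code with codewords {1,2} and {2,3}, the missing faces are the three vertices, and the link of {2} consists of the two disjoint points {1} and {3}.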

The next result follows from Theorem 5.10 and the fact that collapsible implies contractible (Lemma 5.5).

Corollary 5.12.

C is convex ⟹ C is locally great ⟹ C is locally good.

Neither implication in Corollary 5.12 is an equivalence, as we see in the following examples.

Example 5.13 (Locally great does not imply convex).

The counterexample code C from Proposition 2.16 is non-convex [15], but, we claim, locally great. To verify this, we must check that for every missing codeword σ ∈ Δ(C) ∖ C, the corresponding link Lk_σ(Δ(C)) is collapsible. We accomplish this as follows:

  • For some missing codewords, the link is a point and thus collapsible.

  • For others, the link is a single edge and thus collapsible (cf. Example 5.4).

  • For each remaining missing codeword, the link is, up to relabeling, the simplicial complex in Example 5.4, which we showed is collapsible.

Example 5.14 (Locally good does not imply locally great).

Let B be a triangulation of Bing’s house, so B is a 2-dimensional simplicial complex that is contractible but not collapsible [20, Ex. 1.6.1]. Let Δ be the cone over B with apex a new vertex v, and consider the code C consisting of all nonempty faces of Δ except the vertex {v}.

We claim that C is locally good, but not locally great. Indeed, the only “missing” codeword of C is σ = {v}, and its link Lk_σ(Δ) is B, which is contractible but not collapsible. Thus, C is locally good (by Proposition 2.18), but not locally great (by definition).

We now consider the question of whether there might be an even stronger local obstruction beyond “locally great” (perhaps “locally excellent”?). That is, can the conclusion of Lemma 5.9 be strengthened from “collapsible” to some more restrictive property of simplicial complexes? Or, on the contrary, does the converse of Lemma 5.9 hold? Accordingly, we pose the following question:

Question 5.15.

Is every collapsible simplicial complex the nerve of a convex open cover of some convex set?

An affirmative answer to Question 5.15 would mean that there is no notion of “locally excellent”.

Remark 5.16.

Some decision problems related to Question 5.15 have been resolved. The problem of deciding whether a simplicial complex is the nerve of a convex open cover of some (not necessarily convex) subset of ℝ^d is decidable [25, 22]. On the other hand, the problem of deciding whether a simplicial complex is the nerve of a good cover of some subset of ℝ^d, when d ≥ 5, is undecidable [24].

5.4. The locally-great decision problem

In this subsection, we prove that the locally-great decision problem is decidable but NP-hard. This result relies on the following lemma.

Lemma 5.17 (Tancer [23]).

The problem of deciding whether a given simplicial complex is collapsible is NP-complete.

Theorem 5.18 (Decidability and hardness of the locally-great decision problem).

The problem of deciding whether a code is locally great is decidable but NP-hard.

Proof.

By Definition 5.11 and Lemma 5.17 (NP-completeness in particular implies decidability), the following steps form an algorithm that decides whether a code C is locally great: enumerate the nonempty faces σ ∈ Δ(C) ∖ C and the corresponding links, and then check whether each of these links is collapsible.

To show NP-hardness, it suffices to reduce (in polynomial time) the problem of deciding whether a simplicial complex is collapsible (which is NP-complete by Lemma 5.17) to the locally-great decision problem. To this end, we proceed as in the proof of Theorem 4.3: given any simplicial complex Δ, consider the cone over Δ with apex a new vertex v. This cone is itself a simplicial complex, which we denote by Δ̂. Let C be the code consisting of all nonempty faces of Δ̂ except {v}. Then σ = {v} is the only codeword in Δ(C) = Δ̂ that is missing from C, and its link Lk_σ(Δ̂) is Δ. So, by Definition 5.11, the original simplicial complex Δ is collapsible if and only if the code C is locally great. ∎
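The cone construction in this reduction can be sketched directly (the representation and names are ours): cone a complex over a new apex v, then delete the single face {v}, producing a code whose only missing face is {v}.

```python
def cone(faces, apex):
    """Faces of the cone over a complex (a set of frozensets) with a new apex:
    the original faces, the apex itself, and each face joined with the apex."""
    coned = set(faces) | {f | {apex} for f in faces}
    coned.add(frozenset({apex}))
    return coned

def reduction_code(faces, apex):
    """The code C of the reduction: every face of the cone except {apex}."""
    return cone(faces, apex) - {frozenset({apex})}
```

For example, starting from the hollow triangle (6 faces), the cone has 13 faces, so the reduction code has 12 codewords, with only the apex vertex missing.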

6. Discussion

We return to the question that opened this work: Which neural codes are convex? There is a growing literature tackling this question, and here we resolved some foundational problems in this developing theory. In summary, we now know the following:

C is convex ⟹ C is locally great