    # Hyperpaths

Hypertrees are high-dimensional counterparts of graph-theoretic trees. They have attracted a great deal of attention from various investigators. Here we introduce and study hyperpaths – a particular class of hypertrees which are high-dimensional analogs of paths in graph theory. A $d$-dimensional hyperpath is a $d$-dimensional hypertree in which every $(d-1)$-dimensional face is contained in at most $d+1$ faces of dimension $d$. We introduce a possibly infinite family of hyperpaths for every dimension, and investigate its properties in greater depth for dimension $d=2$.


## 1 Introduction

Hypertrees were defined in 1983 by Kalai [kalai1983enumeration]. An $n$-vertex $d$-dimensional hypertree is a $\mathbb{Q}$-acyclic $d$-dimensional simplicial complex with a full $(d-1)$-dimensional skeleton and $\binom{n-1}{d}$ faces of dimension $d$. Note that when $d=1$ this coincides with the usual notion of a tree in graph theory. Also note that a hypertree is completely specified by its list of $d$-dimensional faces. There is already a sizable literature, e.g., [linial2019enumeration, linial2016phase] dealing with hypertrees, but many basic questions in this area are still open. Also, in order to develop an intuition for these complexes it is desirable to have a large supply of different constructions. Many investigations in this area are done with an eye to the one-dimensional situation. Arguably the simplest-to-describe ($1$-dimensional) trees are stars and paths. These two families of trees are also the two extreme examples with respect to various natural graph properties and parameters, such as the tree's diameter. Hyperstars are very easy to describe in any dimension $d$. Namely, we pick a vertex $v$ and put a $d$-face $\sigma$ in the complex iff $v\in\sigma$. On the other hand, it is much less obvious how to define $d$-dimensional paths. A one-dimensional path is a tree in which every vertex has degree at most $2$. Working by analogy we can define a $d$-dimensional hyperpath as a $d$-dimensional hypertree in which every $(d-1)$-dimensional face is contained in no more than $d+1$ faces of dimension $d$. We include a summary of the main results presented in this paper:

1. We introduce an infinite family of $d$-dimensional, algebraically defined simplicial complexes. In dimension $d=2$ we analyzed fairly large such complexes, most of which turned out to be $2$-dimensional hyperpaths. To this end we devised a new fast algorithm that determines whether a matrix with circulant blocks is invertible.

2. Negative results: We showed that infinitely many of the $2$-dimensional complexes discussed in Item 1 are not $\mathbb{Q}$-acyclic.

3. We develop several approaches for proving positive results and finding an infinite family of $2$-dimensional hyperpaths.

###### Note 1.1.

The necessary background in simplicial combinatorics and in number theory is introduced in Section 2.

###### Definition 1.2.

Let $\mathbb{F}_n$ be the field of prime order $n$. For $c\in\mathbb{F}_n$ and an integer $d\ge 1$, we define the complex $X_{d,n,c}$ on vertex set $\mathbb{F}_n$. It has a full $(d-1)$-dimensional skeleton, and $\{x_0,x_1,\dots,x_d\}$ is a $d$-face in $X_{d,n,c}$ iff $x_0+x_1+\dots+x_{d-1}+c\cdot x_d=0$ for some ordering of its vertices.¹

¹ Throughout this paper, unless stated otherwise, given a prime $n$, all arithmetic equations are mod $n$, and we often replace the congruence relation by an equality sign when no confusion is possible.

Ours is by no means the only sensible definition of a hyperpath. An alternative approach is described in [mathew2015boundaries]. That paper starts from the observation that a ($1$-dimensional) path is characterised as a tree that can be made a spanning cycle by adding a single edge. In this view they define a Hamiltonian $d$-cycle as a simple $d$-cycle of size $\binom{n-1}{d}+1$. A Hamiltonian $d$-dimensional hyperpath is defined as a $d$-dimensional hypertree which can be made a Hamiltonian $d$-cycle by adding a single $d$-face. Other possibilities suggest themselves. For example, when an edge is added to a tree a single cycle is created. One may wonder how the length of this cycle is distributed when the added edge is chosen randomly. A path is characterized as the tree for which the average of this length is maximized. Similar notions clearly make sense also for $d\ge 2$. These various definitions coincide when $d=1$ but disagree for $d\ge 2$. It would be interesting to understand the relations between these different definitions. Note that sum complexes [linial2010sum] as well as certain hypertrees from [linial2019enumeration] are hyperpaths according to our definition.

### Running Example

We repeatedly return throughout the paper to a fixed example of the form $X_{2,n,c}$. As the next claim shows, the number of $2$-faces in such a complex is $\binom{n-1}{2}$ (out of the $\binom{n}{3}$ possible $2$-faces). Given the vertex set $\{x_0,x_1,x_2\}$ of a $2$-face as in Definition 1.2, and if $c\neq 1$, the coordinate that is multiplied by $c$ is uniquely defined. For if $x_0+x_1+c\cdot x_2=0$ and $x_0+x_2+c\cdot x_1=0$, then $(c-1)\cdot(x_2-x_1)=0$. This is impossible, since we are assuming that all the $x_i$ are distinct, and $c\neq 1$.

###### Claim 1.3.

For an integer $d\ge 1$, a prime $n$ and $c\in\mathbb{F}_n$ with $c\neq -d$, if $c\neq 1$ then $X_{d,n,c}$ has exactly $\binom{n-1}{d}$ $d$-faces. If $c=1$ then $X_{d,n,1}$ has exactly $\frac{1}{d+1}\binom{n-1}{d}$ $d$-faces.

###### Proof.

By induction on $d$. Let us start with $d=1$. If $c\neq 0$, then for every $x_0\neq 0$ there is a unique $x_1$ s.t. $x_0+c\cdot x_1=0$. Also, $x_1\neq x_0$, since by assumption $c\neq -1$. This yields $n-1$ edges, unless $c=1$, in which case every edge is counted twice, with a total of $\frac{n-1}{2}$ different edges. When $c=0$, the complex has $n-1$ edges, namely $\{0,x_1\}$ for all $x_1\neq 0$. We proceed to deal with $d\ge 2$:

• If $c=0$, then $\{x_0,\dots,x_d\}$ is a $d$-face iff $\sum_{i=0}^{d-1}x_i=0$. By the induction hypothesis with $c=1$ and dimension $d-1$ there are exactly $\frac{1}{d}\binom{n-1}{d-1}$ such different choices of $\{x_0,\dots,x_{d-1}\}$. For every such choice of $\{x_0,\dots,x_{d-1}\}$ there are $n-d$ choices for $x_d$, namely, any value not in $\{x_0,\dots,x_{d-1}\}$, yielding a total of

$$\frac{1}{d}\binom{n-1}{d-1}\cdot(n-d)=\binom{n-1}{d}$$

distinct $d$-faces.

• If $c\neq 0$, for each of the $\binom{n}{d}$ $(d-1)$-faces $\{x_0,\dots,x_{d-1}\}$ there is a unique $x_d$ satisfying $\sum_{i=0}^{d-1}x_i+c\cdot x_d=0$. This gives a face, unless $x_d\in\{x_0,\dots,x_{d-1}\}$. By reordering the $x_i$ if necessary $x_d=x_{d-1}$, which yields

$$\sum_{i=0}^{d-2}x_i+(c+1)\cdot x_{d-1}=0$$

Since $c\neq 0$, we know that $c+1\neq 1$ and we can apply the induction hypothesis for dimension $d-1$ (with parameter $c+1$) to obtain $\binom{n-1}{d-1}$ such different choices of $\{x_0,\dots,x_{d-1}\}$. All told there are

$$\binom{n}{d}-\binom{n-1}{d-1}=\binom{n-1}{d}$$

distinct $d$-faces in $X_{d,n,c}$. If $c=1$ then $x_d$ has no special role and we over-count by a multiple of $d+1$, hence we get only $\frac{1}{d+1}\binom{n-1}{d}$ distinct $d$-faces. ∎

Henceforth, to simplify matters, we assume $c\neq 0,1$. In the $1$-dimensional case the resulting graph has edge set $\{\{x,-x/c\}:x\in\mathbb{F}_n^*\}$, and consequently it is the union of cycles of length $o(-1/c)$. We will later see that this order plays a crucial role in determining whether $X_{d,n,c}$ is a hypertree. We note that if $X_{d,n,c}$ is a hypertree, then it is a hyperpath, since every $(d-1)$-face in $X_{d,n,c}$ is contained in at most $d+1$ of its $d$-faces. Indeed, let $y$ be the vertex that is added to the $(d-1)$-face $\{x_0,\dots,x_{d-1}\}$ to form a $d$-face. Then either $\sum_{i=0}^{d-1}x_i+c\cdot y=0$ or there is an index $j$ such that $y+\sum_{i\neq j}x_i+c\cdot x_j=0$; each of these $d+1$ equations determines $y$ uniquely.
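The face count of Claim 1.3 and the degree bound above are easy to check by brute force. Below is a minimal sketch for $d=2$, assuming the defining relation of Definition 1.2, namely that $\{x,y,z\}$ is a $2$-face iff $x+y+c\cdot z\equiv 0\pmod n$ for some ordering of the three vertices; the helper names are ours.

```python
from collections import Counter
from itertools import combinations, permutations
from math import comb

def faces(n, c):
    """2-faces of X_{2,n,c}: subsets {x,y,z} admitting an ordering with x + y + c*z = 0 (mod n)."""
    return [f for f in combinations(range(n), 3)
            if any((p[0] + p[1] + c * p[2]) % n == 0 for p in permutations(f))]

n, c = 11, 3                      # a sample prime and a legal coefficient (c not in {0, 1, n-2})
F = faces(n, c)
assert len(F) == comb(n - 1, 2)   # Claim 1.3 for c != 1

# every edge lies in at most d+1 = 3 of the 2-faces, as required of a hyperpath
degree = Counter(e for f in F for e in combinations(f, 2))
assert max(degree.values()) <= 3
```

For instance, $n=11$, $c=3$ gives $\binom{10}{2}=45$ faces, each edge lying in at most three of them.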

### Running Example

In our running example every edge is contained in at most three $2$-faces, with the convention that the last vertex of an oriented face is the one multiplied by $c$. Some edges are contained in exactly three faces, while others lie in only two: for certain edges one of the three candidate equations yields a degenerate non-face, in which the added vertex coincides with a vertex of the edge. Since we focus mostly on the $2$-dimensional case, we use the shorthand $X=X_{2,n,c}$. The boundary operator of $X$ is given by an $\binom{n}{2}\times\binom{n-1}{2}$ matrix which we denote by $A=A_{n,c}$. Clearly $X$ is a hypertree iff $A$ has full column rank, and indeed our main technical question is:

Figure 1: Data on $X=X_{2,n,c}$ for all primes $11\le n\le 59$. A yellow entry means that $X$ is a hypertree. White entries show the (positive) co-dimension of the column space of $A=A_{n,c}$. Red indicates an illegal $c$, i.e., $c\equiv -2$ or $c\ge n$.
###### Question 1.4.

For which primes $n$ and which $c\in\mathbb{F}_n$ is $X_{2,n,c}$ a hypertree?

Figure 1 shows the answer to Question 1.4 for all primes $11\le n\le 59$ and each appropriate $c$. Figure 3 shows the fraction of eligible $c$ for which $X_{2,n,c}$ is a hypertree, for each prime in the range we examined. The paper is structured as follows: in Section 2 we discuss some preliminary facts and outline the necessary background in number theory and simplicial combinatorics. It turns out that the problem whether the complex $X_{2,n,c}$ is acyclic reduces to the question whether a certain matrix with a special structure is invertible. We study this special structure in Section 3, explain the reduction and give a new fast algorithm that determines if a matrix of this kind is invertible. In Section 4 we further investigate this reduction. This allows us to exhibit in Section 5 an infinite family of non-acyclic $2$-dimensional complexes. We conjecture that a certain simple criterion asymptotically determines if a complex is acyclic or not. Section 6 is devoted to another approach in search of an infinite family of $2$-dimensional hyperpaths.

## 2 Preliminaries

Many matrices are defined throughout this paper. They are marked by hyperlinks that can return the reader to their definitions. In Appendix A we collect the basic properties of these matrices and their mutual relations.

### 2.1 Some relevant number theory

For a prime $n$, we denote by $\mathbb{F}_n$ the field with $n$ elements. Addition and multiplication are done mod $n$. The multiplicative group of $\mathbb{F}_n$ is comprised of the set $\mathbb{F}_n^*=\mathbb{F}_n\setminus\{0\}$. It is a cyclic group, isomorphic to $\mathbb{Z}_{n-1}$. The order $o(x)$ of $x\in\mathbb{F}_n^*$ is the smallest positive integer s.t. $x^{o(x)}=1$. The following easy lemma gives the orders of $x$'s powers:

###### Lemma 2.1.

If $n$ is prime and $x\in\mathbb{F}_n^*$, then for every integer $j$

$$o(x^j)=\frac{o(x)}{\gcd(j,o(x))}$$
###### Proof.

By definition, $l:=o(x^j)$ is the smallest positive integer s.t. $x^{j\cdot l}=1$. The exponent $j\cdot l$ must be divisible by $o(x)$, and is therefore the least common multiple of $j$ and $o(x)$. Consequently

$$j\cdot l=\operatorname{lcm}(j,o(x))=\frac{j\cdot o(x)}{\gcd(j,o(x))}$$

as claimed. ∎

Recall Euler's totient function $\varphi$. Namely, $\varphi(m)$ is the number of integers in $\{1,\dots,m\}$ that are co-prime with $m$. It is also the order of the multiplicative group mod $m$. Clearly, $x\in\mathbb{F}_n^*$ is a generator of $\mathbb{F}_n^*$ iff $o(x)=n-1$. By the above comments, $\mathbb{F}_n^*$ has exactly $\varphi(n-1)$ generators. We write logarithms w.r.t. some fixed generator $\lambda$ of $\mathbb{F}_n^*$. I.e., $\log x$ is the unique $0\le k\le n-2$ for which $x=\lambda^k$.
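These facts are easy to verify computationally. A minimal sketch (the helper names `order` and `phi` are ours):

```python
from math import gcd

def order(x, n):
    """Multiplicative order of x in F_n^* (n prime, x not divisible by n)."""
    k, y = 1, x % n
    while y != 1:
        y = y * x % n
        k += 1
    return k

def phi(m):
    """Euler's totient function."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

n = 13
for x in range(1, n):
    for j in range(1, 3 * (n - 1)):
        # Lemma 2.1: o(x^j) = o(x) / gcd(j, o(x))
        assert order(pow(x, j, n), n) == order(x, n) // gcd(j, order(x, n))

# F_n^* is cyclic of order n-1, hence has exactly phi(n-1) generators
assert sum(1 for x in range(1, n) if order(x, n) == n - 1) == phi(n - 1)
```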

### 2.2 Background on simplicial combinatorics

We follow the setup in Chapter 2 of [linial2019extremal]. All simplicial complexes considered here have vertex set $V=[n]$. A simplicial complex $X$ is a collection of subsets of $V$ that is closed under taking subsets. Namely, if $A\in X$ and $B\subseteq A$, then $B\in X$ as well. Members of $X$ are called faces or simplices. The dimension of the simplex $A$ is defined as $|A|-1$. A $d$-dimensional simplex is also called a $d$-simplex or a $d$-face for short. The dimension of $X$ is defined as the maximum of $\dim(A)$ over all faces $A\in X$. The size of a $d$-complex is the number of its $d$-faces. The collection of the faces of dimension at most $r$ of $X$, where $r<\dim X$, is called the $r$-skeleton of $X$. We say that a $d$-complex has a full skeleton if its $(d-1)$-skeleton contains all the faces of dimension at most $d-1$ spanned by its vertex set. The permutations on the vertices of a face $\sigma$ are split in two orientations of $\sigma$, according to the permutation's sign. The boundary operator $\partial_d$ maps an oriented $d$-simplex $\sigma=(v_0,\dots,v_d)$ to the formal sum

$$\partial_d(\sigma)=\sum_{i=0}^{d}(-1)^i(\sigma\setminus v_i)$$

where $\sigma\setminus v_i$ is the oriented $(d-1)$-simplex obtained by deleting the vertex $v_i$. We linearly extend the boundary operator to free sums of simplices with rational coefficients. We consider the matrix form of $\partial_d$ by choosing arbitrary orientations for $(d-1)$-simplices and $d$-simplices. Note that changing the orientation of a $d$-simplex (resp. $(d-1)$-simplex) results in multiplying the corresponding column (resp. row) by $-1$. Thus the $d$-boundary of a weighted sum of $d$-simplices, viewed as a vector (of weights), is just the corresponding matrix-vector product. A simple observation shows that the matrix $\partial_d$ has rank $\binom{n-1}{d}$. We denote by $\partial_X$ the submatrix of $\partial_d$ restricted to the columns associated with the $d$-faces of a $d$-complex $X$. The rational $d$-homology of $X$ is the right kernel of the matrix $\partial_X$. Elements of this kernel are called $d$-cycles. A $d$-hypertree over $[n]$ is a $d$-complex of size $\binom{n-1}{d}$ with a trivial rational $d$-dimensional homology. This means that the columns of the matrix $\partial_X$ form a basis for the column space of $\partial_d$.
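As a sanity check, one can build $\partial_2$ explicitly and confirm that its rank is $\binom{n-1}{2}$; a small sketch (the function name is ours):

```python
import numpy as np
from itertools import combinations
from math import comb

def boundary_2(n):
    """Matrix of the boundary operator: rows = 1-faces, columns = 2-faces of the full complex on [n]."""
    edges = list(combinations(range(n), 2))
    row = {e: i for i, e in enumerate(edges)}
    tris = list(combinations(range(n), 3))
    D = np.zeros((len(edges), len(tris)))
    for j, (u, v, w) in enumerate(tris):
        # boundary of the oriented face (u,v,w) is (v,w) - (u,w) + (u,v)
        D[row[(v, w)], j] += 1
        D[row[(u, w)], j] -= 1
        D[row[(u, v)], j] += 1
    return D

n = 7
D = boundary_2(n)
assert D.shape == (comb(n, 2), comb(n, 3))
assert np.linalg.matrix_rank(D) == comb(n - 1, 2)   # rank of the boundary matrix is C(n-1, d)
```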

## 3 Matrices with Circulant Blocks (MCB)

It turns out that $A_{n,c}$ is closely related to a block matrix whose blocks are circulant matrices. So, we start our work on Question 1.4 by deriving a structure theorem for such matrices. This is what we do in the present section. Recall that a circulant matrix has the following form

$$C=\begin{pmatrix}c_0&c_{r-1}&\cdots&c_1\\ c_1&c_0&\cdots&c_2\\ \vdots&\ddots&\ddots&\vdots\\ c_{r-1}&c_{r-2}&\cdots&c_0\end{pmatrix}$$

Equivalently,

$$C=g(P)=c_0\cdot I+c_1\cdot P+c_2\cdot P^2+\dots+c_{r-1}\cdot P^{r-1}\tag{3.1}$$

where $P$ is the $r\times r$ cyclic permutation matrix

$$P=\begin{pmatrix}0&0&\cdots&0&1\\ 1&0&\cdots&0&0\\ 0&1&\ddots&&\vdots\\ \vdots&&\ddots&0&0\\ 0&0&\cdots&1&0\end{pmatrix}\tag{3.2}$$

Given positive integers $t,r$, we denote by $\mathrm{MCB}_{t,r}$ (for Matrices with Circulant Blocks) the set of all $t\cdot r\times t\cdot r$ matrices of the form

$$E=\begin{pmatrix}C_{0,0}&C_{0,1}&\cdots&C_{0,t-1}\\ C_{1,0}&C_{1,1}&\cdots&C_{1,t-1}\\ \vdots&&\ddots&\vdots\\ C_{t-1,0}&\cdots&&C_{t-1,t-1}\end{pmatrix}\tag{3.3}$$

where each $C_{i,j}$ is an $r\times r$ circulant matrix. When the field of the entries is omitted, the matrices are over $\mathbb{Q}$. This is not to be confused with the well-studied class of Circulant Block Matrices (CBM) [davis2013circulant]. Such a matrix is circulant as a block matrix, but its blocks need not be circulants. We can clearly express $E=E(P)$ as follows
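In code, the correspondence between a circulant block and the polynomial $g$ of Equation (3.1) is direct; a small sketch (helper names ours):

```python
import numpy as np

def cyclic_P(r):
    """The r x r cyclic permutation matrix P of Equation (3.2)."""
    P = np.zeros((r, r))
    for i in range(r):
        P[(i + 1) % r, i] = 1
    return P

def circulant(col):
    """C = g(P) = c0*I + c1*P + ... + c_{r-1}*P^{r-1}, as in Equation (3.1)."""
    P = cyclic_P(len(col))
    return sum(c * np.linalg.matrix_power(P, k) for k, c in enumerate(col))

C = circulant([1, 2, 3, 4])
assert C[:, 0].tolist() == [1.0, 2.0, 3.0, 4.0]   # the first column is the coefficient vector
# every diagonal is constant and wraps around cyclically
assert all(C[(i + 1) % 4, (j + 1) % 4] == C[i, j] for i in range(4) for j in range(4))
```

A matrix in $\mathrm{MCB}_{t,r}$ is then assembled from $t^2$ such blocks, e.g., with `np.block`.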

$$E(P)=\begin{pmatrix}g_{0,0}(P)&g_{0,1}(P)&\cdots&g_{0,t-1}(P)\\ g_{1,0}(P)&g_{1,1}(P)&\cdots&g_{1,t-1}(P)\\ \vdots&&\ddots&\vdots\\ g_{t-1,0}(P)&\cdots&&g_{t-1,t-1}(P)\end{pmatrix}\tag{3.4}$$

where the $g_{i,j}$ are polynomials of degree less than $r$, as in Equation (3.1). Since $P^r=I$, we can view the $g_{i,j}$ as elements of the quotient polynomial ring

$$R:=\mathbb{Q}[P]/(P^r-1)\tag{3.5}$$

Likewise, we think of $E(P)$ as a member of the matrix ring $R^{t\times t}$. Associated with every $E\in\mathrm{MCB}_{t,r}$ and $z\in\mathbb{C}$ is a scalar complex matrix $\underline{E}(z)$, viz.,

$$\underline{E}(z)=\begin{pmatrix}g_{0,0}(z)&g_{0,1}(z)&\cdots&g_{0,t-1}(z)\\ g_{1,0}(z)&g_{1,1}(z)&\cdots&g_{1,t-1}(z)\\ \vdots&&\ddots&\vdots\\ g_{t-1,0}(z)&\cdots&&g_{t-1,t-1}(z)\end{pmatrix}\tag{3.6}$$
###### Theorem 3.1.

A matrix $E\in\mathrm{MCB}_{t,r}$ is singular iff $\underline{E}(\omega_k)$ is singular for some divisor $k$ of $r$. Here $\omega_k=e^{2\pi i/k}$ is the primitive $k$-th root of unity.

###### Proof.

The proof of Theorem 3.1 uses the next claim:

###### Claim 3.2.

Every $E\in\mathrm{MCB}_{t,r}$ is similar to a block diagonal matrix with $t\times t$ blocks, i.e.,

$$X\cdot E\cdot X^{-1}=\begin{pmatrix}\underline{E}(\omega_r^r)&0&\cdots&0\\ 0&\underline{E}(\omega_r^{r-1})&&\vdots\\ \vdots&&\ddots&0\\ 0&\cdots&0&\underline{E}(\omega_r^1)\end{pmatrix}$$

for some invertible matrix $X$.

###### Proof.

We recall the order-$r$ Discrete Fourier Transform (DFT) matrix $F_r$, whose entries are $F_r[i,j]=\omega_r^{-ij}$, where $\omega_r=e^{2\pi i/r}$ is a primitive $r$-th root of unity. It diagonalizes circulant matrices as follows:

###### Lemma 3.3.

Let $F_r$ be the order-$r$ DFT matrix. If $C$ is an $r\times r$ circulant matrix, then

$$F_r\cdot C\cdot F_r^{-1}=\Lambda$$

where $\Lambda$ is the diagonal matrix whose entries are $C$'s eigenvalues,

$$\lambda_j=c_0+c_{r-1}\omega_r^{j}+c_{r-2}\omega_r^{2j}+\dots+c_1\omega_r^{(r-1)j}\quad\text{for}\quad 0\le j\le r-1.$$

Here $(c_0,c_1,\dots,c_{r-1})^{T}$ is $C$'s first column.

###### Proof.

The vector $v_j$ with entries $v_j[i]=\omega_r^{ij}$ is an eigenvector of $C$ with corresponding eigenvalue $\lambda_j$, as

$$(C\cdot v_j)[i]=c_i+c_{i-1}\omega_r^{j}+c_{i-2}\omega_r^{2j}+\dots+c_0\omega_r^{ij}+c_{r-1}\omega_r^{(i+1)j}+\dots+c_{i+1}\omega_r^{(r-1)j}=\big(c_0+c_{r-1}\omega_r^{j}+c_{r-2}\omega_r^{2j}+\dots+c_1\omega_r^{(r-1)j}\big)\cdot\omega_r^{ij}=\lambda_j\,\omega_r^{ij}$$

The claim follows since the $v_j$ are, up to the scalar $\frac{1}{r}$, the columns of $F_r^{-1}$. ∎

Set $L$ as the $t\cdot r\times t\cdot r$ block diagonal matrix with copies of $F_r$ on the diagonal. Since $E\in\mathrm{MCB}_{t,r}$, Lemma 3.3 yields

$$L\cdot E\cdot L^{-1}=\begin{pmatrix}\Lambda_{0,0}&\Lambda_{0,1}&\cdots&\Lambda_{0,t-1}\\ \Lambda_{1,0}&\Lambda_{1,1}&\cdots&\Lambda_{1,t-1}\\ \vdots&&\ddots&\vdots\\ \Lambda_{t-1,0}&\cdots&&\Lambda_{t-1,t-1}\end{pmatrix}$$

where $\Lambda_{k,l}=F_r\cdot C_{k,l}\cdot F_r^{-1}$ is a diagonal matrix; here $c_{k,l}$ denotes the first column vector of the circulant matrix $C_{k,l}$. This matrix has $t\cdot r$ rows and columns, which we enumerate from $0$ to $t\cdot r-1$. It is made up of $t^2$ blocks, with $t$ of them at every layer. This suggests that indices in this matrix be written as $\alpha r+\beta$ for some $0\le\alpha<t$ and $0\le\beta<r$, which we interpret as index $\beta$ within block number $\alpha$. We rearrange the matrix to be made up of $r^2$ blocks, with $r$ of them at every layer, with indices of the form $\beta t+\alpha$ where $0\le\alpha<t$ and $0\le\beta<r$. Thus the mapping

$$\varphi:\ \alpha r+\beta\ \mapsto\ \beta t+\alpha\tag{3.7}$$

is a permutation which we apply to the rows and columns of $L\cdot E\cdot L^{-1}$. Since all blocks $\Lambda_{k,l}$ are diagonal, the entry in position $(\alpha_1 r+\beta_1,\,\alpha_2 r+\beta_2)$ is nonzero only if $\beta_1=\beta_2$. Following the application of $\varphi$, the matrix becomes a block diagonal matrix with $r$ blocks of size $t\times t$.

$$Q\cdot L\cdot E\cdot L^{-1}\cdot Q^{-1}=\begin{pmatrix}\Delta_0&0&\cdots&0\\ 0&\Delta_1&&\vdots\\ \vdots&&\ddots&0\\ 0&\cdots&0&\Delta_{r-1}\end{pmatrix}$$

where $Q$ is the permutation matrix of $\varphi$. The matrix thus becomes an $r\times r$ block diagonal matrix with blocks of size $t\times t$, and

$$\Delta_i[k,l]=\Lambda_{k,l}[i,i]$$

since the mapping (3.7) sends

$$\text{row number } k\cdot r+i\ \mapsto\ \text{row number } i\cdot t+k,\qquad \text{column number } l\cdot r+i\ \mapsto\ \text{column number } i\cdot t+l$$

To complete the proof of Claim 3.2, setting $X=Q\cdot L$, it only remains to show that $\Delta_i=\underline{E}(\omega_r^{-i})$:

$$\Delta_i[k,l]=\Lambda_{k,l}[i,i]=(F_r\cdot c_{k,l})[i]=\sum_{j=0}^{r-1}F_r[i,j]\cdot c_{k,l}[j]=\sum_{j=0}^{r-1}c_{k,l}[j]\cdot\omega_r^{-ij}=\sum_{j=0}^{r-1}c_{k,l}[j]\cdot\big(\omega_r^{-i}\big)^{j}=g_{k,l}(\omega_r^{-i})$$

∎

For $0\le i\le r-1$ we have $\omega_r^{-i}=\omega_r^{r-i}$, which accounts for the ordering of the blocks in Claim 3.2. Claim 3.2 yields one part of Theorem 3.1. Namely, if $\underline{E}(\omega_k)$ is singular for some divisor $k$ of $r$ (note that every such $\omega_k$ is a power of $\omega_r$), then $E$ is singular.
In order to prove the other direction of Theorem 3.1, we need the following two lemmas. Recall from Equation (3.5) that for a matrix $E$ in $\mathrm{MCB}_{t,r}$, $E(P)$ is a polynomial matrix in $P$ over the quotient polynomial ring $R$. Its determinant $\det(E(P))$ is a polynomial in $P$, and we denote by $\det(E(P))(z)$ the evaluation of this polynomial at the complex number $z$.

###### Lemma 3.4.

$\det(E(P))(\omega_r^j)=\det(\underline{E}(\omega_r^j))$ for every $0\le j\le r-1$.

###### Proof.
$$\det(E(P))(\omega_r^j)=\Big(\sum_{\sigma\in S_t}\operatorname{sgn}(\sigma)\prod_{i=0}^{t-1}g_{i,\sigma(i)}(P)\Big)(\omega_r^j)=\sum_{\sigma\in S_t}\operatorname{sgn}(\sigma)\Big(\prod_{i=0}^{t-1}g_{i,\sigma(i)}(P)\Big)(\omega_r^j)\overset{(a)}{=}\sum_{\sigma\in S_t}\operatorname{sgn}(\sigma)\prod_{i=0}^{t-1}g_{i,\sigma(i)}(\omega_r^j)=\det(\underline{E}(\omega_r^j))$$

Equality (a) holds, because

$$P^r=(\omega_r^j)^r=1$$

so evaluation at $\omega_r^j$ respects the relation defining $R$. ∎

The next lemma appears without proof in [rjasanow1994effective]. We provide a proof, since we could not find it in the literature:

###### Lemma 3.5.

The non-singular matrices in $\mathrm{MCB}_{t,r}$ form a group w.r.t. matrix multiplication.

###### Proof.

Clearly $\mathrm{MCB}_{t,r}$ is closed under products, since the product of two circulant matrices is circulant, and matrix multiplication respects the block structure. We only need to show closure under inverse for invertible matrices in $\mathrm{MCB}_{t,r}$. As mentioned above, $\mathrm{MCB}_{t,r}$ and $R^{t\times t}$ are in one-to-one correspondence. An inverse of $E(P)$ as in Equation (3.4) in $R^{t\times t}$ corresponds under this bijection to an inverse of $E$ in $\mathrm{MCB}_{t,r}$, so it remains to prove that if $E$ is invertible, then $E(P)$ has an inverse in $R^{t\times t}$. The determinant of a matrix over a commutative ring is defined as usual as the alternating sum of products over permutations. Such a matrix has an inverse iff its determinant is invertible as an element of the underlying ring. The proof of this fact (see e.g., [mcdonald1984linear]) goes by establishing the Cauchy–Binet formula for matrices over commutative rings. We apply this to the commutative polynomial ring $R$, and conclude that $E(P)$ has an inverse in $R^{t\times t}$ iff its determinant (which is also a polynomial in $P$) is invertible in $R$. To prove the lemma, let $E$ be invertible. Recall the definitions of $E(P)$ in (3.4) and $\underline{E}(z)$ in (3.6). By Claim 3.2 and since $E$ is invertible, we obtain:

$$\forall j:\quad\det(\underline{E}(\omega_r^j))\neq 0$$

Using Lemma 3.4 this translates into

$$\forall j:\quad\det(E(P))(\omega_r^j)\neq 0$$

thus $\det(E(P))$ and $P^r-1$ do not share any root, and

$$\gcd\big(\det(E(P)),\,P^r-1\big)=1$$

so the determinant is invertible in $R$, finishing the proof of the lemma. ∎

With Lemmas 3.4 and 3.5 we can complete the proof of Theorem 3.1. Let $E\in\mathrm{MCB}_{t,r}$ be singular. By Lemma 3.5, $E$ has no inverse in $\mathrm{MCB}_{t,r}$, implying that $E(P)$ has no inverse in $R^{t\times t}$. Consequently, $\det(E(P))$ is not invertible in $R$, and thus $\det(E(P))$ and $P^r-1$ have a non-trivial common divisor. But

$$P^r-1=\prod_{k\mid r}\Psi_k(P)$$

where

$$\Psi_k(P)=\prod_{\substack{1\le l\le k\\ \gcd(l,k)=1}}\big(P-\omega_k^l\big)$$

is the $k$-th cyclotomic polynomial. It is a well-known fact that $\Psi_k$ is irreducible over $\mathbb{Q}$ (e.g., [gauss2006untersuchungen]). Therefore, $\det(E(P))$ and $P^r-1$ have a non-trivial common divisor iff one of the cyclotomic polynomials divides the determinant, i.e., there exists a divisor $k$ of $r$ s.t.

$$\Psi_k(P)\,\big|\,\det(E(P))$$

Since $\Psi_k(\omega_k)=0$, there exists a divisor $k$ of $r$ s.t.

$$\det(E(P))(\omega_k)=0$$

By Lemma 3.4 this implies

$$\det(\underline{E}(\omega_k))=0$$

completing the proof of Theorem 3.1. ∎

### 3.1 Computational Aspects of MCB

Theorem 3.1 has interesting computational aspects. In order to present them, we need some preparations. We recall that $d(n)$ denotes the number of distinct divisors of the integer $n$, and that $d(n)=n^{o(1)}$; more precisely (see [hardy1979introduction])

$$\limsup_{n\to\infty}\frac{\log d(n)}{\log n/\log\log n}=\log 2$$
###### Note 3.6.

It is a classical fact (e.g., [petkovic2009generalized]) that matrix multiplication and matrix inversion have essentially the same computational complexity. It is, however, still unknown if these problems are also equivalent to the decision problem whether a given matrix is invertible (e.g., [Blaser_matrix_survey]).

The smallest exponent for matrix multiplication is commonly denoted by $\omega$. This is the least real number such that two $n\times n$ matrices can be multiplied using $O(n^{\omega+\epsilon})$ arithmetic operations for every $\epsilon>0$. Presently the best known bounds [le2014powers] are $2\le\omega<2.3729$.

###### Proposition 3.7.

For every $\epsilon>0$, it is possible to determine in time

$$O\big(r^{1+\epsilon}\cdot t^2+r^{\epsilon}\cdot t^{\omega}\big)$$

whether a matrix in $\mathrm{MCB}_{t,r}$ is invertible.

###### Note 3.8.

It follows that for large $r$ it is easier to decide the invertibility of matrices in $\mathrm{MCB}_{t,r}$ than of general matrices of the same dimensions, because

$$(r\cdot t)^{\omega}\gg r^{1+\epsilon}\cdot t^2+r^{\epsilon}\cdot t^{\omega}$$

There is an obvious lower bound of $\Omega(r\cdot t^2)$, which is the time it takes to read a matrix in $\mathrm{MCB}_{t,r}$.

###### Proof.

(Proposition 3.7) The proof of Theorem 3.1 yields an algorithm to decide if $E\in\mathrm{MCB}_{t,r}$ is invertible:

1. Produce the matrix $E(P)$ as in Equation (3.4).

2. For each divisor $k$ of $r$:

   1. Calculate the matrix $\underline{E}(\omega_k)$ as in Equation (3.6) by evaluating the polynomial matrix $E(P)$ at $z=\omega_k$.

   2. Determine if the matrix $\underline{E}(\omega_k)$ is invertible. If it is singular, return '$E$ is singular'.

3. If $\underline{E}(\omega_k)$ has full rank for every divisor $k$ of $r$, return '$E$ is invertible'.

A circulant block is clearly completely defined by its first row, so the matrix $E(P)$ can be found in time $O(r\cdot t^2)$. To find all the divisors of $r$ we can even factor $r$ using Eratosthenes' sieve with no harm to the complexity. Step 2 is repeated $d(r)$ times. Horner's rule allows us to evaluate a polynomial of degree less than $r$ with $O(r)$ additions and multiplications, so step 2a takes time $O(r\cdot t^2)$. The running time of step 2b is at most $O(t^{\omega})$. All told the combined running time is

$$O\big(r\cdot t^2+r+d(r)\cdot(r\cdot t^2+t^{\omega})\big)=O\big(r^{1+\epsilon}\cdot t^2+r^{\epsilon}\cdot t^{\omega}\big)$$

for every $\epsilon>0$. ∎
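The algorithm above is short to implement. A numerical sketch (names ours), assuming each block is given by its coefficient vector $c_0,\dots,c_{r-1}$:

```python
import numpy as np

def divisors(r):
    return [k for k in range(1, r + 1) if r % k == 0]

def mcb_invertible(g, t, r, tol=1e-8):
    """Decide invertibility of E in MCB_{t,r}.  g[i][j] holds the coefficients
    c_0, ..., c_{r-1} of g_{i,j} (the first column of the block C_{i,j}).
    Per Theorem 3.1, E is singular iff the t x t matrix [g_{i,j}(omega_k)] is
    singular for some divisor k of r, omega_k a primitive k-th root of unity."""
    g = np.asarray(g, dtype=float)
    for k in divisors(r):
        w = np.exp(2j * np.pi / k)
        Ek = np.array([[np.polyval(g[i][j][::-1], w) for j in range(t)]
                       for i in range(t)])
        if np.linalg.matrix_rank(Ek, tol=tol) < t:
            return False
    return True

# cross-check against a dense rank computation on the full tr x tr matrix
rng = np.random.default_rng(1)
t, r = 2, 6
for _ in range(10):
    g = rng.integers(-2, 3, (t, t, r))
    E = np.block([[sum(c * np.roll(np.eye(r), k, axis=0)
                       for k, c in enumerate(g[i][j])) for j in range(t)]
                  for i in range(t)])
    assert mcb_invertible(g, t, r) == (np.linalg.matrix_rank(E) == t * r)
```

Exact arithmetic over a cyclotomic field would remove the numerical tolerance; the floating-point version is only meant to illustrate the divisor loop.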

In step 2 we need to check whether $\underline{E}(\omega_k)$ is invertible for $d(r)$ different values of $k$. These calculations can clearly be done in parallel. In [tsitsas2007recursive] an iterative algorithm is presented to invert a matrix in $\mathrm{MCB}_{t,r}$, with run time

$$\begin{cases}O(2^{3l}\cdot r+t^2\cdot r^2)&r\text{ is not a power of }2\\ O(2^{3l}\cdot r+t^2\cdot r\log r)&r\text{ is a power of }2\end{cases}\tag{3.8}$$

where $l$ is a parameter defined in [tsitsas2007recursive]. If we only need to decide whether the matrix is invertible, then the algorithm in Proposition 3.7 is faster. We still do not know whether Theorem 3.1 yields an algorithm to invert a matrix in $\mathrm{MCB}_{t,r}$ that is faster than the algorithm from [tsitsas2007recursive].

### 3.2 From $A$ to MCB

It turns out that there is a rank-preserving transformation of the boundary operator $A=A_{n,c}$, as defined in Section 2.2, into a matrix in $\mathrm{MCB}$ as defined in (3.3). The transformation is fairly simple and only involves reordering of the rows and columns, plus removal of rows that are linearly dependent on the other rows, and a trivial Gauss elimination of rows and columns. As mentioned in Section 2.2, and maintaining the same terminology, the boundary operator of $X$ is given by an $\binom{n}{2}\times\binom{n-1}{2}$ matrix. We next find a square submatrix of $A$ of the same rank. To this end, we remove $n-1$ rows of $A$ which are linearly spanned by the other rows. Rows in $A$ are indexed by edges ($1$-dimensional faces). It is well known and easy to prove that this is the case with any $n-1$ rows that represent the edge set of a spanning tree of the complete graph $K_n$. We apply this with the star rooted at vertex $0$. In other words, we remove the rows corresponding to the pairs $\{0,x\}$, $x\in\mathbb{F}_n^*$. Nonzero elements of $\mathbb{F}_n$ act on subsets of $\mathbb{F}_n$ by multiplication. Namely, if $u\in\mathbb{F}_n^*$ and $\sigma=\{x_0,\dots,x_k\}$, we denote:

$$u\cdot\sigma=u\cdot\{x_0,\dots,x_k\}:=\{u\cdot x_0,\dots,u\cdot x_k\}.$$

We note that $X$ is closed under such action, and we linearly extend this definition to sums of oriented faces. We organize the matrix's rows and columns by blocks corresponding to the orbits under the action of $\mathbb{F}_n^*$. A simple calculation shows that $X$'s $2$-faces form orbits of size $n-1$, and one orbit of size $\frac{n-1}{2}$. The latter is comprised of all $2$-faces of the form $\{0,x,-x\}$. The $1$-faces (edges) also form orbits of size $n-1$, and one orbit of size $\frac{n-1}{2}$ that includes the edges $\{x,-x\}$. For an edge of the form $(1,x)$ and a $2$-face of the form $(1,y,z)$, the corresponding block is characterized by having that edge as a row and that $2$-face as a column. We refer to $(1,x)$ and $(1,y,z)$ as the row and column leaders of the block. This creates some ambiguity, since $(1,x)$ and $(1,x^{-1})$ belong to the same orbit. Between $x$ and $x^{-1}$ the leader is the one with the smaller logarithm (as defined in Section 2). The same ambiguity, and the way around it, apply as well to the $2$-faces. We order the rows of the block indexed by $(1,x)$ as follows:

$$[(1,x),\ \lambda\cdot(1,x),\ \lambda^2\cdot(1,x),\ \dots,\ \lambda^{n-2}\cdot(1,x)].\tag{3.9}$$

Likewise, the columns in a block whose column index is $(1,y,z)$ are ordered as follows:

$$[(1,y,z),\ \lambda\cdot(1,y,z),\ \lambda^2\cdot(1,y,z),\ \dots,\ \lambda^{n-2}\cdot(1,y,z)]\tag{3.10}$$

where $\lambda$ is our fixed generator of $\mathbb{F}_n^*$. Equations (3.9) and (3.10) also represent the orientation that we use for the edges and $2$-faces, indicated by the use of tuples rather than sets. Note that the column of a $2$-face that contains the vertex $0$ has a single non-zero entry, in the row of its unique edge that avoids $0$, since we have removed the rows that correspond to the star centered at vertex $0$. We eliminate these rows and columns by (trivial) Gauss elimination, as in [aronshtam2013collapsibility], and arrive at

$$S=S_{n,c}\tag{3.11}$$

a square submatrix of $A$. To recap, $X$ is a hypertree iff $S$ is non-singular.

###### Claim 3.9.

Every block of $S$ is circulant, i.e., $S\in\mathrm{MCB}$.

###### Proof.

The boundary operator is a signed inclusion matrix, where the column that corresponds to the oriented face $(u,v,w)$ is

$$e(u,v)-e(u,w)+e(v,w)$$

where for an oriented edge $(u,v)$ we define $e(u,v)$ to be the column vector with a single $1$ in position $\{u,v\}$. If the orientation is opposite, i.e., the edge $(v,u)$ is present and not $(u,v)$, then $e(u,v)=-e(v,u)$. Note that the column that corresponds to the oriented face $(1,y,z)$ is

$$S[:,(1,y,z)]=e(1,y)-e(1,z)+e(y,z)\tag{3.12}$$

while the column that corresponds to the oriented face $\lambda^k\cdot(1,y,z)$ in the same block is

 S[:,λk⋅(1,y,z)]=eλk⋅(1,y)−eλk⋅