Let $\mathbb{F}_q$ be the unique finite field with $q$ elements, where $q$ is a prime power. The projective space $\mathbb{P}_q(n)$ is the collection of all subspaces of $\mathbb{F}_q^n$, the finite vector space of dimension $n$ over $\mathbb{F}_q$. In terms of notation,
$$\mathbb{P}_q(n) \triangleq \{V : V \leq \mathbb{F}_q^n\},$$
where $\leq$ denotes the usual vector space inclusion. The Grassmannian of dimension $k$, denoted by $\mathcal{G}_q(n, k)$, for all nonnegative integers $k \leq n$, is defined as the set of all $k$-dimensional subspaces of $\mathbb{F}_q^n$, i.e. $\mathcal{G}_q(n, k) \triangleq \{V \in \mathbb{P}_q(n) : \dim V = k\}$. Thus $\mathbb{P}_q(n) = \bigcup_{k=0}^{n} \mathcal{G}_q(n, k)$. The subspace distance, defined as
$$d_S(X, Y) \triangleq \dim(X + Y) - \dim(X \cap Y) = \dim X + \dim Y - 2\dim(X \cap Y)$$
for all $X, Y \in \mathbb{P}_q(n)$, is a metric for $\mathbb{P}_q(n)$ [KK; AAK], where $X + Y$ denotes the smallest subspace containing both $X$ and $Y$. This turns both $\mathbb{P}_q(n)$ and $\mathcal{G}_q(n, k)$ into metric spaces. An $(n, M, d)$ code in $\mathbb{P}_q(n)$ is a subset $\mathbb{C}$ of the projective space with size $M$ such that $d_S(X, Y) \geq d$ for all distinct $X, Y \in \mathbb{C}$. The parameters $n$ and $d$ are called the length and minimum distance of the code, respectively. A code in a projective space is commonly referred to as a subspace code. A subspace code $\mathbb{C}$ is called a constant dimension code with fixed dimension $k$ if $\dim X = k$ for all $X \in \mathbb{C}$. In other words, $\mathbb{C} \subseteq \mathcal{G}_q(n, k)$ for some $k$ if $\mathbb{C}$ is a constant dimension code. Koetter and Kschischang proved that in random network coding, a subspace code with minimum distance $d$ can correct any combination of $t$ errors and $\rho$ erasures introduced anywhere in the network if $2(t + \rho) < d$ [KK]. This development triggered interest in codes in projective spaces in recent times [EV; SE; SE2; HKK; GY; XF; TMBR; KoK; ER; BP; GR].
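For concreteness, the subspace distance is easy to evaluate mechanically in small binary cases. The following sketch is ours and purely illustrative (the bitmask encoding and the helper names are not from the literature); it computes $d_S$ for subspaces of $\mathbb{F}_2^3$:

```python
def span(vectors):
    """Subspace of F_2^n generated by the given vectors (ints as bitmasks)."""
    s = {0}
    for v in vectors:
        s |= {x ^ v for x in s}   # close under XOR, i.e. vector addition over F_2
    return frozenset(s)

def dim(S):
    """A subspace over F_2 with 2^k elements has dimension k."""
    return len(S).bit_length() - 1

def d_S(X, Y):
    """Subspace distance: dim(X + Y) - dim(X ∩ Y)."""
    return dim(span(X | Y)) - dim(X & Y)

X = span([0b001, 0b010])   # the plane <e1, e2> in F_2^3
Y = span([0b010, 0b100])   # the plane <e2, e3>
Z = span([0b001])          # the line <e1>
print(d_S(X, Y))  # 3 - 1 = 2: the planes span F_2^3 and meet in a line
print(d_S(X, Z))  # 2 - 1 = 1: Z is contained in X
```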
We denote the collection of all subsets of the canonical $n$-set $[n] \triangleq \{1, 2, \ldots, n\}$ as $2^{[n]}$, commonly known as the power set of $[n]$. The authors of [BEV] proved that codes in projective spaces can be viewed as the $q$-analog of binary block codes in Hamming spaces using the framework of lattices. A lattice is a partially ordered set wherein any two elements have a least upper bound and a greatest lower bound existing within the set. Block codes in $\{0, 1\}^n$ correspond to the power set lattice $(2^{[n]}, \subseteq)$ while subspace codes in $\mathbb{P}_q(n)$ correspond to the projective lattice $(\mathbb{P}_q(n), \leq)$. Here $\subseteq$ signifies set inclusion. Braun et al. generalized a few properties of binary block codes to subspace codes, including that of linearity [BEV]. Linear codes in Hamming spaces find huge application in designing error-correcting codes due to their structure [MS; DR]. An $[n, k]$ linear block code in $\{0, 1\}^n$ is precisely a $k$-dimensional subspace of $\mathbb{F}_2^n$, where $k$ is the dimension of such a code.
The notion of “linearity” in a projective space, however, is not straightforward. This stems from projective spaces not exhibiting vector space structure, unlike Hamming spaces. In particular, $\{0, 1\}^n$ is a vector space with respect to the bitwise XOR-operation whereas $\mathbb{P}_q(n)$ is not a vector space under the usual vector space addition. Braun et al. solved this problem in [BEV] by assigning a vector space-like structure to a subset of $\mathbb{P}_q(n)$.
The rate of a linear code, i.e. the ratio of its dimension to its length, grows with the size of the code. It is natural to ask how large a linear code in $\mathbb{P}_q(n)$ can be. Braun et al. conjectured the following in [BEV]:
The maximum size of a linear code in $\mathbb{P}_q(n)$ is $2^n$.
Special cases of Conjecture 1.1 have been proved before [BK; PS]. We proved the conjecture in [BK] under the additional assumption of the codes being closed under intersection. In this paper, we bring out the lattice structure of linear subspace codes closed under intersection. In particular, we show that linear subspace codes are sublattices of the projective lattice if and only if they are closed under intersection. Moreover, these sublattices are geometric distributive. We then go on to use the lattice-theoretic characterization of this particular class of linear codes to give an alternative proof of the conjectured bound for them.
The rest of the paper is organized as follows. In Section 2 we give the formal definition of a linear code in a projective space and some relevant definitions from lattice theory. Several properties of linear subspace codes are derived that highlight the $q$-analog structure of a binary linear block code. The Union-Intersection theorem is stated and proved in Section 3. As a consequence, we show the lattice structure of linear codes closed under intersection. We introduce the notion of pairwise disjoint codewords in linear subspace codes and establish some properties to show their linear independence in Section 4. Section 5 is devoted to proving that the sublattice of the projective lattice formed by a linear code closed under intersection is geometric distributive. The proof uses the notion of indecomposable codewords, which are particular cases of pairwise disjoint codewords. We then use the lattice-theoretic characterization to give an alternative proof of the maximum size of linear codes closed under intersection. Section 6 contains a few open problems for future research.
$\mathbb{F}_q^n$ denotes the finite vector space of dimension $n$ over $\mathbb{F}_q$. The set of all subspaces of $\mathbb{F}_q^n$ is denoted by $\mathbb{P}_q(n)$. The usual vector space sum of two subspaces $X$ and $Y$ when $X \cap Y = \{0\}$, also known as the direct sum of $X$ and $Y$, will be written as $X \oplus Y$. For a binary vector or word $u$ of length $n$, the support of $u$, denoted as $\mathrm{supp}(u)$, will indicate the set of nonzero coordinates of $u$. In other words, $\mathrm{supp}(u) \triangleq \{i : u_i \neq 0\}$. The support of a binary vector identifies it completely. The all-zero vector and the empty set will be denoted as $\mathbf{0}$ and $\varnothing$, respectively. For two binary words $u$ and $v$, the union and intersection of $u$ and $v$, denoted as $u \vee v$ and $u \wedge v$ respectively, are defined via
$$\mathrm{supp}(u \vee v) \triangleq \mathrm{supp}(u) \cup \mathrm{supp}(v), \qquad \mathrm{supp}(u \wedge v) \triangleq \mathrm{supp}(u) \cap \mathrm{supp}(v).$$
The coordinatewise modulo-2 addition, alternatively known as the binary vector addition, of two binary words $u$ and $v$ is denoted by $u \oplus v$. By definition, $\mathrm{supp}(u \oplus v) = \mathrm{supp}(u) \,\Delta\, \mathrm{supp}(v)$. Here $\Delta$ denotes the symmetric difference operator, defined for sets $A$ and $B$ as
$$A \,\Delta\, B \triangleq (A \cup B) \setminus (A \cap B).$$
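The identity $\mathrm{supp}(u \oplus v) = \mathrm{supp}(u) \,\Delta\, \mathrm{supp}(v)$ can be checked directly; in this small illustrative sketch (ours), Python's set operator `^` is exactly the symmetric difference:

```python
def supp(u):
    """Support of a binary word given as a tuple of bits."""
    return {i for i, bit in enumerate(u) if bit}

u = (1, 0, 1, 1, 0)
v = (0, 0, 1, 0, 1)
xor = tuple(a ^ b for a, b in zip(u, v))   # coordinatewise modulo-2 addition
print(supp(xor) == supp(u) ^ supp(v))      # True: supp(u ⊕ v) = supp(u) Δ supp(v)
```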
2 Definitions and Relevant Background
2.1 Linear Codes in Projective Spaces
A linear code in the projective space $\mathbb{P}_q(n)$ is defined as follows [BEV]:
A subset $\mathbb{C} \subseteq \mathbb{P}_q(n)$, with $\{0\} \in \mathbb{C}$, is a linear subspace code if there exists a function $\boxplus : \mathbb{C} \times \mathbb{C} \rightarrow \mathbb{C}$ such that:
(i) $(\mathbb{C}, \boxplus)$ is an abelian group;
(ii) the identity element of $(\mathbb{C}, \boxplus)$ is $\{0\}$;
(iii) $X \boxplus X = \{0\}$ for every group element $X \in \mathbb{C}$;
(iv) the addition operation is isometric, i.e., $d_S(X \boxplus Z, Y \boxplus Z) = d_S(X, Y)$ for all $X, Y, Z \in \mathbb{C}$.
A subset of $\mathbb{P}_q(n)$, together with a corresponding operation, is called a quasi-linear code if it satisfies only the first three conditions in the above definition. Conditions (i)-(iii) in Definition 1 ensure that a quasi-linear code is a vector space over $\mathbb{F}_2$. Braun et al. proved the following about the size of a quasi-linear code in [BEV].
([BEV], Proposition 2) A subset $\mathbb{C} \subseteq \mathbb{P}_q(n)$, with $\{0\} \in \mathbb{C}$, is a quasi-linear code if and only if $|\mathbb{C}|$ is a power of 2.
A set of codewords in a linear subspace code $\mathbb{C}$ will be said to be linearly independent if the members of the set are linearly independent vectors in the vector space formed by the code over $\mathbb{F}_2$.
A quasi-linear code is linear when translation invariance is imposed on its structure, as indicated by condition (iv) in Definition 1. The linear addition $\boxplus$ thus becomes isometric and obeys certain properties. We list and prove these as lemmas, the first three of which are essentially reproduced from [BEV].
([BEV], Lemma 6) Let $\mathbb{C}$ be a linear code in $\mathbb{P}_q(n)$ and let $\boxplus$ be the isometric linear addition on $\mathbb{C}$. Then for all $X, Y \in \mathbb{C}$, we have:
$$d_S(X, Y) = \dim(X \boxplus Y).$$
In particular, if $X \cap Y = \{0\}$, then $\dim(X \boxplus Y) = \dim X + \dim Y$.
By the definition of linear code, $d_S(X, Y) = d_S(X \boxplus Y, Y \boxplus Y) = d_S(X \boxplus Y, \{0\})$. From the definition of the subspace distance the result follows.
([BEV], Lemma 7) For any three subspaces $X, Y$ and $Z$ of a linear code $\mathbb{C}$ in $\mathbb{P}_q(n)$ with isometric linear addition $\boxplus$, the condition $X \boxplus Y = Z$ implies $X \boxplus Z = Y$.
From the definition of linearity in $\mathbb{C}$, we have $X \boxplus Z = X \boxplus (X \boxplus Y) = (X \boxplus X) \boxplus Y = \{0\} \boxplus Y = Y$.
The statement of the next lemma is altered from what was presented in [BEV] as per our requirement.
([BEV], Lemma 8) Let $\mathbb{C}$ be a linear code in $\mathbb{P}_q(n)$ and let $\boxplus$ be the isometric linear addition on $\mathbb{C}$. If $X$ and $Y$ are any two codewords of $\mathbb{C}$ such that $X \cap Y = \{0\}$, then $X \oplus Y \subseteq X \boxplus Y$. Also $X \boxplus Y = X \oplus Y$.
From the definition of linearity, we have $\dim(X \boxplus Y) = d_S(X, Y) = \dim X + \dim Y$, and using the fact $X \boxplus (X \boxplus Y) = Y$, we also have from Lemma 2 that $d_S(X, X \boxplus Y) = \dim Y$. Combining both, we obtain $\dim X + \dim(X \boxplus Y) - 2\dim(X \cap (X \boxplus Y)) = \dim Y$, which implies $\dim(X \cap (X \boxplus Y)) = \dim X$, i.e., $X \subseteq X \boxplus Y$. Similarly, $Y \subseteq X \boxplus Y$. This means $X \oplus Y \subseteq X \boxplus Y$. Finally, as $\dim(X \boxplus Y) = \dim X + \dim Y = \dim(X \oplus Y)$, $X \boxplus Y = X \oplus Y$, which proves the lemma.
The next lemma, which plays a pivotal role in our work, records some useful properties of the dimension of codewords in a linear subspace code.
If $\mathbb{C}$ is a linear subspace code and $X, Y \in \mathbb{C}$, then
$$\dim(X \boxplus Y) = \dim X + \dim Y - 2\dim(X \cap Y).$$
This is immediate from Lemma 2 and the identity $d_S(X, Y) = \dim X + \dim Y - 2\dim(X \cap Y)$.
The dimension of the sum of two codewords in a linear subspace code is bounded from below, as shown next.
Let $\mathbb{C}$ be a linear subspace code. For all $X, Y \in \mathbb{C}$ the following is true:
$$\dim(X + Y) \geq \dim(X \boxplus Y),$$
with equality if and only if $X \cap Y = \{0\}$.
As $\dim(X \cap Y) \geq 0$, we must have $d_S(X, Y) = \dim X + \dim Y - 2\dim(X \cap Y) \leq \dim X + \dim Y - \dim(X \cap Y) = \dim(X + Y)$, and equality occurs only when $\dim(X \cap Y) = 0$, i.e., when $X \cap Y = \{0\}$. Lemma 2 then implies that
$$\dim(X + Y) \geq d_S(X, Y) = \dim(X \boxplus Y).$$
Equality occurs if and only if $\dim(X \cap Y) = 0$, i.e. if and only if $X \cap Y = \{0\}$.
Let $\mathbb{C}$ be a linear subspace code and $X, Y$ be two distinct nontrivial codewords of $\mathbb{C}$. Then,
(a) $\dim(X \boxplus Y) = \dim X + \dim Y$ if and only if $X \cap Y = \{0\}$;
(b) $X \subseteq X \boxplus Y$ if and only if $X \cap Y = \{0\}$.
(Proof of (a)) By Lemma 2, $\dim(X \boxplus Y) = d_S(X, Y)$. If $X \cap Y = \{0\}$, then $d_S(X, Y) = \dim X + \dim Y$ and we have $\dim(X \boxplus Y) = \dim X + \dim Y$. On the other hand, if $\dim(X \boxplus Y) = \dim X + \dim Y$, then by Lemma 5 and the fact that $\dim(X \boxplus Y) = \dim X + \dim Y - 2\dim(X \cap Y)$, we have $\dim(X \cap Y) = 0$. Thus $X \cap Y = \{0\}$, which proves that (a) holds. (Note that $X \boxplus Y$ is nontrivial throughout; if $X \boxplus Y = \{0\}$, then by Definition 1, $X = Y$, a contradiction.)
(Proof of the ‘if’ part of (b)) Write $Z = X \boxplus Y$ and suppose $X \cap Y = \{0\}$. By logic similar to that presented above, $d_S(X, Z) = \dim(X \boxplus Z) = \dim Y$, so that $\dim X + \dim Z - 2\dim(X \cap Z) = \dim Y$. As $\dim Z = \dim X + \dim Y$ by part (a), this yields $\dim(X \cap Z) = \dim X$, and the result follows.
(b) (Proof of the ‘only if’ part) By Definition 1, $X \boxplus (X \boxplus Y) = Y$, and using Lemma 2 we get
$$\dim X + \dim(X \boxplus Y) - 2\dim(X \cap (X \boxplus Y)) = d_S(X, X \boxplus Y) = \dim Y.$$
If $X \subseteq X \boxplus Y$, the left-hand side reduces to $\dim(X \boxplus Y) - \dim X$, whence $\dim(X \boxplus Y) = \dim X + \dim Y$ and part (a) yields $X \cap Y = \{0\}$.
2.2 An Overview of Lattices
This section serves as a brief introduction to lattices. We will give a few basic definitions that can be found in [B]. The notation and terminology used here are standard.
A partially ordered set or poset is a set $P$ in which a binary relation $\preceq$ is defined which satisfies the following conditions for all $x, y, z \in P$:
(Reflexivity) $x \preceq x$;
(Antisymmetry) If $x \preceq y$ and $y \preceq x$, then $x = y$;
(Transitivity) If $x \preceq y$ and $y \preceq z$, then $x \preceq z$.
The binary relation $\preceq$ in a poset is also called the order relation for the poset. We will henceforth denote a poset by $(P, \preceq)$ and assume $\preceq$ as its order relation. If $x \preceq y$ and $x \neq y$, we will write $x \prec y$ and say that $x$ is “less than” or “properly contained in” $y$. If $x \prec y$ and there exists no $z \in P$ such that $x \prec z \prec y$, then $y$ is said to cover the element $x$; we denote this as $x \lessdot y$.
An upper bound of a subset $S$ of $P$ is an element $u \in P$ such that $x \preceq u$ for all $x \in S$. Similarly, a lower bound of $S$ is an element $l \in P$ satisfying $l \preceq x$ for every $x \in S$. The least upper bound (greatest lower bound) of $S$ is the upper bound (lower bound) of $S$ contained in (containing) every upper bound (lower bound) of $S$.
A least upper bound of a poset, if it exists, is unique according to the antisymmetry property of the order relation (cf. Definition 2). The same holds for a greatest lower bound of a poset. We will use the notations $\mathrm{lub}(P)$ and $\mathrm{glb}(P)$ for the least upper bound and greatest lower bound of a poset $P$, respectively.
A lattice is a poset $(P, \preceq)$ such that for any $x, y \in P$, $\mathrm{glb}(\{x, y\})$ and $\mathrm{lub}(\{x, y\})$ exist. The $\mathrm{glb}(\{x, y\})$ is denoted as $x \wedge y$ and read as “$x$ meet $y$”, while the $\mathrm{lub}(\{x, y\})$ is denoted as $x \vee y$ and read as “$x$ join $y$”. The lattice is denoted as $(P, \preceq, \wedge, \vee)$. The unique least upper bound (greatest lower bound) of the whole lattice $P$, if it exists, is called the greatest (least) element of $P$.
All the lattices considered in this work are finite and contain a unique greatest element denoted as $\mathbf{1}$ and a unique least element denoted as $\mathbf{0}$.
A sublattice of a lattice $(L, \preceq, \wedge, \vee)$ is a subset $M$ of $L$ such that for all $x, y \in M$ it follows that $x \wedge y, x \vee y \in M$.
A sublattice is a lattice in its own right with the same meet and join operations as that of the lattice. However, not all subsets of a lattice are sublattices.
A lattice $(L, \preceq, \wedge, \vee)$ is distributive if either of the following two equivalent conditions holds for all $x, y, z \in L$:
$$x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z); \qquad x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z).$$
A lattice $(L, \preceq, \wedge, \vee)$ is modular if for all $x, y, z \in L$ such that $x \preceq z$, we have $x \vee (y \wedge z) = (x \vee y) \wedge z$.
Not all lattices are distributive. If a lattice is distributive then the modularity condition automatically holds. Thus all distributive lattices are modular. However, the converse is not true, as will be illustrated later. In a lattice $(L, \preceq, \wedge, \vee)$, an element $a \in L$ is called an atom if and only if $a$ covers the least element $\mathbf{0}$. Atoms play a significant role in defining lattices that are geometric.
A finite lattice is geometric if it is modular and every element in the lattice is a join of atoms. If a geometric lattice is distributive then it is called geometric distributive.
A set of elements $\{x_0, x_1, \ldots, x_k\}$ in a lattice is called a chain if $x_0 \prec x_1 \prec \cdots \prec x_k$. The length of this chain is $k$. The height of a geometric lattice is the length of a maximal chain between its greatest and least elements.
Not all modular or distributive lattices are geometric. We will next discuss an example of a geometric lattice that is not distributive.
Recall that the projective space $\mathbb{P}_q(n)$ represents the set of all subspaces of $\mathbb{F}_q^n$, the finite vector space of dimension $n$ over $\mathbb{F}_q$. It is straightforward to verify that $(\mathbb{P}_q(n), \leq)$ is a poset where the order relation is the usual subspace inclusion $\leq$. The entire projective space is a lattice under this order relation. The join of two elements $X$ and $Y$ is therefore the smallest subspace containing both $X$ and $Y$. Similarly the meet of $X$ and $Y$ becomes the largest subspace contained in both $X$ and $Y$. Thus, in this lattice, the meet and join operations are defined as: $X \wedge Y = X \cap Y$ and $X \vee Y = X + Y$ for all $X, Y \in \mathbb{P}_q(n)$. The greatest and least elements for this lattice are the ambient space $\mathbb{F}_q^n$ and the null space $\{0\}$, respectively. The atoms in $(\mathbb{P}_q(n), \leq)$ are precisely the one-dimensional vector spaces of $\mathbb{F}_q^n$, i.e., the set of atoms is $\mathcal{G}_q(n, 1)$. As $X \vee (Y \wedge Z) = (X \vee Y) \wedge Z$ for all $X, Y, Z \in \mathbb{P}_q(n)$ such that $X \leq Z$, this lattice is modular. This, together with the fact that any element in the projective space is a union (vector space sum) of one-dimensional subspaces, implies that the lattice is geometric. However, we do not have $X \cap (Y + Z) = (X \cap Y) + (X \cap Z)$ for all subspaces $X, Y, Z$ of $\mathbb{F}_q^n$ in general. Thus, the lattice is not distributive.
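Both claims admit an exhaustive check in the smallest nontrivial case $(\mathbb{P}_2(2), \leq)$. The following brute-force sketch (ours, purely illustrative; vectors of $\mathbb{F}_2^2$ are encoded as bitmasks) verifies the modular law over all triples and exhibits the failure of distributivity at the three one-dimensional subspaces:

```python
from itertools import product

def span(gens):
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

# All five subspaces of F_2^2 (nonzero vectors: 0b01, 0b10, 0b11)
subspaces = {span(g) for g in ([], [1], [2], [3], [1, 2])}
meet = lambda A, B: frozenset(A & B)   # largest subspace inside both
join = lambda A, B: span(A | B)        # smallest subspace containing both

# Modular law: A ≤ C implies A ∨ (B ∧ C) = (A ∨ B) ∧ C
assert all(join(A, meet(B, C)) == meet(join(A, B), C)
           for A, B, C in product(subspaces, repeat=3) if A <= C)

# Distributivity fails for the three distinct lines X, Y, Z
X, Y, Z = span([1]), span([2]), span([3])
print(meet(X, join(Y, Z)))              # X ∧ (Y ∨ Z) = X, a line
print(join(meet(X, Y), meet(X, Z)))     # (X ∧ Y) ∨ (X ∧ Z) = {0}
```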
We refer to the lattice $(\mathbb{P}_q(n), \leq)$ as the projective lattice. Recall that any linear subspace code in $\mathbb{P}_q(n)$ is a subset of $\mathbb{P}_q(n)$, which, according to Definition 5, is not sufficient to guarantee a lattice structure. It is therefore natural to ask what additional condition(s) a linear code in a projective space should satisfy in order to assume a sublattice structure of the corresponding projective lattice. We investigate this problem in the following section.
3 The Union-Intersection Theorem
We introduced the terms union and intersection of two codewords in a Hamming space in Section 1. The corresponding notions for linear codes in a projective space are straightforward: the union of two codewords $X$ and $Y$ is $X + Y$, while their intersection is $X \cap Y$. Observe that for any two codewords $u$ and $v$ in a binary linear code $C$, $u \vee v = (u \wedge v) \oplus u \oplus v$, which proves that
$$u \vee v \in C \quad \text{if and only if} \quad u \wedge v \in C. \tag{2}$$
Thus the union and intersection of any two codewords in a classical binary linear code must coexist within the code according to (2). Moreover, $(u \vee v) \oplus (u \wedge v) = u \oplus v$, i.e. $\mathrm{supp}(u \vee v) \setminus \mathrm{supp}(u \wedge v) = \mathrm{supp}(u \oplus v)$. We now prove that equivalent relations hold for linear codes in a projective space.
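Relation (2) is easy to confirm exhaustively for a small binary linear code. In the sketch below (ours; the two generator words are an arbitrary illustrative choice), words are bitmasks, so `|`, `&` and `^` realize union, intersection and modulo-2 addition:

```python
from itertools import product

# A small binary linear code: the XOR-span of two generator words
C = {0}
for g in (0b1100, 0b0110):
    C |= {c ^ g for c in C}            # C = {0000, 1100, 0110, 1010}

for u, v in product(C, repeat=2):
    union, inter = u | v, u & v
    assert union == inter ^ u ^ v      # u ∨ v = (u ∧ v) ⊕ u ⊕ v
    # since u ⊕ v ∈ C, the union lies in the code iff the intersection does
    assert (union in C) == (inter in C)
print("relation (2) verified")
```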
Theorem 8 (Union-Intersection Theorem)
Let $\mathbb{C}$ be a linear subspace code. If $X$ and $Y$ are two codewords in $\mathbb{C}$ then,
$$X + Y \in \mathbb{C} \quad \text{if and only if} \quad X \cap Y \in \mathbb{C}.$$
Furthermore, if $X \cap Y \in \mathbb{C}$ then $X + Y = X \boxplus Y \boxplus (X \cap Y)$, and
$$X \cap Y = X \boxplus Y \boxplus (X + Y)$$
whenever $X + Y \in \mathbb{C}$.
(Proof of the ‘if’ part) Assume that $X \cap Y = Z$ for some $Z \in \mathbb{C}$. Since $Z \subseteq X$ and $Z \subseteq Y$, by Lemma 5 we get:
$$\dim(Z \boxplus X) = \dim X - \dim Z, \qquad \dim(Z \boxplus Y) = \dim Y - \dim Z. \tag{3}$$
We first prove that $(Z \boxplus X) \cap (Z \boxplus Y) = \{0\}$. Having proved this, we will show that $X + Y = Z \oplus (Z \boxplus X) \oplus (Z \boxplus Y)$, which will help us to establish that $X + Y = X \boxplus Y \boxplus Z$. As $X \boxplus Y \boxplus Z \in \mathbb{C}$, this will suffice to prove that $X + Y \in \mathbb{C}$.
Observe that $(Z \boxplus X) \boxplus (Z \boxplus Y) = X \boxplus Y$ and, by Lemma 5, $\dim(X \boxplus Y) = \dim X + \dim Y - 2\dim Z$, which equals $\dim(Z \boxplus X) + \dim(Z \boxplus Y)$ by (3). Hence, by Lemma 7(a) we have
$$(Z \boxplus X) \cap (Z \boxplus Y) = \{0\}. \tag{4}$$
Applying Lemma 5 to the pair $Z$ and $Z \boxplus X$, whose $\boxplus$-sum is $X$, gives $\dim Z + \dim(Z \boxplus X) - 2\dim(Z \cap (Z \boxplus X)) = \dim X$, so that $Z \cap (Z \boxplus X) = \{0\}$ by (3). By Lemma 4, we thus obtain $X = Z \boxplus (Z \boxplus X) = Z \oplus (Z \boxplus X)$ and, likewise, $Y = Z \oplus (Z \boxplus Y)$. Therefore,
$$X + Y = Z + (Z \boxplus X) + (Z \boxplus Y). \tag{5}$$
Since $\dim(X + Y) = \dim X + \dim Y - \dim Z = \dim Z + \dim(Z \boxplus X) + \dim(Z \boxplus Y)$ by (3), the sum in (5) is direct:
$$X + Y = Z \oplus (Z \boxplus X) \oplus (Z \boxplus Y). \tag{6}$$
We can now calculate $X \boxplus Y \boxplus Z$ in a different way. By (4) and Lemma 4,
$$X \boxplus Y = (Z \boxplus X) \boxplus (Z \boxplus Y) = (Z \boxplus X) \oplus (Z \boxplus Y), \tag{7}$$
and (6) then shows that $Z \cap (X \boxplus Y) = \{0\}$, so again by Lemma 4,
$$X \boxplus Y \boxplus Z = (X \boxplus Y) \oplus Z = Z \oplus (Z \boxplus X) \oplus (Z \boxplus Y). \tag{8}$$
(6) and (8) together imply that $X + Y = X \boxplus Y \boxplus Z \in \mathbb{C}$, which establishes our final claim. By virtue of (8) and Lemma 3, we can also write: $X \cap Y = Z = X \boxplus Y \boxplus (X + Y)$.
(Proof of the ‘only if’ part) We assume that $X + Y = W$ for some codewords $X$ and $Y$ in $\mathbb{C}$ with $W \in \mathbb{C}$. Let us consider $Z \in \mathbb{C}$ such that,
$$Z = X \boxplus Y \boxplus W.$$
Since $X \subseteq W$ and $Y \subseteq W$, an argument identical to the one above yields $X \cap (X \boxplus W) = \{0\}$ and hence, by Lemma 4, $W = X \oplus (X \boxplus W)$; similarly $W = Y \oplus (Y \boxplus W)$. Moreover, $(X \boxplus W) \boxplus (Y \boxplus W) = X \boxplus Y$, and $\dim(X \boxplus Y) = \dim X + \dim Y - 2\dim(X \cap Y) = \dim(X \boxplus W) + \dim(Y \boxplus W)$ as $\dim(X \cap Y) = \dim X + \dim Y - \dim W$. So, by Lemma 7(a) and Lemma 4, $X \boxplus Y = (X \boxplus W) \oplus (Y \boxplus W) \subseteq W$. Applying Lemma 5 to the codewords $X \boxplus Y$ and $W$ now gives
$$\dim Z = \dim W - \dim(X \boxplus Y) = \dim(X \cap Y). \tag{15}$$
Since $(\mathbb{C}, \boxplus)$ is an abelian group wherein any element is self-inverse, we can express $X \boxplus Z$ as: $X \boxplus Z = Y \boxplus W$. Then applying Lemma 2 gives us:
$$\dim X + \dim Z - 2\dim(X \cap Z) = d_S(X, Z) = \dim(Y \boxplus W) = \dim W - \dim Y.$$
The above expression, combined with (15), clearly indicates that $\dim(X \cap Z) = \dim Z$, i.e., $Z \subseteq X$. Using a similar technique we can also obtain $Z \subseteq Y$, which therefore gives us $Z \subseteq X \cap Y$. As $\dim Z = \dim(X \cap Y)$ by (15), we conclude $X \cap Y = Z = X \boxplus Y \boxplus (X + Y) \in \mathbb{C}$.
We proved that $X + Y \in \mathbb{C}$ if and only if $X \cap Y \in \mathbb{C}$ when $X, Y \in \mathbb{C}$ for a linear code $\mathbb{C}$. However, it is not necessarily true that $X + Y$ and $X \cap Y$ themselves belong to the code. For example, consider a code $\mathbb{C} = \{\{0\}, X, Y, Z\} \subset \mathbb{P}_q(3)$, where $X, Y, Z$ are two-dimensional subspaces of $\mathbb{F}_q^3$ such that $X, Y, Z$ are distinct and pairwise intersect in one-dimensional subspaces. Define a commutative function $\boxplus : \mathbb{C} \times \mathbb{C} \rightarrow \mathbb{C}$ as follows: $A \boxplus A = \{0\}$ and $A \boxplus \{0\} = A$ for all $A \in \mathbb{C}$, while $A \boxplus B$ is the third nontrivial codeword for any distinct nontrivial $A, B \in \mathbb{C}$. It is easy to verify that the addition $\boxplus$ is isometric, hence $\mathbb{C}$ is a linear code. However, $X \cap Y \notin \mathbb{C}$ and $X + Y \notin \mathbb{C}$. For two codewords $u$ and $v$ in a binary linear code $C$, $u \oplus v \in C$ irrespective of whether $u \vee v, u \wedge v \in C$ or not. This is in accordance with the fact that linearity is inherent in the entirety of a Hamming space. The same does not hold for projective spaces.
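A concrete instance of such a four-element code can be checked by brute force. The sketch below is ours and purely illustrative: it takes three distinct planes of $\mathbb{F}_2^3$ (any two distinct planes meet in a line), endows $\{\{0\}, X, Y, Z\}$ with the Klein four-group addition, and verifies that the addition is isometric even though neither $X \cap Y$ nor $X + Y$ belongs to the code:

```python
from itertools import product

def span(gens):
    s = {0}
    for g in gens:
        s |= {x ^ g for x in s}
    return frozenset(s)

def dim(S):
    return len(S).bit_length() - 1

def d_S(A, B):
    return dim(span(A | B)) - dim(A & B)

ZERO = span([])              # the null space {0}
X = span([0b001, 0b010])     # three distinct planes of F_2^3;
Y = span([0b010, 0b100])     # any two of them intersect in a line
Z = span([0b001, 0b100])
code = [ZERO, X, Y, Z]

def boxplus(A, B):
    """Klein four-group: A ⊞ A = {0}, {0} is the identity, and the
    sum of two distinct nontrivial codewords is the third one."""
    if A == B:
        return ZERO
    if A == ZERO:
        return B
    if B == ZERO:
        return A
    return next(W for W in (X, Y, Z) if W not in (A, B))

# The addition is isometric: d_S(A ⊞ C, B ⊞ C) = d_S(A, B) for all triples
assert all(d_S(boxplus(A, B), boxplus(A2, B)) == d_S(A, A2)
           for A, A2, B in product(code, repeat=3))
# ...yet neither the intersection nor the sum of X and Y is a codeword
print(frozenset(X & Y) in code, span(X | Y) in code)  # False False
```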
The Union-Intersection theorem helps us to bring out the lattice structure in a certain class of linear codes. To elaborate, we recall the definition of linear codes closed under intersection introduced in [BK].
A linear code $\mathbb{C}$ with the property that $X \cap Y \in \mathbb{C}$ whenever $X, Y \in \mathbb{C}$ is said to be a linear code closed under intersection.
According to Theorem 8, a linear code is closed under intersection if and only if it is also closed under the usual vector space addition. Since linear subspace codes are subsets of the associated projective lattice, the following statement follows as a direct consequence of Theorem 8.
Let $\mathbb{C}$ be a linear subspace code. $\mathbb{C}$ is a sublattice of the projective lattice $(\mathbb{P}_q(n), \leq)$ if and only if $\mathbb{C}$ is closed under intersection.
4 Pairwise Disjoint Codewords in Linear Subspace Codes
Two vectors in a Hamming space are disjoint if their intersection is $\mathbf{0}$, i.e., if their supports are disjoint. It is easy to verify that a set of pairwise disjoint vectors in $\{0, 1\}^n$ is linearly independent over $\mathbb{F}_2$. We will prove an analogous result for linear codes in a projective space. First we formally define a set of pairwise disjoint codewords in a linear code.
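The Hamming-space statement can be verified computationally. In the following illustrative sketch (ours), words are bitmasks, disjointness is a vanishing bitwise AND, and $\mathbb{F}_2$-independence means that no nonempty subset XORs to the zero word:

```python
from itertools import combinations
from functools import reduce

words = [0b110000, 0b001100, 0b000010]   # pairwise disjoint supports
assert all(a & b == 0 for a, b in combinations(words, 2))

# For disjoint supports, XOR coincides with OR, so any nonempty
# combination covers every chosen support and cannot be the zero word.
independent = all(reduce(lambda x, w: x ^ w, subset, 0) != 0
                  for r in range(1, len(words) + 1)
                  for subset in combinations(words, r))
print(independent)  # True
```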
A set of codewords $\{X_1, X_2, \ldots, X_m\}$ in a linear subspace code $\mathbb{C}$ is pairwise disjoint if $X_i \cap X_j = \{0\}$ for all $i \neq j$.
We are now going to establish that any set of pairwise disjoint codewords in a linear subspace code is linearly independent. To this end, we need certain properties of any finite number of pairwise disjoint codewords. First we prove the base case when the number of codewords is three.
If $X_1, X_2, X_3$ are pairwise disjoint nontrivial codewords in a linear subspace code $\mathbb{C}$, then $X_i \cap (X_j \oplus X_k) = \{0\}$ for distinct $i, j, k \in \{1, 2, 3\}$. Furthermore, $\dim(X_1 \boxplus X_2 \boxplus X_3) = \dim X_1 + \dim X_2 + \dim X_3$, and $X_1 \boxplus X_2 \boxplus X_3 = X_1 \oplus X_2 \oplus X_3$.
The isometry of the linear addition gives $d_S(X_1 \boxplus X_2, X_3 \boxplus X_2) = d_S(X_1, X_3) = \dim X_1 + \dim X_3$. Since $X_1, X_2, X_3$ are pairwise disjoint codewords, using Lemma 4 the above equation can also be expressed as
$$\dim(X_1 \oplus X_2) + \dim(X_2 \oplus X_3) - 2\dim((X_1 \oplus X_2) \cap (X_2 \oplus X_3)) = \dim X_1 + \dim X_3,$$
which reduces to $\dim((X_1 \oplus X_2) \cap (X_2 \oplus X_3)) = \dim X_2$. But $X_2 \subseteq (X_1 \oplus X_2) \cap (X_2 \oplus X_3)$. Combining both, we get $(X_1 \oplus X_2) \cap (X_2 \oplus X_3) = X_2$, and as $X_1 \cap X_2 = \{0\}$ this is equivalent to
$$X_1 \cap (X_2 \oplus X_3) = \{0\}. \tag{17}$$
We now claim that $(X_1 \oplus X_2) \cap X_3 = \{0\}$. Suppose not; then there must exist some nonzero $v \in (X_1 \oplus X_2) \cap X_3$ such that $v = v_1 + v_2$, with $v_1 \in X_1$ and $v_2 \in X_2$. Since the pairwise intersections of $X_1, X_2, X_3$ are trivial, neither $v_1$ nor $v_2$ belongs to $X_3$. Also, both $v_1$ and $v_2$ are nonzero. Then $v_1 = v - v_2$, which means $v_1$ is in $X_2 \oplus X_3$. Thus $v_1$ is a nonzero element of $X_1 \cap (X_2 \oplus X_3)$, which contradicts (17). Hence $(X_1 \oplus X_2) \cap X_3 = \{0\}$. Combining this with Lemma 4, we can write:
$$X_1 \boxplus X_2 \boxplus X_3 = (X_1 \oplus X_2) \boxplus X_3 = (X_1 \oplus X_2) \oplus X_3 = X_1 \oplus X_2 \oplus X_3.$$
The rest follows from the fact that the roles of $X_1$, $X_2$ and $X_3$ in the above arguments are symmetric.
We are now in a position to prove the general case for any finite number of pairwise disjoint codewords.
Let $\{X_1, X_2, \ldots, X_m\}$ be a set of pairwise disjoint nontrivial codewords in a linear subspace code $\mathbb{C}$. Then,
(a) for all $i$, $X_i \cap \left(\bigoplus_{j \neq i} X_j\right) = \{0\}$;
(b) $\dim(X_1 \boxplus X_2 \boxplus \cdots \boxplus X_m) = \sum_{i=1}^{m} \dim X_i$;
(c) $X_1 \boxplus X_2 \boxplus \cdots \boxplus X_m = X_1 \oplus X_2 \oplus \cdots \oplus X_m$.
We prove (a)–(c) simultaneously by induction on $m$, the number of pairwise disjoint codewords. The base case of two codewords for (b)–(c) is covered by Lemma 4, while that for (a) is because of the assumption of pairwise disjointness. As the induction hypothesis, assume that the statements (a)–(c) hold for any set of $m - 1$ pairwise disjoint codewords, for some $m \geq 3$. In particular, for the $(m-1)$-subset $\{X_1, \ldots, X_{m-1}\}$, we have $\dim(X_1 \boxplus \cdots \boxplus X_{m-1}) = \sum_{i=1}^{m-1} \dim X_i$ and $X_1 \boxplus \cdots \boxplus X_{m-1} = X_1 \oplus \cdots \oplus X_{m-1}$.
To prove (b), observe that $X_m \cap (X_1 \oplus \cdots \oplus X_{m-1}) = \{0\}$ according to part (a). Then, by the induction hypothesis and Lemma 4 we have:
$$\dim(X_1 \boxplus \cdots \boxplus X_m) = \dim\big((X_1 \oplus \cdots \oplus X_{m-1}) \boxplus X_m\big) = \sum_{i=1}^{m-1} \dim X_i + \dim X_m = \sum_{i=1}^{m} \dim X_i.$$
Finally, parts (a), (b) and Lemma 4 imply:
$$X_1 \boxplus \cdots \boxplus X_m = (X_1 \oplus \cdots \oplus X_{m-1}) \boxplus X_m = (X_1 \oplus \cdots \oplus X_{m-1}) \oplus X_m = X_1 \oplus \cdots \oplus X_m.$$