A linear rank inequality is a linear inequality that is always satisfied by ranks (dimensions) of subspaces of a vector space over any field. Information inequalities are a sub-class of linear rank inequalities. The Ingleton inequality is an example of a linear rank inequality which is not an information inequality. Other inequalities have been presented in [3, 7]. A characteristic-dependent linear rank inequality is like a linear rank inequality, but it is guaranteed to hold only over vector spaces over fields of certain characteristics and does not in general hold over other characteristics. In Information Theory, especially in linear network coding, all these inequalities are useful for calculating the linear capacity of communication networks. It is remarkable that the linear capacity of a network can depend on the characteristic of the scalar field associated with the vector space of the network codes; the Fano network is an example [2, 4]. Therefore, when we study linear capacities over specific fields, characteristic-dependent linear rank inequalities are more useful than ordinary linear rank inequalities.
Characteristic-dependent linear rank inequalities have been presented by Blasiak, Kleinberg and Lubetzky, by Dougherty, Freiling and Zeger, and by E. Freiling. The technique used by Dougherty et al. to produce these inequalities uses the flow of certain matroidal networks as a guide to obtain restrictions on their linear solvability; it differs from the technique of Blasiak et al., which is based on the dependency relations of the Fano and non-Fano matroids. In [8], we showed some inequalities using the ideas of Blasiak et al. and presented applications to network coding that improve some existing results in [1, 5].
Organization of the work and contributions. We show a general method to produce characteristic-dependent linear rank inequalities using as a guide binary matrices with suitable rank over different fields. We try to find as many inequalities as the method can produce: for each , we explicitly produce characteristic-dependent linear rank inequalities in variables, of which half are true over characteristics in sets of primes of the form and the other half are true over characteristics in sets of primes of the form , where ; we note that more inequalities can be produced. Also, for the first class of inequalities, we prove that all are independent of each other and cannot be recovered from any of our inequalities in a greater number of variables. We remark that, to date, such a number of inequalities of this type in variables was not known. In addition, the inequalities presented in [8] can be recovered when is of the form and is equal to .
2 Entropy in Linear Algebra
Let , , , , be vector subspaces of a finite dimensional vector space over a finite field . Let denote the span, or sum, of . The sum is a direct sum if and only if ; the notation for such a sum is . Subspaces , …, are called mutually complementary subspaces in if every vector of has a unique representation as a sum of elements of , …, . Equivalently, they are mutually complementary subspaces in if and only if . In this case, denotes the canonical projection function . is the canonical basis of , and is the vector whose entries are in the components indexed by and in the other components.
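As a concrete illustration of the direct-sum condition, mutual complementarity of two subspaces given by generators can be checked by comparing dimensions over a prime field. This is a minimal sketch in Python; the helper names `rank_mod_p` and `is_direct_sum` and the small example are our own, not from the text.

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over the prime field GF(p)."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], p - 2, p)   # inverse mod p (p prime)
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                f = m[r][c]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def is_direct_sum(gens1, gens2, n, p):
    """U1 and U2 are complementary in GF(p)^n iff dim U1 + dim U2 = dim(U1+U2) = n."""
    d1, d2 = rank_mod_p(gens1, p), rank_mod_p(gens2, p)
    return d1 + d2 == rank_mod_p(gens1 + gens2, p) == n

# In GF(2)^3: span{e1} and span{e2, e3} are mutually complementary.
print(is_direct_sum([[1, 0, 0]], [[0, 1, 0], [0, 0, 1]], 3, 2))  # True
```

The check uses that a sum is direct exactly when dimensions add up with no drop, which is the equivalent condition stated above.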
There is a correspondence between linear rank inequalities and information inequalities associated with a certain class of random variables induced by vector spaces [9, Theorem 2]; we explain it here: let be chosen uniformly at random from the set of linear functions from to ; for , , define the random variables , , . Then
The difference between entropy and dimension is a fixed positive scalar factor. Therefore, any inequality satisfied by entropies is also an inequality satisfied by dimensions of vector spaces; for simplicity, we identify these parameters, i.e., the entropy of , , is
So, we can think of , , as a tuple of random variables induced in the described way; such random variables are called linear random variables over .
The mutual information of and is given by If is a subspace of a subspace , then we denote the codimension of in by We have that . Conditional mutual information is expressed in a similar way.
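For linear random variables, the identification of entropy with dimension gives the following standard expressions (a sketch in our own notation $A,B,C$ for subspaces, with entropy measured in units of $\log|\mathbb{F}|$ so the scalar factor disappears):

```latex
% Entropies of linear random variables, written as dimensions
H(A)          = \dim(A), \qquad H(A,B) = \dim(A+B), \\
H(A \mid B)   = \dim(A+B) - \dim(B) = \operatorname{codim}_{A+B}(B), \\
I(A;B)        = \dim(A) + \dim(B) - \dim(A+B) = \dim(A \cap B), \\
I(A;B \mid C) = \dim(A+C) + \dim(B+C) - \dim(A+B+C) - \dim(C).
```

The last equality for $I(A;B)$ uses the modularity of dimension, $\dim(A) + \dim(B) = \dim(A+B) + \dim(A \cap B)$.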
We formally define the inequalities that concern this paper:
Let be a positive integer, let be a set of primes, and let , , be subsets of . Let for . A linear inequality of the form
- is called a characteristic-dependent linear rank inequality if it holds for all jointly distributed linear random variables, , over finite fields with characteristic in .
- is called a linear rank inequality if it is a characteristic-dependent linear rank inequality with equal to the set of all prime numbers.
- is called an information inequality if the inequality holds for all jointly distributed random variables.
The following inequality is the first linear rank inequality which is not an information inequality.
(Ingleton’s inequality) For any , , and vector subspaces of a finite dimensional vector space,
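Ingleton's inequality can be checked numerically. The following sketch (our own illustration, with hypothetical helper names) draws random subspaces $A,B,C,D$ of $\mathrm{GF}(2)^5$ and verifies the rank form $r(A)+r(B)+r(C{+}D)+r(A{+}B{+}C)+r(A{+}B{+}D) \le r(A{+}B)+r(A{+}C)+r(A{+}D)+r(B{+}C)+r(B{+}D)$:

```python
import random

def rank_gf2(rows):
    """Rank over GF(2); vectors are encoded as integer bitmasks."""
    basis = {}  # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            pivot = cur.bit_length() - 1
            if pivot in basis:
                cur ^= basis[pivot]
            else:
                basis[pivot] = cur
                break
    return len(basis)

def dim_sum(*subspaces):
    """Dimension of the sum of subspaces, each given by a list of generators."""
    return rank_gf2([v for gens in subspaces for v in gens])

random.seed(0)
for _ in range(1000):
    # four random subspaces of GF(2)^5, two generators each
    A, B, C, D = ([random.getrandbits(5) for _ in range(2)] for _ in range(4))
    lhs = dim_sum(A) + dim_sum(B) + dim_sum(C, D) + dim_sum(A, B, C) + dim_sum(A, B, D)
    rhs = (dim_sum(A, B) + dim_sum(A, C) + dim_sum(A, D)
           + dim_sum(B, C) + dim_sum(B, D))
    assert lhs <= rhs  # never fails: subspaces are a representable configuration
print("Ingleton holds on all samples")
```

Since subspace dimensions always form a representable polymatroid, no random trial can violate the inequality; a genuine counterexample requires non-representable (entropic) random variables.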
We are interested in finding interesting characteristic-dependent linear rank inequalities, i.e., those where is a proper subset of the set of primes.
2.1 Producing inequalities: How to find and use a suitable binary matrix
The following theorem is the principal theorem of this paper; it gives a method to produce pairs of characteristic-dependent linear rank inequalities from suitable binary matrices. The proof is presented in subsection 2.2. We use the following notation: , and ; for a binary matrix , we denote , with .
Let be a binary matrix over , and an integer. We suppose that if does not divide , and in the other case. Let , , , and be vector subspaces of a finite dimensional vector space over . Then
(i) The following inequality is a characteristic-dependent linear rank inequality over fields whose characteristic divides ,
(ii) The following inequality is a characteristic-dependent linear rank inequality over fields whose characteristic does not divide ,
where ; ; if there exists in such that , and is empty otherwise; and is a finite sum of entropies given by
where give a partition of into intervals of maximum length.
The first inequality does not hold in general over vector spaces whose characteristic does not divide , and the second inequality does not hold in general over vector spaces whose characteristic divides . For a counterexample, in , take the vector spaces , , , , and . Then, when does not divide , the first inequality does not hold; and when divides , the second inequality does not hold.
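The role of the characteristic can already be seen at the level of matrix rank. A classical illustration (our own example, not one of the matrices of Theorem 3) is the 0/1 matrix $J_n - I_n$, with ones everywhere off the diagonal; its determinant is $\pm(n-1)$, so it has full rank $n$ exactly when the characteristic does not divide $n-1$, and rank $n-1$ otherwise:

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], p - 2, p)        # inverse mod prime p
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                f = m[r][c]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

n = 4
M = [[0 if i == j else 1 for j in range(n)] for i in range(n)]  # J_4 - I_4
print(rank_mod_p(M, 2), rank_mod_p(M, 3))  # -> 4 3 (det = -3: nonzero mod 2, zero mod 3)
```

This is exactly the kind of characteristic-sensitive rank behavior of binary matrices that the theorem exploits as a guide.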
If the dimension of the vector space is at most , then the inequalities in Theorem 3 are true over any field.
If some vector space in Theorem 3 is the zero space, the resulting inequalities are linear rank inequalities. (One can use software such as Xitip to check that they must be Shannon information inequalities.)
Below we show a class of inequalities that are true over finite sets of primes (i.e. sets of the form ), and another class of inequalities that are true over co-finite sets of primes (i.e. sets of the form ).
Taking and an integer such that and , the following inequalities are produced using as a guide square matrices with column vectors of the form , , with as described in figure 1, left side. The rank of is when does not divide and in the other case. We remark that in [8] we used the case , so the columns of the matrices were only of the form presented in figure 1, right side. Let , , , , , , , , be subspaces of a finite-dimensional vector space over a scalar field . We have:
(a) If divides ,
(b) If does not divide ,
Corollary 5 shows that each inequality presented in example 6 cannot be deduced from a higher-order inequality by setting some variables to zero. In fact, using Corollary 4, we can say more about class (a) of these inequalities.
For and prime, the function that counts all the powers of less than or equal to is denoted by . In example 6, inequalities in variables, which are true over fields whose characteristic is , are produced. By Corollary 4, each of these inequalities holds over any characteristic when the dimension of is at most . Also, each inequality is determined by , and this number can range over the powers of less than or equal to . This means that each inequality is true in at least one vector space where the other inequalities are not true. Therefore, none of these inequalities can be deduced from the other inequalities, much less from combinations of them with linear rank inequalities, without violating this property. We have the following corollary.
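The counting function described above can be sketched as follows. This is a hypothetical helper of our own; we adopt the convention that only powers $p^k$ with $k \ge 1$ are counted, which agrees with $\lfloor \log_p n \rfloor$:

```python
def count_powers(p, n):
    """Count the powers p, p^2, p^3, ... that are <= n (equals floor(log_p(n)))."""
    count, q = 0, p
    while q <= n:
        count += 1
        q *= p
    return count

print(count_powers(2, 100))  # 2, 4, 8, 16, 32, 64 -> 6
```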
For each and each prime , there are at least independent inequalities in variables which are characteristic-dependent linear rank inequalities that are true over fields whose characteristic is .
2.2 Proof of Theorem 3:
In a general way, we show how to build characteristic-dependent linear rank inequalities from dependency relations in a certain type of binary matrices. We proceed in three steps:
A. Finding an equation.
B. Conditional characteristic-dependent linear rank inequalities.
C. Characteristic-dependent linear rank inequalities.
First of all, we show how to abstract an equation as presented in [8, Lemma 3]. Second, we define “conditional-linear rank inequalities” as presented in [8, Lemmas 5 and 6]. Third, the technique of upper bounds used in [1, for a particular case] and improved in [8, for a family of binary matrices] is applied.
A. Finding an equation: Let and . Let be a binary matrix over , . We make the following correspondence between the columns of and the canonical projection functions on :
We suppose that if does not divide , and that if divides , for . Taking into account the previous correspondence, we can define the following propositions, whose proofs are omitted:
We get an equation of the form:
The previous argument can easily be generalized to vector subspaces , , , of a vector space over a field , where , , , are mutually complementary in and is such that the sum of and is a direct sum for all ; such a collection of spaces is called a tuple that satisfies the condition of complementary vector spaces. Formally, we have:
When exists, a tuple that satisfies the condition of complementary vector spaces satisfies
B. Conditional characteristic-dependent linear rank inequalities: In the previous step we noticed that the dependence relations of can be expressed using projections of a suitable space . In fact, we can derive more properties as follows: from , we derive
This equality is easily proven. The following claim uses it to find inequalities that depend on the characteristic of , in which the involved spaces have some dependency relationships expressed by . We denote ; ; if there exists in such that , and is empty otherwise.
For a tuple of vector subspaces that satisfies the condition of complementary vector subspaces, consider the following conditions:
(i) for such that . (ii) for .
(iii) for . (iv) for .
We have that
a. If conditions (i), (ii) and (iii) hold over a field whose characteristic divides , then
b. If conditions (ii) and (iv) hold over a field whose characteristic does not divide , then
C. Characteristic-dependent linear rank inequalities: We now find vector subspaces that satisfy the conditions of the previous claim. Let be a tuple of arbitrary vector subspaces of a finite dimensional vector space over a finite field .
From , , , and , we obtain a tuple that satisfies the condition of complementary spaces, as obtained in , which satisfies:
Additionally, for , we can take some elements with such that it is possible to build a partition of into intervals , of maximum length. So,
Before continuing, we need the following three statements:
The tuple defined by
satisfies the condition of complementary spaces and condition (i), and
We obviously have that . Now, for , we show that
Indeed, we show the case ; the general case follows by induction. We note that if and only if , so this case is trivial. Otherwise, there exist in and in such that ; then . The claim is obtained by noting that generates [8, Remark 4]. Taking , we obtain that , so the condition of complementary spaces is satisfied. Also, we have the equation:
which also implies the desired upper bound on . Now, condition (i) is straightforward because and each , , has the same dimension. ∎
For , we define . We have that the tuple satisfies conditions (i), (ii), (iii) and