1. Introduction
An alternate way of looking at the rank of a matrix $A$ is as the smallest integer $r$ such that $A$ can be written as a sum of $r$ rank-one matrices. The definition of tensor rank is a generalization of this idea. We consider the tensor space $V_1 \otimes V_2 \otimes \cdots \otimes V_d$, where $V_1, \dots, V_d$ denote finite-dimensional vector spaces over a field $\mathbb{F}$. Tensors of the form $v_1 \otimes v_2 \otimes \cdots \otimes v_d$ with $v_i \in V_i$ are called simple (or rank-one) tensors.
Definition 1.1.
The tensor rank $R(T)$ of a tensor $T$ is defined as the smallest integer $r$ such that $T = T_1 + \cdots + T_r$ for simple tensors $T_i$.
Let $X_r$ denote the set of tensors of rank at most $r$. Unfortunately, $X_r$ is not Zariski-closed in general, giving rise to the notion of border rank.
Definition 1.2.
The border rank $\underline{R}(T)$ of a tensor $T$ is the smallest integer $r$ such that $T \in \overline{X_r}$, the Zariski closure of $X_r$.
The tensor and border rank of certain tensors have been well studied due to their connections with computational complexity. For instance, the multiplication of an $n \times m$ matrix with an $m \times p$ matrix is a bilinear map, and can hence be seen as a tensor in $M_{n,m}^* \otimes M_{m,p}^* \otimes M_{n,p}$, where $M_{a,b}$ is the space of $a \times b$ matrices. Every expression for this matrix multiplication tensor in terms of simple tensors gives rise to an algorithm for matrix multiplication, and hence the rank of this tensor is a measure of the complexity of the problem. For example, the tensor and border rank of the matrix multiplication tensor for $n = m = p = 2$ is $7$, and the complexity of the resulting algorithm (Strassen's algorithm) is $O(n^{\log_2 7}) \approx O(n^{2.81})$. Various improvements have since been made, see [2, 12].
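To make this correspondence between decompositions and algorithms concrete, the following sketch (a standard exercise, not taken from this paper) builds the $2 \times 2$ matrix multiplication tensor and checks that Strassen's seven products reconstruct it exactly, exhibiting a rank-$7$ decomposition:

```python
# Entry positions of a 2x2 matrix, in a fixed order; the same index
# constants are reused for the entries of A, B, and C.
POS = [(0, 0), (0, 1), (1, 0), (1, 1)]
A11, A12, A21, A22 = 0, 1, 2, 3

# T[x][y][z] = 1 iff A_x * B_y contributes to entry z of C = A @ B.
T = [[[0] * 4 for _ in range(4)] for _ in range(4)]
for x, (i, j) in enumerate(POS):
    for y, (k, l) in enumerate(POS):
        if j == k:
            T[x][y][POS.index((i, l))] += 1

def vec(*pairs):
    # Coefficient vector on the four matrix entries.
    v = [0] * 4
    for idx, coeff in pairs:
        v[idx] += coeff
    return v

# Strassen's seven simple tensors: (coefficients on A, on B, into C).
strassen = [
    # M1 = (A11 + A22)(B11 + B22) -> C11 + C22
    (vec((A11, 1), (A22, 1)), vec((A11, 1), (A22, 1)), vec((A11, 1), (A22, 1))),
    # M2 = (A21 + A22) B11 -> C21 - C22
    (vec((A21, 1), (A22, 1)), vec((A11, 1)), vec((A21, 1), (A22, -1))),
    # M3 = A11 (B12 - B22) -> C12 + C22
    (vec((A11, 1)), vec((A12, 1), (A22, -1)), vec((A12, 1), (A22, 1))),
    # M4 = A22 (B21 - B11) -> C11 + C21
    (vec((A22, 1)), vec((A21, 1), (A11, -1)), vec((A11, 1), (A21, 1))),
    # M5 = (A11 + A12) B22 -> -C11 + C12
    (vec((A11, 1), (A12, 1)), vec((A22, 1)), vec((A11, -1), (A12, 1))),
    # M6 = (A21 - A11)(B11 + B12) -> C22
    (vec((A21, 1), (A11, -1)), vec((A11, 1), (A12, 1)), vec((A22, 1))),
    # M7 = (A12 - A22)(B21 + B22) -> C11
    (vec((A12, 1), (A22, -1)), vec((A21, 1), (A22, 1)), vec((A11, 1))),
]

# Sum of the seven simple tensors; should equal T entrywise.
S = [[[sum(a[x] * b[y] * c[z] for a, b, c in strassen)
      for z in range(4)] for y in range(4)] for x in range(4)]
```

Since the naive algorithm corresponds to the defining rank-$8$ expression (one simple tensor per product $A_{ij}B_{jk}$), the decomposition above is what yields the exponent $\log_2 7$.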
In this paper, we study the determinant and permanent tensors. Let $e_1, \dots, e_n$ denote the standard basis for $\mathbb{F}^n$, and let $S_n$ denote the symmetric group on $n$ letters. The determinant tensor is
$$\det\nolimits_n = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, e_{\sigma(1)} \otimes e_{\sigma(2)} \otimes \cdots \otimes e_{\sigma(n)},$$
where $\operatorname{sgn}(\sigma)$ is the sign of the permutation $\sigma$. Similarly, the permanent tensor is defined as
$$\operatorname{per}_n = \sum_{\sigma \in S_n} e_{\sigma(1)} \otimes e_{\sigma(2)} \otimes \cdots \otimes e_{\sigma(n)}.$$
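Both tensors are easy to write down explicitly; here is a minimal sketch (the zero-indexed permutations and sparse dictionary representation are our choices, not the paper's):

```python
from itertools import permutations

def sign(perm):
    # Parity of a permutation, computed by counting inversions.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_tensor(n):
    # Coefficient of e_{p(1)} (x) ... (x) e_{p(n)} is sgn(p); all others are 0.
    return {p: sign(p) for p in permutations(range(n))}

def per_tensor(n):
    # Same support as the determinant, but every coefficient is +1.
    return {p: 1 for p in permutations(range(n))}

det3, per3 = det_tensor(3), per_tensor(3)
```

In characteristic two the signs disappear, so $\det_3$ and $\operatorname{per}_3$ coincide, as observed below; the defining expression also gives the naive upper bound of $3! = 6$ simple tensors.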
The determinant and permanent tensors have been studied before, see [3] for known upper and lower bounds. For $n = 3$, the tensor ranks of $\det_3$ and $\operatorname{per}_3$ were precisely determined by Ilten and Teitler in [10] to be $5$ and $4$ respectively. This is done by analyzing certain Fano schemes parametrizing linear subspaces contained in the hypersurfaces cut out by the determinant and the permanent. Using simpler linear-algebraic techniques, Derksen and Makam [6] show that the border ranks (and tensor ranks) of $\det_3$ and $\operatorname{per}_3$ are $5$ and $4$ respectively. Moreover, they show that this holds for all fields of characteristic not equal to two. Observe that in characteristic $2$, the determinant and permanent tensors are equal. In this paper, we remove the dependence on the characteristic of the field for the tensor rank of the determinant. The main result of this paper is the following:
Theorem 1.3.
For any field $\mathbb{F}$, the tensor rank of $\det_3$ is $5$.
This allows us to extend a result of Derksen in [3] to arbitrary characteristic.
Corollary 1.4.
For any field $\mathbb{F}$, we have $R(\det_n) \leq \left(\tfrac{5}{6}\right)^{\lfloor n/3 \rfloor} n!$.
1.1. Organization
In Section 2 we present the standard methods for obtaining lower bounds on tensor and border rank, specifically flattenings and Koszul flattenings. In Section 3, we study $\det_3$ and $\operatorname{per}_3$, and show upper bounds via computer-aided search and lower bounds via case analysis and flattenings. We then study the $4 \times 4$ and $5 \times 5$ determinant and permanent tensors in Section 4, and the symmetric determinant and permanent tensors in Section 5. Finally, we turn our attention to binary tensors, and study how the rank of a binary tensor varies with the characteristic of the underlying field in Section 6.
2. Lower Bounds
In this section, we recount the standard techniques for showing lower bounds on the tensor and border rank of tensors. We first describe Strassen’s Theorem for deriving lower bounds, and then we introduce flattenings in order to give a slight generalization of the result. Finally, we describe Koszul flattenings. The generalization of Strassen’s Theorem that we present is a special case of Koszul flattenings.
2.1. Flattenings
The core argument to obtain lower bounds is the following. Suppose $f$ is a polynomial that vanishes on $X_r$, the set of all tensors of tensor rank at most $r$. Now, if $f(T) \neq 0$ for some tensor $T$, then we can deduce that $R(T) > r$. In fact we even have $\underline{R}(T) > r$. This is because the zero set of $f$ is a Zariski-closed set containing $X_r$, and hence contains $\overline{X_r}$. Hence $T \notin \overline{X_r}$, which means $\underline{R}(T) > r$.
Thus we can lower bound the tensor rank or border rank of $T$ if we can find polynomials that vanish on $X_r$. It turns out that it is difficult to find these polynomials in general for large $r$. One of the first nontrivial results in this direction was given by Strassen on so-called slice tensors, see [2].
Theorem 2.1 (Strassen).
Let $T = \sum_{i=1}^{3} e_i \otimes T_i \in \mathbb{F}^3 \otimes \mathbb{F}^n \otimes \mathbb{F}^n$. Viewing each $T_i$ as an $n \times n$ matrix, if $T_1$ is invertible, then
$$\underline{R}(T) \geq n + \tfrac{1}{2} \operatorname{rank}\!\left(T_2 T_1^{-1} T_3 - T_3 T_1^{-1} T_2\right).$$
In essence, Strassen's Theorem says that for a tensor $T$ as above, suitable minors of the matrix appearing in the bound vanish on all tensors of sufficiently small border rank. A more modern way to view the above theorem is as follows.
Proposition 2.2.
Let $T$ be as in Theorem 2.1. Then
$$\underline{R}(T) \geq \frac{1}{2} \operatorname{rank} \begin{pmatrix} -T_2 & T_1 & 0 \\ -T_3 & 0 & T_1 \\ 0 & -T_3 & T_2 \end{pmatrix}.$$
When $T_1$ is invertible, the following (block) Gaussian elimination procedure shows that we recover Strassen's result:
$$\begin{pmatrix} -T_2 & T_1 & 0 \\ -T_3 & 0 & T_1 \\ 0 & -T_3 & T_2 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 0 & T_1 & 0 \\ 0 & 0 & T_1 \\ T_2 T_1^{-1} T_3 - T_3 T_1^{-1} T_2 & -T_3 & T_2 \end{pmatrix},$$
and the matrix on the right has rank $2n + \operatorname{rank}(T_2 T_1^{-1} T_3 - T_3 T_1^{-1} T_2)$.
We remark here that while doing Gaussian elimination on block matrices, one can add a left-multiplied block row to other block rows, and a right-multiplied block column to other block columns (see [4]). The above proposition is a generalization of Strassen's result, because it does not require $T_1$ to be invertible.
Proposition 2.2 is a special case of a flattening. A flattening of a tensor space $V_1 \otimes V_2 \otimes V_3$ is any linear map $F \colon V_1 \otimes V_2 \otimes V_3 \to \operatorname{Hom}(A, B)$ for vector spaces $A$ and $B$. The following straightforward proposition shows how flattenings can be used to show explicit lower bounds on the border rank of tensors.
Proposition 2.3.
Let $F \colon V_1 \otimes V_2 \otimes V_3 \to \operatorname{Hom}(A, B)$ be a linear map. Suppose that for all simple tensors $S$ we have $\operatorname{rank} F(S) \leq c$. Then for any tensor $T$, we have
$$\underline{R}(T) \geq \frac{\operatorname{rank} F(T)}{c}.$$
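The simplest flattening groups the last two factors together, viewing $T \in V_1 \otimes V_2 \otimes V_3$ as a matrix $V_1^* \to V_2 \otimes V_3$; a simple tensor maps to a rank-one matrix, so $c = 1$ and the matrix rank itself lower-bounds the border rank. Here is a sketch over the rationals (the exact-elimination rank helper is our own):

```python
from fractions import Fraction
from itertools import permutations

def rank(rows):
    # Rank over Q by exact Gaussian elimination.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def sign(p):
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

# det_3 flattened to a 3 x 9 matrix: row i holds the slice T[i][.][.].
flat = [[0] * 9 for _ in range(3)]
for p in permutations(range(3)):
    flat[p[0]][3 * p[1] + p[2]] = sign(p)

bound = rank(flat)  # lower bound on the border rank of det_3
```

This only gives $\underline{R}(\det_3) \geq 3$ (the three slices are linearly independent), which is why the sharper Koszul flattenings of Section 2.2 are needed.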
We can now prove Proposition 2.2: the flattening appearing there sends every simple tensor to a matrix of rank at most $2$, so Proposition 2.3 gives the stated bound.
2.2. Koszul flattenings
While it is difficult to find flattenings that give nontrivial lower bounds, one class of flattenings that has proven useful is that of Koszul flattenings.
Landsberg has constructed explicit tensors in $\mathbb{F}^n \otimes \mathbb{F}^n \otimes \mathbb{F}^n$ of large border rank for $n$ even and odd, see [11]. The technique used to show these lower bounds was Koszul flattenings. In [5], the lower bound in the case of odd $n$ is improved using a concavity result from [4], and in [6], these results are extended to an arbitrary field.
To describe Koszul flattenings, we first construct a linear map as follows. Let $p \leq n$ be positive integers, and let $U$ be an $n$-dimensional vector space over $\mathbb{F}$. Let $\Lambda^p U$ denote the $p$-th exterior power of the vector space $U$. We have a linear map $\phi \colon U \to \operatorname{Hom}(\Lambda^p U, \Lambda^{p+1} U)$ given by $\phi(u) = u \wedge (-)$.
For the tensor space $U \otimes V \otimes W$, we define the Koszul flattening $T \mapsto T^{\wedge p}$, where $T^{\wedge p} \in \operatorname{Hom}(\Lambda^p U \otimes V^*, \Lambda^{p+1} U \otimes W)$, in the following way. First we have the map
$$U \otimes V \otimes W \xrightarrow{\ \phi \otimes \operatorname{id}\ } \operatorname{Hom}(\Lambda^p U, \Lambda^{p+1} U) \otimes V \otimes W.$$
Identifying $V \otimes W$ with $\operatorname{Hom}(V^*, W)$, we get a map
$$\operatorname{Hom}(\Lambda^p U, \Lambda^{p+1} U) \otimes \operatorname{Hom}(V^*, W) \longrightarrow \operatorname{Hom}(\Lambda^p U \otimes V^*, \Lambda^{p+1} U \otimes W),$$
where the last map is given by the Kronecker product of matrices. Composing the above maps, we get the Koszul flattening $T \mapsto T^{\wedge p}$.
Before we can analyze the effect of the Koszul flattening on simple tensors, note the following property of $\phi$, see for example [5].
Lemma 2.4.
For all nonzero $u \in U$, we have $\operatorname{rank} \phi(u) = \binom{n-1}{p}$.
We can now show how the Koszul flattening gives us lower bounds on tensor rank.
Lemma 2.5.
For any simple tensor $S \in U \otimes V \otimes W$, we have $\operatorname{rank} S^{\wedge p} \leq \binom{n-1}{p}$.
Proof.
By the above lemma, we see that $\phi \otimes \operatorname{id}$ takes a rank-one tensor to a tensor of rank $\binom{n-1}{p}$. Since all subsequent maps do not increase rank, we get the required conclusion. In fact, a more careful analysis will show that $\operatorname{rank} S^{\wedge p} = \binom{n-1}{p}$ for any nonzero simple tensor $S$, but we will not need this. ∎
Proposition 2.6.
For a tensor $T \in U \otimes V \otimes W$, we have
$$\underline{R}(T) \geq \frac{\operatorname{rank} T^{\wedge p}}{\binom{n-1}{p}}.$$
The flattening in Proposition 2.2 is a special case of a Koszul flattening, for $p = 1$ and $n = 3$.
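For $p = 1$ and $n = 3$, one convenient matrix presentation of $T^{\wedge 1}$ assembles the three slices of $T = \sum_i e_i \otimes T_i$ into a $9 \times 9$ block matrix; the sign and basis conventions below are our own choices (block-row and block-column sign flips do not change the rank). The sketch checks Lemma 2.5 on a simple tensor, whose image has rank $\binom{2}{1} = 2$:

```python
from fractions import Fraction

def rank(rows):
    # Rank over Q by exact Gaussian elimination.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def koszul_p1(T1, T2, T3):
    # One presentation of the p = 1 Koszul flattening for n = 3:
    # a 9x9 block matrix built from the three 3x3 slices.
    Z = [[0] * 3 for _ in range(3)]
    neg = lambda M: [[-x for x in row] for row in M]
    blocks = [[neg(T2), T1, Z], [neg(T3), Z, T1], [Z, neg(T3), T2]]
    return [blocks[I][0][i] + blocks[I][1][i] + blocks[I][2][i]
            for I in range(3) for i in range(3)]

# A simple tensor u (x) v (x) w has slices T_i = u_i * (v w^T).
u, v, w = [1, 2, 3], [1, 0, 2], [0, 1, 1]
slices = [[[u[i] * v[j] * w[k] for k in range(3)] for j in range(3)]
          for i in range(3)]
M = koszul_p1(*slices)
```

By Proposition 2.6, the rank of this $9 \times 9$ matrix divided by $2$ is then a lower bound on $\underline{R}(T)$ for any $T \in \mathbb{F}^3 \otimes \mathbb{F}^3 \otimes \mathbb{F}^3$.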
3. Tensor rank of the determinant tensor
In the first part of this section, we consider the known upper bounds for the ranks of $\det_3$ and $\operatorname{per}_3$, and give explicit expressions for them that hold in any characteristic. In the second part, we use the flattening-based techniques to give matching lower bounds.
3.1. Upper bounds
An explicit expression for a tensor $T$ in terms of simple tensors naturally gives an upper bound for the tensor rank and border rank of $T$. Glynn's formula (see [9]) for the permanent tensor is
$$\operatorname{per}_n = \frac{1}{2^{n-1}} \sum_{\substack{\delta \in \{\pm 1\}^n \\ \delta_1 = 1}} \Big(\prod_{k=1}^{n} \delta_k\Big) \Big(\sum_{k=1}^{n} \delta_k e_k\Big)^{\otimes n}.$$
In particular, this shows that $R(\operatorname{per}_n) \leq 2^{n-1}$ as long as the characteristic is not two. For the determinant tensor, known upper bounds are much weaker. The best known upper bound comes from Derksen's formula (see [3]), which writes $\det_3$ as a sum of five simple tensors.
Derksen uses this expression along with Laplace expansions to show $R(\det_n) \leq \left(\tfrac{5}{6}\right)^{\lfloor n/3 \rfloor} n!$.
Unfortunately, both Glynn's and Derksen's expressions fail in characteristic two because they have denominators which are multiples of two. Hence the only known upper bound for the tensor rank of $\det_3$ and $\operatorname{per}_3$ in characteristic two was $6$, given by the defining expression.
We find expressions for both $\det_3$ and $\operatorname{per}_3$ as sums of five simple tensors that are valid over any field $\mathbb{F}$. To do this, we first consider $\det_3$ over $\mathbb{F}_2$, the field of two elements (note that $\det_3 = \operatorname{per}_3$ over $\mathbb{F}_2$). With the help of a computer, we found a way to write $\det_3$ as a sum of five simple tensors over $\mathbb{F}_2$. Then by carefully choosing the signs, we were able to find expressions for both $\det_3$ and $\operatorname{per}_3$ that work over any field.
Corollary 3.1.
For any field $\mathbb{F}$, we have $R(\det_3) \leq 5$ and $R(\operatorname{per}_3) \leq 5$.
3.2. Lower Bounds
In every characteristic other than two, a direct application of Proposition 2.2 gives that $\underline{R}(\det_3) = 5$, see [6]. Let us recall the determinant tensor
$$\det\nolimits_3 = \sum_{\sigma \in S_3} \operatorname{sgn}(\sigma)\, e_{\sigma(1)} \otimes e_{\sigma(2)} \otimes e_{\sigma(3)}.$$
Identifying $\mathbb{F}^3 \otimes \mathbb{F}^3 \otimes \mathbb{F}^3$ with $\mathbb{F}^3 \otimes \operatorname{Hom}(\mathbb{F}^3, \mathbb{F}^3)$, we can write $\det_3 = \sum_{i=1}^{3} e_i \otimes T_i$ for $3 \times 3$ slice matrices $T_i$. Under this identification, we have
$$T_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}, \quad T_2 = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad T_3 = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
We recall the proof of the following proposition from [6], as we will modify the proof to remove the dependence on characteristic.
Proposition 3.3 ([6]).
We have $\underline{R}(\det_3) = 5$ if $\operatorname{char} \mathbb{F} \neq 2$.
Proof.
Applying Proposition 2.2, we get a $9 \times 9$ matrix built from the slices $T_1, T_2, T_3$ above.
This matrix contains only 12 nonzero entries, each of the form $\pm 1$. Six of these entries are in a column or a row with no other nonzero entry, reducing our computation to a $3 \times 3$ minor. This minor has rank $3$ as long as the characteristic is not two, and hence we have $\underline{R}(\det_3) \geq 9/2$. But since border rank is an integer, we have $\underline{R}(\det_3) \geq 5$. On the other hand, we have $R(\det_3) \leq 5$ by the expression in Section 3.1, giving us the required conclusion.
∎
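The rank computation behind this proof can be replicated mechanically. Below is a sketch using our own presentation of the $9 \times 9$ matrix (our sign conventions may differ from the paper's display, but block sign flips do not affect ranks): it has 12 nonzero entries, rank $9$ over $\mathbb{Q}$, and only rank $8$ modulo $2$.

```python
from fractions import Fraction
from itertools import permutations

def rank_q(rows):
    # Rank over Q by exact Gaussian elimination.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def rank_mod2(rows):
    # Rank over F_2 by Gaussian elimination modulo 2.
    m = [[x % 2 for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                m[i] = [(a + b) % 2 for a, b in zip(m[i], m[r])]
        r += 1
    return r

def sign(p):
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return -1 if inv % 2 else 1

# Slices of det_3: the coefficient of e_i (x) e_j (x) e_k is sgn(i, j, k).
T = [[[0] * 3 for _ in range(3)] for _ in range(3)]
for p in permutations(range(3)):
    T[p[0]][p[1]][p[2]] = sign(p)
T1, T2, T3 = T

# The 9x9 matrix of Proposition 2.2 (p = 1 Koszul flattening presentation).
Z = [[0] * 3 for _ in range(3)]
neg = lambda M: [[-x for x in row] for row in M]
blocks = [[neg(T2), T1, Z], [neg(T3), Z, T1], [Z, neg(T3), T2]]
M = [blocks[I][0][i] + blocks[I][1][i] + blocks[I][2][i]
     for I in range(3) for i in range(3)]
nonzero = sum(1 for row in M for x in row if x)
```

Dividing by $2$ as in Proposition 2.2 recovers the bounds in the text: $\underline{R}(\det_3) \geq \lceil 9/2 \rceil = 5$ outside characteristic two, but only $\geq 4$ in characteristic two.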
The problem with this argument in characteristic two is that the aforementioned $3 \times 3$ minor has rank $2$ instead of $3$. This only gives $\underline{R}(\det_3) \geq 4$. Nevertheless, we are able to modify the argument to show that the tensor rank of the determinant is $5$. First we need a simple lemma.
Lemma 3.4.
Let $T \in V_1 \otimes V_2 \otimes V_3$. Suppose $\underline{R}(T - S) \geq r$ for every rank-one tensor $S$. Then we have $R(T) \geq r + 1$.
Proof.
Suppose $R(T) \leq r$. Then we have $T = S_1 + \cdots + S_s$ with $s \leq r$, where the $S_i$ are rank-one tensors. Now, take $S = S_s$ to see that $\underline{R}(T - S) \leq R(T - S) \leq r - 1 < r$, contradicting the hypothesis. ∎
Now, we are ready to prove Theorem 1.3.
Proof of Theorem 1.3.
We want to prove that $R(\det_3) \geq 5$. By the above lemma, it suffices to prove that $\underline{R}(\det_3 - S) \geq 4$ for every rank-one tensor $S$. Observe that $\operatorname{SL}_3(\mathbb{F})$ acts on $\mathbb{F}^3 \otimes \mathbb{F}^3 \otimes \mathbb{F}^3$ by $g \cdot (v_1 \otimes v_2 \otimes v_3) = g v_1 \otimes g v_2 \otimes g v_3$ for $g \in \operatorname{SL}_3(\mathbb{F})$ and $v_i \in \mathbb{F}^3$. This action preserves tensor rank and border rank since each $g$ acts by a linear map preserving the set of rank-one tensors. There is also an action of the symmetric group on three letters that permutes the tensor factors. This action too preserves tensor rank and border rank, and further it commutes with the action of $\operatorname{SL}_3(\mathbb{F})$. Thus we have an action of $\operatorname{SL}_3(\mathbb{F}) \times S_3$ on $\mathbb{F}^3 \otimes \mathbb{F}^3 \otimes \mathbb{F}^3$ that preserves tensor rank and border rank. Further, the tensor $\det_3$ is invariant under this action up to sign, and sign changes do not affect tensor rank or border rank.
Now, let $S = u \otimes v \otimes w$ be a rank-one tensor. We want to show $\underline{R}(\det_3 - S) \geq 4$. There are three cases, according to the dimension of the span of $u$, $v$, $w$.

Case 1: $u$, $v$, $w$ are linearly independent. Then w.l.o.g., we can assume that $u$, $v$, $w$ are scalar multiples of $e_1$, $e_2$, $e_3$ respectively, by applying the action of an appropriate $g \in \operatorname{SL}_3(\mathbb{F})$. Now, apply Proposition 2.2 to $\det_3 - S$.
Once again, observe that certain entries are in a column or row with no other nonzero entries, reducing our computation to a minor. It is easy to check that this gives $\underline{R}(\det_3 - S) \geq 4$ in all characteristics.

Case 2: The span of $u$, $v$, $w$ is $2$-dimensional. In this case, w.l.o.g., we can assume that $u$, $v$, $w$ lie in the span of $e_1$ and $e_2$, by using the action of $\operatorname{SL}_3(\mathbb{F})$ as in the previous case. Now, apply Proposition 2.2 to $\det_3 - S$.
Applying suitable row and column transformations, we see that we are back to computing the rank of the matrix in Proposition 3.3, which as we have seen is at least $8$ in all characteristics. Hence $\underline{R}(\det_3 - S) \geq 4$ as required.

Case 3: The span of $u$, $v$, $w$ is $1$-dimensional. Once again, w.l.o.g., we can assume that $u$, $v$, $w$ are multiples of $e_1$. We are reduced to computing the rank of the matrix obtained from Proposition 2.2.
But again, suitable row transformations put us back to computing the rank of the matrix in Proposition 3.3. The rest of the analysis is as in the previous case.
∎
While we have successfully computed the tensor rank, the border rank remains undetermined.
Problem 3.5.
What is the border rank of $\det_3$ over an algebraically closed field of characteristic two?
4. $4 \times 4$ and $5 \times 5$ determinant and permanent tensors
In this section we study the ranks of the $4 \times 4$ and $5 \times 5$ determinant and permanent tensors. Assume $\mathbb{F}$ is a field of characteristic $0$. From the results in [6], we know that $\det_3$ has strictly larger tensor rank and border rank than $\operatorname{per}_3$.
We would like to separate $\det_n$ and $\operatorname{per}_n$ for larger $n$. The upper bounds we know for the tensor rank and border rank of $\operatorname{per}_n$ are stronger than the ones we know for $\det_n$. On the other hand, the best known lower bounds for both are the same, see [3]. Using Koszul flattenings, we can separate $\det_4$ and $\operatorname{per}_4$.
Proposition 4.1.
We have
Proof.
The upper bounds are due to Glynn and Derksen, as mentioned in Section 3.1. The lower bounds come from applying Proposition 2.6. This requires finding the rank of a large matrix, which we do with the help of a computer. We omit the details, referring the interested reader to the Python code available at [1]. ∎
Using the same technique as above, we get the following bounds for the tensor rank and border rank of $\det_5$ and $\operatorname{per}_5$.
Proposition 4.2.
We have
and
Hence, Koszul flattenings are not powerful enough to separate $\det_5$ and $\operatorname{per}_5$. Moreover, we point out that Koszul flattenings are helpful for finding lower bounds only when $n$ is odd.
5. Symmetric determinant and permanent tensors
A more natural notion is to consider the determinant and permanent as homogeneous polynomials, which in the language of tensors correspond to symmetric tensors. Assuming $\mathbb{F}$ is an algebraically closed field of characteristic $0$, we think of homogeneous polynomials of degree $d$ in $n$ variables as elements of $\operatorname{Sym}^d(\mathbb{F}^n) \subseteq (\mathbb{F}^n)^{\otimes d}$.
We define the symmetric determinant tensor
and the symmetric permanent tensor
Proposition 5.1.
We have .
Proof.
There is a notion of symmetric rank for a symmetric tensor.
Definition 5.2.
The symmetric rank of a symmetric tensor $T \in \operatorname{Sym}^d(\mathbb{F}^n)$ is the smallest $r$ such that $T = \sum_{i=1}^{r} v_i^{\otimes d}$, with $v_i \in \mathbb{F}^n$.
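As a toy illustration of this definition (a standard example, valid when the characteristic is not two), the polynomial $xy$, viewed as the symmetric tensor $\tfrac{1}{2}(e_1 \otimes e_2 + e_2 \otimes e_1)$, has symmetric rank $2$ via $xy = \tfrac{1}{4}\big((x+y)^2 - (x-y)^2\big)$. A sketch with exact arithmetic:

```python
from fractions import Fraction

def outer(v):
    # v (x) v as a 2x2 matrix.
    return [[a * b for b in v] for a in v]

half, quarter = Fraction(1, 2), Fraction(1, 4)
target = [[0, half], [half, 0]]             # the symmetric tensor of xy
plus = outer([Fraction(1), Fraction(1)])    # (x + y)^2
minus = outer([Fraction(1), Fraction(-1)])  # (x - y)^2
decomp = [[quarter * (plus[i][j] - minus[i][j]) for j in range(2)]
          for i in range(2)]
```

The denominator $4$ is exactly what breaks in characteristic two, mirroring the denominators in Glynn's and Derksen's formulas.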
Let $Y_r$ denote the set of symmetric tensors of symmetric rank at most $r$. Once again, $Y_r$ need not be a Zariski-closed set.
Definition 5.3.
We define the symmetric border rank of a symmetric tensor $T$ as the smallest $r$ such that $T$ lies in the Zariski closure of the set of symmetric tensors of symmetric rank at most $r$.
For a symmetric tensor, the symmetric rank is at least as large as the tensor rank. Hence, we recover a result of Farnsworth in [8].
Corollary 5.4 ([8]).
We have .
6. Binary tensors
Informally, binary tensors are those whose entries are $0$ and $1$. These tensors are interesting because they can be viewed in tensor spaces over fields of every characteristic. A natural question is: for a fixed binary tensor $T$, how does its tensor rank vary as we change the characteristic?
Formally, let $\mathbf{n} = (n_1, n_2, \dots, n_d)$ be a dimension vector. Consider the free $\mathbb{Z}$-module $M = \mathbb{Z}^{n_1} \otimes \cdots \otimes \mathbb{Z}^{n_d}$. Let $e^{(j)}_1, \dots, e^{(j)}_{n_j}$ denote the standard basis for $\mathbb{Z}^{n_j}$. Then for each multi-index $\alpha = (\alpha_1, \dots, \alpha_d)$ with $1 \leq \alpha_j \leq n_j$, we write $e_\alpha = e^{(1)}_{\alpha_1} \otimes \cdots \otimes e^{(d)}_{\alpha_d}$. The set $\{e_\alpha\}$ forms a basis for $M$.
Definition 6.1.
A binary tensor is a tensor of the form $T = \sum_\alpha c_\alpha e_\alpha$, where each coefficient $c_\alpha$ is $0$ or $1$.
We can compare the ranks of binary tensors across different fields. For any field $\mathbb{F}$, we can consider $\mathbb{F}^{n_1} \otimes \cdots \otimes \mathbb{F}^{n_d}$. Any binary tensor $T$ can be viewed as a tensor in $\mathbb{F}^{n_1} \otimes \cdots \otimes \mathbb{F}^{n_d}$ by extending scalars. We will abuse notation and refer to this tensor by $T$ as well.
Definition 6.2.
For a binary tensor $T$ and $p$ zero or prime, we define $\operatorname{rank}_p(T)$ as the tensor rank of $T$ viewed as a tensor over an algebraically closed field of characteristic $p$.
We leave it to the reader to verify that the above definition of $\operatorname{rank}_p$ does not depend on the choice of algebraically closed field. It is easy to find examples of tensors for which $\operatorname{rank}_0(T) \neq \operatorname{rank}_p(T)$. Recall that for matrices over a field, tensor rank coincides with the usual definition of matrix rank.
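For instance (our own example, not the matrix of Proposition 6.3 below), the circulant binary matrix with rows $(1,1,0)$, $(0,1,1)$, $(1,0,1)$ has determinant $2$, so it has $\operatorname{rank}_0 = 3$ but $\operatorname{rank}_2 = 2$, since its rows sum to zero modulo $2$:

```python
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]

def det3x3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3x3(A)  # nonzero over Q, zero modulo 2
```

Over $\mathbb{F}_2$ the first two rows are still independent, so the rank drops by exactly one.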
Proposition 6.3.
Let $p$ be a prime, and consider the following binary matrix $A$.
Then we have $\operatorname{rank}_0(A) \neq \operatorname{rank}_p(A)$.
Proof.
For any field $\mathbb{F}$, we can think of $A$ as a linear endomorphism of $\mathbb{F}^p$. The vector $v$ in question
is an eigenvector of $A$;
let $W$ denote the one-dimensional space spanned by $v$. The linear map $A$ descends to the quotient $\mathbb{F}^p / W$, on which it acts by a scalar. In short, we can read off the eigenvalues of $A$ in every characteristic, giving us the required conclusion. ∎
For any matrix $A$, we define $A^{\otimes k}$, the $k$-fold Kronecker product of $A$ with itself. Since tensor rank for matrices coincides with the usual rank, and matrix rank is multiplicative with respect to Kronecker products, we have the following.
Corollary 6.4.
We have $\operatorname{rank}_p(A^{\otimes k}) = \operatorname{rank}_p(A)^k$ for all $k \geq 1$ and all $p$.
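Applied to a binary matrix whose rank drops modulo $p$, Kronecker powers therefore make the characteristic-$0$ and characteristic-$p$ ranks diverge. A sketch for $k = 2$ with our earlier $3 \times 3$ circulant example (rank $3$ over $\mathbb{Q}$, rank $2$ over $\mathbb{F}_2$):

```python
def kron(A, B):
    # Kronecker product of two matrices given as lists of lists.
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def rank_mod2(rows):
    # Rank over F_2 by Gaussian elimination modulo 2.
    m = [[x % 2 for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                m[i] = [(a + b) % 2 for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]  # det 2: rank 3 over Q, rank 2 over F_2
A2 = kron(A, A)                        # 9 x 9, rank 9 over Q, rank 4 over F_2
```

Iterating gives $\operatorname{rank}_0(A^{\otimes k}) = 3^k$ versus $\operatorname{rank}_2(A^{\otimes k}) = 2^k$, so the gap between characteristics can be made arbitrarily large.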