
Matrix rigidity and the Croot-Lev-Pach lemma

Matrix rigidity is a notion put forth by Valiant as a means for proving arithmetic circuit lower bounds. A matrix is rigid if it is far, in Hamming distance, from any low rank matrix. Despite decades of efforts, no explicit matrix rigid enough to carry out Valiant's plan has been found. Recently, Alman and Williams showed, contrary to common belief, that the 2^n × 2^n Hadamard matrix could not be used for Valiant's program as it is not sufficiently rigid. In this note we observe a similar 'non-rigidity' phenomenon for any q^n × q^n matrix M of the form M(x,y) = f(x+y), where f:F_q^n → F_q is any function and F_q is a fixed finite field of q elements (n goes to infinity). The theorem follows almost immediately from a recent lemma of Croot, Lev and Pach which is also the main ingredient in the recent solution of the cap-set problem.





1 Introduction

We begin by defining the notion of matrix rigidity – a property of matrices that combines combinatorial conditions (Hamming distance) with algebraic ones (matrix rank). Recall that the Hamming distance between two vectors u, v of the same length over some alphabet is equal to the number of entries i for which u_i ≠ v_i.

Definition 1.1 (Matrix rigidity).

The rank-r rigidity of a matrix M over a field F, denoted R_M^F(r), is defined as the minimum Hamming distance between M and any matrix of rank at most r. In other words, R_M^F(r) is equal to the smallest number of entries in M that one needs to change in order to reduce the rank of M to at most r.

Specifying the field F is important since some integer matrices can have much higher rigidity over the rational numbers than over finite fields (this holds true even if one only considers the rank itself).
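As a concrete (if tiny) illustration of Definition 1.1, the following brute-force search — our own sketch, not part of the paper, with function names of our choosing — enumerates all 3 × 3 matrices over F_2 of rank at most r and computes the rank-r rigidity of the identity matrix:

```python
from itertools import product

def rank_gf2(rows):
    """Rank over F_2 via Gaussian elimination; rows is a list of row-sequences."""
    rows = [list(r) for r in rows]
    rank, ncols = 0, len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def rigidity_gf2(M, r):
    """Minimum Hamming distance from M to any rank <= r matrix (exhaustive search)."""
    n = len(M)
    flat = [M[i][j] for i in range(n) for j in range(n)]
    best = n * n
    for cand in product([0, 1], repeat=n * n):  # all n x n matrices over F_2
        rows = [cand[i * n:(i + 1) * n] for i in range(n)]
        if rank_gf2(rows) <= r:
            best = min(best, sum(a != b for a, b in zip(flat, cand)))
    return best

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

For the 3 × 3 identity the search finds R(1) = 2 (the closest rank-1 matrix keeps a single diagonal one) and R(2) = 1 (zero out one diagonal entry).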

The notion of matrix rigidity was introduced by Valiant [Val77] in the context of studying the arithmetic circuit complexity of linear transformations. A linear circuit is a model of computation in which the inputs represent the basic linear forms x_1, ..., x_n and each gate takes two previously computed linear forms and outputs some linear combination of them with coefficients in the field. We measure the size of a linear circuit by counting the number of wires, and the depth by the longest path from input to output. A linear circuit with n inputs and m outputs thus computes a linear map from F^n to F^m, and many important linear maps (e.g., the Fourier transform) can be computed efficiently in this model. One can even show that any use of multiplication gates can be eliminated (with negligible cost) when computing a linear map.


One of the most important problems in theoretical computer science is to prove unconditional complexity lower bounds for realistic models of computation. Despite decades of attempts, we are still unable to prove super-linear circuit lower bounds (in any realistic model) for circuits of logarithmic depth. In an early attempt to bridge this gap, Valiant proved the following theorem.

Theorem 1.2 (Valiant [Val77]).

Let M be an n × n matrix over a field F. If R_M^F(εn) ≥ n^{1+δ} for some constants ε, δ > 0, then M cannot be computed by linear circuits of size O(n) and depth O(log n) (asymptotically, as n grows; to be more precise, one would have to consider the rigidity of an infinite sequence of matrices indexed by n).

We can say that a matrix is 'Valiant-rigid' if it satisfies the rigidity parameters in the above theorem. It is straightforward to check that R_M^F(r) ≤ (n−r)^2 for any n × n matrix M and field F, for any r. Valiant proved that almost all matrices achieve this maximum rigidity: R_M^F(r) = (n−r)^2 for almost all matrices M if F is infinite, and R_M^F(r) ≥ Ω((n−r)^2 / log n) if F is finite. However, since Valiant's original paper, it remains an open problem to find an explicit 'Valiant-rigid' matrix. By 'explicit' we mean a matrix that can be produced in polynomial (in n) time by a Turing machine given n as input.
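The upper bound R_M^F(r) ≤ (n−r)^2 has a short explanation: whenever the leading r × r block A of M = [[A, B], [C, D]] is invertible, replacing D by C·A^{-1}·B makes every row a linear combination of the first r rows, and only the (n−r) × (n−r) bottom-right block changes. The sketch below is our own illustration over the rationals (helper names are ours; it assumes the leading block is invertible, which holds generically):

```python
from fractions import Fraction

def mat_rank(M):
    """Exact rank over the rationals via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][col] != 0:
                f = M[i][col] / M[rank][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

def reduce_rank(M, r):
    """Change at most (n-r)^2 entries of M to bring its rank down to r.
    Assumes the leading r x r block of M is invertible."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(r)] for i in range(r)]
    # Invert A by Gauss-Jordan on the augmented matrix [A | I].
    aug = [A[i] + [Fraction(int(i == j)) for j in range(r)] for i in range(r)]
    for col in range(r):
        piv = next(i for i in range(col, r) if aug[i][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for i in range(r):
            if i != col and aug[i][col] != 0:
                f = aug[i][col]
                aug[i] = [a - f * b for a, b in zip(aug[i], aug[col])]
    Ainv = [row[r:] for row in aug]
    out = [[Fraction(x) for x in row] for row in M]
    # Overwrite D with C * A^{-1} * B: rows r..n-1 become combinations of rows 0..r-1.
    for i in range(r, n):
        coeffs = [sum(Fraction(M[i][k]) * Ainv[k][j] for k in range(r)) for j in range(r)]
        for j in range(r, n):
            out[i][j] = sum(coeffs[k] * Fraction(M[k][j]) for k in range(r))
    return out
```

Running this on a full-rank 4 × 4 matrix with r = 2 changes at most 4 = (4−2)^2 entries and leaves a rank-2 matrix.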

The current best rigidity lower bound for any explicit matrix is R_M(r) ≥ Ω((n^2 / r) · log(n / r)) [Fri93, SSS97]. Until recently, the Hadamard matrix was conjectured to be Valiant-rigid over the rational numbers [Lok09]. A recent surprising result of Alman and Williams [AW17] showed that in fact the Hadamard matrix is not sufficiently rigid. Denoting N = 2^n, they showed that for every ε > 0 there exists δ > 0 such that R_H(N^{1−δ}) ≤ N^{1+ε}.

The purpose of this note is to observe another ‘non-rigidity’ phenomenon for a related (large) family of matrices. The hope is that by understanding the reasons for this non-rigidity we can perhaps get closer to proving stronger rigidity results. Our main theorem is the following.

Theorem 1.3.

Let F_q be any finite field and let f : F_q^n → F_q be any function. Let M = M(f) be the q^n × q^n matrix defined by M(x,y) = f(x+y) for x, y ∈ F_q^n. Denoting N = q^n, we have that for any ε > 0 there exists δ = δ(ε, q) > 0 such that R_M^{F_q}(N^{1−δ}) ≤ N^{1+ε}. The result holds for q and ε fixed and n sufficiently large.

One should note that, unlike the Hadamard matrix, these matrices are over a finite field and not over the rational numbers. Having non-rigid matrices over a finite field is a bit less surprising since there are more 'ways' for the rank to be low. It is an interesting open problem to determine if Theorem 1.3 still holds if one is allowed to take a function f : F_2^n → Q, where Q is the field of rational numbers (or even the complex numbers). This would imply the results of [AW17] since the Hadamard matrix can be written over the complex numbers as H(x,y) = (−1)^{⟨x,y⟩} = i^{|x|} · i^{|y|} · (−i)^{|x+y|}, where |z| represents the Hamming weight of z; multiplying the rows and columns by the nonzero scalars i^{|x|} and i^{|y|} changes neither rank nor rigidity, leaving a matrix of the form f(x+y) with f(z) = (−i)^{|z|}.
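The identity (−1)^{⟨x,y⟩} = i^{|x|} · i^{|y|} · (−i)^{|x+y|} (with x+y taken over F_2^n) follows from |x+y| = |x| + |y| − 2⟨x,y⟩ and is easy to verify mechanically; here is a quick check of ours:

```python
from itertools import product

def hadamard_identity_holds(n):
    """Check (-1)^<x,y> == i^|x| * i^|y| * (-i)^|x+y| for all x, y in {0,1}^n."""
    for x in product([0, 1], repeat=n):
        for y in product([0, 1], repeat=n):
            inner = sum(a * b for a, b in zip(x, y))
            w = sum((a + b) % 2 for a, b in zip(x, y))  # Hamming weight of x+y over F_2
            lhs = (-1) ** inner
            rhs = (1j ** sum(x)) * (1j ** sum(y)) * ((-1j) ** w)
            if abs(lhs - rhs) > 1e-9:
                return False
    return True
```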

Another interesting question is that of replacing the group F_q^n indexing the rows/columns with other groups. For example, taking N × N matrices with entries M(x,y) = f(x + y mod N), but with f : Z_N → F an arbitrary function. Here one might expect to see higher rigidity since there are far fewer low rank matrices of this form (cf. the recent work of Goldreich and Tal [GT15] on the rigidity of random Toeplitz matrices).

1.1 The Croot-Lev-Pach (CLP) lemma

A cap set is a subset of F_3^n with no non-trivial three-term arithmetic progressions. We think of n as going to infinity. The cap set problem asks how the size of the largest possible cap set (denoted D(n)) grows in terms of n. It was an open question whether D(n) ≤ c^n for some c < 3. Croot, Lev, and Pach [CLP17] used a variant of the polynomial method to solve the corresponding problem for Z_4^n (Z_4 being the ring of integers mod 4) in the affirmative, proving a bound of c^n for some c < 4, and soon afterwards Ellenberg and Gijswijt [EG17] adapted the CLP result to provide a positive answer to the cap set problem in F_q^n for all q. At the core of [CLP17] is a lemma saying that, if f : F_q^n → F_q is a polynomial of not too high degree, then the q^n × q^n matrix with entries f(x+y) has very low rank (see below for the exact parameters). We observe that, since any function f can be well approximated by such a polynomial P, the matrix M(f) can be changed in a small number of entries to give the low rank matrix M(P).

2 Proof of Theorem 1.3

Let F = F(q, n) denote the set of functions f : F_q^n → F_q. Then, F is an F_q-vector space of dimension q^n. A basis for this vector space is given by the set of monomials x_1^{e_1} ··· x_n^{e_n} with 0 ≤ e_i ≤ q−1 for all i (recall that x^q = x for every x ∈ F_q).

Let us denote by M_d the set of monomials in this basis of total degree at most d and by P_d the set of polynomials of degree at most d spanned by these monomials. Let m_d denote the size of M_d or, equivalently, the dimension of P_d.
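The dimension m_d can be computed exactly by a standard dynamic program over the n coordinates, counting exponent vectors e ∈ {0,...,q−1}^n with Σ e_i ≤ d; the sketch below is ours:

```python
def m_d(q, n, d):
    """Number of monomials x_1^{e_1}...x_n^{e_n} with 0 <= e_i <= q-1 and
    total degree at most d (equivalently, the dimension of P_d)."""
    # counts[s] = number of exponent vectors on the variables seen so far with sum s
    counts = [1]
    for _ in range(n):
        new = [0] * (len(counts) + q - 1)
        for s, c in enumerate(counts):
            for e in range(q):
                new[s + e] += c
        counts = new
    return sum(c for s, c in enumerate(counts) if s <= d)
```

The same routine confirms the symmetry m_d = q^n − m_{n(q−1)−d−1} (replace each exponent e_i by q−1−e_i), which is used in the tail estimate below.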

We start by stating the precise form of the CLP lemma. For completeness we include a short sketch of the proof.

Lemma 2.1 (CLP lemma [CLP17]).

Let f ∈ P_d and let M(f) denote the q^n × q^n matrix with entries M(f)_{x,y} = f(x+y) for x, y ∈ F_q^n. Then rank(M(f)) ≤ 2·m_{d/2}.

Proof sketch.

To prove the claim we will show that M(f) = M_1 + M_2 with rank(M_1), rank(M_2) ≤ m_{d/2}. To see how to do this observe that, when f(x+y) is expanded as a polynomial in the 2n variables x_1, ..., x_n, y_1, ..., y_n, each monomial has total degree at most d and so has degree at most d/2 in either x or in y. Writing f(x+y) as a sum of such monomials and grouping together terms with the same low degree parts (in x first and then in y) gives the desired decomposition. ∎
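The lemma is easy to test numerically on small instances. The following sketch (our own code; it assumes q is prime so that field inverses come from Fermat's little theorem) samples random polynomials of degree at most d over F_q, builds the q^n × q^n matrix with entries f(x+y), and checks that its rank never exceeds 2·m_{⌊d/2⌋}:

```python
import random
from itertools import product

def rank_mod_p(M, p):
    """Rank of M via Gaussian elimination over F_p (p prime)."""
    M = [[x % p for x in row] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)  # inverse via Fermat's little theorem
        M[rank] = [x * inv % p for x in M[rank]]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                f = M[i][col]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

def eval_mono(z, e):
    r = 1
    for zi, ei in zip(z, e):
        r *= zi ** ei  # note 0**0 == 1 in Python, as intended for the empty product
    return r

def clp_rank_bound_holds(q, n, d, trials=5, seed=0):
    random.seed(seed)
    pts = list(product(range(q), repeat=n))
    monos = [e for e in pts if sum(e) <= d]          # exponent vectors of M_d
    m_half = sum(1 for e in pts if 2 * sum(e) <= d)  # m_{floor(d/2)}
    for _ in range(trials):
        coef = {e: random.randrange(q) for e in monos}  # a random f in P_d
        def f(z):
            return sum(c * eval_mono(z, e) for e, c in coef.items()) % q
        M = [[f(tuple((a + b) % q for a, b in zip(x, y))) for y in pts] for x in pts]
        if rank_mod_p(M, q) > 2 * m_half:
            return False
    return True
```

For q = 3, n = 3 and d = 2 the bound is 2·m_1 = 8, already far below the trivial bound of 27.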

The main power of the CLP lemma comes from the following quantitative observation. For a fixed q and sufficiently large n, the number of monomials of total degree exactly d behaves approximately like a normal curve when we increase d from 0 to n(q−1) (the largest possible degree). Most of the mass will be concentrated around the middle n(q−1)/2, with the tails decaying exponentially fast. We will use the following (weak) estimate.

Claim 2.2.

For any prime power q and any δ > 0 there exists ε > 0 such that, for sufficiently large n, we have q^n − m_d ≤ q^{δn} for d = n(q−1) − εn. That is, all but at most q^{δn} of the monomials have total degree at most n(q−1) − εn.

Proof. By symmetry (replacing each exponent e_i with q−1−e_i), it is enough to bound m_{εn}. We reduce this problem to the binary alphabet case. We claim that m_d(q, n) ≤ m_d(2, (q−1)n) for all d. To see this, consider the injective mapping from M_d(q, n) into M_d(2, (q−1)n) sending the monomial x_1^{e_1} ··· x_n^{e_n} to the multilinear monomial obtained by replacing each x_i^{e_i} with a product of e_i fresh variables, taken from a block of q−1 new variables reserved for x_i. For the binary case we can use the standard tail bounds for the Binomial distribution to get that m_{εn}(2, (q−1)n) ≤ 2^{(q−1)n(H(ε/(q−1)) + o(1))} with o(1) going to zero with n (H is the binary entropy function). Taking ε sufficiently small (as a function of q and δ) we can get m_{εn} ≤ q^{δn}, proving the claim. ∎
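The counting step m_d(q, n) ≤ m_d(2, (q−1)n) behind this reduction can be sanity-checked by brute force; count_monomials below is our own helper, and the enumeration is exponential, so the parameters must stay small:

```python
from itertools import product

def count_monomials(q, n, d):
    """m_d(q, n): number of exponent vectors e in {0,...,q-1}^n with sum(e) <= d."""
    return sum(1 for e in product(range(q), repeat=n) if sum(e) <= d)
```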

The following claim and corollary show that any function f : F_q^n → F_q can be approximated well, in Hamming distance, by a polynomial of sufficiently high degree.

Claim 2.3.

Suppose V is a finite-dimensional vector space over a field F and U is a subspace of V. Let B be a basis for V. Then, for any vector v ∈ V, we can modify at most dim(V) − dim(U) of the coordinates of v (in the basis B) to produce a vector that lies in U.


Proof. Let k = dim(U) and m = dim(V), and identify V with F^m using the basis B. There exists an m × k rank-k matrix A such that U is the image of the linear transformation given by x ↦ Ax. We need to show that there is a vector u agreeing with v on at least k coordinates such that there exists x with Ax = u. We can find a size-k subset S ⊆ [m] such that the rows of A indexed by S span all the rows of A. For i ∈ [m], let a_i denote the i-th row of A. Let A_S be the k × k submatrix of A consisting of the rows indexed by S. Since A_S is of full rank, there exists exactly one x such that a_i · x = v_i for all i ∈ S. The other rows of A are spanned by the rows of A_S, and we obtain u_i for each i ∉ S by multiplying the i-th row of A by x, i.e., u_i = a_i · x. Since u = Ax ∈ U and u agrees with v on the k coordinates in S, we are done. ∎

Corollary 2.4.

Let f : F_q^n → F_q be any function. Then, for all d, there exists a polynomial P ∈ P_d such that f(x) = P(x) for all but at most q^n − m_d of the points x ∈ F_q^n.


Proof. This follows from the previous claim, applied with the function space F (in the basis of monomials) in the role of V and P_d in the role of U, and from the fact that dim(P_d) = m_d. ∎

We are now ready to prove our main result.

Proof of Theorem 1.3.

Let f and ε > 0 be as in the theorem. Using Claim 2.2 (applied with δ = ε) and Corollary 2.4, we can find ε' > 0 and a polynomial P ∈ P_d of degree at most d = n(q−1) − ε'n such that P agrees with f on all but at most N^ε values in F_q^n. Let M(f) denote the matrix with entries f(x+y) and let M(P) denote the matrix of the same dimensions with entries P(x+y). Then, M(f) and M(P) differ in at most N^ε entries in each row and in at most N^{1+ε} entries altogether. Now, by Lemma 2.1 (the CLP lemma) we have that rank(M(P)) ≤ 2·m_{d/2}. But d/2 = n(q−1)/2 − (ε'/2)n is bounded away from the middle and so, by the Chernoff-Hoeffding bound, we have 2·m_{d/2} ≤ N^{1−δ} for some δ > 0 depending on ε' (which in turn depends on ε and on q). This concludes the proof. ∎
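The whole argument can be run end to end on a toy case. The sketch below — our own code, for q = 2, n = 4 and d = 3 — interpolates an arbitrary f on m_d points whose monomial-evaluation rows are linearly independent (the mechanism of Claim 2.3), so f and the resulting polynomial P disagree on at most 2^n − m_d points, and then verifies that the 16 × 16 matrix with entries P(x+y) has rank at most 2·m_{⌊d/2⌋}:

```python
import random
from itertools import product

def gf2_rank(M):
    """Rank over F_2 by Gaussian elimination."""
    M = [list(row) for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(rank, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][col]:
                M[i] = [(a + b) % 2 for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

def gf2_solve(A, b):
    """Solve the square invertible system A c = b over F_2 (Gauss-Jordan)."""
    m = len(A)
    aug = [list(A[i]) + [b[i]] for i in range(m)]
    for col in range(m):
        piv = next(i for i in range(col, m) if aug[i][col])
        aug[col], aug[piv] = aug[piv], aug[col]
        for i in range(m):
            if i != col and aug[i][col]:
                aug[i] = [(a + p) % 2 for a, p in zip(aug[i], aug[col])]
    return [aug[i][m] for i in range(m)]

def pow_prod(x, e):
    r = 1
    for xi, ei in zip(x, e):
        r *= xi ** ei  # 0**0 == 1, as intended
    return r % 2

def demo(n=4, d=3, seed=1):
    random.seed(seed)
    pts = list(product(range(2), repeat=n))
    monos = [e for e in pts if sum(e) <= d]          # exponent vectors of M_d
    md = len(monos)
    m_half = sum(1 for e in pts if 2 * sum(e) <= d)  # m_{floor(d/2)}
    f = {x: random.randrange(2) for x in pts}        # an arbitrary function
    A = [[pow_prod(x, e) for e in monos] for x in pts]  # monomial evaluation matrix
    # Greedily pick md points whose evaluation rows are linearly independent.
    S, piv = [], {}                                  # piv: pivot column -> reduced row
    for i, row in enumerate(A):
        v = list(row)
        for j in range(md):
            if v[j] and j in piv:
                v = [(a + b) % 2 for a, b in zip(v, piv[j])]
        if any(v):
            piv[next(j for j in range(md) if v[j])] = v
            S.append(i)
        if len(S) == md:
            break
    c = gf2_solve([A[i] for i in S], [f[pts[i]] for i in S])  # interpolate on S
    P = {x: sum(a * ci for a, ci in zip(A[i], c)) % 2 for i, x in enumerate(pts)}
    E = sum(1 for x in pts if f[x] != P[x])          # disagreements, <= 2^n - md
    MP = [[P[tuple((a + b) % 2 for a, b in zip(x, y))] for y in pts] for x in pts]
    return E, 2 ** n - md, gf2_rank(MP), 2 * m_half
```

With these parameters m_d = 15, so P disagrees with f on at most one point, M(f) and M(P) differ in at most 16 entries, and rank(M(P)) ≤ 2·m_1 = 10.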


References

  • [AW17] J. Alman and R. Williams. Probabilistic rank and matrix rigidity. arXiv:1611.05558, to appear in 49th ACM Symposium on Theory of Computing (STOC), 2017.
  • [CLP17] E. Croot, V. Lev, and P. Pach. Progression-free sets in Z_4^n are exponentially small. Annals of Mathematics, 185(1), pages 331-337, 2017.
  • [EG17] J. S. Ellenberg and D. Gijswijt. On large subsets of F_q^n with no three-term arithmetic progression. Annals of Mathematics, 185(1), pages 339-343, 2017.
  • [Fri93] Joel Friedman. A note on matrix rigidity. Combinatorica, 13(2):235–239, 1993.
  • [GT15] O. Goldreich and A. Tal. Matrix Rigidity of Random Toeplitz Matrices. Computational Complexity, pages 1-46, 2016.
  • [GW15] O. Goldreich and A. Wigderson. On the size of depth-three Boolean circuits for computing multilinear functions. Electronic Colloquium on Computational Complexity, 43, pages 1-40, 2013.
  • [Lok09] S. V. Lokam. Complexity lower bounds using linear algebra. Foundations and Trends in Theoretical Computer Science, 4(1-2), pages 1-155, 2009.
  • [RS15] F. Rassoul-Agha and T. Seppäläinen. A Course on Large Deviations with an Introduction to Gibbs Measures. Graduate Studies in Mathematics, 162, American Mathematical Society, 2015.
  • [SSS97] M. A. Shokrollahi, D. A. Spielman, and V. Stemann. A remark on matrix rigidity. Information Processing Letters, 64(6), pages 283-285, 1997.
  • [Val77] L. Valiant. Graph-theoretic arguments in low-level complexity. Mathematical Foundations of Computer Science, pages 162-176, 1977.