1 Introduction
A binary matrix is a matrix with entries equal to 0 or 1. All matrices considered in this paper are binary. The study of extremal problems on binary matrices was initiated by the papers of Bienstock and Győri [1] and of Füredi [7]. Since these early works, most of the research in this area has focused on the concept of forbidden submatrices: a matrix M is said to contain a pattern P as a submatrix if we can transform M into P by deleting some rows and columns and by changing 1-entries into 0-entries. This notion of submatrix is a matrix analogue of the notion of subgraph in graph theory.
The main problem in the study of pattern-avoiding matrices is to determine the extremal function ex(n, P), defined as the largest number of 1-entries in an n × n binary matrix avoiding the pattern P as a submatrix. This is an analogue of the classical Turán-type problem of finding the largest number of edges in an n-vertex graph avoiding a given subgraph. Despite the analogy, the function ex(n, P) may exhibit asymptotic behaviour not encountered in Turán theory. For instance, for a certain small pattern P,¹ Füredi and Hajnal [8] proved that ex(n, P) = Θ(n·α(n)), where α(n) is the inverse of the Ackermann function.

¹We use the convention of representing 1-entries in binary matrices by dots and 0-entries by blanks.
The asymptotic behaviour of ex(n, P) for general P is still not well understood. Füredi and Hajnal [8] posed the problem of characterising the linear patterns, i.e., the patterns P satisfying ex(n, P) = O(n). Marcus and Tardos [15] proved that ex(n, P) = O(n) whenever P is a permutation matrix, i.e., a matrix with exactly one 1-entry in each row and each column. This result, combined with previous work of Klazar [12], confirmed the long-standing Stanley–Wilf conjecture. However, the problem of characterising linear patterns is still open despite a number of further partial results [3, 6, 11, 19, 9, 17].
Fox [5] introduced a different notion of containment among binary matrices, based on the concept of interval minors. Informally, a matrix M contains a pattern P as an interval minor if we can transform M into P by contracting adjacent rows or columns and changing 1-entries into 0-entries; see Section 2 for the precise definition. In this paper, we mostly deal with containment and avoidance of interval minors rather than submatrices. Therefore, the phrases 'avoids P' or 'contains P' always refer to avoidance or containment of interval minors, and the term 'P-avoider' always refers to a matrix that avoids P as an interval minor.
In analogy with ex(n, P), it is natural to consider the corresponding extremal function ex⪯(n, P), defined as the largest number of 1-entries in an n × n matrix that avoids P as an interval minor. If a matrix contains P as a submatrix, it also contains P as an interval minor, and therefore ex⪯(n, P) ≤ ex(n, P). Moreover, it can be easily seen that for a permutation matrix P the two notions of containment are equivalent, and hence ex⪯(n, P) = ex(n, P).
Fox [5] used interval minors as a key tool in his construction of permutation patterns with exponential Stanley–Wilf limits. In view of the results of Cibulka [2], this is equivalent to constructing a permutation matrix P for which the limit of the ratio ex(n, P)/n (which, for a permutation matrix, is equal to ex⪯(n, P)/n) is exponential in the size of P.
Even before the work of Fox, interval minors were implicitly used by Guillemot and Marx [10], who proved that a permutation matrix avoiding a fixed complete square pattern (i.e., a square pattern with all entries equal to 1) as an interval minor admits a type of recursive decomposition of bounded complexity. This result can be viewed as an analogue of the grid theorem from graph theory [18], which states that graphs avoiding a large square grid as a minor have bounded treewidth. Guillemot and Marx used their result on forbidden interval minors to design a linear-time algorithm for testing the containment of a fixed pattern in a permutation.
Subsequent research into interval-minor avoidance has focused on avoiders of a complete matrix. In particular, Mohar et al. [16] obtained exact values of the extremal function for matrices simultaneously avoiding a small complete pattern and its transpose, and they obtained bounds for larger complete patterns. Their results were further generalised by Mao et al. [14] to a multidimensional setting.
While the functions ex(n, P) exhibit diverse forms of asymptotic behaviour, the function ex⪯(n, P) is linear for every nontrivial pattern P. This is a consequence of the Marcus–Tardos theorem and the fact that any binary matrix is an interval minor of a permutation matrix; see Fox [5]. Therefore, in the interval-minor avoidance setting, it is not as natural to classify patterns by the growth of ex⪯(n, P) alone as it is in the submatrix avoidance setting. In this paper, we instead classify patterns based on the structure of their avoiders. We introduce the line complexity of a binary matrix M as the largest number of maximal runs of consecutive 0-entries in a single row or a single column of M. We focus on critical avoiders, the matrices that avoid P as an interval minor but lose this property when any 0-entry is changed into a 1-entry.
Our main result is a sharp dichotomy for the line complexity of critical avoiders. Let Q1, Q2, Q3, Q4 be the four 3 × 3 permutation matrices distinct from the identity matrix and its mirror image.
We show that if a pattern P avoids the four patterns Q1, …, Q4 as interval minors (or equivalently, as submatrices), then the line complexity of every critical P-avoider is bounded by a constant depending only on P. On the other hand, if P contains at least one of the Qi, then for every r there are critical P-avoiders of line complexity at least r.
After properly introducing our terminology and proving several basic facts in Section 2, we devote Section 3 to the statement and proof of our main result. In Section 4, we discuss the possibility of extending our approach to general minor-closed matrix classes, and present several open problems.
2 Preliminaries
Basic notation.
For integers a and b, we let [a, b] denote the set {a, a+1, …, b}. We will also use the notation [a, b) for the set [a, b−1], (a, b] for the set [a+1, b], and [n] for [1, n]. We will avoid using (a, b) for the set [a+1, b−1], however; instead, we will use the notation (a, b) to denote ordered pairs of integers.
We write {0,1}^{m×n} for the set of binary matrices with m rows and n columns. We will always assume that the rows of a matrix are numbered top to bottom starting with 1; that is, the first row is the topmost.
For a matrix M, we let M[i, j] denote the value of the entry in row i and column j of M. We say that the pair (i, j) is a 1-entry of M if M[i, j] = 1; otherwise it is a 0-entry. The set of 1-entries of a matrix M is called the support of M, denoted supp(M); formally, supp(M) = {(i, j) : M[i, j] = 1}.
We say that a binary matrix M dominates a binary matrix P if the two matrices have the same number of rows and the same number of columns, and moreover supp(P) ⊆ supp(M). In other words, P can be obtained from M by changing some 1-entries into 0-entries.
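As an illustration, the domination relation is straightforward to check computationally. The sketch below is not taken from the paper: it represents a binary matrix as a list of rows of 0/1 values, and the function name `dominates` is our own.

```python
def dominates(m, p):
    """Return True if m dominates p: the matrices have the same shape and
    every 1-entry of p is also a 1-entry of m, i.e. supp(p) ⊆ supp(m)."""
    if len(m) != len(p) or any(len(rm) != len(rp) for rm, rp in zip(m, p)):
        return False  # shapes differ, so neither matrix dominates the other
    return all(rm[j] >= rp[j] for rm, rp in zip(m, p) for j in range(len(rp)))
```

For example, the all-ones 2 × 2 matrix dominates every 2 × 2 binary matrix, while no matrix dominates a matrix of a different shape.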
For a matrix M and for a set R of row indices and a set C of column indices, we let M[R, C] denote the submatrix of M induced by the rows in R and the columns in C. More formally, if R = {r1 < r2 < … < rk} and C = {c1 < c2 < … < cl}, then M[R, C] is the k × l matrix such that M[R, C][i, j] = M[ri, cj] for every (i, j) ∈ [k] × [l].
A line in a matrix M is either a row or a column of M. We view a line as a special case of a submatrix. For instance, the i-th row of a matrix M ∈ {0,1}^{m×n} is the submatrix M[{i}, [n]]. A horizontal interval is a submatrix formed by consecutive entries belonging to a single row, i.e., a submatrix of the form M[{i}, [a, b]], where i is a row index and a ≤ b are column indices. Vertical intervals are defined analogously.
We say that a submatrix is empty if it does not contain any 1-entries.
For a matrix M and an entry (i, j) of M, we let M ⊕ (i, j) denote the matrix obtained from M by changing the value of the entry (i, j) from 0 to 1 or from 1 to 0.
Interval minors.
A row contraction in a matrix M is an operation that replaces a pair of adjacent rows i and i+1 of M by a single row, so that the new row contains a 1-entry in a column j if and only if at least one of the two original rows contained a 1-entry in column j. Formally, the row contraction transforms M into a matrix M′ whose entries are defined by M′[r, j] = M[r, j] for r < i, M′[i, j] = max(M[i, j], M[i+1, j]), and M′[r, j] = M[r+1, j] for r > i.
A column contraction is defined analogously.
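A contraction is easy to state in code. The following sketch is our own illustration (using 0-based indexing rather than the paper's 1-based convention): it merges two adjacent lines by taking the entrywise OR of their entries.

```python
def contract_rows(m, i):
    """Contract rows i and i+1 of m (0-based): replace the pair by a single
    row whose entry in column j is 1 iff at least one of the two rows has a
    1-entry in column j (entrywise OR)."""
    merged = [a | b for a, b in zip(m[i], m[i + 1])]
    return m[:i] + [merged] + m[i + 2:]

def contract_cols(m, j):
    """Column contraction, realised by transposing, contracting rows, and
    transposing back."""
    cols = [list(c) for c in zip(*m)]
    cols = contract_rows(cols, j)
    return [list(r) for r in zip(*cols)]
```

Contracting the two rows of the 2 × 2 identity matrix, for instance, yields the 1 × 2 all-ones matrix.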
We say that a matrix P is an interval minor of a matrix M, denoted P ⪯ M, if we can transform M by a sequence of row contractions and column contractions into a matrix that dominates P. When P is an interval minor of M, we also say that M contains P; otherwise we say that M avoids P, or is P-avoiding.
There are several alternative ways to define interval minors. One possible approach uses the concept of a matrix partition. For M ∈ {0,1}^{m×n} and P ∈ {0,1}^{k×l}, a partition of M containing P is a pair of sequences of row indices r0 ≤ r1 ≤ … ≤ rk and column indices c0 ≤ c1 ≤ … ≤ cl, with r0 = c0 = 0, rk = m and cl = n, such that for every 1-entry (i, j) of P, the submatrix of M induced by the rows in (r_{i−1}, r_i] and the columns in (c_{j−1}, c_j] has at least one 1-entry. See Figure 1.
An embedding of a matrix P ∈ {0,1}^{k×l} into a matrix M ∈ {0,1}^{m×n} is a function f mapping entries of P to entries of M with the following properties:

If (i, j) is a 1-entry of P, then f(i, j) is a 1-entry of M.

Let (i, j) and (i′, j′) be two entries of P, and suppose that f(i, j) = (r, c) and f(i′, j′) = (r′, c′). If i < i′ then r < r′, and if j < j′ then c < c′.

Notice that in an embedding of P into M, two entries of P belonging to the same row may be mapped to different rows of M, and similarly for columns.
In practice, it is often inconvenient and unnecessary to specify an embedding of P into M completely. In particular, it is usually unnecessary to specify the images of all the 0-entries of P. This motivates the notion of a partial embedding, which we now formalise. Consider again binary matrices P ∈ {0,1}^{k×l} and M ∈ {0,1}^{m×n}. Let S be a nonempty set of entries of P. We say that a function f defined on S is a partial embedding of P into M if the following holds:

If (i, j) is a 1-entry of P, then (i, j) is in S and f(i, j) is a 1-entry of M.

An entry (i, j) ∈ S is mapped by f to an entry (r, c) of M satisfying the following inequalities: r ≥ i, c ≥ j, m − r ≥ k − i, and n − c ≥ l − j. Informally, the entry (r, c) is at least as far from the top, left, bottom and right edge of M as the entry (i, j) is from the corresponding edge of P.

Let (i, j) and (i′, j′) be two entries in S, with f(i, j) = (r, c) and f(i′, j′) = (r′, c′). If i < i′ then r < r′, and if j < j′ then c < c′.

For a partial embedding f of a pattern P into a matrix M, the image of P (with respect to f) is the set of entries f(i, j) of M, where (i, j) ranges over the 1-entries of P. Note that all the entries in the image of P are 1-entries.
Lemma 2.1.
For matrices P and M, the following properties are equivalent.

P is an interval minor of M.

M has a partition containing P.

P has an embedding into M.

P has a partial embedding into M.
Proof.
We will prove the implications 2 ⇒ 1 ⇒ 3 ⇒ 4 ⇒ 2.
To see that 2 implies 1, suppose M has a partition containing P, determined by row indices r0 ≤ r1 ≤ … ≤ rk and column indices c0 ≤ c1 ≤ … ≤ cl, where we may assume that r0 = c0 = 0, rk = m and cl = n. We may then contract the rows from each interval of the form (r_{i−1}, r_i] into a single row, and the columns from each interval (c_{j−1}, c_j] into a single column, to obtain a matrix that dominates P.
To see that 1 implies 3, suppose that P is an interval minor of M. This means that there is a sequence of matrices M = M0, M1, …, Mt, where Mt is a matrix that dominates P, and for each s ∈ [t], the matrix Ms can be obtained from M_{s−1} by contracting a pair of adjacent rows or columns. We can then easily observe that for every s there is an embedding fs of P into Ms. Indeed, reasoning by backward induction, the embedding ft is the identity map, and for a given s, if there is an embedding fs of P into Ms, then an embedding f_{s−1} of P into M_{s−1} can be obtained by an obvious modification of fs.
Clearly, 3 implies 4, since every embedding is also a partial embedding.
To show that 4 implies 2, assume that f is a partial embedding of P into M, with domain S. We will define a sequence of row indices r1 ≤ r2 ≤ … ≤ rk with these two properties:

For each entry (i, j) ∈ S, the entry f(i, j) belongs to a row r of M with r_{i−1} < r ≤ r_i.

If S contains at least one entry from row i of P, then S contains an entry (i, j) such that f(i, j) is in row r_i of M.

We define the numbers r1, …, rk inductively, starting with r0 = 0. Suppose that r0, …, r_{i−1} have been defined, for some i ∈ [k]. If S contains no entry from row i of P, define r_i = r_{i−1} + 1. On the other hand, if S contains an entry from row i, we let r_i be the largest row index of M such that f maps an entry from row i of P to an entry in row r_i of M. Notice that any entry of S that does not belong to the first i rows of P must be mapped by f to an entry strictly below row r_i of M, since otherwise f would not satisfy the properties of a partial embedding.

In an analogous way, we also define a sequence of column indices c1 ≤ c2 ≤ … ≤ cl. These sequences satisfy r_i ≤ m and c_j ≤ n for every i and j. Since f is a partial embedding, S contains all the 1-entries of P, and f maps these 1-entries to 1-entries of M. In particular, the sequences r1, …, rk and c1, …, cl determine a partition of M containing P. ∎
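Condition 2 of Lemma 2.1 suggests a direct, if exponential-time, containment test: enumerate all ways of splitting the rows and columns of M into consecutive blocks and check each resulting partition. The sketch below is our own brute-force illustration of this equivalence (0-based indexing, nonempty blocks), intended only for small matrices.

```python
from itertools import combinations

def contains_interval_minor(m, p):
    """Brute-force interval-minor containment test via the partition
    characterisation of Lemma 2.1: search for a split of the rows and
    columns of m into consecutive blocks such that every block indexed by
    a 1-entry of p contains a 1-entry of m."""
    mr, mc = len(m), len(m[0])
    pr, pc = len(p), len(p[0])
    if pr > mr or pc > mc:
        return False

    def blocks(n, k):
        # all ways to split range(n) into k consecutive nonempty intervals
        for cuts in combinations(range(1, n), k - 1):
            bounds = (0,) + cuts + (n,)
            yield [range(bounds[t], bounds[t + 1]) for t in range(k)]

    for rb in blocks(mr, pr):
        for cb in blocks(mc, pc):
            if all(p[i][j] == 0 or
                   any(m[r][c] for r in rb[i] for c in cb[j])
                   for i in range(pr) for j in range(pc)):
                return True
    return False
```

For instance, the 2 × 2 identity matrix contains the 1 × 2 all-ones pattern (contract its two columns), but not the complete 2 × 2 pattern.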
Minor-closed classes.
For a matrix P, we let Av(P) denote the set of all binary matrices that do not contain P as an interval minor. We call the matrices in Av(P) the avoiders of P, or P-avoiders.
More generally, if F is a set of matrices, we let Av(F) denote the set of binary matrices that avoid all elements of F as interval minors.
We call a set C of binary matrices a minor-closed class (or just class, for short) if for every matrix M ∈ C, all the interval minors of M are in C as well. Clearly, Av(F) is a class, and for every class C there is a (possibly infinite) set F such that C = Av(F). A principal class is a class of matrices determined by a single forbidden pattern, i.e., a class of the form Av(P) for a matrix P.
For a class C of matrices, we say that a matrix M ∈ C is critical for C if changing any 0-entry of M to a 1-entry creates a matrix that does not belong to C. In other words, M is critical for C if it is not dominated by any other matrix in C. For a pattern P, we let Crit(P) be the set of critical matrices for Av(P), and similarly, for a set of patterns F, Crit(F) is the set of all critical matrices for Av(F).
2.1 Simple examples of avoiders
We conclude this section by presenting several examples of avoiders of certain simple patterns. These examples will play a role in Section 3, in the proof of our main result. We begin with a very simple example, which we present without proof.
Observation 2.2.
Let Rk be the matrix with 1 row and k columns whose every entry is a 1-entry (see Figure 2). A matrix M avoids Rk if and only if M has at most k − 1 nonempty columns. Consequently, M is a critical Rk-avoider if and only if M is a union of k − 1 full columns of 1-entries.
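Observation 2.2 translates into a one-line computational check. In the sketch below (our own code, not from the paper), `k` denotes the number of columns of the all-ones one-row pattern.

```python
def avoids_all_ones_row(m, k):
    """By Observation 2.2, m avoids the 1-by-k all-ones pattern as an
    interval minor iff m has at most k - 1 nonempty columns."""
    nonempty = sum(1 for col in zip(*m) if any(col))
    return nonempty <= k - 1
```

A matrix with exactly two nonempty columns, for example, avoids the 1 × 3 all-ones pattern but contains the 1 × 2 one.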
Next, we will consider the diagonal patterns D_k, defined by supp(D_k) = {(i, i) : i ∈ [k]}, and their mirror images D̄_k, defined by supp(D̄_k) = {(i, k + 1 − i) : i ∈ [k]} (see again Figure 2). To describe the avoiders of these patterns, we first introduce some terminology.
Let (i, j) and (i′, j′) be two entries of a matrix M. An increasing walk from (i, j) to (i′, j′) in M is a set of entries {b1, b2, …, bt} such that b1 = (i, j), bt = (i′, j′), and for every s < t we have either b_{s+1} immediately to the right of b_s (same row, next column) or b_{s+1} immediately above b_s (same column, previous row). A decreasing walk is defined analogously, except that now b_{s+1} is immediately to the right of or immediately below b_s.
We say that a matrix M is an increasing matrix if supp(M) is a subset of an increasing walk. A decreasing matrix is defined analogously. See Figure 3.
Proposition 2.3.
A matrix M ∈ {0,1}^{m×n} avoids the pattern D_k if and only if M contains increasing walks W1, …, W_{k−1} from (m, 1) to (1, n) such that supp(M) ⊆ W1 ∪ ⋯ ∪ W_{k−1}.
Proof.
Clearly, if M contains D_k, then M has k 1-entries no two of which can belong to a single increasing walk, and therefore supp(M) cannot be covered by k − 1 increasing walks.
Suppose now that M avoids D_k. Consider the partial order ≺ on the set supp(M) defined by (i, j) ≺ (i′, j′) if and only if i < i′ and j < j′. Since M avoids D_k, this order has no chain of length k. By the classical Dilworth theorem [4], supp(M) is a union of k − 1 antichains of ≺. We may easily observe that each antichain of ≺ is contained in an increasing walk from (m, 1) to (1, n). ∎
Proposition 2.3 shows, in particular, that a matrix avoids the pattern D_2 if and only if it is an increasing matrix. By symmetry, a matrix avoids D̄_2 if and only if it is a decreasing matrix.
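Since the diagonal pattern is a permutation matrix, containment as an interval minor coincides with containment as a submatrix, so a matrix contains the diagonal pattern with k 1-entries exactly when it has k 1-entries increasing strictly in both coordinates. The sketch below (our own code, 0-based indexing) computes the longest such chain by dynamic programming.

```python
def longest_increasing_chain(m):
    """Length of the longest sequence of 1-entries of m that is strictly
    increasing in both row and column index (going down and to the right);
    m contains the k-by-k diagonal pattern iff this length is >= k."""
    ones = sorted((i, j) for i, row in enumerate(m)
                  for j, v in enumerate(row) if v)
    best = {}
    for i, j in ones:
        # entries arrive in row-major order, so every candidate
        # predecessor (a, b) with a < i has already been processed
        best[(i, j)] = 1 + max((best[a, b] for a, b in best
                                if a < i and b < j), default=0)
    return max(best.values(), default=0)

def avoids_diagonal(m, k):
    """True iff m avoids the k-by-k diagonal pattern as an interval minor."""
    return longest_increasing_chain(m) < k
```

In particular, `avoids_diagonal(m, 2)` tests whether `m` is an increasing matrix, in line with the remark above.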
Another direct consequence of the proposition is the following corollary, describing the structure of critical avoiders.
Corollary 2.4.
A critical D_k-avoiding matrix M ∈ {0,1}^{m×n} contains increasing walks W1, …, W_{k−1} from (m, 1) to (1, n) such that supp(M) = W1 ∪ ⋯ ∪ W_{k−1}.
Note that Corollary 2.4 only gives a necessary condition for a matrix to be a critical avoider, and therefore it is not a characterisation of critical avoiders. With only a little extra effort, we could state and prove such a characterisation, but we omit it, as we do not need it for our purposes.
A simple but useful observation is that adding an empty row or column to the boundary of a pattern affects the avoiders in a predictable way. We state it here without proof.
Observation 2.5.
Let P be a pattern, and let P′ be the pattern obtained by appending an empty column to P; in other words, P′ has one more column than P, the last column of P′ is empty, and the remaining columns of P′ form P. Then a matrix M avoids P′ if and only if the matrix obtained by removing the last column from M avoids P. Consequently, M is in Crit(P′) if and only if all the entries in the last column of M are 1-entries and the preceding columns form a matrix from Crit(P). Analogous properties hold for a pattern obtained by prepending an empty column in front of all the columns of P, and also for rows instead of columns.
3 Line complexity
In the previous section, we have seen several examples of matrices avoiding a fixed pattern as an interval minor. At a glance, it is clear that these matrices are highly structured. We would now like to make the notion of 'highly structured matrices' rigorous, and generalise it to other forbidden patterns.
We will focus on the local structure of matrices, i.e., the structure observed by looking at a single row or column. For a forbidden pattern P with at least two rows and two columns, it is not hard to see that any binary vector can appear as a row or column of a P-avoiding matrix. However, the situation changes when we restrict our attention to critical avoiders. In the examples of critical avoiders we saw in Subsection 2.1, the 1-entries in each row or column were clustered into a bounded number of intervals. In particular, for these patterns P, there are at most polynomially many vectors of a given length that may appear as rows or columns of a critical avoider.
In this section, we study this phenomenon in detail. We show that it generalises to many other forbidden patterns P, but not to all of them. As our main result, we present a complete characterisation of the patterns exhibiting this phenomenon.
Let us begin by formalising our main concepts.
A horizontal 0-run in a matrix M is a maximal sequence of consecutive 0-entries in a single row. More formally, a horizontal interval M[{i}, [a, b]] of a matrix M ∈ {0,1}^{m×n} is a horizontal 0-run if all its entries are 0-entries, and moreover a = 1 or M[i, a − 1] = 1, and b = n or M[i, b + 1] = 1. Symmetrically, a vertical interval is a vertical 0-run if it is a maximal vertical interval that only contains 0-entries. In the same manner, we define a (horizontal or vertical) 1-run to be a maximal interval of consecutive 1-entries in a single line of M.
Note that each line in a matrix can be uniquely decomposed into an alternating sequence of 0-runs and 1-runs.
Let M be a binary matrix. The complexity of a line of M is the number of 0-runs contained in this line. The row-complexity of M is the maximum complexity of a row of M, i.e., the least number c such that each row has complexity at most c. Similarly, the column-complexity of M is the maximum complexity of a column of M.
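These quantities are easy to compute directly from the definitions. The sketch below (our own helper names, not from the paper) counts the maximal 0-runs in each line.

```python
def line_complexity(line):
    """Number of maximal runs of consecutive 0-entries in a single line."""
    runs, inside = 0, False
    for v in line:
        if v == 0 and not inside:   # a new 0-run starts here
            runs, inside = runs + 1, True
        elif v == 1:                # the current 0-run (if any) has ended
            inside = False
    return runs

def row_complexity(m):
    return max(line_complexity(r) for r in m)

def column_complexity(m):
    return max(line_complexity(list(c)) for c in zip(*m))
```

For example, the line (0, 1, 0, 0, 1, 0) has complexity 3, and an all-ones line has complexity 0.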
For a class of matrices C, we define its row-complexity, denoted rc(C), as the supremum of the row-complexities of the critical matrices in C. We say that C is row-bounded if rc(C) is finite, and row-unbounded otherwise. Symmetrically, we define the column-complexity of C and the properties of being column-bounded and column-unbounded. We say that a class is bounded if it is both row-bounded and column-bounded; otherwise, it is unbounded.
We stress that when defining the row-complexity and column-complexity of a class of matrices, we only take into account the matrices that are critical for the class.
We are now ready to state our main result.
Theorem 3.1.
Let P be a pattern. The class Av(P) is row-bounded if and only if P does not contain any of the patterns Q1, Q2, Q3, Q4 defined in the introduction as an interval minor.
Before we prove Theorem 3.1, we point out two of its direct consequences.
Corollary 3.2.
For a pattern P, these statements are equivalent:

Av(P) is row-bounded.

Av(P) is column-bounded.

Av(P) is bounded.
Corollary 3.3.
Let Av(P) and Av(P′) be principal classes, and suppose that Av(P′) ⊆ Av(P) (or equivalently, P′ ⪯ P). If Av(P) is bounded, then Av(P′) is bounded as well.
Although each of these two corollaries states a seemingly basic property of the boundedness notion, we are not able to prove either of them without first proving Theorem 3.1. We also remark that neither of the two corollaries generalises to non-principal classes of matrices, as we will see in Section 4.
Let us say that a pattern P is row-bounding if Av(P) is row-bounded, and non-row-bounding otherwise. Similarly, P is bounding if Av(P) is bounded, and non-bounding otherwise.
Let Q be the set of patterns {Q1, Q2, Q3, Q4}. Theorem 3.1 states that a pattern P is row-bounding if and only if P is in Av(Q). To prove this, we will proceed in several steps. We first show, in Subsection 3.1, that if P contains a pattern from Q, then P is not row-bounding. This is the easier part of the proof, though by no means trivial. Next, in Subsection 3.2, we show that every pattern in Av(Q) is row-bounding. This part is more technical, and requires a characterisation of the structure of the patterns in Av(Q).
3.1 Non-row-bounding patterns
Our goal in this subsection is to show that any pattern P that contains one of the matrices from Q is not row-bounding. Let us therefore fix such a pattern P. Without loss of generality, we may assume that P contains Q1, the remaining cases being symmetric.
Theorem 3.4.
For every matrix P such that Q1 ⪯ P, the class Av(P) is row-unbounded.
Proof.
Refer to Figure 4. Let P be a pattern containing Q1 as an interval minor, and fix three 1-entries q1, q2 and q3 of P forming an occurrence of Q1.
For an arbitrary integer r, we will show how to construct a matrix in Crit(P) of row-complexity at least r. We first describe an auxiliary P-avoiding matrix M.
In the matrix M, a suitable number of leftmost columns, rightmost columns, topmost rows and bottommost rows, depending only on the size of P, have all their entries equal to 1. We call these entries the frame of M.
One row of M inside the frame, which we call the distinguished row, contains r 0-entries appearing in r regularly spaced columns, and the remaining entries of this row are 1-entries.
The remaining entries of M, that is, the entries inside the frame and outside the distinguished row, are partitioned into rectangular blocks of equal size, arranged in a grid; let B_{s,t} denote the block in position (s, t) of this grid. The entries of each block are all equal to 1 or all equal to 0, following the arrangement shown in Figure 4.
We claim that the matrix M avoids P. To see this, assume that there is an embedding f of P into M, and consider where f maps the three 1-entries q1, q2 and q3. Note that none of these three entries can be mapped into the frame of M, and moreover, neither q1 nor q3 can be mapped to the distinguished row of M. In particular, f(q1) lies inside a block B_{s,t} for some s and t. By the relative position of q1 and q2 in Q1, the image f(q2) must belong to the same block; it follows that f(q1) is in the leftmost column of the block and f(q2) in its rightmost column. Consequently, f(q3) is confined to a column of M in which all the entries available to it are 0-entries, which is impossible. Therefore M is in Av(P).
The matrix M is not necessarily a critical avoider. However, we can transform it into one by greedily changing 0-entries into 1-entries as long as the resulting matrix stays in Av(P). By this process, we obtain a matrix M′ ∈ Crit(P) that dominates M. We claim that the distinguished row of M′ coincides with the distinguished row of M. This is because changing any 0-entry of this row into a 1-entry produces a matrix containing a sufficiently large complete pattern as a submatrix, and in particular also containing P as an interval minor.
We conclude that the matrix M′ has row-complexity at least r, showing that Av(P) is indeed row-unbounded. ∎
3.2 Row-bounding patterns
We now prove the second implication of Theorem 3.1; that is, we show that any pattern avoiding the four patterns in Q is row-bounding (and therefore, by symmetry, also column-bounding). We first prove a result describing the structure of the patterns in Av(Q).
We say that a matrix M can be covered by k lines if there is a set of k lines of M such that each 1-entry of M belongs to one of these lines. The following fact is a version of the classical Kőnig–Egerváry theorem. We present it here without proof; a proof can be found, e.g., in Kung [13].
Fact 3.5 (Kőnig–Egerváry theorem).
A matrix M cannot be covered by k lines if and only if M contains a set of k + 1 1-entries, no two of which are in the same row or column.
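Fact 3.5 can be verified algorithmically: by the Kőnig–Egerváry theorem, the minimum number of lines covering all 1-entries equals the maximum matching in the bipartite graph whose two parts are the rows and the columns and whose edges are the 1-entries. The sketch below is our own code, using a standard augmenting-path matching algorithm.

```python
def min_line_cover(m):
    """Minimum number of lines (rows and columns) covering all 1-entries
    of m. By the Konig-Egervary theorem this equals the size of a maximum
    matching in the bipartite row-column graph whose edges are the
    1-entries, computed here by augmenting paths."""
    match_col = {}  # column index -> row index it is currently matched to

    def augment(i, seen):
        for j, v in enumerate(m[i]):
            if v and j not in seen:
                seen.add(j)
                # column j is free, or its partner can be rematched elsewhere
                if j not in match_col or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(len(m)))
```

In the terms of Fact 3.5, a matrix cannot be covered by k lines exactly when `min_line_cover` returns more than k, which happens exactly when the matrix has k + 1 pairwise independent 1-entries.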
Proposition 3.6.
If a pattern P belongs to Av(Q), then

P avoids the pattern D_2, or

P avoids the pattern D̄_2, or

P can be covered by three lines.
Proof.
Assume P cannot be covered by three lines. By Fact 3.5, P contains four 1-entries p1, p2, p3, p4, no two of which are in the same row or column. We may assume that p1, …, p4 are numbered left to right by columns. Moreover, since P does not contain any pattern from Q, any three of the entries p1, …, p4 must form an image of D_3 or of D̄_3. Consequently, the four entries together form an image of D_4 or of D̄_4; that is, their rows are either increasing or decreasing left to right. Suppose that the four entries form an image of D_4, the other case being symmetric.
We will now show that P avoids the pattern D̄_2. Note first that the submatrix of P to the top-left of the four entries avoids D̄_2, since an image of D̄_2 there would form an image of a pattern from Q together with some of the entries p1, …, p4. Therefore, by the mirror version of Proposition 2.3, all the 1-entries in this submatrix belong to a single decreasing walk. Symmetrically, all the 1-entries in the submatrix of P to the bottom-right of the four entries belong to a single decreasing walk.
Moreover, there can be no 1-entry of P in the remaining two corner regions determined by the four entries, since such a 1-entry would form a forbidden pattern together with two of the entries p1, …, p4. We conclude that all the 1-entries of P belong to a single decreasing walk, and therefore P avoids D̄_2. ∎
We note that Proposition 3.6 is not an exact characterisation of the patterns in Av(Q), since a matrix that can be covered by three lines may still contain a pattern from Q. Later, in Lemma 3.17, we will give a more precise description of the patterns in Av(Q) that cannot be covered by two lines.
Relative row-boundedness.
Before we prove that each pattern in Av(Q) is row-bounding, we need some technical preparation. First of all, we shall need a more refined notion of row-boundedness, one that considers the individual 1-entries of the pattern separately.
Let P be a pattern, let p be a 1-entry of P, let M be a P-avoiding matrix, and let b be a 0-entry of M. Recall that M ⊕ b is the matrix obtained from M by changing the entry b from 0 to 1. We say that the entry b of M is critical for p (with respect to P) if there is an embedding of P into M ⊕ b that maps p to b. Moreover, if I is a 0-run in M, we say that I is critical for p if at least one 0-entry in I is critical for p.
Note that a P-avoiding matrix M is critical for Av(P) if and only if each 0-entry of M is critical for at least one 1-entry of P.
Let p be a 1-entry of a pattern P, and let M be a matrix avoiding P. The complexity of a row of M relative to p is the number of 0-runs in that row that are critical for p. The row-complexity of M relative to p is the maximum complexity of a row of M relative to p, and the row-complexity of Av(P) relative to p, denoted rc_p(P), is the supremum of the row-complexities of the matrices in Av(P) relative to p. When rc_p(P) is finite, we say that Av(P) is row-bounded relative to p and p is row-bounding; otherwise Av(P) is row-unbounded relative to p.
Notice that in the definition of rc_p(P), we take the supremum over all the matrices in Av(P), not just the critical ones. This makes the definition more convenient to work with, but it does not make any substantial difference: the row-complexity relative to p is always maximised by a critical avoider. To see this, suppose that M is a P-avoiding matrix, M′ is any critical P-avoiding matrix that dominates M, and b is a 0-entry of M that is critical for p; then b is necessarily also a 0-entry of M′, and b is still critical for p in M′. Therefore, the row-complexity of M′ relative to p is at least as large as the row-complexity of M relative to p.
Observe that the following inequalities hold for any pattern P:

max_{p ∈ supp(P)} rc_p(P) ≤ rc(Av(P)) ≤ Σ_{p ∈ supp(P)} rc_p(P).

In particular, a pattern P is row-bounding if and only if each 1-entry of P is row-bounding.
Lemma 3.7.
Let P be a pattern, and let M be a P-avoiding matrix. Let I be a horizontal 0-run of M, and let b be a 0-entry in this 0-run. Assume that there is an embedding f of P into M ⊕ b. Then P has a 1-entry p mapped by f to b, and moreover, every entry of P in the same column as p is mapped by f to a column containing an entry from I.
Proof.
Clearly, f must map a 1-entry p of P to the entry b, since otherwise f would also be an embedding of P into M and M would not be P-avoiding.
Suppose now that I = M[{i}, [a, a′]] for a row index i and column indices a ≤ a′. Let q be an entry of P in the same column as p, and suppose that f maps q to an entry in a column c with c ∉ [a, a′]. Assume that c < a, the case c > a′ being analogous. Then we may modify f so that it maps p to the 1-entry M[i, a − 1] instead of b, obtaining an embedding of P into M, which is a contradiction. ∎
Criteria for relative row-boundedness.
Let us first point out a trivial but useful fact: if P′ is a pattern obtained from a pattern P by reversing the order of rows (i.e., turning P upside down), then a 1-entry of P′ is row-bounding if and only if the corresponding 1-entry of P is row-bounding. Analogous properties hold for reversing the order of columns and for 180-degree rotation. Similarly, operations that map rows to columns, such as transposition or 90-degree rotation, map row-bounding 1-entries to column-bounding ones and vice versa.
We will now state several general criteria for row-boundedness of 1-entries, which we will later use to show that every Q-avoiding pattern is row-bounding.
Lemma 3.8.
If P is a pattern with a row i and a column j such that every 1-entry of P in a column strictly to the left of j belongs to row i, then every 1-entry of P in the interval P[{i}, [1, j]] is row-bounding (see Figure 6).
Proof.
Let p be a 1-entry of P in the interval P[{i}, [1, j]]. Let M be a P-avoider, let b be a 0-entry of M critical for p, and let I be the horizontal 0-run of M containing b. Let c be the number of 1-entries of P in row i.
We claim that in the row of b, there are fewer than c 1-entries to the left of I. Suppose this is not the case, i.e., the row of b contains c distinct 1-entries m1, …, mc, numbered left to right, all of them to the left of I.
Let f be an embedding of P into M ⊕ b which maps p to b. Recall from Lemma 3.7 that all the entries of P in the column of p are mapped by f to columns containing an entry of I.
We define a partial embedding g of P into M, as follows. Firstly, g maps the 1-entries of P in row i to the 1-entries m1, …, mc, preserving their left-to-right order. Next, g maps each 1-entry of P not in row i to the same entry as f. We easily see that g is a partial embedding of P into M, a contradiction.
Therefore, there are fewer than c 1-entries in the row of b to the left of I. Since distinct 0-runs in a row are separated by 1-entries, each row of M contains a bounded number of 0-runs critical for p. Consequently, rc_p(P) is finite and p is row-bounding. ∎
The assumptions of Lemma 3.8 are satisfied when j is the leftmost nonempty column of a pattern P and i is an arbitrary row. We state this important special case as a separate corollary.
Corollary 3.9.
Any 1-entry in the leftmost nonempty column of a pattern is row-bounding.
Lemma 3.10.
Let P be a pattern with a row i and two distinct columns j < j′, such that all the 1-entries of P in row i belong to the interval P[{i}, [j, j′]]. Moreover, if c is a column index with j < c < j′, then P has no 1-entry in column c except possibly for the entry (i, c). Suppose furthermore that P satisfies one of the following three conditions (see Figure 6):

All the 1-entries of P above row i are in a single row, and all the 1-entries below row i are in a single row.

All the 1-entries of P above row i are in a single row, and all the 1-entries below row i are in the submatrix indicated in Figure 6.

All the 1-entries of P above row i are in the submatrix indicated in Figure 6, and similarly for the 1-entries below row i.

Then every 1-entry of P in the interval P[{i}, [j, j′]] is row-bounding.
Proof.
Let be a pattern satisfying the assumptions, and let . We will show that for each 1entry of and every 