On the structure of matrices avoiding interval-minor patterns

We study the structure of 01-matrices avoiding a pattern P as an interval minor. We focus on critical P-avoiders, i.e., on the P-avoiding matrices in which changing a 0-entry to a 1-entry always creates a copy of P as an interval minor. Let Q be the 3x3 permutation matrix corresponding to the permutation 231. As our main result, we show that for every pattern P that has no rotated copy of Q as interval minor, there is a constant c(P) such that any row and any column in any critical P-avoiding matrix can be partitioned into at most c(P) intervals, each consisting entirely of 0-entries or entirely of 1-entries. In contrast, for any pattern P that contains a rotated copy of Q, we construct critical P-avoiding matrices of arbitrary size n× n having a row with Ω(n) alternating intervals of 0-entries and 1-entries.


1 Introduction

A binary matrix is a matrix with entries equal to 0 or 1. All matrices considered in this paper are binary. The study of extremal problems on binary matrices was initiated by the papers of Bienstock and Győri [1] and of Füredi [7]. Since these early works, most of the research in this area has focused on the concept of forbidden submatrices: a matrix M is said to contain a pattern P as a submatrix if we can transform M into P by deleting some rows and columns, and by changing 1-entries into 0-entries. This notion of submatrix is a matrix analogue of the notion of subgraph in graph theory.
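To make the definition concrete, containment as a submatrix can be tested by brute force over all choices of rows and columns. The following Python sketch (the function name and the representation of matrices as nested lists are ours, not from the paper) checks the definition directly:

```python
from itertools import combinations

def contains_as_submatrix(M, P):
    """Return True if binary matrix M contains pattern P as a submatrix.

    By definition, M contains P if some rows and columns of M, kept in
    their original order, yield a matrix that has a 1-entry wherever P
    does (extra 1-entries are allowed, since 1s may be turned into 0s).
    """
    m, n = len(M), len(M[0])
    p, q = len(P), len(P[0])
    for rows in combinations(range(m), p):        # rows kept, in order
        for cols in combinations(range(n), q):    # columns kept, in order
            if all(M[rows[i]][cols[j]] >= P[i][j]
                   for i in range(p) for j in range(q)):
                return True
    return False
```

For instance, the matrix [[1,0,1],[0,1,0]] contains the pattern [[1,1]] via its first row and its outer columns.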

The main problem in the study of pattern-avoiding matrices is to determine the extremal function ex(n,P), defined as the largest number of 1-entries in an n×n binary matrix avoiding the pattern P as a submatrix. This is an analogue of the classical Turán-type problem of finding the largest number of edges in an n-vertex graph avoiding a given subgraph. Despite the analogy, the function ex(n,P) may exhibit asymptotic behaviour not encountered in Turán theory. For instance, for a certain small pattern P, Füredi and Hajnal [8] proved that ex(n,P) = Θ(n·α(n)), where α(n) is the inverse of the Ackermann function. (We use the convention of representing 1-entries in binary matrices by dots and 0-entries by blanks.)

The asymptotic behaviour of ex(n,P) for general P is still not well understood. Füredi and Hajnal [8] posed the problem of characterising the linear patterns, i.e., the patterns P satisfying ex(n,P) = O(n). Marcus and Tardos [15] proved that ex(n,P) = O(n) whenever P is a permutation matrix, i.e., a matrix with exactly one 1-entry in each row and each column. This result, combined with previous work of Klazar [12], confirmed the long-standing Stanley–Wilf conjecture. However, the problem of characterising linear patterns remains open despite a number of further partial results [3, 6, 11, 19, 9, 17].

Fox [5] introduced a different notion of containment among binary matrices, based on the concept of interval minors. Informally, a matrix M contains a pattern P as an interval minor if we can transform M into P by contracting adjacent rows or columns and changing 1-entries into 0-entries; see Section 2 for the precise definition. In this paper, we mostly deal with containment and avoidance of interval minors rather than submatrices. Therefore, the phrases avoids and contains always refer to avoidance or containment of interval minors, and the term P-avoider always refers to a matrix that avoids P as an interval minor.

In analogy with ex(n,P), it is natural to consider the corresponding extremal function ex≼(n,P), defined as the largest number of 1-entries in an n×n matrix that avoids P as an interval minor. If a matrix contains P as a submatrix, it also contains it as an interval minor, and therefore ex≼(n,P) ≤ ex(n,P). Moreover, it can easily be seen that for a permutation matrix P the two notions of containment are equivalent, and hence ex≼(n,P) = ex(n,P).

Fox [5] used interval minors as a key tool in his construction of permutation patterns with exponential Stanley–Wilf limits. In view of the results of Cibulka [2], this is equivalent to constructing a permutation matrix P for which the limit of the ratio ex(n,P)/n (known as the Füredi–Hajnal limit of P) is exponential in the size of P.

Even before the work of Fox, interval minors had been implicitly used by Guillemot and Marx [10], who proved that a permutation matrix avoiding a fixed complete square pattern (i.e., a square pattern with all entries equal to 1) as an interval minor admits a type of recursive decomposition of bounded complexity. This result can be viewed as an analogue of the grid theorem from graph theory [18], which states that graphs avoiding a large square grid as a minor have bounded tree-width. Guillemot and Marx used their result on forbidden interval minors to design a linear-time algorithm for testing the containment of a fixed pattern in a permutation.

Subsequent research into interval-minor avoidance has focused on avoiders of complete patterns. In particular, Mohar et al. [16] obtained exact values of the corresponding extremal function for matrices simultaneously avoiding a small complete pattern and its transpose, and they obtained bounds for larger complete patterns. Their results were further generalised by Mao et al. [14] to a multidimensional setting.

While the functions ex(n,P) exhibit diverse forms of asymptotic behaviour, the function ex≼(n,P) is linear in n for every nontrivial pattern P. This is a consequence of the Marcus–Tardos theorem and the fact that any binary matrix is an interval minor of a permutation matrix; see Fox [5]. Therefore, in the interval-minor avoidance setting, it is not as natural to classify patterns by the growth of ex≼(n,P) alone as it is in the submatrix avoidance setting.

In our paper, we instead classify the patterns based on the structure of the P-avoiders. We introduce the line complexity of a binary matrix M as the largest number of maximal runs of consecutive 0-entries in a single row or a single column of M. We focus on the critical P-avoiders, i.e., the matrices that avoid P as an interval minor but lose this property when any single 0-entry is changed into a 1-entry.

Our main result is a sharp dichotomy for the line complexity of critical P-avoiders. Let Q1, Q2, Q3 and Q4 be defined as follows:

Q1, Q2, Q3 and Q4 are the four matrices obtained from the 3×3 permutation matrix of the permutation 231 by rotating it through 0, 90, 180 and 270 degrees; equivalently, they are the permutation matrices of the four non-monotone permutations of length three, namely 231, 132, 312 and 213.

We show that if a pattern P avoids the four patterns Q1, …, Q4 as interval minors (or equivalently, as submatrices), then the line complexity of every critical P-avoider is bounded by a constant depending only on P. On the other hand, if P contains at least one of the Qi, then there are critical P-avoiders of size n×n with line complexity Ω(n), for arbitrarily large n.

After properly introducing our terminology and proving several simple basic facts in Section 2, we devote Section 3 to the statement and proof of our main result. In Section 4, we discuss the possibility of extending our approach to general minor-closed matrix classes, and present several open problems.

2 Preliminaries

Basic notation.

For an integer n, we let [n] denote the set {1, 2, …, n}. More generally, for integers a and b, we let [a,b] denote the set {a, a+1, …, b}, (a,b] the set {a+1, …, b}, and [a,b) the set {a, …, b−1}. We will avoid using the notation (a,b) for open intervals, however; instead, we reserve (a,b) to denote ordered pairs of integers.

We write {0,1}^{m×n} for the set of binary matrices with m rows and n columns. We will always assume that the rows of a matrix are numbered top-to-bottom starting with 1; that is, the first row is the topmost one.

For a matrix M, we let M(i,j) denote the value of the entry in row i and column j of M. We say that the pair (i,j) is a 1-entry of M if M(i,j) = 1, and otherwise it is a 0-entry. The set of 1-entries of a matrix M is called the support of M, denoted supp(M); formally, supp(M) = {(i,j) : M(i,j) = 1}.

We say that a binary matrix M dominates a binary matrix M′ if the two matrices have the same number of rows and the same number of columns, and moreover, supp(M′) ⊆ supp(M). In other words, M′ can be obtained from M by changing some 1-entries into 0-entries.

For a matrix M, a set R of row indices and a set C of column indices, we let M[R,C] denote the submatrix of M induced by the rows in R and the columns in C. More formally, if R = {r_1 < r_2 < ⋯ < r_s} and C = {c_1 < c_2 < ⋯ < c_t}, then M[R,C] is the s×t matrix satisfying M[R,C](i,j) = M(r_i, c_j) for every (i,j).

A line in a matrix M is either a row or a column of M. We view a line as a special case of a submatrix; for instance, the i-th row of a matrix M ∈ {0,1}^{m×n} is the submatrix M[{i}, [n]]. A horizontal interval is a submatrix formed by consecutive entries belonging to a single row, i.e., a submatrix of the form M[{i}, [a,b]], where i is a row index and a ≤ b are column indices. Vertical intervals are defined analogously.

We say that a submatrix of M is empty if it does not contain any 1-entry.

For a matrix M and an entry f = (i,j) of M, we let M ⊕ f denote the matrix obtained from M by changing the value of the entry f from 0 to 1 or from 1 to 0.

Interval minors.

A row contraction in a matrix M is an operation that replaces a pair of adjacent rows, say the rows r and r+1, by a single row, so that the new row contains a 1-entry in a column j if and only if at least one of the two original rows contains a 1-entry in column j. Formally, the row contraction transforms M into a matrix M′ whose entries are defined by

 M′(i,j) = M(i,j) if i < r,   M′(r,j) = max{M(r,j), M(r+1,j)},   and M′(i,j) = M(i+1,j) if i > r.

A column contraction is defined analogously.
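The contraction operations defined above are easy to state in code. A minimal sketch, assuming matrices are stored as lists of rows (the function names are ours):

```python
def contract_rows(M, r):
    """Contract the adjacent rows r and r+1 (0-indexed) of binary matrix M.

    The two rows are replaced by their entrywise maximum (logical OR):
    the merged row has a 1 in column j iff at least one of the two
    original rows has a 1 in column j.
    """
    merged = [max(M[r][j], M[r + 1][j]) for j in range(len(M[0]))]
    return M[:r] + [merged] + M[r + 2:]

def contract_columns(M, c):
    """Column contraction, defined analogously, via the transpose."""
    cols = [list(col) for col in zip(*M)]
    merged = [max(a, b) for a, b in zip(cols[c], cols[c + 1])]
    cols = cols[:c] + [merged] + cols[c + 2:]
    return [list(row) for row in zip(*cols)]
```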

We say that a matrix P is an interval minor of a matrix M, denoted P ≼ M, if we can transform M by a sequence of row contractions and column contractions into a matrix that dominates P. When P is an interval minor of M, we also say that M contains P; otherwise we say that M avoids P, or that M is P-avoiding.

There are several alternative ways to define interval minors. One possible approach uses the concept of a matrix partition. For P ∈ {0,1}^{p×q} and M ∈ {0,1}^{m×n}, a partition of M containing P is a sequence of row indices 0 = r_0 < r_1 < ⋯ < r_p = m and column indices 0 = c_0 < c_1 < ⋯ < c_q = n such that for every 1-entry (i,j) of P, the submatrix of M induced by the rows r_{i−1}+1, …, r_i and the columns c_{j−1}+1, …, c_j has at least one 1-entry. See Figure 1.
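The partition-based definition translates directly into a brute-force containment test: enumerate all ways to cut the rows and columns of M into consecutive blocks, one block per row and column of P. The following exponential-time sketch (names ours) is only meant to illustrate the definition on small examples:

```python
from itertools import combinations

def contains_as_interval_minor(M, P):
    """Return True if M contains P as an interval minor, by checking
    whether some partition of M contains P: the rows of M are cut into
    p consecutive nonempty blocks and the columns into q such blocks,
    and every block corresponding to a 1-entry of P must contain a
    1-entry of M."""
    m, n, p, q = len(M), len(M[0]), len(P), len(P[0])
    if p > m or q > n:
        return False

    def splits(total, parts):
        # all ways to cut range(total) into `parts` consecutive nonempty blocks
        for cuts in combinations(range(1, total), parts - 1):
            bounds = (0,) + cuts + (total,)
            yield [range(bounds[k], bounds[k + 1]) for k in range(parts)]

    for row_blocks in splits(m, p):
        for col_blocks in splits(n, q):
            if all(any(M[i][j] for i in row_blocks[a] for j in col_blocks[b])
                   for a in range(p) for b in range(q) if P[a][b]):
                return True
    return False
```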

An embedding of a matrix P into a matrix M is a function φ mapping the entries of P to the entries of M, with the following properties:

• If (i,j) is a 1-entry of P, then φ(i,j) is a 1-entry of M.

• Let (i,j) and (i′,j′) be two entries of P, and suppose that φ(i,j) = (k,l) and φ(i′,j′) = (k′,l′). If i < i′ then k < k′, and if j < j′ then l < l′.

Notice that in an embedding of P into M, two entries of P belonging to the same row may be mapped to different rows of M, and similarly for columns.

In practice, it is often inconvenient and unnecessary to specify an embedding of P into M completely. In particular, it is usually unnecessary to specify the images of all the 0-entries of P. This motivates the notion of partial embedding, which we now formalise. Consider again binary matrices P ∈ {0,1}^{p×q} and M ∈ {0,1}^{m×n}. Let S be a nonempty subset of the entries of P. We say that a function φ from S to the entries of M is a partial embedding of P into M if the following holds:

• If (i,j) is a 1-entry of P, then (i,j) is in S and φ(i,j) is a 1-entry of M.

• An entry (i,j) ∈ S is mapped by φ to an entry (k,l) of M satisfying the following inequalities: k ≥ i, l ≥ j, m − k ≥ p − i, and n − l ≥ q − j. Informally, the entry (k,l) is at least as far from the top, left, bottom and right edge of M as the entry (i,j) is from the corresponding edge of P.

• Let (i,j) and (i′,j′) be two entries in S, and suppose that φ(i,j) = (k,l) and φ(i′,j′) = (k′,l′). If i < i′ then k < k′, and if j < j′ then l < l′.

For a partial embedding φ of a pattern P into a matrix M, the image of φ (with respect to P) is the set of entries {φ(e) : e ∈ supp(P)} in the matrix M. Note that all the entries in the image of φ are 1-entries.

Lemma 2.1.

For matrices P and M, the following properties are equivalent.

1. P is an interval minor of M.

2. M has a partition containing P.

3. P has an embedding into M.

4. P has a partial embedding into M.

Proof.

We will prove the implications 2 ⇒ 1, 1 ⇒ 3, 3 ⇒ 4, and 4 ⇒ 2.

To see that 2 implies 1, suppose M has a partition containing P, determined by row indices 0 = r_0 < r_1 < ⋯ < r_p = m and column indices 0 = c_0 < c_1 < ⋯ < c_q = n. We may then contract the rows r_{i−1}+1, …, r_i of each interval into a single row, and similarly contract the columns c_{j−1}+1, …, c_j of each interval into a single column, to obtain a p×q matrix that dominates P.

To see that 1 implies 3, suppose that P is an interval minor of M. This means that there is a sequence of matrices M_0, M_1, …, M_t with M_0 = M, where M_t is a matrix that dominates P, and for each k ∈ [t], the matrix M_k can be obtained from M_{k−1} by contracting a pair of adjacent rows or columns. We can then easily observe that for every k there is an embedding φ_k of M_t into M_k. Indeed, reasoning by induction, the embedding φ_t is the identity map, and for a given k, if there is an embedding φ_k of M_t into M_k, then an embedding φ_{k−1} of M_t into M_{k−1} can be obtained by an obvious modification of φ_k. In particular, φ_0 is an embedding of M_t into M, and since M_t dominates P, it is also an embedding of P into M.

Clearly, 3 implies 4, since every embedding is also a partial embedding.

To show that 4 implies 2, assume that φ is a partial embedding of P into M, defined on a set S of entries of P. We will define a sequence of row indices r_0 < r_1 < ⋯ < r_p with these two properties:

• For each entry e of S that belongs to one of the first i rows of P, the entry φ(e) belongs to one of the first r_i rows of M.

• If S contains at least one entry from row i of P, then S contains an entry e in row i such that φ(e) is in row r_i of M.

We define the numbers r_0, r_1, …, r_p inductively, starting with r_0 = 0. Suppose that r_0, …, r_{i−1} have been defined, for some i ≥ 1. If S contains no entry from row i of P, define r_i = r_{i−1} + 1. On the other hand, if S contains an entry from row i, we let r_i be the largest row index of M such that φ maps an entry from row i of P to an entry in row r_i of M. Notice that any entry of S that does not belong to the first i rows of P must be mapped by φ to an entry strictly below row r_i of M, since otherwise φ would not satisfy the properties of a partial embedding.

In an analogous way, we also define a sequence of column indices c_0 < c_1 < ⋯ < c_q. These sequences will satisfy r_i ≤ m − (p − i) and c_j ≤ n − (q − j) for every i and j, thanks to the edge conditions in the definition of a partial embedding; in particular, we may increase r_p to m and c_q to n. Since φ is a partial embedding, S contains all the 1-entries of P, and φ maps these 1-entries to 1-entries of M. In particular, the sequences r_0, …, r_p and c_0, …, c_q form a partition of M containing P. ∎

Minor-closed classes.

For a matrix P, we let Av≼(P) denote the set of all binary matrices that do not contain P as an interval minor. We call the matrices in Av≼(P) the avoiders of P, or P-avoiders.

More generally, if F is a set of matrices, we let Av≼(F) denote the set of binary matrices that avoid all elements of F as interval minors.

We call a set C of binary matrices a minor-closed class (or just a class, for short) if for every matrix M ∈ C, all the interval minors of M are in C as well. Clearly, Av≼(F) is a class for any set F, and for every class C there is a (possibly infinite) set F of matrices such that C = Av≼(F). A principal class is a class determined by a single forbidden pattern, i.e., a class of the form Av≼(P) for a matrix P.

For a class of matrices C, we say that a matrix M ∈ C is critical for C if changing any 0-entry of M to a 1-entry creates a matrix that does not belong to C. In other words, M is critical for C if it is not dominated by any other matrix in C. For a pattern P, we let Crit(P) denote the set of critical matrices for Av≼(P); similarly, for a set of patterns F, Crit(F) is the set of all critical matrices for Av≼(F).

2.1 Simple examples of P-avoiders

We conclude this section by presenting several examples of avoiders of certain simple patterns. These examples will play a role in Section 3, in the proof of our main result. We begin with a very simple example, which we present without proof.

Observation 2.2.

Let P be the matrix with 1 row and k columns whose every entry is a 1-entry (see Figure 2). A matrix M avoids P if and only if M has at most k−1 nonempty columns. Consequently, M is a critical P-avoider if and only if supp(M) is a union of exactly k−1 columns of M, each consisting entirely of 1-entries.

Next, we will consider the diagonal patterns D_k, whose 1-entries are exactly the entries (i,i) for i ∈ [k], and their mirror images D̄_k, whose 1-entries are exactly the entries (i, k+1−i) for i ∈ [k] (see again Figure 2). To describe the avoiders of these patterns, we first introduce some terminology.

Let e = (i,j) and e′ = (i′,j′) be two entries of a matrix M. An increasing walk from e to e′ in M is a set of entries e_1, e_2, …, e_t such that e_1 = e, e_t = e′, and for every s < t the entry e_{s+1} is obtained from e_s = (r,c) either as (r, c+1) (that is, e_{s+1} is to the right of e_s), or as (r−1, c) (that is, e_{s+1} is above e_s). A decreasing walk is defined analogously, except that now e_{s+1} is either to the right of or below e_s.

We say a matrix M is an increasing matrix if supp(M) is a subset of an increasing walk. A decreasing matrix is defined analogously. See Figure 3.

Proposition 2.3.

A matrix M ∈ {0,1}^{m×n} avoids the pattern D_k if and only if M contains increasing walks W_1, W_2, …, W_{k−1} from the entry (m,1) to the entry (1,n) such that

 supp(M)⊆W1∪W2∪⋯∪Wk−1.
Proof.

Clearly, if M contains D_k, then M has k 1-entries no two of which can belong to a single increasing walk, and therefore supp(M) cannot be covered by k−1 increasing walks.

Suppose now that M avoids D_k. Consider a partial order ⊑ on the set supp(M), defined by letting (i,j) ⊑ (i′,j′) whenever i < i′ and j < j′. Since M avoids D_k, this order has no chain of length k. By the classical Dilworth theorem [4], supp(M) is then a union of k−1 antichains of ⊑. We may easily observe that each antichain of ⊑ is contained in an increasing walk from (m,1) to (1,n). ∎

Proposition 2.3 shows, in particular, that a matrix M avoids the pattern D_2 if and only if M is an increasing matrix. By symmetry, M avoids D̄_2 if and only if it is a decreasing matrix.
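Proposition 2.3 also yields a fast test for D_k-avoidance: M contains D_k as an interval minor exactly when M has k 1-entries with strictly increasing row indices and strictly increasing column indices, so it suffices to compute the longest such chain. A sketch using the standard patience-sorting technique for longest increasing subsequences (the function name is ours):

```python
import bisect

def longest_diagonal_chain(M):
    """Length of the longest sequence of 1-entries of M with strictly
    increasing rows and strictly increasing columns; M avoids D_k as an
    interval minor iff this value is less than k.

    Rows are scanned top to bottom; within a row, the columns of the
    1-entries are processed right to left, so that no two entries of
    the same row can extend a single chain.
    """
    tails = []  # tails[l] = least possible last column of a chain of length l+1
    for row in M:
        ones = [j for j, v in enumerate(row) if v]
        for j in reversed(ones):
            pos = bisect.bisect_left(tails, j)
            if pos == len(tails):
                tails.append(j)
            else:
                tails[pos] = j
    return len(tails)
```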

Another direct consequence of the proposition is the following corollary, describing the structure of critical D_k-avoiders.

Corollary 2.4.

A critical D_k-avoiding matrix M ∈ {0,1}^{m×n} contains increasing walks W_1, …, W_{k−1} from (m,1) to (1,n) such that supp(M) = W_1 ∪ W_2 ∪ ⋯ ∪ W_{k−1}.

Note that Corollary 2.4 only gives a necessary condition for a matrix to be a critical D_k-avoider; it is therefore not a characterisation of critical D_k-avoiders. With a little extra effort, we could state and prove such a characterisation, but we omit it, as it is not needed for our purposes.

A simple but useful observation is that adding an empty row or column to the boundary of a pattern P affects the P-avoiders in a predictable way. We state it here without proof.

Observation 2.5.

Let P ∈ {0,1}^{p×q} be a pattern, and let P′ be the pattern obtained by appending an empty column to P; in other words, P′ ∈ {0,1}^{p×(q+1)}, the first q columns of P′ form a copy of P, and the last column of P′ is empty. Then a matrix M avoids P′ if and only if the matrix obtained by removing the last column from M avoids P. Consequently, M is a critical P′-avoider if and only if all the entries in the last column of M are 1-entries and the preceding columns form a critical P-avoider. Analogous properties hold for a pattern obtained by prepending an empty column in front of all the columns of P, and also for rows instead of columns.

3 Line complexity

In the previous section, we have seen several examples of matrices avoiding a fixed pattern as interval minor. At a glance, it is clear that these matrices are highly structured. We would now like to make the notion of ‘highly structured matrices’ rigorous, and generalize it to other forbidden patterns.

We will focus on the local structure of matrices, i.e., the structure observed by looking at a single row or column. For a forbidden pattern P with at least two rows and two columns, it is not hard to see that any binary vector can appear as a row or column of a P-avoiding matrix.

However, the situation changes when we restrict our attention to critical P-avoiders. In the examples of critical avoiders we saw in Subsection 2.1, the 1-entries in each row or column were clustered into a bounded number of intervals. In particular, for these patterns P, at most polynomially many vectors of a given length may appear as rows or columns of a critical P-avoider.

In this section, we study this phenomenon in detail. We show that it generalizes to many other forbidden patterns P, but not to all of them. As our main result, we present a complete characterisation of the patterns exhibiting this phenomenon.

Let us begin by formalising our main concepts.

A horizontal 0-run in a matrix M ∈ {0,1}^{m×n} is a maximal sequence of consecutive 0-entries in a single row. More formally, a horizontal interval formed by the entries (i,a), (i,a+1), …, (i,b) is a horizontal 0-run if all its entries are 0-entries, and moreover, a = 1 or (i,a−1) is a 1-entry, and b = n or (i,b+1) is a 1-entry. Symmetrically, a vertical interval is a vertical 0-run if it is a maximal vertical interval that only contains 0-entries. In the same manner, we define a (horizontal or vertical) 1-run to be a maximal interval of consecutive 1-entries in a single line of M.

Note that each line in a matrix can be uniquely decomposed into an alternating sequence of 0-runs and 1-runs.

Let M be a binary matrix. The complexity of a line of M is the number of 0-runs contained in this line. The row-complexity of M is the maximum complexity of a row of M, i.e., the least number k such that each row of M has complexity at most k. Similarly, the column-complexity of M is the maximum complexity of a column of M.
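The complexity notions just defined are straightforward to compute; a small sketch (function names ours), with the column-complexity obtained by transposing:

```python
from itertools import groupby

def row_complexity(M):
    """Maximum, over the rows of M, of the number of maximal 0-runs."""
    return max(sum(1 for value, _ in groupby(row) if value == 0)
               for row in M)

def column_complexity(M):
    """Column-complexity of M: the row-complexity of the transpose."""
    return row_complexity([list(col) for col in zip(*M)])
```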

For a class of matrices C, we define its row-complexity, denoted r(C), as the supremum of the row-complexities of the critical matrices in C. We say that C is row-bounded if r(C) is finite, and row-unbounded otherwise. Symmetrically, we define the column-complexity c(C) of C and the properties of being column-bounded and column-unbounded. We say that a class C is bounded if it is both row-bounded and column-bounded; otherwise, it is unbounded.

We stress that when defining the row-complexity and column-complexity of a class of matrices, we only take into account the matrices that are critical for the class.

We are now ready to state our main result.

Theorem 3.1.

Let P be a pattern. The class Av≼(P) is row-bounded if and only if P does not contain any of Q1, Q2, Q3, Q4 as an interval minor, where

Q1, Q2, Q3 and Q4 are the four matrices obtained from the 3×3 permutation matrix of the permutation 231 by rotating it through 0, 90, 180 and 270 degrees, i.e., the permutation matrices of the four non-monotone permutations of length three.

Before we prove Theorem 3.1, we point out two of its direct consequences.

Corollary 3.2.

For a pattern P, these statements are equivalent:

• Av≼(P) is row-bounded.

• Av≼(P) is column-bounded.

• Av≼(P) is bounded.

Corollary 3.3.

Let Av≼(P) and Av≼(P′) be principal classes, and suppose that Av≼(P) ⊆ Av≼(P′) (or equivalently, P ≼ P′). If Av≼(P′) is bounded, then Av≼(P) is bounded as well.

Although each of these two corollaries states a seemingly basic property of the boundedness notion, we are not able to prove either of them without first proving Theorem 3.1. We also remark that neither of the two corollaries generalizes to non-principal classes of matrices, as we will see in Section 4.

Let us say that a pattern P is row-bounding if Av≼(P) is row-bounded, and non-row-bounding otherwise. Similarly, P is bounding if Av≼(P) is bounded, and non-bounding otherwise.

Let 𝒬 be the set of patterns {Q1, Q2, Q3, Q4}. Theorem 3.1 states that a pattern P is row-bounding if and only if P belongs to Av≼(𝒬). To prove this, we proceed in several steps. We first show, in Subsection 3.1, that if P contains a pattern from 𝒬, then P is not row-bounding. This is the easier part of the proof, though by no means trivial. Next, in Subsection 3.2, we show that every pattern in Av≼(𝒬) is row-bounding. This part is more technical, and requires a characterisation of the structure of the patterns in Av≼(𝒬).

3.1 Non-row-bounding patterns

Our goal in this subsection is to show that any pattern P that contains one of the matrices Q1, …, Q4 is not row-bounding. Let us therefore fix such a pattern P. Without loss of generality, we may assume that Q1 ≼ P.

Theorem 3.4.

For every matrix P such that Q1 ≼ P, the class Av≼(P) is row-unbounded.

Proof.

Refer to Figure 4. Let P be a pattern containing Q1 as an interval minor. In particular, P contains three 1-entries, in pairwise distinct rows and columns, that form an image of Q1.

For an arbitrary integer , we will show how to construct a matrix in of row-complexity at least . We first describe a matrix with and .

In the matrix , the leftmost columns, the rightmost columns, the topmost rows and the bottommost rows have all entries equal to 1. We call these entries the frame of .

In the -th row of , there are 0-entries appearing in columns for , and the remaining entries in row are 1-entries.

The remaining entries of , that is, the entries in rows and columns , form a submatrix with rows and columns. We partition these entries into rectangular blocks, each block with rows and columns. For , let be such a block, with top-left corner in row and column . The entries in are all equal to 1 if , otherwise they are all equal to 0.

We claim that the matrix avoids . To see this, assume there is an embedding of into , and consider where maps the three 1-entries , , and . Note that none of these three entries can be mapped into the frame of , and moreover, neither nor can be mapped to the -th row of . In particular, is inside a block for some . Since is to the top-left of , it must belong to the same block . It follows that is in the leftmost column of , which is the column , and in its rightmost column, i.e., the column . Therefore, is in column ; however, all the entries in this column where could map are 0-entries. Therefore is in .

The matrix M is not necessarily a critical P-avoider. However, we can transform it into one by greedily changing 0-entries to 1-entries for as long as the resulting matrix stays in Av≼(P). By this process, we obtain a critical P-avoiding matrix M′ that dominates M. We claim that the row of M containing the alternating 0-entries remains unchanged in M′. This is because changing any 0-entry in this row to a 1-entry produces a matrix containing a sufficiently large complete pattern as a submatrix, and in particular also containing P as an interval minor.

We conclude that the matrix M′ has row-complexity at least n, showing that Av≼(P) is indeed row-unbounded. ∎
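The greedy completion step used in this proof can be sketched as follows, with the avoidance test abstracted into an oracle (the helper and its interface are ours, not the paper's). By monotonicity of interval-minor containment, a single pass suffices: a flip rejected once would still create a copy of the pattern at any later point.

```python
def make_critical(M, avoids):
    """Greedily change 0-entries of M to 1-entries while staying in the
    class; `avoids(M)` must return True iff M avoids the forbidden
    pattern. The result dominates M and is critical: flipping any
    remaining 0-entry would create a copy of the pattern."""
    M = [row[:] for row in M]  # work on a copy
    for i in range(len(M)):
        for j in range(len(M[0])):
            if M[i][j] == 0:
                M[i][j] = 1
                if not avoids(M):
                    M[i][j] = 0  # this flip creates the pattern: keep a 0
    return M
```

For example, with the oracle "no row has two 1-entries", the all-zero 2×2 matrix is completed to a matrix with a single full column.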

3.2 Row-bounding patterns

We now prove the second implication of Theorem 3.1; that is, we show that any pattern avoiding the four patterns Q1, …, Q4 is row-bounding (and therefore, by symmetry, also column-bounding). We first prove a result describing the structure of these patterns.

We say that a matrix M can be covered by k lines if there is a set of k lines of M such that each 1-entry of M belongs to at least one of them. The following fact is a version of the classical Kőnig–Egerváry theorem. We present it here without proof; a proof can be found, e.g., in Kung [13].

Fact 3.5 (Kőnig–Egerváry theorem).

A matrix M cannot be covered by k lines if and only if M contains a set of k+1 1-entries, no two of which are in the same row or column.

Proposition 3.6.

If a pattern P belongs to Av≼({Q1, Q2, Q3, Q4}), then

1. P avoids the pattern D_2, or

2. P avoids the pattern D̄_2, or

3. P can be covered by three lines.

Proof.

Assume P cannot be covered by three lines. By Fact 3.5, P contains four 1-entries a1 = (r1,c1), a2 = (r2,c2), a3 = (r3,c3) and a4 = (r4,c4), no two of which are in the same row or column. We may assume that r1 < r2 < r3 < r4. Moreover, since P does not contain any of the patterns Q1, …, Q4, any three entries among a1, …, a4 must form an image of D_3 or of D̄_3. Consequently, the four entries form an image of D_4 or of D̄_4, i.e., we must have either c1 < c2 < c3 < c4 or c1 > c2 > c3 > c4. Suppose that c1 < c2 < c3 < c4 holds, the other case being symmetric.

We will now show that P avoids the pattern D̄_2. Note first that the submatrix of P lying above and to the left of a4 avoids D̄_2, since an image of D̄_2 in it would form, together with a4, an image of one of the forbidden patterns. Therefore, by (a symmetric version of) Proposition 2.3, all the 1-entries in this submatrix belong to a single decreasing walk. Symmetrically, all 1-entries in the submatrix of P lying below and to the right of a1 belong to a single decreasing walk.

Moreover, there can be no 1-entry above the row of a1 and to the right of the column of a4, and no 1-entry below the row of a4 and to the left of the column of a1, since such a 1-entry would form a forbidden pattern together with a1 and a4. We conclude that all the 1-entries of P belong to a single decreasing walk, and therefore P avoids D̄_2. ∎

We note that Proposition 3.6 is not an exact characterisation of the patterns avoiding Q1, …, Q4, since a matrix covered by three lines may still contain one of these patterns. Later, in Lemma 3.17, we will give a more precise description of those avoiders of Q1, …, Q4 that cannot be covered by two lines.

Relative row-boundedness.

Before we prove that each pattern avoiding Q1, …, Q4 is row-bounding, we need some technical preparation. First of all, we shall need a more refined notion of row-boundedness, one which considers the individual 1-entries of the pattern separately.

Let P be a pattern, let e be a 1-entry of P, let M be a P-avoiding matrix, and let f be a 0-entry of M. Recall that M ⊕ f is the matrix obtained from M by changing the entry f from 0 to 1. We say that the entry f of M is critical for e (with respect to P) if there is an embedding of P into M ⊕ f that maps e to f. Moreover, if R is a 0-run in M, we say that R is critical for e if at least one 0-entry of R is critical for e.

Note that a P-avoiding matrix M is critical for Av≼(P) if and only if each 0-entry of M is critical for at least one 1-entry of P.

Let e be a 1-entry of a pattern P, and let M be a matrix avoiding P. The complexity of a row of M relative to e is the number of 0-runs in this row that are critical for e. The row-complexity of M relative to e is the maximum complexity of a row of M relative to e, and the row-complexity of Av≼(P) relative to e, denoted r(Av≼(P), e), is the supremum of the row-complexities relative to e of the matrices in Av≼(P). When r(Av≼(P), e) is finite, we say that Av≼(P) is row-bounded relative to e and that e is row-bounding; otherwise, Av≼(P) is row-unbounded relative to e.

Notice that in the definition of r(Av≼(P), e), we take the supremum over all the matrices in Av≼(P), not just the critical ones. This makes the definition more convenient to work with, but it does not make any substantial difference. In fact, for a pattern P with a row-bounding 1-entry e, the row-complexity relative to e in Av≼(P) is maximized by a critical P-avoider. To see this, suppose that M is a P-avoiding matrix, that M′ is any critical P-avoiding matrix dominating M, and that f is a 0-entry of M that is critical for e. Then f is necessarily also a 0-entry of M′, and it is still critical for e in M′. Therefore, the row-complexity of M′ relative to e is at least as large as the row-complexity of M relative to e.

Observe that the following inequalities hold for any pattern P:

 max_{e∈supp(P)} r(Av≼(P), e)  ≤  r(Av≼(P))  ≤  ∑_{e∈supp(P)} r(Av≼(P), e).

In particular, a pattern P is row-bounding if and only if each 1-entry of P is row-bounding.

Lemma 3.7.

Let P be a pattern, and let M be a P-avoiding matrix. Let R be a horizontal 0-run of M, and let f be a 0-entry in this 0-run. Assume that there is an embedding φ of P into M ⊕ f. Then P has a 1-entry e mapped by φ to f, and moreover, every entry of P in the same column as e is mapped by φ to a column containing an entry from R.

Proof.

Clearly, φ must map a 1-entry of P to the entry f, since otherwise φ would also be an embedding of P into M, and M would not be P-avoiding.

Suppose now that R consists of the entries (i,a), (i,a+1), …, (i,b) for a row i and columns a ≤ b. Let g be an entry of P in the same column as e, and suppose that φ maps g to an entry in a column c with c < a or c > b. Assume that c < a, the case c > b being analogous. Then we may modify φ to map e to the 1-entry (i, a−1) instead of f, obtaining an embedding of P into M, which is a contradiction. ∎

Criteria for relative row-boundedness.

Let us first point out a trivial but useful fact: if P′ is a pattern obtained from a pattern P by reversing the order of rows (i.e., by turning P upside down), then a 1-entry of P is row-bounding if and only if the corresponding 1-entry of P′ is row-bounding. Analogous properties hold for reversing the order of columns and for 180-degree rotation. Similarly, operations that map rows to columns, such as transposition or 90-degree rotation, map row-bounding 1-entries to column-bounding ones and vice versa.

We will now state several general criteria for row-boundedness of 1-entries, which we will later use to show that every pattern avoiding Q1, …, Q4 is row-bounding.

Lemma 3.8.

If is a pattern with a row and a column such that , then every 1-entry of in the interval is row-bounding (see Figure 6).

Proof.

Let be a 1-entry of with . Let be a -avoider, let be a 0-entry of critical for , and let be the horizontal 0-run containing .

We claim that in the row of , there are fewer than 1-entries to the left of . Suppose this is not the case, i.e., row contains distinct 1-entries , numbered left to right, all of them to the left of .

Let be an embedding of into which maps to . Recall from Lemma 3.7 that all the entries in column of are mapped to columns intersecting . In particular, all the entries from column are mapped to the right of .

We define a partial embedding of into , as follows. Firstly, maps the entries of to the 1-entries of . Next, maps each 1-entry of that is not among to the same entry as . We easily see that is a partial embedding of into , a contradiction.

Therefore, there are fewer than 1-entries in row to the right of , and hence row has at most 0-runs critical for . Consequently, and is row-bounding. ∎

The assumptions of Lemma 3.8 are satisfied when the chosen column is the leftmost nonempty column of the pattern and the chosen row is arbitrary. We state this important special case as a separate corollary.

Corollary 3.9.

Any 1-entry in the leftmost nonempty column of a pattern is row-bounding.

Lemma 3.10.

Let be a pattern with a row , and two distinct columns , such that all the 1-entries of in row belong to the interval . Moreover, if is a column index with , then has no 1-entry in column except possibly for the entry . Suppose furthermore that satisfies one of the following three conditions (see Figure 6):

• All the 1-entries of above row are in a single row , and all the 1-entries below row are in a single row .

• All the 1-entries of above row are in a single row , and all the 1-entries below row are in the submatrix .

• All the 1-entries of above row are in the submatrix , and all the 1-entries below row are in the submatrix .

Then every 1-entry in the interval is row-bounding.

Proof.

Let  be a pattern satisfying the assumptions, and let . We will show that for each 1-entry  of and every -