1. Introduction
Determining if an object can be decomposed as the ‘product’ of two simpler objects is a ubiquitous theme in mathematics and computer science. For example, every integer has a unique factorization into primes, and every finite abelian group is the direct sum of cyclic groups. Moreover, algorithms to efficiently find such ‘factorizations’ are widely studied, since many algorithmic problems are easy on indecomposable instances. In this paper, our objects of interest are matrices and polytopes.
For a matrix $M$, we let $M^j$ be the $j$-th column of $M$. The 1-product of $A \in \mathbb{R}^{m_1 \times n_1}$ and $B \in \mathbb{R}^{m_2 \times n_2}$ is the matrix $A \times_1 B \in \mathbb{R}^{(m_1+m_2) \times n_1 n_2}$ such that for each $j \in [n_1 n_2]$,
$$(A \times_1 B)^j = \binom{A^{j_1}}{B^{j_2}},$$
where $j_1 \in [n_1]$ and $j_2 \in [n_2]$ satisfy $j = (j_1 - 1) n_2 + j_2$. For example,
$$\begin{pmatrix} 1 & 2 \end{pmatrix} \times_1 \begin{pmatrix} 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 2 & 2 \\ 3 & 4 & 3 & 4 \end{pmatrix}.$$
Two matrices are isomorphic if one can be obtained from the other by permuting rows and columns. A matrix $M$ is a 1-product if there exist two nonempty matrices $A$ and $B$ such that $M$ is isomorphic to $A \times_1 B$. The following is our first main result.
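To make the definition concrete, the following Python sketch builds the 1-product of two matrices encoded as lists of rows (the function name `one_product` and the encoding are our own illustrative choices):

```python
def one_product(A, B):
    """Sketch of the 1-product A x1 B: its columns are all stacked pairs
    (column of A over column of B), giving n1 * n2 columns in total.
    Matrices are encoded as lists of rows."""
    cols_A = list(zip(*A))  # columns of A
    cols_B = list(zip(*B))  # columns of B
    # all pairs of columns, with the B-index varying fastest
    cols = [ca + cb for ca in cols_A for cb in cols_B]
    return [list(row) for row in zip(*cols)]  # back to row-major
```

For instance, `one_product([[1, 2]], [[3, 4]])` returns `[[1, 1, 2, 2], [3, 4, 3, 4]]`: each column of the result pairs a column of the first factor with a column of the second.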
Theorem 1.
Given $M \in \mathbb{R}^{m \times n}$, there is an algorithm that is polynomial in the size of $M$ which correctly determines if $M$ is a 1-product and, in case it is, outputs two matrices $A$, $B$ such that $M$ is isomorphic to $A \times_1 B$.
A straightforward implementation of our algorithm would run in time polynomial in $m$ and $n$. However, below we do not explicitly state the running times of our algorithms, nor try to optimize them.
The proof of Theorem 1 is by reduction to symmetric submodular function minimization using the concept of mutual information from information theory. Somewhat surprisingly, we do not know of a simpler proof of Theorem 1.
Our main motivation for Theorem 1 is geometric. If $P_1$ and $P_2$ are polytopes, then their Cartesian product is the polytope $P_1 \times P_2 := \{(x_1, x_2) : x_1 \in P_1,\ x_2 \in P_2\}$.
Notice that if $P$ is given by an irredundant inequality description, determining if $P = P_1 \times P_2$ for some polytopes $P_1$, $P_2$ amounts to determining whether the constraint matrix can be put in block-diagonal structure. If $P$ is given as a list of vertices, then the algorithm of Theorem 1 determines if $P$ is a Cartesian product.
Furthermore, it turns out that the 1-product of matrices corresponds to the Cartesian product of polytopes if we represent a polytope via its slack matrix, which we now describe.
Let $P = \{x \in \mathbb{R}^d : Ax \le b\} = \mathrm{conv}(V)$, where $A \in \mathbb{R}^{m \times d}$, $b \in \mathbb{R}^m$, and $V = \{v_1, \dots, v_n\} \subseteq \mathbb{R}^d$. The slack matrix associated to these descriptions of $P$ is the $m \times n$ matrix $S$ with $S_{ij} := b_i - A_i v_j$. That is, $S_{ij}$ is the slack of point $v_j$ with respect to the inequality $A_i x \le b_i$.
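As a toy illustration, the following sketch (names are ours) computes a slack matrix from an inequality description and a vertex list, here for the unit square:

```python
def slack_matrix(A, b, V):
    """S[i][j] = b[i] - <A[i], V[j]>: the slack of vertex j
    with respect to inequality i."""
    return [[bi - sum(a * v for a, v in zip(Ai, Vj)) for Vj in V]
            for Ai, bi in zip(A, b)]

# unit square [0,1]^2 described by -x <= 0, x <= 1, -y <= 0, y <= 1
A = [[-1, 0], [1, 0], [0, -1], [0, 1]]
b = [0, 1, 0, 1]
V = [[0, 0], [1, 0], [0, 1], [1, 1]]
S = slack_matrix(A, b, V)
```

Here `S` is a 0/1 matrix, consistent with the square being the Cartesian product of two segments.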
Slack matrices were introduced in a seminal paper of Yannakakis [yannakakis1991expressing], as a tool for reasoning about the extension complexity of polytopes (see [conforti2013extended]).
Our second main result is the following corollary to Theorem 1.
Theorem 2.
Given a polytope $P$ represented by its slack matrix $S$, there is an algorithm that is polynomial in the size of $S$ which correctly determines if $P$ is affinely equivalent to a Cartesian product $P_1 \times P_2$ and, in case it is, outputs two matrices $S_1$, $S_2$ such that $S_i$ is the slack matrix of $P_i$, for $i = 1, 2$.
Some comments are in order here. First, our algorithm determines whether a polytope is affinely equivalent to a Cartesian product of two polytopes. As affine transformations do not preserve the property of being a Cartesian product, this is a different problem than that of determining whether $P$ equals $P_1 \times P_2$ for some polytopes $P_1$, $P_2$. Second, the definition of 1-product can be extended to a more complex operation, which we call the 2-product. Theorems 1 and 2 can be extended to handle 2-products; see Theorem 12.
Slack matrices are fascinating objects that are not fully understood. For instance, given a matrix $S$, the complexity of determining whether $S$ is the slack matrix of some polytope is open. In [gouveia2013nonnegative], the problem has been shown to be equivalent to the Polyhedral Verification Problem (see [kaibel2003some]): given a vertex description of a polytope $P$ and an inequality description of a polytope $Q$, determine whether $P = Q$.
Polytopes that admit a 0/1-valued slack matrix are called 2-level polytopes. These form a rich class of polytopes including stable set polytopes of perfect graphs, Birkhoff polytopes, and Hanner polytopes (see [aprile2018thesis, aprile20182, macchia2018two] for more examples and details). We conjecture that slack matrix recognition is polynomial for 2-level polytopes.
Conjecture 3.
Given a 0/1 matrix $S$, there is an algorithm that is polynomial in the size of $S$ which correctly determines if $S$ is the slack matrix of a polytope.
Conjecture 3 seems hard to settle; however, it has been proven for certain restricted classes of 2-level polytopes, most notably for stable set polytopes of perfect graphs [aprile2018thesis]. As a final result, we apply Theorem 1 and its extension to 2-products to show that Conjecture 3 holds for 2-level matroid base polytopes (precise definitions will be given later).
Theorem 4.
Given a 0/1 matrix $S$, there is an algorithm that is polynomial in the size of $S$ which correctly determines if $S$ is the slack matrix of a 2-level matroid base polytope.
Paper Outline. In Section 2 we study the properties of 1-products and 2-products in terms of slack matrices, proving Lemmas 6 and 7. In Section 3 we give algorithms to efficiently recognize 1-products and 2-products (Theorems 1 and 12), as well as a unique decomposition result for 1-products (Lemma 11). Finally, in Section 4 we apply the previous results to slack matrices of matroid base polytopes, obtaining Theorem 4.
The results presented in this paper are contained in the PhD thesis of the first author [aprile2018thesis], to which we refer for further details.
2. Properties of 1products and 2products
Here we study the 1-product of matrices defined in the introduction, as well as the 2-product. We remark that the notion of 2-product and the related results can be generalized to $k$-products for every $k \ge 2$ (see [aprile2018thesis] for more details). The 2-product operation is similar to the glued product of polytopes in [margot1995composition], except that the latter is defined for 0/1 polytopes, while we deal with general matrices.
We show that, under certain assumptions, the operations of 1-product and 2-product preserve the property of being a slack matrix. We recall the following characterization of slack matrices, due to [gouveia2013nonnegative].
Theorem 5 (Gouveia et al. [gouveia2013nonnegative]).
Let $S \in \mathbb{R}^{m \times n}_{\ge 0}$ be a nonnegative matrix of rank at least 2. Then $S$ is the slack matrix of a polytope if and only if $\mathrm{conv}(\mathrm{cols}(S)) = \mathrm{aff}(\mathrm{cols}(S)) \cap \mathbb{R}^m_{\ge 0}$, where $\mathrm{cols}(S)$ denotes the set of columns of $S$. Moreover, if $S$ is the slack matrix of a polytope $P$, then $P$ is affinely equivalent to $\mathrm{conv}(\mathrm{cols}(S))$.
Throughout the paper, we will assume that the matrices we deal with have rank at least 2, so as to apply Theorem 5 directly.
We point out that the slack matrix of a polytope $P$ is not unique, as it depends on the given descriptions of $P$. We say that a slack matrix of $P$ is nonredundant if its rows bijectively correspond to the facets of $P$ and its columns bijectively correspond to the vertices of $P$. In particular, nonredundant slack matrices do not contain two identical rows or columns, nor rows or columns whose entries are all zero or all nonzero. They are unique up to permuting rows and columns, and up to scaling rows by positive reals.
2.1. 1-products
We show that the 1-product operation preserves the property of being a slack matrix.
Lemma 6.
Let $M \in \mathbb{R}^{m \times n}$ and let $A$, $B$ be nonempty matrices such that $M = A \times_1 B$. Matrix $M$ is the slack matrix of a polytope $P$ if and only if there exist polytopes $P_A$, $P_B$ such that $A$ (resp. $B$) is the slack matrix of $P_A$ (resp. $P_B$) and $P$ is affinely equivalent to $P_A \times P_B$.
Proof.
Let $P_A := \mathrm{conv}(\mathrm{cols}(A))$ and $P_B := \mathrm{conv}(\mathrm{cols}(B))$. From Theorem 5, and since $\mathrm{cols}(M) = \{\binom{x_1}{x_2} : x_1 \in \mathrm{cols}(A),\ x_2 \in \mathrm{cols}(B)\}$, it suffices to prove that (i) $\mathrm{aff}(\mathrm{cols}(M)) = \mathrm{aff}(\mathrm{cols}(A)) \times \mathrm{aff}(\mathrm{cols}(B))$ and (ii) $\mathrm{conv}(\mathrm{cols}(M)) = P_A \times P_B$.
We first prove (i). We have $\mathrm{aff}(\mathrm{cols}(M)) \subseteq \mathrm{aff}(\mathrm{cols}(A)) \times \mathrm{aff}(\mathrm{cols}(B))$ since the right-hand side is an affine subspace containing $\mathrm{cols}(M)$. Now, we prove the reverse inclusion. Take $(x_1, x_2) \in \mathrm{aff}(\mathrm{cols}(A)) \times \mathrm{aff}(\mathrm{cols}(B))$. Then $x_1 = \sum_j \lambda_j A^j$ is an affine combination of points $A^j \in \mathrm{cols}(A)$. Similarly, $x_2 = \sum_k \mu_k B^k$ is an affine combination of points $B^k \in \mathrm{cols}(B)$. Thus we can write $(x_1, x_2)$ as
$$(x_1, x_2) = \sum_j \sum_k \lambda_j \mu_k \binom{A^j}{B^k},$$
where $\sum_j \sum_k \lambda_j \mu_k = \big(\sum_j \lambda_j\big)\big(\sum_k \mu_k\big) = 1$. Hence, $(x_1, x_2) \in \mathrm{aff}(\mathrm{cols}(M))$. Moreover, if $\lambda_j \ge 0$ and $\mu_k \ge 0$ for all $j$, $k$, then the multipliers $\lambda_j \mu_k$ are all nonnegative, which proves (ii). ∎
2.2. 2-products
We now define the operation of 2-product, and show that, under certain natural assumptions, it also preserves the property of being a slack matrix.
Consider two real matrices $A \in \mathbb{R}^{m_1 \times n_1}$ and $B \in \mathbb{R}^{m_2 \times n_2}$, and assume that $A$ (resp. $B$) has a 0/1 row $a$ (resp. $b$), that is, a row whose entries are 0 or 1 only. We call $a$, $b$ special rows. For any matrix $M$ and row $r$ of $M$, we denote by $M - r$ the matrix obtained from $M$ by removing row $r$. The row $a$ determines a partition of $A - a$ into two submatrices according to its 0 and 1 entries: we define $A_0$ to be the matrix obtained from $A$ by deleting the row $a$ and all the columns whose entry in $a$ is 1, and $A_1$ is defined analogously. Thus, up to permuting columns,
$$A - a = \begin{pmatrix} A_0 & A_1 \end{pmatrix}.$$
Similarly, $b$ induces a partition of $B - b$ into $B_0$, $B_1$. Here we assume that none of $A_0$, $A_1$, $B_0$, $B_1$ is empty, that is, we assume that the special rows contain both 0's and 1's.
The 2-product of $A$ with special row $a$ and $B$ with special row $b$ is defined, up to permuting columns, as:
$$(A, a) \times_2 (B, b) := \begin{pmatrix} A_0 \times_1 B_0 & A_1 \times_1 B_1 \\ \mathbf{0}^{\top} & \mathbf{1}^{\top} \end{pmatrix}.$$
Similarly as before, we say that a matrix $M$ is a 2-product if there exist matrices $A$, $B$ and 0/1 rows $a$ of $A$, $b$ of $B$, such that $M$ is isomorphic to $(A, a) \times_2 (B, b)$. Again, we will abuse notation and simply write $M = A \times_2 B$.
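A small Python sketch of the splitting step and of the 2-product itself (function names and the convention of listing the special row last are our own choices; the 2-product is in any case defined only up to permuting rows and columns):

```python
def split_by_special_row(A, r):
    """Partition A minus its 0/1 row r into (A0, A1) according to
    the 0 and 1 entries of row r."""
    special = A[r]
    assert set(special) <= {0, 1}, "special row must be 0/1"
    rest = [row for i, row in enumerate(A) if i != r]
    A0 = [[v for v, s in zip(row, special) if s == 0] for row in rest]
    A1 = [[v for v, s in zip(row, special) if s == 1] for row in rest]
    return A0, A1

def two_product(A, ra, B, rb):
    """Sketch of (A, a) x2 (B, b): glue A0 x1 B0 with A1 x1 B1 and
    append a single special row (0 on the first block, 1 on the second)."""
    def one_product(X, Y):
        cols = [cx + cy for cx in zip(*X) for cy in zip(*Y)]
        return [list(row) for row in zip(*cols)]
    A0, A1 = split_by_special_row(A, ra)
    B0, B1 = split_by_special_row(B, rb)
    left, right = one_product(A0, B0), one_product(A1, B1)
    body = [l + r for l, r in zip(left, right)]
    return body + [[0] * len(left[0]) + [1] * len(right[0])]
```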
For a polytope $P$ with slack matrix $S$, consider a row $r$ of $S$ corresponding to an inequality $\langle c, x \rangle \le \delta$ that is valid for $P$. We say that the inequality is 2-level with respect to $P$, and that the row $r$ is 2-level with respect to $S$, if there exists a real $\delta' < \delta$ such that all the vertices of $P$ either lie on the hyperplane $\{x : \langle c, x \rangle = \delta\}$ or on the hyperplane $\{x : \langle c, x \rangle = \delta'\}$. We notice that, if $r$ is 2-level, then its entries can be assumed to be 0/1 after scaling. Moreover, adding to $S$ the row $\mathbf{1} - r$ (that is, the complement of the 0/1 row $r$) gives another slack matrix of $P$. Indeed, such a row corresponds to the valid inequality $\langle c, x \rangle \ge \delta'$.
The latter observation is crucial for our next lemma: we show that, if the special rows are chosen to be 2-level, the operation of 2-product essentially preserves the property of being a slack matrix. We remark that having a 2-level row is quite a natural condition. For instance, for 0/1 polytopes, any nonnegativity constraint yields a 2-level row in the corresponding slack matrix. By definition, all facet-defining inequalities of a 2-level polytope are 2-level. Finally, we would like to mention that the following result could be derived from results in [margot1995composition] (see also [conforti2016projected]), but we give here a new, direct proof.
Lemma 7.
Let $M \in \mathbb{R}^{m \times n}$ and let $A$, $B$ be matrices such that $M = (A, a) \times_2 (B, b)$ for some 2-level rows $a$ of $A$, $b$ of $B$. The following hold:
(i) If both $A$ and $B$ are slack matrices, then $M$ is a slack matrix.
(ii) If $M$ is a slack matrix, let $A' := A + \bar{a}$ (that is, $A$ with the additional row $\bar{a} := \mathbf{1} - a$), and similarly let $B' := B + \bar{b}$. Then both $A'$ and $B'$ are slack matrices.
Proof.
(i) Let $P_A := \mathrm{conv}(\mathrm{cols}(A))$ and $P_B := \mathrm{conv}(\mathrm{cols}(B))$. Recall that $A$ is the slack matrix of $P_A$ and $B$ is the slack matrix of $P_B$, by Theorem 5. Without loss of generality, $a$ and $b$ are the first rows of $A$ and $B$ respectively. We overload notation and denote by $a(x)$ the first coordinate of $x$ as a point in $\mathbb{R}^{m_1}$, and similarly $b(y)$ for $y \in \mathbb{R}^{m_2}$. Let $H$ denote the hyperplane of $\mathbb{R}^{m_1} \times \mathbb{R}^{m_2}$ defined by the equation $a(x) = b(y)$.
We claim that $M$ is a slack matrix of the polytope $P := (P_A \times P_B) \cap H$. By Lemma 6, $M$ is a submatrix of a slack matrix of $P_A \times P_B$. But the latter might have some extra columns: hence we only need to show that intersecting with $H$ does not create any new vertex.
To this end, we notice that no new vertex is created if and only if there is no edge $e$ of $P_A \times P_B$ such that $e$ intersects $H$ in its interior. Let $e$ be an edge of $P_A \times P_B$, and let $(u_1, v_1)$ and $(u_2, v_2)$ denote its endpoints, where $u_1, u_2 \in P_A$ and $v_1, v_2 \in P_B$. By a well-known property of the Cartesian product, $u_1 = u_2$ or $v_1 = v_2$. Suppose that $(u_1, v_1)$ does not lie on $H$. By symmetry, we may assume that $a(u_1) > b(v_1)$. This implies $a(u_1) = 1$ and $b(v_1) = 0$, which in turn implies $a(u_2) \ge b(v_2)$ (since $u_2 = u_1$ or $v_2 = v_1$). Thus $(u_2, v_2)$ lies on the same side of $H$ as $(u_1, v_1)$, and $e$ cannot intersect $H$ in its interior. Therefore, the claim holds and $M$ is a slack matrix.
(ii) Assume that $M$ is a slack matrix. We show that $A'$ is a slack matrix, using Theorem 5. The argument for $B'$ is symmetric. It suffices to show that $\mathrm{aff}(\mathrm{cols}(A')) \cap \mathbb{R}^{m_1+1}_{\ge 0} \subseteq \mathrm{conv}(\mathrm{cols}(A'))$, since the reverse inclusion is obvious.
Let $x \in \mathrm{aff}(\mathrm{cols}(A')) \cap \mathbb{R}^{m_1+1}_{\ge 0}$. One has $x = \sum_{i=1}^{n_1} \lambda_i c_i$ for some coefficients $\lambda_i$ with $\sum_{i=1}^{n_1} \lambda_i = 1$, where $c_i := (A')^i$ for $i \in [n_1]$. We partition the index set $[n_1]$ into $I_0$ and $I_1$, so that $i \in I_0$ (resp. $i \in I_1$) if $c_i$ has its entry corresponding to $a$ equal to 0 (resp. 1). For simplicity, we may assume that $a$ is the first row of $A'$, and $\bar a$ the second. Then, the first coordinate of $x$ is $x_1 = \sum_{i \in I_1} \lambda_i$, and the second is $x_2 = \sum_{i \in I_0} \lambda_i$. Notice that $x_1 + x_2 = 1$.
Now, we extend $x$ to a point $\tilde x \in \mathrm{aff}(\mathrm{cols}(M))$ by mapping each $c_i$, $i \in [n_1]$, to a column of $M$, as follows. For each $j \in \{0, 1\}$, fix an arbitrary column $d^{(j)}$ of $B_j$; then map each $c_i$ with $i \in I_j$ to the column of $M$ consisting of $c_i$, without its second component, followed by $d^{(j)}$. We denote such column by $\tilde c_i$, for $i \in [n_1]$, and let $\tilde x := \sum_{i=1}^{n_1} \lambda_i \tilde c_i$.
We claim that $\tilde x \ge 0$. This is trivial for any component corresponding to a row of $A$, since those are components of $x$ as well. Consider a component corresponding to a row of $B - b$, and denote by $\beta_j$ the corresponding component of $d^{(j)}$, for $j = 0, 1$. We have:
$$\sum_{i \in I_0} \lambda_i \beta_0 + \sum_{i \in I_1} \lambda_i \beta_1 = x_2 \beta_0 + x_1 \beta_1 \ge 0.$$
Now, Theorem 5 applied to $M$ implies that $\tilde x \in \mathrm{conv}(\mathrm{cols}(M))$. That is, we can write $\tilde x = \sum_k \mu_k y_k$ where the $y_k$ are columns of $M$, $\mu_k \ge 0$ for all $k$, and $\sum_k \mu_k = 1$. For each $k$, let $z_k$ denote the column vector obtained from $y_k$ by restricting to the rows of $A$ and inserting as a second component $1 - a(y_k)$; notice that $z_k$ is a column of $A'$. We claim that $x = \sum_k \mu_k z_k$, which implies that $x \in \mathrm{conv}(\mathrm{cols}(A'))$ and concludes the proof. The claim is trivially true for all components of $x$ except for the second, for which one has $x_2 = 1 - x_1 = 1 - \sum_k \mu_k a(y_k) = \sum_k \mu_k (1 - a(y_k))$, since $\sum_k \mu_k = 1$ and by definition of $z_k$. ∎
3. Algorithms
In this section we study the problem of recognizing 1-products. Given a matrix $M$, we want to determine whether $M$ is a 1-product and, if so, find matrices $A$, $B$ such that $M = A \times_1 B$. Since we allow the rows and columns of $M$ to be permuted in an arbitrary way, the problem is nontrivial.
At the end of the section, we extend our methods to the problem of recognizing 2-products. We remark that the results in this section naturally extend to a more general operation, the $k$-product, for every constant $k$ (see [aprile2018thesis] for more details).
We begin with a preliminary observation, which is the starting point of our approach. Suppose that a matrix $M$ is a 1-product $A \times_1 B$. Then the rows of $M$ can be partitioned into two sets $R_1$, $R_2$, corresponding to the rows of $A$ and $B$ respectively. We say that $M$ is a 1-product with respect to the partition $(R_1, R_2)$. A column of the form $\binom{x_1}{x_2}$, where $x_i$ is a column vector with components indexed by $R_i$ ($i = 1, 2$), is a column of $M$ if and only if $x_1$ is a column of $A$ and $x_2$ is a column of $B$. Moreover, the number of occurrences of $\binom{x_1}{x_2}$ in $M$ is just the product of the number of occurrences of $x_1$ in $A$ and of $x_2$ in $B$. Under uniform probability distributions on the columns of $M$, $A$ and $B$, the probability of picking $\binom{x_1}{x_2}$ in $M$ is the product of the probability of picking $x_1$ in $A$ and that of picking $x_2$ in $B$. We will exploit this intuition below.
3.1. Recognizing 1-products via submodular minimization
First, we recall some notions from information theory; see [cover2012elements] for a more complete exposition. Let $X$ and $Y$ be two discrete random variables with ranges $\mathcal{X}$ and $\mathcal{Y}$ respectively. The mutual information of $X$ and $Y$ is:
$$I(X; Y) := \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} \Pr[X = x,\, Y = y] \log \frac{\Pr[X = x,\, Y = y]}{\Pr[X = x] \Pr[Y = y]}.$$
The mutual information of two random variables measures how close their joint distribution is to the product of the two corresponding marginal distributions.
We will use the following facts, whose proofs can be found in [krause2005near, cover2012elements]. Let $X_1, \dots, X_m$ be discrete random variables. For $S \subseteq [m]$ we consider the random vectors $X_S := (X_i)_{i \in S}$ and $X_{\bar S} := (X_i)_{i \in \bar S}$, where $\bar S := [m] \setminus S$. The function $f : 2^{[m]} \to \mathbb{R}$ such that
(1)  $f(S) := I(X_S; X_{\bar S})$
will play a crucial role.
Proposition 8.
(i) For all discrete random variables $X$ and $Y$, we have $I(X; Y) \ge 0$, with equality if and only if $X$ and $Y$ are independent.
(ii) If $X_1, \dots, X_m$ are discrete random variables, then the function $f$ as in (1) is submodular.
Let $M$ be an $m \times n$ matrix. Let $Y = (Y_1, \dots, Y_m)$ be a uniformly chosen random column of $M$. That is, $\Pr[Y = y] = n(y)/n$, where $n(y)$ denotes the number of occurrences in $M$ of the column $y$.
Let $f$ be defined as in (1), with $X_i := Y_i$ for $i \in [m]$. We remark that the definition of $f$ depends on $M$, which we consider fixed throughout the section. The set function $f$ is nonnegative (by Proposition 8.(i)), symmetric (that is, $f(S) = f(\bar S)$ for every $S \subseteq [m]$) and submodular (by Proposition 8.(ii)).
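The function $f$ can be computed directly from $M$; a Python sketch (our own naming, with $S$ given as a list of row indices):

```python
from collections import Counter
from math import log2

def f(M, S):
    """f(S) = I(Y_S; Y_complement) for a uniformly random column Y of M."""
    m = len(M)
    T = [i for i in range(m) if i not in S]
    pairs = [(tuple(col[i] for i in S), tuple(col[i] for i in T))
             for col in zip(*M)]
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

For the 1-product `[[1, 1, 2, 2], [3, 4, 3, 4]]` one gets `f(M, [0]) == 0`, whereas for `[[1, 2], [3, 4]]` the two rows are perfectly correlated and `f` is strictly positive; the symmetry of `f` is also easy to check numerically.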
The next lemma shows that we can determine whether $M$ is a 1-product by minimizing $f$.
Lemma 9.
Let $\emptyset \ne S \subsetneq [m]$. Then $f(S) = 0$ if and only if $M$ is a 1-product with respect to the partition $(S, \bar S)$.
Proof.
First, we prove “$\Leftarrow$”. Suppose that $M$ is a 1-product with respect to $(S, \bar S)$, for some nonempty and proper subset $S$ of row indices of $M$. Let $M = A \times_1 B$ be the corresponding 1-product, where $A \in \mathbb{R}^{m_1 \times n_1}$ and $B \in \mathbb{R}^{m_2 \times n_2}$.
For any column $y = \binom{y_1}{y_2}$ of $M$, we have $n(y) = n_1(y_1) \cdot n_2(y_2)$, where $n_i(\cdot)$ denotes the multiplicity of a column in $A$ (for $i = 1$) or in $B$ (for $i = 2$). Hence
$$\Pr[Y = y] = \frac{n(y)}{n} = \frac{n_1(y_1)}{n_1} \cdot \frac{n_2(y_2)}{n_2} = \Pr[Y_S = y_1] \cdot \Pr[Y_{\bar S} = y_2],$$
where we used $n = n_1 n_2$. This proves that $Y_S$ and $Y_{\bar S}$ are independent, hence $f(S) = 0$ by Proposition 8.(i).
We now prove “$\Rightarrow$”. Let $a_1, \dots, a_p$ denote the distinct columns of the restriction $M_S$ of matrix $M$ to the rows in $S$, and $b_1, \dots, b_q$ the distinct columns of $M_{\bar S}$ (the restriction of $M$ to the rows in $\bar S$). Since $f(S) = 0$, the random vectors $Y_S$ and $Y_{\bar S}$ are independent, and we have that, for any column $y = \binom{a_i}{b_j}$ of $M$,
$$\frac{n(y)}{n} = \frac{n_1(a_i)}{n} \cdot \frac{n_2(b_j)}{n},$$
where $n_1$ and $n_2$ denote multiplicities in $M_S$ and $M_{\bar S}$ respectively.
Now, let $N \in \mathbb{Z}_{\ge 0}^{p \times q}$ denote the matrix such that $N_{ij} := n\binom{a_i}{b_j}$. We have shown that $N$ is a nonnegative integer matrix with a rank-1 nonnegative factorization of the form $N = u v^{\top}$, where $u_i := n_1(a_i)$ and $v_j := n_2(b_j)/n$, for $i \in [p]$ and $j \in [q]$.
Next, one can easily turn this nonnegative factorization into an integer one. Suppose that $v_j$ is fractional for some $j$. Writing $v_j$ as $r/s$, where $r$ and $s$ are coprime integers, we see that $s$ divides $u_i$ for every $i$, since $u_i v_j = N_{ij}$ is integer. Then the factorization $N = (u/s)(s v)^{\top}$ is such that $u/s$ is integer and $s v$ has at least one more integer component than $v$. Iterating this argument, we obtain $N = \bar u \bar v^{\top}$, where $\bar u$, $\bar v$ have nonnegative integer entries.
Finally, let $A$ be the matrix consisting of the column $a_i$ repeated $\bar u_i$ times, for $i \in [p]$, and construct $B$ from the $b_j$'s and $\bar v$ in an analogous way. Then it is immediate to see that $M = A \times_1 B$, and in particular $M$ is a 1-product with respect to the row partition $(S, \bar S)$, which concludes the proof. ∎
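The integrality step in the proof can be carried out with exact rational arithmetic; a sketch (function name ours), assuming every product $u_i v_j$ is integral:

```python
from fractions import Fraction

def integerize(u, v):
    """Rescale a nonnegative rank-1 factorization u v^T, with all products
    u_i * v_j integral, into one where both vectors are integral."""
    u = [Fraction(x) for x in u]
    v = [Fraction(x) for x in v]
    while any(x.denominator > 1 for x in v):
        # v_j = r/s in lowest terms; s divides every u_i since u_i * v_j is integral
        s = next(x.denominator for x in v if x.denominator > 1)
        u = [ui / s for ui in u]
        v = [vk * s for vk in v]
    return [int(x) for x in u], [int(x) for x in v]
```

For example, `integerize([4, 6], [0.5, 1.5])` yields `([2, 3], [1, 3])`, and the outer product is unchanged.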
Notice that the previous proof also gives a way to efficiently reconstruct $A$ and $B$, once we have identified $S$ such that $f(S) = 0$. In particular, if the columns of $M$ are all distinct, then reconstructing $A$ and $B$ is immediate: $A$ consists of all the distinct columns of $M_S$, each taken once, and $B$ is obtained analogously from $M_{\bar S}$. The last ingredient we need is that every symmetric submodular function can be minimized in polynomial time. Here we assume that we are given a polynomial-time oracle to compute our function.
Theorem 10 (Queyranne [queyranne1998minimizing]).
There is a polynomial time algorithm that outputs a set $S^*$ such that $\emptyset \ne S^* \subsetneq [m]$ and $f(S^*)$ is minimum, where $f : 2^{[m]} \to \mathbb{R}$ is any given symmetric submodular function.
As a direct consequence, we obtain Theorem 1.
Proof of Theorem 1.
It is clear that $f(S)$ can be computed in polynomial time for any given $S \subseteq [m]$. It suffices then to run Queyranne's algorithm to find $S^*$ minimizing $f$. If $f(S^*) > 0$, then $M$ is not a 1-product. Otherwise, $M$ is a 1-product with respect to $(S^*, \bar{S^*})$, and $A$, $B$ can be reconstructed as described in the proof of Lemma 9. ∎
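As a sanity check, a brute-force variant of the whole procedure can be written in a few lines: it replaces Queyranne's algorithm by exhaustive search over subsets (hence exponential in the number of rows, unlike the algorithm of Theorem 1) and handles the distinct-columns case of the reconstruction. All names are our own:

```python
from collections import Counter
from itertools import combinations
from math import log2

def f(M, S):
    """f(S) = I(Y_S; Y_complement) for a uniformly random column Y of M."""
    T = [i for i in range(len(M)) if i not in S]
    pairs = [(tuple(c[i] for i in S), tuple(c[i] for i in T)) for c in zip(*M)]
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def recognize_one_product(M):
    """Return (A, B) with M = A x1 B (assuming distinct columns), else None."""
    m = len(M)
    for size in range(1, m):
        # fix row 0 in the complement to avoid testing each split twice
        for S in combinations(range(1, m), size):
            if abs(f(M, S)) < 1e-9:
                T = [i for i in range(m) if i not in S]
                A = sorted(set(tuple(c[i] for i in S) for c in zip(*M)))
                B = sorted(set(tuple(c[i] for i in T) for c in zip(*M)))
                return ([list(r) for r in zip(*A)], [list(r) for r in zip(*B)])
    return None
```

For instance, `recognize_one_product([[1, 1, 2, 2], [3, 4, 3, 4]])` recovers the two one-row factors, while matrices with perfectly correlated rows are reported as not being 1-products.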
We conclude the section with a decomposition result which will be useful in the next section. We call a matrix irreducible if it is not a 1-product. The result below generalizes the fact that a polytope can be uniquely decomposed as a Cartesian product of “irreducible” polytopes.
Lemma 11.
Let $M$ be a 1-product. Then there exists a partition $R_1, \dots, R_t$ of the row set $[m]$ of $M$ such that:
(i) $M$ is a 1-product with respect to $(R_k, \bar{R_k})$ for all $k \in [t]$;
(ii) for all $k \in [t]$ and all nonempty proper subsets $R'$ of $R_k$, $M$ is not a 1-product with respect to $(R', \bar{R'})$;
(iii) the partition $R_1, \dots, R_t$ is unique up to permuting the labels.
In particular, if $M$ has all distinct columns, then there are matrices $A_1, \dots, A_t$ such that $M = A_1 \times_1 \cdots \times_1 A_t$, each $A_k$ is irreducible, and the choice of the $A_k$'s is unique up to renaming and permuting columns.
Proof.
Let $f$ be the function defined in Equation (1). Let $\mathcal{F} := \{S : \emptyset \ne S \subsetneq [m],\ f(S) = 0\}$. Let $R_1, \dots, R_t$ be the minimal (under inclusion) nonempty members of $\mathcal{F}$. Since $f$ is nonnegative and submodular, if $S, T \in \mathcal{F}$, then $f(S \cap T) + f(S \cup T) \le f(S) + f(T) = 0$, hence $f(S \cap T) = 0$. By minimality, this implies that $R_k \cap R_l = \emptyset$ for all $k \ne l$. Since $f$ is symmetric, $R_1 \cup \dots \cup R_t = [m]$. By Lemma 9, $R_1, \dots, R_t$ satisfy (i) and (ii). Conversely, by Lemma 9, if $R'_1, \dots, R'_s$ is a partition of $[m]$ satisfying (i) and (ii), then $R'_1, \dots, R'_s$ are the minimal nonempty members of $\mathcal{F}$, which proves uniqueness.
To conclude, assume that $M$ has all distinct columns. Then, as argued above, each $A_k$ is obtained by picking each distinct column of $M_{R_k}$ exactly once, and it is thus unique up to permuting columns, once $R_k$ is fixed. Each $A_k$ is irreducible thanks to the minimality of $R_k$ and to Lemma 9. The fact that the $R_k$'s are unique up to renaming concludes the proof. ∎
3.2. Extension to 2-products
We now extend the previous results to obtain a polynomial algorithm to recognize 2-products. Recall that, if a matrix $M$ is a 2-product, then it has a special row $r$ that partitions $M - r$ into submatrices $M_0$, $M_1$, which are 1-products with respect to the same row partition. Hence, our algorithm starts by guessing the special row $r$, obtaining the corresponding submatrices $M_0$, $M_1$. Let $f_0$ (resp. $f_1$) denote the function defined in (1) with respect to the matrix $M_0$ (resp. $M_1$), and let $g := f_0 + f_1$. Notice that $g$ is symmetric and submodular, and is zero if and only if each $f_i$ is. Let $S$ be a nonempty proper subset of the non-special rows of $M$ (which are the rows of $M_0$ and $M_1$). It is an easy consequence of Lemma 9 that $M_0$ and $M_1$ are 1-products with respect to $(S, \bar S)$ if and only if $g(S) = 0$. Then $M$ is a 2-product with respect to the chosen special row if and only if the minimum of $g$ is zero.
Once a feasible partition is found, $A$ and $B$ can be reconstructed by first reconstructing the factors of $M_0$ and $M_1$, and then concatenating them and adding the special rows. We obtain the following:
Theorem 12.
Let $M \in \mathbb{R}^{m \times n}$. There is an algorithm that is polynomial in the size of $M$ which determines whether $M$ is a 2-product and, in case it is, outputs two matrices $A$, $B$ and special rows $a$ of $A$, $b$ of $B$, such that $M = (A, a) \times_2 (B, b)$.
In order to apply Theorem 12 to decompose slack matrices, we need to deal with one last issue. In the algorithm, it is essential to guess the special row that partitions the column set into 1-products. However, in principle there might be a slack matrix that is obtained as the 2-product of other slack matrices, but where the special row is redundant. Then, deleting such a row still gives a slack matrix, but we cannot recognize this matrix as a 2-product any more using our algorithm. However, the next lemma ensures that this does not happen, as long as the special rows are not redundant in the factors of the 2-product.
Lemma 13.
Let $M = (A, a) \times_2 (B, b)$ for some special rows $a$ of $A$ and $b$ of $B$. Assume that $A$, $B$ are slack matrices, and that the rows $a$, $b$ are nonredundant for $A$, $B$ respectively. Then the special row of $M$ is nonredundant as well.
Proof.
Notice that, in any slack matrix, a row $r'$ is redundant if and only if its set of zeros is strictly contained in the set of zeros of another row $r$ (we write that $r$ dominates $r'$, for brevity). Assume by contradiction that the special row of $M$ is redundant, and let $r$ be another row of $M$ that dominates it. Let us assume by symmetry that $r$ corresponds to a row $\tilde r$ of $A$, i.e. $r$ consists (up to permutation) of the entries of $\tilde r$, each repeated according to the number of columns of $M$ it appears in. Then it is clear that $\tilde r$ dominates $a$, hence $a$ is redundant in $A$, a contradiction. ∎
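The domination criterion used in the proof is easy to implement; a sketch (names ours):

```python
def zero_set(row):
    """Indices of the zero entries of a row."""
    return {j for j, v in enumerate(row) if v == 0}

def is_redundant_row(S, i):
    """Per the criterion above: a row of a slack matrix is redundant iff its
    zero set is strictly contained in the zero set of another row."""
    zi = zero_set(S[i])
    return any(k != i and zi < zero_set(S[k]) for k in range(len(S)))
```

For the slack matrix of the unit square with an extra row for the redundant inequality $x + y \le 2$, that extra row is dominated by the row of $x \le 1$, and `is_redundant_row` detects it.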
4. Application to 2-level matroid base polytopes
In this section, we use the results of Section 3 to derive a polynomial time algorithm to recognize the slack matrix of a 2-level matroid base polytope.
We start with some basic definitions and facts about matroids, and we refer the reader to [oxley2006matroid] for missing definitions and details. We regard a matroid $\mathcal{M}$ as a pair $(E, \mathcal{B})$, where $E$ is the ground set of $\mathcal{M}$ and $\mathcal{B}$ is its set of bases. The dual matroid of $\mathcal{M}$, denoted by $\mathcal{M}^*$, is the matroid on the same ground set whose bases are the complements of the bases of $\mathcal{M}$. An element $e \in E$ is called a loop (respectively, a coloop) of $\mathcal{M}$ if it appears in none (respectively, all) of the bases of $\mathcal{M}$. Given an element $e$, the deletion of $e$ is the matroid $\mathcal{M} \setminus e$ on $E - e$ whose bases are the bases of $\mathcal{M}$ that do not contain $e$. The contraction of $e$ is the matroid $\mathcal{M}/e$ on $E - e$ whose bases are of the form $B - e$, where $B$ is a basis of $\mathcal{M}$ that contains $e$. A matroid is uniform if $\mathcal{B} = \binom{E}{r}$, where $r$ is the rank of $\mathcal{M}$. We denote the uniform matroid with $n$ elements and rank $r$ by $U_{r,n}$.
Consider matroids $\mathcal{M}_1 = (E_1, \mathcal{B}_1)$ and $\mathcal{M}_2 = (E_2, \mathcal{B}_2)$, with nonempty ground sets. If $E_1 \cap E_2 = \emptyset$, the 1-sum $\mathcal{M}_1 \oplus \mathcal{M}_2$ is defined as the matroid with ground set $E_1 \cup E_2$ and base set $\{B_1 \cup B_2 : B_1 \in \mathcal{B}_1,\ B_2 \in \mathcal{B}_2\}$. If, instead, $E_1 \cap E_2 = \{e\}$, where $e$ is neither a loop nor a coloop in $\mathcal{M}_1$ or $\mathcal{M}_2$, we let the 2-sum $\mathcal{M}_1 \oplus_2 \mathcal{M}_2$ be the matroid with ground set $(E_1 \cup E_2) - e$ and base set $\{(B_1 \cup B_2) - e : B_1 \in \mathcal{B}_1,\ B_2 \in \mathcal{B}_2,\ e \in B_1 \triangle B_2\}$. A matroid is connected if it cannot be written as the 1-sum of two matroids, each with fewer elements. It is well known that $\mathcal{M}_1 \oplus_2 \mathcal{M}_2$ is connected if and only if so are $\mathcal{M}_1$ and $\mathcal{M}_2$.
The base polytope $B(\mathcal{M})$ of a matroid $\mathcal{M}$ is the convex hull of the characteristic vectors of its bases. It is well known that:
$$B(\mathcal{M}) = \Big\{x \in \mathbb{R}^{E}_{\ge 0} : \sum_{e \in E} x_e = \mathrm{rk}(E),\ \sum_{e \in F} x_e \le \mathrm{rk}(F)\ \text{for all}\ F \subseteq E\Big\},$$
where $\mathrm{rk}$ denotes the rank function of $\mathcal{M}$. It is easy to see that, if $\mathcal{M} = \mathcal{M}_1 \oplus \mathcal{M}_2$, then $B(\mathcal{M})$ is the Cartesian product $B(\mathcal{M}_1) \times B(\mathcal{M}_2)$, hence its slack matrix is a 1-product thanks to Lemma 6. If $\mathcal{M} = \mathcal{M}_1 \oplus_2 \mathcal{M}_2$, then a slightly less trivial polyhedral relation holds, providing a connection with the 2-product of slack matrices. We will explain this connection below. We remark that, for any matroid $\mathcal{M}$, the base polytopes $B(\mathcal{M})$ and $B(\mathcal{M}^*)$ are affinely equivalent via the transformation $x \mapsto \mathbf{1} - x$, and hence have the same slack matrix.
Our algorithm is based on the following decomposition result, which characterizes those matroids $\mathcal{M}$ whose base polytope $B(\mathcal{M})$ is 2-level (equivalently, admits a 0/1 slack matrix).
Theorem 14 ([Grande16]).
The base polytope of a matroid $\mathcal{M}$ is 2-level if and only if $\mathcal{M}$ can be obtained from uniform matroids through a sequence of 1-sums and 2-sums.
The general idea is to use the algorithms from Theorems 1 and 12 to decompose our candidate slack matrix as a 1-product and 2-product, until each factor corresponds to the slack matrix of a uniform matroid. The latter can be easily recognized. Indeed, the base polytope of the uniform matroid $U_{r,n}$ is the hypersimplex $\Delta_{r,n} := \{x \in [0,1]^n : \sum_{i=1}^n x_i = r\}$. If $1 < r < n - 1$, the (irredundant, 0/1) slack matrix of $\Delta_{r,n}$ has $2n$ rows, corresponding to the inequalities $x_i \ge 0$ and $x_i \le 1$, and columns of the form $\binom{\chi_B}{\mathbf{1} - \chi_B}$, where $\chi_B$ is the characteristic vector of a basis $B$ of $U_{r,n}$.
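The slack matrix of a hypersimplex just described can be generated directly; a Python sketch (our own naming), which also makes the 0/1 structure evident:

```python
from itertools import combinations

def hypersimplex_slack(n, r):
    """0/1 slack matrix of the hypersimplex: 2n rows for the inequalities
    x_i >= 0 and x_i <= 1, one column per r-subset of {0, ..., n-1}."""
    verts = [[1 if i in c else 0 for i in range(n)]
             for c in combinations(range(n), r)]
    lower = [[v[i] for v in verts] for i in range(n)]      # slack of x_i >= 0
    upper = [[1 - v[i] for v in verts] for i in range(n)]  # slack of x_i <= 1
    return lower + upper
```

For `hypersimplex_slack(4, 2)` one gets an 8 x 6 matrix with 0/1 entries in which every column contains exactly 4 ones.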