 # On a weighted linear matroid intersection algorithm by deg-det computation

In this paper, we address the weighted linear matroid intersection problem via the computation of the degree of the determinant of a symbolic matrix. We show that a generic algorithm computing the degree of noncommutative determinants, proposed by the second author, becomes an O(mn^3 log n) time algorithm for the weighted linear matroid intersection problem, where the two matroids are given by the column vectors of n × m matrices A, B. We reveal that our algorithm can be viewed as a "nonstandard" implementation of Frank's weight splitting algorithm for linear matroids. This gives a linear algebraic interpretation of Frank's algorithm. Although our algorithm is slower than existing algorithms in the worst-case estimate, it has a notable feature: contrary to existing algorithms, our algorithm works on different matroids represented by other "sparse" matrices A^0, B^0, which skips unnecessary Gaussian eliminations for constructing residual graphs.


## 1 Introduction

Several basic combinatorial optimization problems have linear algebraic formulations. It is classically known that the maximum cardinality of a matching in a bipartite graph G with color classes U and V is equal to the rank of the U × V matrix whose (u, v) entry is a variable x_{uv} if uv is an edge of G and zero otherwise. Such a rank interpretation is also known for the linear matroid intersection, nonbipartite matching, and linear matroid matching problems.

The degree of the determinant of a polynomial (or rational) matrix is a weighted counterpart of the rank, and can formulate weighted versions of combinatorial optimization problems. The maximum weight perfect matching problem in a bipartite graph with integer edge weights c_{uv} corresponds to computing the degree (in t) of the determinant of the rational matrix whose (u, v) entry is x_{uv}t^{c_{uv}} if uv is an edge and zero otherwise. Again, the weighted linear matroid intersection, nonbipartite matching, and linear matroid matching problems have such formulations.

Inspired by the recent advance [5, 8] of a noncommutative approach to symbolic rank computation, the second author introduced the problem of computing the degree of the Dieudonné determinant Det M of a matrix M = M_1x_1 + M_2x_2 + ⋯ + M_mx_m, where x_1, x_2, …, x_m are pairwise noncommutative variables and each M_i is a rational matrix in a commuting variable t. He established a general min-max formula for deg Det M, presented a conceptually simple and generic algorithm, referred to here as Deg-Det, for computing deg Det M, and showed that deg Det M = deg det M holds if M corresponds to an instance of the weighted linear matroid intersection problem. In particular, Deg-Det gives rise to a pseudo-polynomial time algorithm for the weighted linear matroid intersection problem. In the first version of the paper, the second author asked (i) whether Deg-Det can be a (strongly) polynomial time algorithm for the weighted linear matroid intersection problem, and (ii) how Deg-Det is related to the existing algorithms for this problem. He pointed out some connection of Deg-Det to the primal-dual algorithm by Lawler, but the precise relation was not clear.

The main contribution of this paper is to answer the questions (i) and (ii):

• We show that Deg-Det becomes an O(mn^3 log n) time algorithm for the weighted linear matroid intersection problem, where the two matroids are represented and given by two n × m matrices A, B. This answers affirmatively the first question.

• For the second question, we reveal the relation between our algorithm and the weight splitting algorithm by Frank. This gives a linear algebraic interpretation of Frank's algorithm. We show that the behavior of our algorithm is precisely the same as that of a slightly modified version of Frank's algorithm. Our algorithm is different from the standard implementation of Frank's algorithm for linear matroids. This relationship was unexpected and nontrivial for us, since the two algorithms look quite different.

Although our algorithm is slower than the standard implementation of Frank's algorithm in the worst-case estimate, it has a notable feature. Frank's algorithm works on a subgraph G′ of the residual graph G, where G is determined by Gaussian elimination for A, B and G′ is determined by a splitting of the weight. On the other hand, our algorithm does not compute the residual graph G itself; it computes a non-redundant subgraph of G′, which is the residual graph of different matroids represented by other "sparse" matrices A^0, B^0. Consequently, our algorithm applies fewer elimination operations than the standard one, which will be a practical advantage.

#### Related work.

The essence of Deg-Det comes from the combinatorial relaxation algorithm by Murota, which is an algorithm for computing the degree of the (ordinary) determinant of a polynomial/rational matrix; see [10, Section 7.1].

Several algorithms have been proposed for the general weighted matroid intersection problem under the independence oracle model; see e.g., [14, Section 41.3] and the references therein. For linear matroids given by two matrices, the current fastest algorithms (as far as we know) are an implementation of Frank's algorithm using fast matrix multiplication and an algorithm by Gabow and Xu, whose time bound involves the maximum absolute value C of the weights c_i. Huang, Kakimura, and Kamiyama gave an algorithm that is faster for the case of small C.

#### Organization.

The rest of this paper is organized as follows. In Section 2, we introduce the algorithm Deg-Det, and describe basics of the unweighted (linear) matroid intersection problem from a linear algebraic viewpoint; our algorithm treats the unweighted problem as a subproblem. In Section 3, we first formulate the weighted linear matroid intersection problem as the computation of the degree deg det M of the determinant of a rational matrix M, and show that Deg-Det computes deg det M correctly. Then we present our algorithm Deg-Det-WMI by specializing Deg-Det, analyze its time complexity, and reveal its relationship to Frank's algorithm.

## 2 Preliminaries

### 2.1 Notation

Let Q and Z denote the sets of rationals and integers, respectively. Let 0 denote the zero vector. For a subset X ⊆ [n] := {1, 2, …, n}, let χ_X denote the characteristic vector of X, that is, (χ_X)_i = 1 if i ∈ X and (χ_X)_i = 0 otherwise. Here, χ_{{i}} is simply denoted by χ_i.

For a polynomial p = ∑_k a_kt^k with a_k ∈ Q, the degree deg p with respect to t is defined as the maximum k with a_k ≠ 0, where deg 0 := −∞. The degree of a rational function p/q with polynomials p, q is defined as deg p/q := deg p − deg q.
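As a quick illustration (our own sketch, not part of the paper), these definitions can be transcribed with a polynomial in t represented as a map from exponents to rational coefficients:

```python
from fractions import Fraction

def deg_poly(p):
    """Degree of a polynomial {exponent: coefficient}; -infinity for the zero polynomial."""
    support = [k for k, a in p.items() if a != 0]
    return max(support) if support else float("-inf")

def deg_rational(p, q):
    """deg(p/q) = deg p - deg q for polynomials p, q with q nonzero."""
    return deg_poly(p) - deg_poly(q)

# deg((3t^2 + 1)/t^3) = 2 - 3 = -1, so this rational function is proper.
print(deg_rational({2: Fraction(3), 0: Fraction(1)}, {3: Fraction(1)}))  # -1
```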

A rational function f is called proper if deg f ≤ 0. A rational matrix Q is called proper if each entry of Q is proper. For a proper rational matrix Q, there is a unique matrix over Q, denoted by Q^0, such that

 Q = Q^0 + t^{−1}Q′,

where Q′ is a proper rational matrix.

For an integer vector α = (α_1, α_2, …, α_n) ∈ Z^n, let (t^α) denote the diagonal matrix having diagonals t^{α_1}, t^{α_2}, …, t^{α_n} in order, that is,

 (t^α) = diag(t^{α_1}, t^{α_2}, …, t^{α_n}).

For a matrix A and a subset X of column indices, let A[X] denote the submatrix of A consisting of the i-th columns for i ∈ X. Additionally, for subsets I, J of row and column indices, let A[I, J] denote the submatrix of A consisting of the (i, j)-entries for i ∈ I, j ∈ J.

### 2.2 Algorithm Deg-Det

Given rational matrices M_1, M_2, …, M_m ∈ Q(t)^{n×n}, consider the following matrix

 M := M_1x_1 + M_2x_2 + ⋯ + M_mx_m ∈ Q(t, x_1, x_2, …, x_m)^{n×n},

where x_1, x_2, …, x_m are variables and M is regarded as a multivariate rational matrix with the (pairwise commutative) variables t, x_1, x_2, …, x_m. We address the computation of the degree deg det M of the determinant of M with respect to t.

Consider the following optimization problem:

 (P) Max. deg det P + deg det Q  s.t. PMQ: proper,  P, Q ∈ Q(t)^{n×n}: nonsingular.

This problem gives an upper bound of deg det M. Indeed, if PMQ is proper, then deg det PMQ ≤ 0, and deg det M ≤ −deg det P − deg det Q. In fact, it is shown that the optimal value of (P) is interpreted as the negative of the degree of the Dieudonné determinant Det M for the case where x_1, x_2, …, x_m are pairwise noncommutative variables.

The following algorithm for (P) is due to the second author, and is viewed as a simplification of the combinatorial relaxation algorithm by Murota; see also [10, Section 7.1].

Algorithm: Deg-Det
Input:

M = M_1x_1 + M_2x_2 + ⋯ + M_mx_m, where M_i ∈ Q(t)^{n×n} for i ∈ [m].

Output:

An upper bound D of deg det M (the negative of an optimal value of (P)).

0:

Let P := t^{−d}I and Q := I, where d is the maximum degree of entries in M. Let D := −deg det P − deg det Q (= nd).

1:

Solve the following problem:

 (P0) Max. r + s  s.t. K(PMQ)^0L has an r × s zero submatrix,  K, L ∈ Q^{n×n}: nonsingular,

and obtain optimal matrices K, L; recall the notation (·)^0 from Section 2.1.

2:

If the optimal value r + s is at most n, then stop and output D.

3:

Let I and J be the sets of row and column indices, respectively, of the r × s zero submatrix of K(PMQ)^0L. Find the maximum integer κ such that (t^{κχ_I})KPMQL(t^{−κχ_{[n]∖J}}) is proper.

Let P ← (t^{κχ_I})KP and Q ← QL(t^{−κχ_{[n]∖J}}). If κ is unbounded, then output −∞. Go to step 1 otherwise.

Observe that (P, Q) in each iteration is a feasible solution of (P), and D equals −deg det P − deg det Q. Thus, the output D actually gives an upper bound of deg det M. We are interested in the case where the algorithm outputs deg det M correctly.

###### Lemma 2.1 ().

In step 2 of Deg-Det, the following holds:

• If the optimal value r + s of (P0) is greater than n, then (PMQ)^0 is singular (over Q).

• If (PMQ)^0 is nonsingular, then deg det PMQ = 0 and D = deg det M.

###### Proof.

(1). It is obvious that an n × n matrix is singular if it has an r × s zero submatrix with r + s > n.

(2). PMQ is written as (PMQ)^0 + t^{−1}Q′ for a proper Q′. If (PMQ)^0 is nonsingular, then deg det PMQ = 0, and hence deg det M = −deg det P − deg det Q = D. ∎

### 2.3 Algebraic formulation for linear matroid intersection

Let A = (a_1 a_2 ⋯ a_m) be an n × m matrix over Q. Let M(A) denote the linear matroid represented by A. Specifically, the ground set of M(A) is the set [m] of the column indices, and the family I(A) of independent sets of M(A) consists of subsets X ⊆ [m] such that the corresponding column vectors a_i (i ∈ X) are linearly independent. Let r_A denote the rank function of M(A), that is, r_A(Y) is the maximum cardinality of an independent set contained in Y. A minimal (linearly) dependent subset is called a circuit. See, e.g., [14, Chapter 39] for basics on matroids.

Suppose that we are given another n × m matrix B = (b_1 b_2 ⋯ b_m). Let M(B) be the corresponding linear matroid. A common independent set of M(A) and M(B) is a subset X ⊆ [m] that is independent for both M(A) and M(B). The linear matroid intersection problem is to find a common independent set of the maximum cardinality. To formulate this problem linear algebraically, define an n × n matrix M by

 M := ∑_{i=1}^{m} a_ib^⊤_ix_i,

where x_1, x_2, …, x_m are variables. The following is the matroid intersection theorem and its linear algebraic sharpening.

###### Theorem 2.2 ().

The following quantities are equal:

• The maximum cardinality of a common independent set of M(A) and M(B).

• The minimum of r_A(J) + r_B([m] ∖ J) over J ⊆ [m].

• rank M.

• 2n minus the maximum of r + s such that KML has an r × s zero submatrix for some nonsingular matrices K, L ∈ Q^{n×n}.

###### Sketch of Proof.

(1) ⇔ (2) is nothing but the matroid intersection theorem.

(1) ⇔ (3). A k × k submatrix of M is represented as A′DB′⊤, where A′, B′ are the corresponding k × m submatrices of A, B, and D is the m × m diagonal matrix with diagonals x_1, x_2, …, x_m (in order). From the Binet–Cauchy formula, we see that det A′DB′⊤ ≠ 0 if and only if there is a k-element subset X ⊆ [m] such that det A′[X] det B′[X] ≠ 0. Thus rank M ≥ k if and only if there is a common independent set of size k.

(4) ≤ (2). For J ⊆ [m], take a basis of the orthogonal complement of the vector space spanned by {a_i}_{i∈J}, and extend it to a basis k_1, k_2, …, k_n of Q^n in which the first n − r_A(J) vectors span that orthogonal complement. Similarly, take a basis l_1, l_2, …, l_n of Q^n whose first n − r_B([m]∖J) vectors span the orthogonal complement of the vector space spanned by {b_i}_{i∈[m]∖J}. Then, for K := (k_1 k_2 ⋯ k_n)^⊤ and L := (l_1 l_2 ⋯ l_n), every entry (KML)_{kℓ} = ∑_i(k^⊤_ka_i)(b^⊤_il_ℓ)x_i with k ≤ n − r_A(J) and ℓ ≤ n − r_B([m]∖J) vanishes, since for each i either i ∈ J and k^⊤_ka_i = 0, or i ∈ [m]∖J and b^⊤_il_ℓ = 0. This means that KML has an (n − r_A(J)) × (n − r_B([m]∖J)) zero submatrix.

(3) ≤ (4). If KML has an r × s zero submatrix for nonsingular K, L, then rank M = rank KML ≤ 2n − r − s. ∎
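The rank interpretation above suggests a simple randomized check (our own sketch with hypothetical helper names, not the paper's code): substitute random rationals for the x_i and compute the rank of the resulting matrix by exact Gaussian elimination; by the Schwartz–Zippel lemma, the result equals rank M, and hence the maximum size of a common independent set, with high probability.

```python
import random
from fractions import Fraction

def rank_exact(rows):
    """Rank of a matrix (list of rows of Fractions) by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def common_independence_number(A, B):
    """Maximum size of a common independent set of M(A), M(B), w.h.p.

    Evaluates M = sum_i a_i b_i^T x_i at random rational x_i and returns rank M.
    """
    n, m = len(A), len(A[0])
    x = [Fraction(random.randint(1, 10**6)) for _ in range(m)]
    M = [[sum(x[i] * A[k][i] * B[l][i] for i in range(m)) for l in range(n)]
         for k in range(n)]
    return rank_exact(M)
```

For example, for A = (e_1, e_2, e_1) and B = (e_1, e_1, e_2), the set {2, 3} is a largest common independent set, and the routine returns 2.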

Let us briefly explain Edmonds’ algorithm to obtain a common independent set of the maximum cardinality. For a common independent set X, the auxiliary (di)graph G(X) is defined as follows. The set of nodes of G(X) is equal to the ground set [m] of the matroids, and the set of arcs is given by: (i, j) is an arc if and only if one of the following holds:

• i ∉ X, j ∈ X, and j belongs to the circuit of X ∪ {i} in M(B).

• i ∈ X, j ∉ X, and i belongs to the circuit of X ∪ {j} in M(A).

Let S_A denote the subset of nodes i such that X ∪ {i} is independent in M(A), and S_B the subset of nodes i such that X ∪ {i} is independent in M(B). See Figure 1 for G(X), S_A, and S_B.

###### Lemma 2.3 ().

Let X be a common independent set, and let R be the set of nodes reachable from S_A in G(X).

• Suppose that R ∩ S_B ≠ ∅. For a shortest path P from S_A to S_B, the set X △ P is a common independent set with |X △ P| = |X| + 1.

• Suppose that R ∩ S_B = ∅. Then X is a maximum common independent set, and [m] ∖ R attains the minimum in the second quantity of Theorem 2.2.

Here △ denotes the symmetric difference. According to this lemma, Edmonds’ algorithm is as follows:

• Find a shortest path P in G(X) from S_A to S_B (by BFS).

• If it exists, then replace X by X △ P, and repeat. Otherwise, X is a common independent set of the maximum cardinality.
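The two steps above can be sketched as follows (our own illustration; `arcs` is an adjacency map of G(X), and `sources`, `sinks` play the roles of S_A and S_B):

```python
from collections import deque

def augment(X, arcs, sources, sinks):
    """One step of Edmonds' algorithm: BFS a shortest sources->sinks path P in
    the auxiliary graph and return X symmetric-difference P, or None if no
    path exists (then X already has maximum cardinality)."""
    parent = {s: None for s in sources}
    queue = deque(sources)
    end = None
    while queue:
        v = queue.popleft()
        if v in sinks:
            end = v
            break
        for w in arcs.get(v, []):
            if w not in parent:
                parent[w] = v
                queue.append(w)
    if end is None:
        return None
    path = []
    while end is not None:  # walk back along BFS parents
        path.append(end)
        end = parent[end]
    return X.symmetric_difference(path)
```

For instance, with X = {1}, arcs {0: [1], 1: [2]}, sources {0}, and sinks {2}, one augmentation returns {0, 2}.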

In our case, the auxiliary graph G(X) and optimal matrices K, L in the fourth quantity of Theorem 2.2 are naturally obtained by applying elementary row operations to the matrices A, B as follows. Since X is a common independent set, both A[X] and B[X] have full column rank |X|. Therefore, by multiplying nonsingular matrices K and L to A and B from the left, respectively, we can make KA and LB diagonal in the positions of X, that is, for some injective maps σ, τ : X → [n], it holds (KA)_{kj} ≠ 0 for j ∈ X precisely if k = σ(j), and (LB)_{ℓj} ≠ 0 for j ∈ X precisely if ℓ = τ(j). Such matrices KA and LB are said to be X-diagonal. Notice that these operations do not change the matroids M(A) and M(B). See Figure 3, where the columns and rows are permuted appropriately.

Then the auxiliary graph G(X) is constructed from the nonzero patterns of KA and LB as follows. For i ∉ X and j ∈ X, the arc (j, i) (resp. (i, j)) exists if and only if (KA)_{σ(j)i} ≠ 0 (resp. (LB)_{τ(j)i} ≠ 0). Additionally, S_A (resp. S_B) consists of the nodes i with (KA)_{ki} ≠ 0 for some k ∈ [n] ∖ σ(X) (resp. (LB)_{ℓi} ≠ 0 for some ℓ ∈ [n] ∖ τ(X)).

Moreover, in the case where R ∩ S_B = ∅, the matrices K, L attain the maximum in the fourth quantity of Theorem 2.2. Indeed, define I*, J*, I, and J by

 I* := [n] ∖ σ(X), (2.1)
 J* := [n] ∖ τ(X), (2.2)
 I := σ(R ∩ X) ∪ I*, (2.3)
 J := τ(X ∖ R) ∪ J*. (2.4)

Then the submatrix (KML)[I, J] is an |I| × |J| zero submatrix with |I| + |J| = 2n − |X|, as in Figure 3.

## 3 Algorithm

In this section, we consider the weighted linear matroid intersection problem. In Section 3.1, we formulate the problem as the computation of the degree of the determinant of a rational matrix M associated with the given two linear matroids and weights. In Section 3.2, we specialize Deg-Det to present our algorithm Deg-Det-WMI for the weighted linear matroid intersection problem. Its time complexity is analyzed in Section 3.3, and its relation to Frank’s algorithm is discussed in Section 3.4.

### 3.1 Algebraic formulation of weighted linear matroid intersection

Let A = (a_1 a_2 ⋯ a_m) and B = (b_1 b_2 ⋯ b_m) be n × m matrices over Q as in Section 2.3, and let M(A) and M(B) be the associated linear matroids on [m]. We assume that both A and B have no zero columns. In addition to A and B, we are further given integer weights c_i for i ∈ [m]. The goal of the weighted linear matroid intersection problem is to maximize the weight c(X) := ∑_{i∈X}c_i over all common independent sets X.

Here we consider a restricted situation where the maximum is taken over all common independent sets of cardinality n. In this case, the maximum weight is interpreted as the degree of the determinant of the following rational matrix. Let M be the n × n rational matrix defined by

 M := ∑_{i=1}^{m} a_ib^⊤_ix_it^{c_i}.
###### Lemma 3.1.

deg det M is equal to the maximum of the weight c(X) over all common independent sets X of cardinality n.

###### Proof.

As in the proof of Theorem 2.2, by the Binet–Cauchy formula applied to M = A diag(x_1t^{c_1}, …, x_mt^{c_m})B^⊤, we obtain det M = ∑_X det A[X] det B[X] ∏_{i∈X}x_it^{c_i}, where X ranges over the n-element subsets of [m], and

 deg det M = max{c(X) ∣ X ⊆ [m], |X| = n : det A[X] det B[X] ≠ 0}. ∎

###### Lemma 3.2 ().

For the matrix M = ∑_{i=1}^{m} a_ib^⊤_ix_it^{c_i}, the algorithm Deg-Det outputs deg det M.

###### Proof.

Consider step 2 of Deg-Det. Here (PMQ)^0 is also written as ∑_i a^0_ib^{0⊤}_ix_i for some a^0_i, b^0_i ∈ Q^n; see the next subsection. In particular, Theorem 2.2 applies to (PMQ)^0. Therefore, by Theorem 2.2, (PMQ)^0 is nonsingular if and only if the optimal value of (P0) is at most n. Thus, if the algorithm terminates, then (PMQ)^0 is nonsingular and D = deg det M by Lemma 2.1. ∎

### 3.2 Algorithm description

Here we present our algorithm by specializing Deg-Det. The basic idea is to apply Edmonds’ algorithm to solve the problem (P0) for (PMQ)^0. We first consider the case where P and Q are diagonal matrices represented as P = (t^α) and Q = (t^β) for some α, β ∈ Z^n. In this case, (PMQ)^0 is explicitly written as follows. Observe that the properness of PMQ is equivalent to

 α_k + β_ℓ + c_i ≤ 0  (i ∈ [m], k, ℓ ∈ [n] : (a_i)_k(b_i)_ℓ ≠ 0). (3.1)

For i ∈ [m], define a^0_i, b^0_i ∈ Q^n by

 (a^0_i)_k := (a_i)_k if ∃ℓ ∈ [n] : (a_i)_k(b_i)_ℓ ≠ 0, α_k + β_ℓ + c_i = 0, and 0 otherwise, (3.2)
 (b^0_i)_ℓ := (b_i)_ℓ if ∃k ∈ [n] : (a_i)_k(b_i)_ℓ ≠ 0, α_k + β_ℓ + c_i = 0, and 0 otherwise. (3.3)

Then the (k, ℓ) entry of (t^α)M(t^β) is ∑_i t^{α_k+β_ℓ+c_i}(a_i)_k(b_i)_ℓx_i, whose degree-zero part survives exactly when α_k + β_ℓ + c_i = 0. Namely we have

 (PMQ)^0 = ∑_{i=1}^{m} a^0_ib^{0⊤}_ix_i.

Therefore step 1 of Deg-Det can be executed by solving the unweighted linear matroid intersection problem for the two matroids M(A^0) and M(B^0), where the n × m matrices A^0, B^0 are defined by

 A^0 := (a^0_1 a^0_2 ⋯ a^0_m),  B^0 := (b^0_1 b^0_2 ⋯ b^0_m).
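The construction (3.2)–(3.3) is a direct scan over the nonzero positions; a minimal sketch (matrices as nested lists holding the columns a_i, b_i; the function name is ours):

```python
def zero_degree_parts(A, B, c, alpha, beta):
    """Return A0, B0 keeping (a_i)_k and (b_i)_l only at positions where the
    degree alpha_k + beta_l + c_i of the corresponding term of (t^alpha)M(t^beta)
    is exactly zero, as in (3.2)-(3.3)."""
    n, m = len(A), len(A[0])
    A0 = [[0] * m for _ in range(n)]
    B0 = [[0] * m for _ in range(n)]
    for i in range(m):
        for k in range(n):
            for l in range(n):
                if A[k][i] != 0 and B[l][i] != 0 and alpha[k] + beta[l] + c[i] == 0:
                    A0[k][i] = A[k][i]  # some l attains equality: keep by (3.2)
                    B0[l][i] = B[l][i]  # some k attains equality: keep by (3.3)
    return A0, B0
```

For example, with a_1 = b_1 = (1, 1)^⊤, c_1 = 0, α = β = (0, −1), only the degree-zero position (k, ℓ) = (1, 1) survives, so a^0_1 = b^0_1 = (1, 0)^⊤.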

Suppose that we are given a common independent set X of M(A^0) and M(B^0). According to Edmonds’ algorithm (given after Lemma 2.3), construct the residual graph G(X) with node sets S_A and S_B. Then we can increase |X| or obtain matrices K, L that are optimal to the problem (P0). A key observation here is that K and L commute with (t^α) and (t^β), respectively:

 K(t^α) = (t^α)K,  L(t^β) = (t^β)L. (3.4)

Indeed, by the definitions (3.2), (3.3) together with the properness (3.1), if (a^0_i)_k and (a^0_i)_{k′} are nonzero, then α_k = α_{k′} must hold. Therefore, each elementary row operation for A^0 is done between rows k, k′ with α_k = α_{k′}, and similarly for B^0 with β. Consequently, the commutations (3.4) hold.
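The commutation (3.4) is an entry-wise condition: (K(t^α))_{kk′} = K_{kk′}t^{α_{k′}} while ((t^α)K)_{kk′} = t^{α_k}K_{kk′}, so K commutes with (t^α) precisely when every nonzero entry of K connects rows with equal α-value. A quick checker (our own sketch):

```python
def commutes_with_diag(K, alpha):
    """True iff K (t^alpha) = (t^alpha) K, i.e., K[k][k2] != 0 only when
    alpha[k] == alpha[k2]."""
    n = len(alpha)
    return all(K[k][k2] == 0 or alpha[k] == alpha[k2]
               for k in range(n) for k2 in range(n))
```

For instance, an elimination between two rows of equal α-value commutes, while one mixing rows of different α-values does not.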

Hence, by updating α and β, we can keep the form P = (t^α), Q = (t^β) in the next iteration. Now the algorithm is as follows.

Algorithm: Deg-Det-WMI
Input:

n × m matrices A = (a_1 a_2 ⋯ a_m), B = (b_1 b_2 ⋯ b_m), and integer weights c_i (i ∈ [m]).

Output:

deg det M for M = ∑_{i=1}^{m} a_ib^⊤_ix_it^{c_i}.

0:

Let α := −c_max·1 and β := 0, where c_max is the maximum of the weights c_i (so that (3.1) holds), and let X := ∅.

1:

If |X| = n, then output −∑_k α_k − ∑_ℓ β_ℓ and stop. Otherwise, according to (3.2), (3.3), construct a^0_i, b^0_i for i ∈ [m]. Apply elementary row operations to A, B so that A^0, B^0 are in the X-diagonal forms.

2:

From A^0, B^0, construct the residual graph G(X) and node sets S_A, S_B. Let R be the set of nodes reachable from S_A in G(X).

2-1. If R ∩ S_B ≠ ∅:

Taking a shortest path P from S_A to S_B, let X ← X △ P, and go to step 1.

2-2. If R ∩ S_B = ∅:

Then R determines the zero submatrix of maximum size by (2.3) and (2.4); see also Figure 3. Letting α ← α + κχ_I and β ← β − κχ_{[n]∖J}, increase κ from 1 until a nonzero entry appears in the zero submatrix. If no nonzero entry ever appears, i.e., there are no i ∈ [m], k ∈ I, ℓ ∈ J with (a_i)_k(b_i)_ℓ ≠ 0, then output −∞ and stop. Otherwise go to step 1.

It is clear that X is always a common independent set of M(A^0) and M(B^0) and that the algorithm correctly outputs deg det M.

Moreover, X is a common independent set of M(A) and M(B) having the maximum weight among all common independent sets of cardinality |X|. We show this fact by using the idea of weight splitting.

###### Lemma 3.3.

In step 1, define the weight splitting c_i = c^1_i + c^2_i for each i ∈ [m] by

 c^1_i := c_i − c^2_i, (3.5)
 c^2_i := −max{β_ℓ ∣ ℓ ∈ [n] : (b_i)_ℓ ≠ 0}. (3.6)

Then X is a common independent set of M(A) and M(B) such that X maximizes c^1 over the independent sets of M(A) of size |X| and maximizes c^2 over the independent sets of M(B) of size |X|. Thus X maximizes the weight c = c^1 + c^2 over all common independent sets of size |X|.

###### Proof.

We first verify that X is a common independent set of M(A) and M(B). We may assume X ≠ ∅. Since X is commonly independent in M(A^0) and M(B^0), we can assume that A^0 and B^0 are in the X-diagonal forms. We can further assume that X = {1, 2, …, |X|} with σ(i) = τ(i) = i for i ∈ X, and that α_1 ≥ α_2 ≥ ⋯ ≥ α_n and β_1 ≥ β_2 ≥ ⋯ ≥ β_n. Necessarily A[{1, …, |X|}, X] and B[{1, …, |X|}, X] are lower-triangular matrices with nonzero diagonals. Hence X is commonly independent in M(A) and M(B).

Next we make some observations to prove the statement. Observe from the definitions (3.5), (3.6) and the properness (3.1) that

 c^1_i ≤ −α_k  (∀k : (a_i)_k ≠ 0), (3.7)
 c^2_i ≤ −β_ℓ  (∀ℓ : (b_i)_ℓ ≠ 0), (3.8)

and

 c^1_i = −α_k, c^2_i = −β_ℓ  (∃k, ℓ : (a^0_i)_k(b^0_i)_ℓ ≠ 0). (3.9)

We also observe

 max_{k∈[n]} α_k = α_{k′} (k′ ∈ I*),  max_{ℓ∈[n]} β_ℓ = β_{ℓ′} (ℓ′ ∈ J*). (3.10)

This follows from the way of updating α and β, with the initialization in step 0 of the algorithm, and the fact that both max_k α_k and max_ℓ β_ℓ change monotonically over the iterations.

Finally we prove that X maximizes both weights c^1 and c^2. It suffices to show

 c^1(X) ≥ c^1(X ∪ {i} ∖ {j})  (i ∉ X, j ∈ X : X ∪ {i} ∖ {j} ∈ I(A)), (3.11)
 c^2(X) ≥ c^2(X ∪ {i} ∖ {j})  (i ∉ X, j ∈ X : X ∪ {i} ∖ {j} ∈ I(B)). (3.12)

Indeed, this is the well-known optimality criterion for the maximum weight base problem on a matroid. Take i ∉ X and j ∈ X with X ∪ {i} ∖ {j} ∈ I(A). If there is a nonzero element (a_i)_k for some k ∈ I*, then by (3.7), (3.9), and (3.10) it holds c^1_i ≤ −α_k ≤ −α_{σ(j)} = c^1_j, and (3.11) holds. Suppose not. Let k be the smallest index such that (a_i)_k ≠ 0; then k ∈ σ(X). Now A[σ(X), X] is lower triangular, and the submatrix of A consisting of the rows preceding k and the column i is a zero matrix. Therefore, expanding a_i over the columns of X, it must hold α_{σ(j)} ≤ α_k for j to belong to a circuit in X ∪ {i}. Hence c^1_j = −α_{σ(j)} ≥ −α_k ≥ c^1_i, where the first equality follows from (3.9) and the last inequality follows from (3.7). Thus (3.11) holds. (3.12) is similarly shown. ∎

### 3.3 Analysis

We analyze the time complexity of Deg-Det-WMI. It is obvious that if step 2-1 occurs, then |X| increases. Therefore the algorithm executes step 2-1 at most n times. The main analysis concerns step 2-2: particularly, how nonzero entries appear, how they affect G(X), S_A, S_B, and R, and how many times these scenarios occur until R ∩ S_B becomes nonempty.

As κ increases, the submatrix of K(PMQ)^0L with rows in [n] ∖ I and columns in [n] ∖ J becomes a zero block, since the degree of each entry decreases. Accordingly, the corresponding parts of A^0 and B^0 become zero blocks; see Figure 4.

Then, in G(X), all arcs entering R disappear. In particular, R, S_A, and S_B do not change.

Next we analyze the moment when a nonzero element appears in the zero submatrix of K(PMQ)^0L. Then, in the next step 1, it holds

 (a^0_i)_k(b^0_i)_ℓ ≠ 0

for some i ∈ [m], k ∈ I, ℓ ∈ J. In this case, a new nonzero element appears in the i-th column of A^0 or B^0.

(a-1)

If i ∈ X and the new nonzero element appears in A^0: In the next step 1, Gaussian elimination for A^0 makes the new nonzero element zero. Since the elimination only modifies rows in I, this does not affect the part of G(X) outside R. Therefore every node of R is still reachable from S_A. There may appear nonzero elements in other columns of A^0, which will make R or S_A larger in the next step 2.

(a-2)

If i ∉ X and the new nonzero element appears in A^0 at row k: By (2.3), if k ∈ I*, then i is included to S_A. Otherwise there appears an arc in G(X) from the node of R ∩ X corresponding to k to i. By (2.4), if ℓ ∈ J*, then i belongs to S_B. Otherwise there is an arc from i to a node of X ∖ R. Then, R ∩ S_B becomes nonempty for the former case, and R increases for the latter case.

(b-1)

If i ∈ X and the new nonzero element appears in B^0: Similar to the analysis of (a-1) above, Gaussian elimination for B^0 makes the new element zero, and R, S_A, and S_B increase or do not change.

(b-2)

If i ∉ X and the new nonzero element appears in B^0 at row ℓ: By (2.4), if ℓ ∈ J*, then i is included to S_B. Otherwise there appears an arc from i to a node of X ∖ R. If i ∈ R, then R ∩ S_B becomes nonempty. Otherwise S_B increases.

Therefore, if the case (a-2) or (b-2) occurs, then R, S_A, or S_B increases. After O(m) occurrences of the cases (a-2) and (b-2), R ∩ S_B becomes nonempty and |X| increases. When X is updated, Gaussian elimination constructs the X-diagonal forms of A^0, B^0 in O(n^2m) time.

We analyze the occurrences of (a-1) and (b-1). When an entry of A^0 or B^0 in a column of X becomes nonzero, it is eliminated by a row operation, and never becomes nonzero again. Therefore, (a-1) and (b-1) occur at most O(nm) times until X is updated, where the row operation is executed in O(m) time per each occurrence. The total time for the elimination is O(nm^2). The augmentation and the identification of the next nonzero elements are computed in O(nm) time by searching nonzero elements in A^0, B^0, which is needed when one of (a-1), (a-2), (b-1), and (b-2) occurs. Thus, by the naive implementation, Deg-Det-WMI runs in O(n^2m^2) time.

We improve this complexity to O(mn^3 log n) as follows. Observe first that κ is given by

 κ = −max{c_i + α_k + β_ℓ ∣ i ∈ [m], k ∈ I, ℓ ∈ J : (a_i)_k(b_i)_ℓ ≠ 0}.
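A naive evaluation of this formula can be sketched as follows (our own illustration; A and B hold the current vectors a_i, b_i as columns, and I, J are the index sets of the zero submatrix); it returns None when κ is unbounded:

```python
def compute_kappa(A, B, c, alpha, beta, I, J):
    """kappa = -max{ c_i + alpha_k + beta_l : i in [m], k in I, l in J,
    (a_i)_k (b_i)_l != 0 }, or None if the maximum is over an empty set."""
    vals = [c[i] + alpha[k] + beta[l]
            for i in range(len(c)) for k in I for l in J
            if A[k][i] != 0 and B[l][i] != 0]
    return -max(vals) if vals else None  # None: kappa unbounded, deg det M = -infinity
```

For instance, with unit vectors a_1 = b_1 = e_1, a_2 = b_2 = e_2, weights c = (−2, −3), and α = β = 0, the entry of smallest slack determines κ = 2.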

The main idea is to sort the indices according to the values c_i + α_k + β_ℓ and keep in a binary heap the potential indices that attain the maximum in the above formula for κ. Notice that even if