A cost-scaling algorithm for computing the degree of determinants

08/26/2020 ∙ by Hiroshi Hirai, et al. ∙ The University of Tokyo

In this paper, we address the computation of the degree deg Det A of the Dieudonné determinant Det A of A = ∑_{k=1}^m A_k x_k t^{c_k}, where the A_k are n × n matrices over a field 𝕂, the x_k are noncommutative variables, t is a variable commuting with the x_k, the c_k are integers, and the degree is considered with respect to t. This problem generalizes the noncommutative Edmonds' problem and fundamental combinatorial optimization problems, including the weighted linear matroid intersection problem. It was shown that deg Det A is obtained by a discrete convex optimization on a Euclidean building. We extend this framework by incorporating a cost-scaling technique, and show that deg Det A can be computed in time polynomial in n, m, and log₂ C, where C := max_k |c_k|. We apply this result to an algebraic combinatorial optimization problem arising from a symbolic matrix having a 2 × 2-submatrix structure.


1 Introduction

Edmonds’ problem [4] asks to compute the rank of a matrix of the following form:

(1.1)  A = A_1 x_1 + A_2 x_2 + ⋯ + A_m x_m,

where A_1, A_2, …, A_m are n × n matrices over a field 𝕂, x_1, x_2, …, x_m are variables, and A is considered as a matrix over the rational function field 𝕂(x_1, x_2, …, x_m). This problem is motivated by a linear algebraic formulation of the bipartite matching problem and other combinatorial optimization problems. For a bipartite graph G, consider A = ∑_{(i,j) ∈ E(G)} E_{ij} x_{ij}, where E_{ij} denotes the matrix having 1 only for the (i, j)-entry. Then rank A is equal to the maximum size of a matching of G. Other basic classes of combinatorial optimization problems have such a rank interpretation. For example, the linear matroid intersection problem corresponds to A with rank-1 matrices A_k, and the linear matroid matching problem corresponds to A with rank-2 skew-symmetric matrices A_k; see [22].

Symbolic treatment of the variables makes the problem difficult, whereas rank computation after substituting random values in 𝕂 for x_1, x_2, …, x_m is easy and provides the correct value with high probability. A randomized polynomial time algorithm is obtained from this idea [21]. A deterministic polynomial time algorithm for Edmonds’ problem is not known; finding one is among the important open problems in theoretical computer science.
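The random-substitution idea above can be sketched as follows (assuming SymPy for exact rank arithmetic; the helper name and the small bipartite-graph instance are illustrative only):

```python
# Randomized rank for Edmonds' problem (a sketch): substitute random
# scalars for the variables x_k in A = sum_k A_k x_k and compute the rank
# exactly; by Schwartz-Zippel the answer is correct with high probability.
import random
from sympy import Matrix, zeros

def randomized_rank(mats, trials=3, field_size=10**6):
    """mats: the coefficient matrices A_k, given as lists of lists."""
    n = len(mats[0])
    best = 0
    for _ in range(trials):
        A = zeros(n, n)
        for M in mats:
            A += random.randrange(1, field_size) * Matrix(M)
        best = max(best, A.rank())  # exact rank over the rationals
    return best

# Bipartite graph with edges (1,1), (1,2), (2,2): the maximum matching
# has size 2, and the rank agrees.
E11, E12, E22 = [[1, 0], [0, 0]], [[0, 1], [0, 0]], [[0, 0], [0, 1]]
print(randomized_rank([E11, E12, E22]))  # 2
```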

A recent approach to Edmonds’ problem, initiated by Ivanyos et al. [9], is to consider the variables to be noncommutative. That is, the matrix A is regarded as a matrix over the noncommutative polynomial ring 𝕂⟨x_1, x_2, …, x_m⟩. The rank of A is well-defined via the embedding into the free skew field. The resulting rank is called the noncommutative rank (nc-rank) of A and is denoted by nc-rank A. Interestingly, the nc-rank admits a deterministic polynomial time computation:

Theorem 1.1 ([7, 10]).

nc-rank A for a matrix A of form (1.1) can be computed in time polynomial in n and m.

The algorithm by Garg et al. [7] works for 𝕂 = ℚ, and the algorithm by Ivanyos et al. [10] works for an arbitrary field 𝕂. Another polynomial time algorithm for the nc-rank is obtained by Hamada and Hirai [11], although the bit-length required by this algorithm may be unbounded if 𝕂 = ℚ. By the formula of Fortin and Reutenauer [5], nc-rank A is obtained by an optimization problem defined on the family of vector subspaces of 𝕂^n. The above algorithms deal with this new type of optimization problem. It holds that rank A ≤ nc-rank A, where the inequality can be strict in general. For some classes of matrices, including those arising from linear matroid intersection, rank A = nc-rank A holds, and the Fortin–Reutenauer formula provides a combinatorial duality relation. This derivation is fundamentally different from the usual one via polyhedral combinatorics and LP duality.

From the viewpoint of combinatorial optimization, rank computation corresponds to cardinality maximization, and the degree of the determinant is an algebraic counterpart of weighted maximization. Indeed, the maximum weight of a perfect matching of a bipartite graph G is equal to the degree of the determinant of A = ∑_{(i,j) ∈ E(G)} E_{ij} x_{ij} t^{c_{ij}}, where t is a new variable, the c_{ij} are edge weights, and the degree is considered with respect to t. Therefore, the weighted version of Edmonds’ problem is the computation of the degree of the determinant of a matrix of form (1.1), where each A_k is a polynomial matrix in the variable t.
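For a small instance, the correspondence between maximum-weight perfect matchings and deg det can be checked directly (a sketch, assuming SymPy; setting the variables x_ij to 1 suffices for this tiny example since no cancellation occurs):

```python
# deg_t det A(t) equals the maximum weight of a perfect matching, where
# A(t) has entry t^{c_ij} for each edge (i,j) of the complete bipartite
# graph K_{2,2}.
from sympy import symbols, Matrix, degree

t = symbols('t')
c = [[3, 1],
     [2, 4]]                        # edge weights c_ij
A = Matrix(2, 2, lambda i, j: t**c[i][j])
# det = t^7 - t^3; the two matchings have weights 3 + 4 = 7 and 1 + 2 = 3.
print(degree(A.det().expand(), t))  # 7
```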

Motivated by this observation and the above-mentioned developments, Hirai [12] introduced a noncommutative formulation of the weighted Edmonds’ problem. In this setting, the determinant is replaced by the Dieudonné determinant Det [3], a determinant concept for matrices over a skew field. In our case, A is viewed as a matrix over the skew field of rational functions in t with coefficients in the free skew field, and the degree deg Det A with respect to t is well-defined. He established a formula for deg Det A generalizing the Fortin–Reutenauer formula for the nc-rank, a generic algorithm (Deg-Det) to compute deg Det A, and the relation deg det = deg Det for the weighted linear matroid intersection problem. In particular, deg Det A is obtained in time polynomial in n, m, the maximum degree of A with respect to t, and the time complexity of solving the optimization problem for the nc-rank. Although the required bit-length is unknown for 𝕂 = ℚ, Oki [23] showed another polynomial time reduction from deg Det to nc-rank with bounded bit-length.

In this paper, we address the computation of deg Det A for matrices of the following special form:

(1.2)  A = ∑_{k=1}^m A_k x_k t^{c_k},

where the A_k are n × n matrices over 𝕂 and the c_k are integers. This class of matrices is natural from the viewpoint of combinatorial optimization. Indeed, the weighted bipartite matching and weighted linear matroid intersection problems correspond to deg Det of such matrices, where the exponents c_k of the variable t play the role of weights or costs. In this setting, the above algorithms [12, 23] are pseudo-polynomial. Therefore, it is natural to ask for a computation with polynomial dependence on log C, where C := max_k |c_k|. The main result of this paper shows that such a computation is indeed possible.

Theorem 1.2.

Suppose that arithmetic operations over 𝕂 are done in constant time. Then deg Det A for a matrix A of form (1.2) can be computed in time polynomial in n, m, and log₂ C, where C := max_k |c_k|.

For the more general setting of “sparse” polynomial matrices, such a polynomial time computation seems difficult, since it would solve the (commutative) Edmonds’ problem [23].

Our algorithm for Theorem 1.2 is based on the framework of [12]; hence the required bit-length is unknown for 𝕂 = ℚ. In this framework, deg Det A is formulated as a discrete convex optimization on the Euclidean building. The Deg-Det algorithm is a simple descent algorithm on the building, where a discrete convexity property (L-convexity) provides a sharp iteration bound via the geometry of the building. We incorporate cost scaling into the Deg-Det algorithm, which is a standard idea in combinatorial optimization. To obtain polynomial time complexity, we need a polynomial sensitivity estimate of how an optimal solution changes under perturbation of the costs c_k. We introduce a new discrete convexity concept, called N-convexity, that behaves well under such cost perturbations, and show that the objective function enjoys this property, from which the desired estimate follows. This method was devised in [14] for another discrete convex optimization problem on a building-like structure.

As an application, we consider an algebraic combinatorial optimization problem for a symbolic matrix of the form

(1.3)  A = (A_{αβ} x_{αβ}),

where each block A_{αβ} is a 2 × 2 matrix over 𝕂 and the x_{αβ} are distinct variables. We call such a matrix a partitioned matrix. Rank computation of this matrix is viewed as a “2-dimensional” generalization of the bipartite matching problem. The duality theorem by Iwata and Murota [17] implies the relation rank = nc-rank for this class. Although the rank can be computed by the above-mentioned nc-rank algorithms, the problem has a more intriguing combinatorial nature. Hirai and Iwamasa [15] showed that the rank is equal to the maximum size of a certain algebraically constrained matching (a consistent matching) on a bipartite graph, and they developed an augmenting-path type polynomial time algorithm to obtain a maximum consistent matching. We apply our cost-scaling framework to a partitioned matrix with x_{αβ} replaced by x_{αβ} t^{c_{αβ}}, and obtain a polynomial time algorithm that solves the weighted version of this problem and computes the degree of the Dieudonné determinant. This result sheds an insight on polyhedral combinatorics, since it means that linear optimization over the polytope of consistent matchings can be solved without knowledge of its LP formulation.

Related work.

A matrix of (1.1) corresponding to the linear matroid matching problem (i.e., each A_k is a rank-2 skew-symmetric matrix) is a representative example in which rank and nc-rank can differ. Accordingly, deg det and deg Det can differ for a matrix of form (1.2) with rank-2 skew-symmetric matrices A_k. The computation of deg det of such a matrix is precisely the weighted linear matroid matching problem. Camerini et al. [1] utilized this formulation and random substitution to obtain a randomized pseudo-polynomial time algorithm for weighted linear matroid matching, where the running time depends on C. Cheung et al. [2] sped up this algorithm, and also obtained a randomized FPTAS by using cost scaling. Recently, Iwata and Kobayashi [16] finally developed a polynomial time algorithm for the weighted linear matroid matching problem, whose running time does not depend on C. Their algorithm also uses a similar (essentially equivalent) formulation, and is rather complicated. A simplified polynomial time algorithm, possibly using cost scaling, is worth developing, and the results in this paper may help.

Organization.

The rest of this paper is organized as follows. In Section 2, we give the necessary background on the nc-rank, the Dieudonné determinant, Euclidean buildings, and discrete convexity. Our presentation is elementary; no prior knowledge is assumed. In Section 3, we present our algorithm for Theorem 1.2. In Section 4, we describe the results on partitioned matrices.

2 Preliminaries

Let ℝ, ℚ, and ℤ denote the sets of reals, rationals, and integers, respectively. Let e_i denote the i-th unit vector. For an integer k, let 1_k denote the 0-1 vector in which the first k components are 1 and the others are zero, i.e., 1_k := e_1 + e_2 + ⋯ + e_k. For a ring R, let GL_n(R) denote the set of n × n matrices over R having an inverse in R^{n×n}. The degree deg p of a polynomial p = ∑_k a_k t^k with a_d ≠ 0 and a_k = 0 for k > d is defined as d. The degree of a rational p/q with polynomials p, q is defined as deg p − deg q. The degree of the zero polynomial is defined as −∞.
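The degree conventions above can be sketched in a few lines (illustrative helper names):

```python
import math

def poly_deg(coeffs):
    """Degree of sum_k coeffs[k] * t^k; -infinity for the zero polynomial."""
    d = max((k for k, a in enumerate(coeffs) if a != 0), default=None)
    return -math.inf if d is None else d

def rat_deg(p, q):
    """deg(p/q) := deg p - deg q."""
    return poly_deg(p) - poly_deg(q)

assert poly_deg([0, 0, 5]) == 2            # 5t^2
assert poly_deg([0]) == -math.inf          # zero polynomial
assert rat_deg([1, 1], [0, 0, 1]) == -1    # (1 + t) / t^2
```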

2.1 Nc-rank and the degree of Dieudonné determinant

Instead of introducing the necessary algebraic machinery, we simply regard the following formula by Fortin and Reutenauer [5] as the definition of the nc-rank.

Theorem 2.1 ([5]).

Let A be a matrix of form (1.1). Then nc-rank A is equal to the optimal value of the following problem:

(R)  Minimize 2n − r − s subject to: SAT has an r × s zero submatrix, S, T ∈ GL_n(𝕂).
Theorem 2.2 ([10]).

An optimal solution (S, T) of (R) can be computed in polynomial time.

Notice that the algorithm by Garg et al. [7] obtains the optimal value of (R) but not an optimal (S, T), and that the algorithm by Hamada and Hirai [11] obtains an optimal (S, T) but has no guarantee of a polynomial bound on the bit-length when 𝕂 = ℚ.

Next we consider the degree of the Dieudonné determinant. Again we regard the following formula as the definition.

Theorem 2.3 ([12]).

Let A be a matrix of form (1.2). Then deg Det A is equal to the optimal value of the following problem:

(D)  Minimize −deg det S − deg det T subject to: every entry of SAT has degree at most 0, S, T ∈ GL_n(𝕂(t)).

A pair of matrices (S, T) is said to be feasible (resp. optimal) for A if it is feasible (resp. optimal) for (D).

For a feasible (S, T), the matrix SAT is written as a formal power series

SAT = B₀ + B₁ t^{−1} + B₂ t^{−2} + ⋯,

where each B_j is a matrix whose entries are 𝕂-linear in x_1, x_2, …, x_m. For solving (D), the leading term B₀ plays an important role.

Lemma 2.4 ([12]).

Let (S, T) be a feasible solution for A.

  • (S, T) is optimal if and only if the leading term of SAT has nc-rank n.

  • If the leading term of SAT has nc-rank less than n, then (D) admits a feasible solution with a smaller objective value.

A self-contained proof (regarding (D) as the definition of deg Det A) is given in the appendix.

Notice that the optimality condition (1) does not imply a good characterization (NP ∩ co-NP characterization) of deg Det A, since the size of SAT (e.g., the number of terms) may depend on C pseudo-polynomially.

Lemma 2.5.

deg Det A > −∞ if and only if nc-rank A = n.

Proof.

We observe from (D) that deg Det A is monotone in each c_k. In particular, we may assume that c_k = 0 for all k.

Suppose that nc-rank A < n. Then we can choose S, T ∈ GL_n(𝕂) such that SAT has an r × s zero submatrix with r + s > n in the upper right corner. Then, for every integer λ ≥ 0, the pair obtained by multiplying the first r rows of S by t^λ and the first n − s columns of T by t^{−λ} is feasible in (D) with objective value λ(n − r − s), where n − r − s < 0. This means that (D) is unbounded, i.e., deg Det A = −∞. Suppose that nc-rank A = n. Then A itself is its leading term and has nc-rank n, and (I, I) is optimal by Lemma 2.4 (1). Hence deg Det A = 0 > −∞. ∎

2.2 Euclidean building

Here we explain how the problem (D) can be regarded as an optimization over a so-called Euclidean building; see e.g. [8] for Euclidean buildings. Let R denote the subring of 𝕂(t) consisting of the elements of degree at most 0. Let GL_n(R) be the subgroup of GL_n(𝕂(t)) consisting of the matrices over R invertible over R. The degree of the determinant of any matrix in GL_n(R) is zero. Therefore multiplying S on the left (resp. T on the right) by a matrix in GL_n(R) keeps the feasibility and the objective value in (D). Accordingly, consider the set of right cosets of GL_n(R) in GL_n(𝕂(t)) for S, and the set of left cosets for T.

Then (D) is viewed as an optimization over the product of these two coset spaces. The coset of S is identified with the submodule of 𝕂(t)^n spanned by the row vectors of S over the subring of elements of degree at most 0. In the literature, such a module is called a lattice. The cosets of T are treated analogously via column vectors.

The coset space is known as the Euclidean building for GL_n(𝕂(t)). We will utilize special subspaces, called apartments, to reduce arguments on the building to arguments on ℤ^n. For an integer vector z ∈ ℤ^n, denote by D(z) the diagonal matrix with diagonals t^{z_1}, t^{z_2}, …, t^{z_n}, that is,

D(z) := diag(t^{z_1}, t^{z_2}, …, t^{z_n}).

An apartment is a subset of the building represented as the set of the cosets of D(z)U over all z ∈ ℤ^n, for some fixed U ∈ GL_n(𝕂(t)). The map sending z to the coset of D(z)U is injective from ℤ^n into the building. The following is a representative property of a Euclidean building.

Lemma 2.6 (See [8]).

For any two points of the building, there is an apartment containing both.

Therefore the building is viewed as an amalgamation of copies of the integer lattice ℤ^n. An apartment on the side of T is defined analogously, and an apartment of the product space is the product of an apartment for S and an apartment for T.

Restricting (D) to an apartment of the product space, we obtain a simpler integer program:

Minimize  −∑_{i=1}^n z_i − ∑_{j=1}^n w_j
s.t.  z_i + w_j + c_{ij} ≤ 0  (i, j = 1, 2, …, n),

where c_{ij} denotes the degree of the (i, j) entry of the matrix obtained from A in this apartment. This is nothing but the (discretized) LP-dual of a maximum-weight perfect matching problem.
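On a toy instance this dual relationship can be verified by brute force; the sketch below assumes the reading “minimize −∑_i z_i − ∑_j w_j subject to z_i + w_j + c_{ij} ≤ 0” and bounds the search range by hand:

```python
from itertools import product, permutations

c = [[3, 1], [2, 4]]   # degrees c_ij of a 2 x 2 instance
n = 2

# Primal: maximum weight of a perfect matching.
primal = max(sum(c[i][p[i]] for i in range(n))
             for p in permutations(range(n)))

# Dual: minimize -(sum z + sum w) subject to z_i + w_j + c_ij <= 0.
R = range(-6, 7)
dual = min(-(sum(z) + sum(w))
           for z in product(R, repeat=n)
           for w in product(R, repeat=n)
           if all(z[i] + w[j] + c[i][j] <= 0
                  for i in range(n) for j in range(n)))

print(primal, dual)   # 7 7
```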

We need a distance between two solutions (S, T) and (S′, T′) in (D). The l∞-distance is defined as follows: choose an apartment containing both, regard it as ℤ^n × ℤ^n, and let the two solutions correspond to points (z, w) and (z′, w′), respectively; then define the distance as ‖(z, w) − (z′, w′)‖_∞. The l∞-distance is independent of the choice of the apartment and satisfies the triangle inequality. This fact is verified by applying a canonical retraction of the building onto an apartment, which is distance-nonincreasing; see [12].

2.3 N-convexity

The Euclidean building admits a partial order in terms of the inclusion relation, since lattices are viewed as submodules of 𝕂(t)^n. By this ordering, the building becomes a lattice in the poset-theoretic sense; see [12, 13]. Then the objective function of (D) is a submodular-type discrete convex function on the building, called an L-convex function [12]. Indeed, its restriction to each apartment (≅ ℤ^n × ℤ^n) is an L-convex function in the sense of discrete convex analysis [20]. This fact played an important role in the iteration analysis of the Deg-Det algorithm.

Here, for the analysis of cost scaling, we introduce another discrete convexity concept, called N-convexity. Since the arguments reduce to those on an apartment ℤ^n, we first introduce N-convexity on the integer lattice ℤ^n. For z, z′ ∈ ℤ^n, the one-step move of z toward z′ is defined by

The l∞-distance decreases by one each time z moves one step toward z′; in particular, z reaches z′ after ‖z − z′‖_∞ steps. The resulting sequence is called the normal path from z to z′. The auxiliary points used in the definition below are taken along the normal path.
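The following sketch realizes the one property of the normal path stated above, namely that each step decreases the l∞-distance to the target by exactly one; the concrete stepping rule (move every coordinate whose gap attains the maximum) is our illustration, not necessarily the paper's exact operator.

```python
def step_toward(x, y):
    """Move every coordinate whose gap attains the l_inf distance one unit."""
    d = max(abs(a - b) for a, b in zip(x, y))
    if d == 0:
        return list(x)
    return [a + (1 if b > a else -1) if abs(a - b) == d else a
            for a, b in zip(x, y)]

def normal_path(x, y):
    """Sequence of points from x to y; each step lowers the l_inf distance by 1."""
    path = [list(x)]
    while path[-1] != list(y):
        path.append(step_toward(path[-1], y))
    return path

p = normal_path([0, 0, 0], [3, -2, 1])
assert len(p) - 1 == 3   # number of steps equals the l_inf distance
```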

A function g : ℤ^n → ℝ ∪ {∞} is called N-convex if it satisfies

(2.1)
(2.2)

for all z, z′ ∈ ℤ^n.

Lemma 2.7.
  • is N-convex for .

  • is N-convex for .

  • If g and h are N-convex, then λg + μh is N-convex for λ, μ ≥ 0.

  • Suppose that σ : ℤ^n → ℤ^n is a translation, a transposition of coordinates, or the sign change of some coordinate. If g is N-convex, then g ∘ σ is N-convex.

Proof.

(1) and (3) are obvious. (4) follows from . We examine (2). The case of is clear. We next consider the case of and . Let . Choose distinct . Let (or ), and let (or ); our argument below works for both and . We may consider the case . We may assume . Then . If , then , , and , implying . Suppose that . If , then and , implying and . If , then , , and . If , then and . Otherwise , implying Thus, (2.1) and (2.2) hold for all cases.

Finally we consider the case . Let be the projection . Then . Also it is obvious that . Hence . Also observe that is equal to or . From this we have (2.2). ∎

Observe that the function taking the objective value of (D) at feasible points and ∞ at infeasible points is N-convex on an apartment. A slightly modified version of this fact will be used in the proof of the sensitivity theorem (Section 3.4).

N-convexity is definable on the building by taking apartments: a function is called N-convex if its restriction to every apartment is N-convex. Hence we have the following, though it is not used explicitly in this paper.

Proposition 2.8.

The objective function of (D) is N-convex on the Euclidean building.

In fact, the operators appearing in the definition are independent of the choice of apartments, since they can be written by lattice operators on the building.

3 Algorithm

In this section, we develop the algorithm of Theorem 1.2. In the following, we assume nc-rank A = n; indeed, by Lemma 2.5, this can be decided in advance by an nc-rank computation. Also we may assume that each c_k is a positive integer, since replacing A by t^l A shifts each c_k by l and shifts deg Det A by nl.

3.1 Deg-Det algorithm

We here present the Deg-Det algorithm [12] for (D), which is a simplified version of Murota’s combinatorial relaxation algorithm [18] designed for deg det; see also [19, Section 7.1]. The algorithm uses an algorithm for solving (R) as a subroutine. For simplicity, we assume (by multiplying permutation matrices) that the zero submatrix in (R) is located in the upper right corner.

Algorithm: Deg-Det
Input:

A = ∑_{k=1}^m A_k x_k t^{c_k}, where A_k ∈ 𝕂^{n×n} and c_k ∈ ℤ for each k, and an initial feasible solution (S, T) for A.

Output:

deg Det A.

1:

Solve the problem (R) for the leading term of SAT and obtain optimal matrices (S₀, T₀).

2:

If the optimal value of (R) is equal to n, then output −deg det S − deg det T. Otherwise, update S ← S₀S and T ← TT₀, multiply the first r rows of S by t and the first n − s columns of T by t^{−1}, and go to step 1.

The mechanism of this algorithm is simply explained. After step 1, the matrix S₀SATT₀ has a negative degree in each entry of its upper right r × s submatrix. Multiplying the first r rows by t and the first n − s columns by t^{−1} therefore does not produce an entry of positive degree. This means that the next solution is feasible for A, and the objective value decreases by r + s − n ≥ 1. Then the algorithm terminates after finitely many steps, where Lemma 2.4 (1) guarantees the optimality of the output.
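The degree bookkeeping of one update can be sketched as follows, under our reading that the first r rows are multiplied by t and the first n − s columns by t^{−1} (an assumption consistent with feasibility being preserved and the objective decreasing by r + s − n):

```python
def degdet_step(d, r, s):
    """d[i][j]: degree bound (<= 0) of entry (i, j) of SAT, whose upper-right
    r x s block has degree <= -1. Returns the updated bounds and the
    decrease r + s - n of the objective value."""
    n = len(d)
    new = [[d[i][j] + (1 if i < r else 0) - (1 if j < n - s else 0)
            for j in range(n)] for i in range(n)]
    assert all(e <= 0 for row in new for e in row)  # feasibility preserved
    return new, r + s - n

# n = 2 with r = s = 2: the whole matrix has degree <= -1.
new, dec = degdet_step([[-1, -1], [-1, -2]], 2, 2)
print(new, dec)   # [[0, 0], [0, -1]] 2
```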

In the view of the Euclidean building, the algorithm moves the current point to an “adjacent” point with a smaller objective value. The number of movements (= iterations) is then analyzed via the geometry of the Euclidean building. Let opt(A) denote the set of (the images in the building of) all optimal solutions for A. Then the number of iterations of Deg-Det is sharply bounded by the l∞-distance from the initial point to opt(A),

where we regard S (resp. T) as the module spanned by its row (resp. column) vectors. Observe that this identification does not change the feasibility and objective value, and hence an optimal solution attaining the distance always exists.

Theorem 3.1 ([12]).

The number of executions of step 1 in Deg-Det with an initial solution (S, T) is equal to the l∞-distance from (S, T) to opt(A).

This property is a consequence of the L-convexity of the objective function of (D). Thus Deg-Det is a pseudo-polynomial time algorithm. We will improve Deg-Det by using a cost-scaling technique.

3.2 Cost-scaling

In combinatorial optimization, cost scaling is a standard technique for improving a pseudo-polynomial time algorithm to a polynomial one. Consider the following situation: an optimal solution for rounded costs c′ is a good initial solution for the refined costs c, and the algorithm, started from it, obtains an optimal solution for the costs c within a polynomial number of iterations. In this case, a polynomial time algorithm is obtained by O(log₂ C) calls of the algorithm.
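The cost-scaling scheme can be sketched with a rounding in which each phase's costs arise from the previous phase's by doubling and a decrement of at most one; the particular rounding ⌈c / 2^{l−i}⌉ below is an assumption that fits this scenario.

```python
def scaled_costs(c, l):
    """c^{(i)} = ceil(c / 2^{l-i}) for i = 0, 1, ..., l."""
    return [-((-c) >> (l - i)) for i in range(l + 1)]  # -((-c) >> j) = ceil(c / 2^j)

C = 13
l = (C - 1).bit_length()       # ceil(log2 C) for C >= 2
seq = scaled_costs(C, l)
print(seq)                     # [1, 2, 4, 7, 13]
assert seq[0] == 1 and seq[-1] == C
assert all(seq[i] in (2 * seq[i - 1] - 1, 2 * seq[i - 1])
           for i in range(1, len(seq)))
```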

Motivated by this scenario, we incorporate a cost-scaling technique into Deg-Det as follows:

Algorithm: Cost-Scaling
Input:

A = ∑_{k=1}^m A_k x_k t^{c_k}, where A_k ∈ 𝕂^{n×n} and c_k ∈ ℤ_{>0} for each k.

Output:

deg Det A.

0:

Let l := ⌈log₂ C⌉, i := 0, S := t^{−1}I, and T := I.

1:

Let c_k^{(i)} := ⌈c_k / 2^{l−i}⌉ for each k, and let A^{(i)} := ∑_{k=1}^m A_k x_k t^{c_k^{(i)}}.

2:

Apply Deg-Det to A^{(i)} with the initial solution (S, T), and obtain an optimal solution (S, T) for A^{(i)}.

3:

If i = l, then output −deg det S − deg det T. Otherwise, letting i ← i + 1 and doubling the exponents of the current solution (S, T), go to step 1.

For the initial scaling phase i = 0, it holds that c_k^{(0)} = 1 for all k, and (t^{−1}I, I) is an optimal solution for A^{(0)} (by Lemma 2.4 and the assumption nc-rank A = n).

Lemma 3.2.

The doubled solution is an optimal solution for the costs 2c^{(i)}, and is a feasible solution for the costs c^{(i+1)}.

The former statement follows from the observation that the optimality criterion (Lemma 2.4 (1)) is preserved under doubling the costs c^{(i)} and the solution (S, T). The latter statement follows from the fact that c^{(i+1)} is obtained from 2c^{(i)} by decreasing each entry by at most 1. The correctness of the algorithm is clear from this lemma.

To apply Theorem 3.1, we need a polynomial bound on the distance between the initial solution of the i-th scaling phase and the optimal solutions for A^{(i)}. The main ingredient of our algorithm is the following sensitivity result.

Proposition 3.3.

Let (S, T) be the initial solution in the i-th scaling phase. Then the l∞-distance from (S, T) to opt(A^{(i)}) is O(n).

The proof is given in Section 3.4, in which N-convexity plays a crucial role. Thus the number of iterations of Deg-Det in step 2 is bounded by O(n), and the total number of iterations is O(n log₂ C).

3.3 Truncation of low-degree terms

Still, the algorithm is not polynomial, since a naive calculation lets SAT have a pseudo-polynomial number of terms. Observe that step 1 of Deg-Det depends only on the leading term of SAT. Therefore terms of sufficiently low degree do not affect the subsequent computation. Our polynomial time algorithm is obtained by truncating such low-degree terms. Note that in the case of the weighted linear matroid intersection, i.e., when each A_k has rank 1, such care is not needed; see [6, 12] for details.

First, we present the cost-scaling Deg-Det algorithm in a form that updates SAT instead of (S, T), as follows:

Algorithm: Deg-Det with Cost-Scaling
Input:

A = ∑_{k=1}^m A_k x_k t^{c_k}, where A_k ∈ 𝕂^{n×n} and c_k ∈ ℤ_{>0} for each k.

Output:

deg Det A.

0:

Let l := ⌈log₂ C⌉, i := 0, c_k^{(0)} := ⌈c_k / 2^l⌉ for each k, and initialize the working matrix SAT accordingly.

1:

Letting B denote the leading term of the working matrix SAT, solve the problem (R) for B and obtain an optimal solution (S₀, T₀).

2:

Suppose that the optimal value of (R) is less than n. Update the working matrix as in Deg-Det (apply S₀ and T₀, then multiply the first r rows by t and the first n − s columns by t^{−1}), and go to step 1.

3:

Suppose that the optimal value of (R) is equal to n. If i = l, then output the current objective value. Otherwise, letting i ← i + 1 and doubling the exponents of the working matrix, go to step 1.

Notice that each entry of the working matrix SAT is written as a formal power series whose coefficients are matrices over 𝕂. We truncate the low-degree terms of SAT after step 1. For this, we estimate the magnitude of degree below which the corresponding terms are irrelevant to the final output. In the modification of step 2, each term splits into three terms whose degrees differ from the original by at most 1. By Proposition 3.3, this modification is done at most O(n) times in each scaling phase. In the final scaling phase i = l, the output depends only on the terms of SAT whose degree is within O(n) of the leading degree. These terms come from the highest-degree terms of SAT at the end of the previous scaling phase, which in turn come from the highest-degree terms at the beginning of that phase. A similar backward consideration over all phases shows that the final result is a consequence of polynomially many highest-degree terms of SAT at the beginning of each scaling phase. Thus we can truncate the terms whose degree is below the leading degree by this amount: add to Deg-Det with Cost-Scaling the following procedure after step 1.
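The truncation itself is straightforward once the working matrix is stored term by term (a sketch; the dictionary representation and the threshold are illustrative):

```python
def truncate(series, threshold):
    """series: {degree: coefficient matrix} with degrees <= 0.
    Keep only the terms of degree >= threshold."""
    return {d: M for d, M in series.items() if d >= threshold}

SAT = {0: [[1, 0], [0, 1]], -1: [[2, 3], [4, 5]], -50: [[9, 9], [9, 9]]}
print(sorted(truncate(SAT, -2)))   # [-1, 0]
```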

Truncation:

For each entry of the working matrix, remove all terms whose degree is below the truncation threshold.

Now we have our main result in an explicit form:

Theorem 3.4.

Deg-Det with Cost-Scaling computes deg Det A in time polynomial in n, m, and log₂ C: the running time is dominated by the O(n log C) calls of an oracle solving (R) together with the updates of SAT, where ω denotes the exponent of the time complexity of matrix multiplication.

Proof.

The total number of calls of the oracle solving (R) equals the total number of iterations, which is O(n log C). By the truncation, the number of retained terms of SAT is polynomially bounded. Hence the update of all terms in each iteration is done in polynomial time by fast matrix multiplication, in O(n^ω) time per coefficient. ∎

3.4 Proof of the sensitivity theorem

Let c := 2c^{(i−1)} and let c′ := c^{(i)}; recall that c′ is obtained from c by decreasing each entry by at most 1.

Lemma 3.5.

Let (S, T) be an optimal solution for the costs c. Then there is an optimal solution (S′, T′) for the costs c′ within l∞-distance O(n) of (S, T).

Proposition 3.3 follows from this lemma, since c′ is obtained from c by entrywise decrements of at most 1.

Let (S′, T′) be an optimal solution for c′ such that the l∞-distance to (S, T) is minimum. Suppose, to the contrary, that this distance exceeds the bound of Lemma 3.5. By Lemma 2.6, choose an apartment of the building containing (S, T) and (S′, T′), and regard it as ℤ^n × ℤ^n. Then the two solutions are regarded as points (z, w) and (z′, w′) in ℤ^n × ℤ^n, respectively. The inclusion order becomes the vector ordering on ℤ^n × ℤ^n. Consider the problem (D) on this apartment. We incorporate the constraints into the objective function as barrier functions. Let M be a large number. Define the penalized objective by adding M times the total violation of the constraints,

where the indices range over the rows and the columns. Similarly define