
Fast and flexible preconditioners for solving multilinear systems

This paper investigates a class of fast and flexible preconditioners for solving the multilinear system $\mathcal{A}x^{m-1}=b$ with an $\mathcal{M}$-tensor $\mathcal{A}$ and obtains some important convergence theorems for the preconditioned Jacobi, Gauss–Seidel, and SOR type iterative methods. The main results prove theoretically that the preconditioners can accelerate the convergence of the iterations. Numerical examples are presented to verify the efficiency of the proposed preconditioned methods.

06/18/2020


1 Introduction

In recent decades, tensors (or hypermatrices) have been applied in many research and application areas such as data analysis, psychometrics, chemometrics, image processing, graph theory, Markov chains, hypergraphs, etc. R30. Tensor equations (or multilinear systems R5) involving the Einstein product have been discussed in A3; they have many applications in continuum physics, engineering, and isotropic and anisotropic elastic models R21. Wang and Xu presented some iterative methods for solving several kinds of tensor equations in R38, and Huang and Ma, in R15, proposed Krylov subspace methods to solve a class of tensor equations. In R19, Khosravi Dehdezi and Karimi proposed the extended conjugate gradient squared and conjugate residual squared methods for solving the generalized coupled Sylvester tensor equations

$\sum_{j=1}^{n}\mathcal{X}_j\times_1 A_{ij1}\times_2 A_{ij2}\times\cdots\times_d A_{ijd}=\mathcal{C}_i,\quad i=1,2,\ldots,n,$

where the matrices $A_{ijk}$ and tensors $\mathcal{C}_i$ are known, the tensors $\mathcal{X}_j$ are unknown, and $\times_k$ is the $k$-mode product. They also proposed a fast and efficient Newton–Schulz-type iterative method for computing the inverse and Moore–Penrose inverse of tensors in R78.
In recent years, solving the following multilinear system has become a hot topic because of its applications in areas such as data analysis, engineering, and scientific computing A1; A2; A3:

$\mathcal{A}x^{m-1}=b$ (1)

where $\mathcal{A}$ is an order-$m$ $n$-dimensional tensor, and x and b are vectors in $\mathbb{R}^n$. The $n$-dimensional vector $\mathcal{A}x^{m-1}$ is defined as A4:

$(\mathcal{A}x^{m-1})_i=\sum_{i_2=1}^{n}\cdots\sum_{i_m=1}^{n}a_{ii_2\ldots i_m}x_{i_2}\cdots x_{i_m},\quad i=1,2,\ldots,n,$ (2)

and $x_i$ denotes the $i$-th component of x.
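As a concrete illustration (our own sketch, not part of the paper), the componentwise formula (2) for $m=3$ can be evaluated either with a direct triple loop or with a single `np.einsum` contraction; the tensor `A` and vector `x` below are arbitrary test data.

```python
import numpy as np

def axm1_loops(A, x):
    """(A x^2)_i = sum_{i2,i3} a_{i i2 i3} x_{i2} x_{i3}, computed naively from (2)."""
    n = A.shape[0]
    y = np.zeros(n)
    for i in range(n):
        for i2 in range(n):
            for i3 in range(n):
                y[i] += A[i, i2, i3] * x[i2] * x[i3]
    return y

def axm1_einsum(A, x):
    """The same contraction written as a single einsum call."""
    return np.einsum('ijk,j,k->i', A, x, x)

rng = np.random.default_rng(0)
A = rng.random((4, 4, 4))
x = rng.random(4)
assert np.allclose(axm1_loops(A, x), axm1_einsum(A, x))
```

The einsum form is what we use in later sketches, since it scales to larger $n$ without Python-level loops.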
Many theoretical analyses and algorithms for solving (1) have been studied. Qi in A4 considered an order-$m$ $n$-dimensional supersymmetric tensor, showed that when $m$ is even it has exactly $n(m-1)^{n-1}$ eigenvalues, and gave a bound on the number of its E-eigenvalues. Ding and Wei in A3 proved that a nonsingular $\mathcal{M}$-equation with a positive right-hand side always has a unique positive solution. They also applied $\mathcal{M}$-equations to some nonlinear differential equations and to the inverse iteration for spectral radii of nonnegative tensors. In A6, Han proposed a homotopy method for finding the unique positive solution to a multilinear system with a nonsingular $\mathcal{M}$-tensor and a positive right-side vector. Li et al. in A10 extended the Jacobi, Gauss–Seidel, and successive over-relaxation (SOR) iterative methods to the tensor equation (1) with a symmetric coefficient tensor. Under appropriate conditions, they showed that the proposed methods are globally convergent and locally $R$-linearly convergent. In A7, He et al. proved that solving multilinear systems with $\mathcal{M}$-tensors is equivalent to solving nonlinear systems of equations whose involved functions are P-functions. Based on this result, they proposed a Newton-type method to solve multilinear systems with $\mathcal{M}$-tensors. For a multilinear system with a nonsingular $\mathcal{M}$-tensor and a positive right-side vector, they showed that the sequence generated by the method converges to the unique solution of the multilinear system and that the convergence rate is quadratic. For solving multilinear systems, Liang et al. in A11 equivalently transformed the tensor equation into a consensus constrained optimization problem and proposed an ADMM-type method for it. They also showed that each limit point of the sequences generated by the method satisfies the Karush–Kuhn–Tucker conditions.
Liu et al. in A8 introduced variant tensor splittings and presented some equivalent conditions for a strong $\mathcal{M}$-tensor based on the tensor splitting. They also gave existence and uniqueness conditions for the solution of multilinear systems, and proposed tensor splitting algorithms for solving multilinear systems whose coefficient tensor is a strong $\mathcal{M}$-tensor. As an application, a tensor splitting algorithm for solving the multilinear model of higher-order Markov chains was proposed. Li et al. in A9 first derived a necessary and sufficient condition for an $\mathcal{M}$-tensor equation to have nonnegative solutions, and then developed a monotone iterative method to find a nonnegative solution. Under appropriate conditions, they showed that the sequence of iterates generated by the method converges monotonically and linearly to a nonnegative solution of the $\mathcal{M}$-tensor equation. Bai et al. in A4.5 proposed an algorithm that always preserves the nonnegativity of solutions of the multilinear system under consideration, which involves a nonsingular $\mathcal{M}$-tensor and a nonnegative right-hand side vector. They also proved that the sequence generated by the proposed algorithm is nonnegative, componentwise nonincreasing, and convergent to a nonnegative solution of the multilinear system. Cui et al. in A5 solved the multilinear system by a preconditioned iterative method based on tensor splitting and proposed a corresponding preconditioner. Lv and Ma in A12 proposed a Levenberg–Marquardt (LM) method for solving tensor equations with a semi-symmetric coefficient tensor and proved its global convergence and local quadratic convergence under the local error bound condition, which is weaker than nonsingularity. As an application, they computed H-eigenvalues of real semi-symmetric tensors by the LM method. Wang et al. in A13 proposed a continuous-time neural network and modified continuous-time neural networks for solving multilinear systems with $\mathcal{M}$-tensors. They proved that the presented neural networks are stable in the sense of Lyapunov stability theory. For solving the multilinear system (1) with a symmetric $\mathcal{M}$-tensor, Xie et al. in A14 proposed some tensor methods based on the rank-1 approximation of the coefficient tensor. Li et al. in A15 considered third-order tensor equations whose solutions are the intersection of a group of quadrics from a geometric point of view. Inspired by the method of alternating projections for set intersection problems, they developed a hybrid alternating projection algorithm for solving these tensor equations; its local linear convergence was established under suitable conditions. Liu et al. in A16 presented a preconditioned SOR method for solving multilinear systems whose coefficient tensor is an $\mathcal{M}$-tensor, together with the corresponding comparison of spectral radii of the iterative tensors. It is known that the preconditioning technique plays an important role in solving multilinear systems; in particular, when the coefficient tensor is an $\mathcal{M}$-tensor, there has been little research on these techniques so far. Motivated by this, we establish some effective preconditioners and give a theoretical analysis.
The rest of this paper is organized as follows. Section 2 is a preliminary section in which we introduce some related definitions and lemmas. In Section 3, new fast and flexible preconditioners are proposed, and the corresponding theoretical analysis is given. In Section 4, numerical examples are given to show the efficiency of the proposed preconditioned iterative methods. Section 5 gives concluding remarks, and the final section outlines future research.

2 Preliminaries

In this section, we introduce some definitions, notations, and related properties which will be used in the following.
Let $0$, $O$, and $\mathcal{O}$ denote the null vector, null matrix, and null tensor, respectively. Let $\mathcal{A}$ and $\mathcal{B}$ be tensors (vectors or matrices) of the same size. The order $\mathcal{A}\ge\mathcal{B}$ ($\mathcal{A}>\mathcal{B}$) means that each element of $\mathcal{A}$ is no less than (larger than) the corresponding element of $\mathcal{B}$.
A tensor $\mathcal{A}$ consists of elements in the complex field $\mathbb{C}$:

$\mathcal{A}=(a_{i_1i_2\ldots i_m}),\quad a_{i_1i_2\ldots i_m}\in\mathbb{C},\quad 1\le i_j\le n_j,\ j=1,\ldots,m.$

When $m=2$, $\mathcal{A}$ is an $n_1\times n_2$ matrix. If $n_1=n_2=\cdots=n_m=n$, $\mathcal{A}$ is called an order-$m$ $n$-dimensional tensor. By $\mathbb{C}^{n_1\times n_2\times\cdots\times n_m}$ we denote the set of all order-$m$ tensors consisting of $n_1n_2\cdots n_m$ entries, and by $\mathbb{C}^{[m,n]}$ we denote the set of all order-$m$ $n$-dimensional tensors. When $m=1$, $\mathbb{C}^{[1,n]}$ is simplified to $\mathbb{C}^n$, the set of all $n$-dimensional complex vectors. The above notions carry over to the real field $\mathbb{R}$.
Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$. If each entry of $\mathcal{A}$ is nonnegative, then $\mathcal{A}$ is called a nonnegative tensor. The set of all order-$m$ $n$-dimensional nonnegative tensors is denoted by $\mathbb{R}^{[m,n]}_+$. The order-$m$ $n$-dimensional identity tensor, denoted by $\mathcal{I}$, is the tensor with entries:

$\delta_{i_1i_2\ldots i_m}=\begin{cases}1, & i_1=i_2=\cdots=i_m,\\ 0, & \text{otherwise}.\end{cases}$

When $m=2$, the identity tensor reduces to the identity matrix of size $n\times n$, denoted by $I$.

Definition 1

A20 A tensor $\mathcal{A}\in\mathbb{C}^{[m,n]}$ is called reducible if there exists a nonempty proper index subset $I\subset\{1,2,\ldots,n\}$ such that

$a_{i_1i_2\ldots i_m}=0,\quad \forall i_1\in I,\ \forall i_2,\ldots,i_m\notin I;$

otherwise, we say that $\mathcal{A}$ is irreducible.

Definition 2

A17 A tensor $\mathcal{A}$ is called a $\mathcal{Z}$-tensor if its off-diagonal entries are non-positive. $\mathcal{A}$ is an $\mathcal{M}$-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $\eta\ge\rho(\mathcal{B})$ such that $\mathcal{A}=\eta\mathcal{I}-\mathcal{B}$. If $\eta>\rho(\mathcal{B})$, then $\mathcal{A}$ is called a strong $\mathcal{M}$-tensor.

Definition 3

A8 Let $A\in\mathbb{C}^{n\times n}$ ($A$ is an $n$-dimensional square matrix) and $\mathcal{B}\in\mathbb{C}^{[m,n]}$. Then the product $\mathcal{C}=A\mathcal{B}$ is defined by

$c_{ji_2\ldots i_m}=\sum_{j_2=1}^{n}a_{jj_2}b_{j_2i_2\ldots i_m},$ (3)

which can be written as follows

$\mathcal{C}_{(1)}=(A\mathcal{B})_{(1)}=A\mathcal{B}_{(1)},$

where $\mathcal{C}_{(1)}$ and $\mathcal{B}_{(1)}$ are the matrices obtained from $\mathcal{C}$ and $\mathcal{B}$ by flattening along the first index, respectively.
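This product and the flattening identity are easy to check numerically. The following sketch is our own code (the helper name `mat_tensor_product` is ours, not the paper's) for $m=3$:

```python
import numpy as np

def mat_tensor_product(A, B):
    """Definition-3 product: c_{j i2 i3} = sum_{j2} a_{j j2} b_{j2 i2 i3}."""
    return np.einsum('ab,bcd->acd', A, B)

n = 3
rng = np.random.default_rng(1)
A = rng.random((n, n))
B = rng.random((n, n, n))
C = mat_tensor_product(A, B)

# Flattening along the first index turns an n x n x n tensor into an n x n^2 matrix,
# and the product becomes an ordinary matrix product: C_(1) = A B_(1).
C1, B1 = C.reshape(n, -1), B.reshape(n, -1)
assert np.allclose(C1, A @ B1)
```

Because the contraction touches only the first tensor index, the reshape-based matrix identity holds exactly.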

Definition 4

A18 Let $\mathcal{A}\in\mathbb{C}^{[m,n]}$. The majorization matrix of $\mathcal{A}$, denoted by $M(\mathcal{A})$, is defined as the square matrix of size $n\times n$ with entries

$M(\mathcal{A})_{ij}=a_{ij\ldots j},\quad i,j=1,2,\ldots,n.$

If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A}=M(\mathcal{A})\mathcal{I}$, then $M(\mathcal{A})^{-1}$ is the order-2 left-inverse of $\mathcal{A}$, i.e., $M(\mathcal{A})^{-1}\mathcal{A}=\mathcal{I}$, and then we call $\mathcal{A}$ a left-invertible (or left-nonsingular) tensor.
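A small self-contained sketch (our own example, with `majorization` as a hypothetical helper name) of the majorization matrix and the left-inverse property for $m=3$. The tensor is built so that $\mathcal{A}=M(\mathcal{A})\mathcal{I}$, i.e., every entry outside the positions $(i,j,j)$ is zero:

```python
import numpy as np

def majorization(A):
    """M(A)_{ij} = a_{ijj} for an order-3 tensor A."""
    n = A.shape[0]
    return np.array([[A[i, j, j] for j in range(n)] for i in range(n)])

n = 3
M = np.array([[2.0, -0.5, 0.0],
              [-0.3, 2.0, -0.4],
              [0.0, -0.2, 2.0]])       # nonsingular (diagonally dominant)

# Build A = M(A) I: a_{ijj} = M_{ij}, all other entries zero.
A = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        A[i, j, j] = M[i, j]
assert np.allclose(majorization(A), M)

# Left-inverse property: M(A)^{-1} A equals the order-3 identity tensor.
Minv_A = np.einsum('il,ljk->ijk', np.linalg.inv(M), A)
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0
assert np.allclose(Minv_A, I3)
```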

Definition 5

A4 Let $\mathcal{A}\in\mathbb{C}^{[m,n]}$. A pair $(\lambda,x)\in\mathbb{C}\times(\mathbb{C}^n\setminus\{0\})$ is called an eigenvalue–eigenvector pair (or simply an eigenpair) of $\mathcal{A}$ if it satisfies the equation

$\mathcal{A}x^{m-1}=\lambda x^{[m-1]},$ (4)

where $x^{[m-1]}=(x_1^{m-1},x_2^{m-1},\ldots,x_n^{m-1})^T$. We call $(\lambda,x)$ an H-eigenpair if both $\lambda$ and x are real.

Let $\rho(\mathcal{A})=\max\{|\lambda| : \lambda\in\sigma(\mathcal{A})\}$ be the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$.

Lemma 1

A8 If $\mathcal{A}$ is a strong $\mathcal{M}$-tensor, then $M(\mathcal{A})$ is a nonsingular M-matrix.

Lemma 2

A18 If $M(\mathcal{A})$ is an irreducible matrix, then $\mathcal{A}$ is irreducible.

Definition 6

A8 Let $\mathcal{A},\mathcal{E},\mathcal{F}\in\mathbb{R}^{[m,n]}$. $\mathcal{A}=\mathcal{E}-\mathcal{F}$ is said to be a splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular; a regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\ge O$ and $\mathcal{F}\ge\mathcal{O}$; a weak regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\ge O$ and $M(\mathcal{E})^{-1}\mathcal{F}\ge\mathcal{O}$; and a convergent splitting if $\rho(M(\mathcal{E})^{-1}\mathcal{F})<1$.

Lemma 3

A22 If $\mathcal{A}$ is a $\mathcal{Z}$-tensor, then the following conditions are equivalent:

1. $\mathcal{A}$ is a strong $\mathcal{M}$-tensor.

2. $\mathcal{A}$ has a convergent (weak) regular splitting.

3. All (weak) regular splittings of $\mathcal{A}$ are convergent.

4. There exists a vector $x>0$ such that $\mathcal{A}x^{m-1}>0$.
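Condition 4 gives a cheap numerical certificate: exhibiting one positive vector x with $\mathcal{A}x^{m-1}>0$. A small illustration (our own example, not from the paper), with a $\mathcal{Z}$-tensor $\mathcal{A}=\mathcal{I}-\mathcal{B}$ for a small nonnegative $\mathcal{B}$ and $m=3$:

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)
B = rng.random((n, n, n)) * 0.05       # nonnegative tensor with small entries
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0                  # order-3 identity tensor
A = I3 - B                             # a Z-tensor: off-diagonal entries <= 0

x = np.ones(n)                         # candidate positive vector
y = np.einsum('ijk,j,k->i', A, x, x)   # (A x^2)_i
# Row sums of B stay below 1 here, so the certificate holds:
assert np.all(x > 0) and np.all(y > 0)
```

With this certificate in hand, the lemma guarantees that every (weak) regular splitting of `A` is convergent.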

Lemma 4

A3 If $\mathcal{A}$ is a strong $\mathcal{M}$-tensor, then for every positive vector b, the multilinear system $\mathcal{A}x^{m-1}=b$ has a unique positive solution.

Lemma 5

A20 Suppose that $\mathcal{A}=\mathcal{E}_1-\mathcal{F}_1=\mathcal{E}_2-\mathcal{F}_2$. Let $\mathcal{A}=\mathcal{E}_1-\mathcal{F}_1$ be a weak regular splitting and $\mathcal{A}=\mathcal{E}_2-\mathcal{F}_2$ a regular splitting, respectively, with $\mathcal{F}_1\le\mathcal{F}_2$. One of the following statements holds.

1. $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1)\le\rho(M(\mathcal{E}_2)^{-1}\mathcal{F}_2)<1$.

2. $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1)\ge\rho(M(\mathcal{E}_2)^{-1}\mathcal{F}_2)\ge1$.

If $\mathcal{F}_1\ne\mathcal{F}_2$ and $\mathcal{F}_1\ne\mathcal{O}$, the first inequality in part 1 is strict.

Lemma 6

A22 Let $\mathcal{A}$ be a strong $\mathcal{M}$-tensor, and let $\mathcal{A}=\mathcal{E}_1-\mathcal{F}_1=\mathcal{E}_2-\mathcal{F}_2$ be two weak regular splittings with $\mathcal{F}_1\le\mathcal{F}_2$. If the Perron vector x of $M(\mathcal{E}_2)^{-1}\mathcal{F}_2$ satisfies $\mathcal{A}x^{m-1}\ge0$, then $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1)\le\rho(M(\mathcal{E}_2)^{-1}\mathcal{F}_2)$.

A general tensor splitting iterative method for solving (1) is

$x_{j+1}=\left[M(\mathcal{E})^{-1}\mathcal{F}x_j^{m-1}+M(\mathcal{E})^{-1}b\right]^{\left[\frac{1}{m-1}\right]},\quad j=0,1,\ldots.$ (5)

$\mathcal{T}=M(\mathcal{E})^{-1}\mathcal{F}$ is called the iterative tensor of the splitting method (5). Taking $\mathcal{A}=\mathcal{D}-\mathcal{L}-\mathcal{F}$, Liu et al. in A8 considered $\mathcal{E}=\mathcal{D}$, $\mathcal{E}=\mathcal{D}-\mathcal{L}$, and $\mathcal{E}=\frac{1}{\omega}(\mathcal{D}-\omega\mathcal{L})$, i.e., the Jacobi, the Gauss–Seidel, and the SOR iterative methods, respectively, where $M(\mathcal{D})$ and $M(\mathcal{L})$ are the positive diagonal matrix and the strictly lower triangular nonnegative matrix, respectively. Without loss of generality, we always assume that $a_{ii\ldots i}=1$ for $i=1,\ldots,n$, so that $M(\mathcal{D})=I$. Consider the splitting $\mathcal{A}=\mathcal{D}-\mathcal{L}-\mathcal{F}$, where $\mathcal{D}$ is the diagonal part and $\mathcal{L}$ is the strictly lower triangular part of $\mathcal{A}$.
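The iteration (5) with the Jacobi choice $\mathcal{E}=\mathcal{D}$ is short to sketch. The code below is our own illustration (not the paper's experiments): the example tensor is built as $\mathcal{A}=2\mathcal{I}-\mathcal{B}$ with small nonnegative $\mathcal{B}$, which makes it a strong $\mathcal{M}$-tensor, so the splitting method converges by the lemmas above.

```python
import numpy as np

def axm1(T, x):
    return np.einsum('ijk,j,k->i', T, x, x)   # (T x^2)_i for m = 3

n = 4
rng = np.random.default_rng(2)
B = rng.random((n, n, n)) * 0.1
I3 = np.zeros((n, n, n))
for i in range(n):
    I3[i, i, i] = 1.0
A = 2.0 * I3 - B          # strong M-tensor: 2 exceeds the spectral radius of B here
b = np.ones(n)            # positive right-hand side

# Jacobi splitting E = D (diagonal tensor of A), F = D - A (nonnegative remainder).
D = np.zeros((n, n, n))
for i in range(n):
    D[i, i, i] = A[i, i, i]
F = D - A
diag = np.array([A[i, i, i] for i in range(n)])   # M(D): a positive diagonal matrix

# Iteration (5): x_{j+1} = [M(E)^{-1}(F x_j^2 + b)]^{[1/2]}.
x = np.ones(n)
for _ in range(200):
    x = ((axm1(F, x) + b) / diag) ** 0.5
assert np.allclose(axm1(A, x), b, atol=1e-8)      # converged to A x^2 = b
```

The componentwise square root is the $[\,\cdot\,]^{[1/(m-1)]}$ operation for $m=3$, and the positive fixed point is the unique positive solution guaranteed by Lemma 4.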
Iterative methods for solving (1) may converge slowly or even fail to converge. To overcome this problem, it is effective to combine these methods with preconditioning techniques. Such methods usually involve matrices that transform the iterative tensor into a more favorable one; the transformation matrices are called preconditioners. Li et al. in A10 considered the preconditioner $P_\alpha=I+S_\alpha$ for solving the preconditioned multilinear system

$P_\alpha\mathcal{A}x^{m-1}=P_\alpha b,$

with

$S_\alpha=\begin{bmatrix}0 & -\alpha_1a_{12\ldots2} & 0 & \cdots & 0\\ 0 & 0 & -\alpha_2a_{23\ldots3} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & -\alpha_{n-1}a_{(n-1)n\ldots n}\\ 0 & 0 & 0 & \cdots & 0\end{bmatrix},$

first proposed for $M$-matrix linear systems; the authors extended the results to the tensor case. In A16, Liu et al. considered a new preconditioned SOR method for solving multilinear systems with the preconditioner $P_\beta=I+C_\beta$, where

$C_\beta=\begin{bmatrix}0 & 0 & \cdots & 0\\ -\beta_1a_{21\ldots1} & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ -\beta_{n-2}a_{(n-1)1\ldots1} & 0 & \cdots & 0\\ -\beta_{n-1}a_{n1\ldots1} & 0 & \cdots & 0\end{bmatrix}.$

Here we consider the preconditioner $P_{\alpha\beta}(s,k)=I+S^s_\alpha+K^k_\beta$, where $I$ is the identity matrix, the diagonal part of the majorization matrix of $\mathcal{A}$ (so herein $M(\mathcal{D})=I$), and $S^s_\alpha$, $K^k_\beta$ are square matrices all of whose elements are zero except the $s$-th upper and the $k$-th lower diagonals, respectively, i.e.,

$S^s_\alpha=\begin{bmatrix}0 & \cdots & 0 & -\alpha_1a_{1(1+s)\ldots(1+s)} & 0 & \cdots & 0\\ 0 & \cdots & 0 & 0 & -\alpha_2a_{2(2+s)\ldots(2+s)} & \cdots & 0\\ \vdots & & \vdots & \vdots & & \ddots & \vdots\\ 0 & \cdots & 0 & 0 & 0 & \cdots & -\alpha_{n-s}a_{(n-s)n\ldots n}\\ \vdots & & \vdots & \vdots & \vdots & & \vdots\\ 0 & \cdots & 0 & 0 & 0 & \cdots & 0\end{bmatrix},$

$K^k_\beta=\begin{bmatrix}0 & 0 & \cdots & 0 & \cdots & 0\\ \vdots & \vdots & & \vdots & & \vdots\\ 0 & 0 & \cdots & 0 & \cdots & 0\\ -\beta_{k+1}a_{(k+1)1\ldots1} & 0 & \cdots & 0 & \cdots & 0\\ 0 & -\beta_{k+2}a_{(k+2)2\ldots2} & \cdots & 0 & \cdots & 0\\ \vdots & & \ddots & \vdots & & \vdots\\ 0 & \cdots & -\beta_na_{n(n-k)\ldots(n-k)} & 0 & \cdots & 0\end{bmatrix}.$

Applying $P_{\alpha\beta}(s,k)$ on the left side of Eq. (1), we get a new preconditioned multilinear system

$\mathcal{A}_{\alpha\beta}(s,k)x^{m-1}=b_{\alpha\beta}(s,k),$ (6)

with $\mathcal{A}_{\alpha\beta}(s,k)=P_{\alpha\beta}(s,k)\mathcal{A}$ and $b_{\alpha\beta}(s,k)=P_{\alpha\beta}(s,k)b$.
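The following sketch (our own construction following the text; the function name is ours) builds $P_{\alpha\beta}(s,k)$ for $m=3$ and checks that a solution of (1) also solves the preconditioned system (6), since the product with the matrix $P$ acts linearly on $\mathcal{A}x^{m-1}$:

```python
import numpy as np

def preconditioner(A, alpha, beta, s, k):
    """P_ab(s,k) = I + S^s_alpha + K^k_beta for an order-3 tensor A."""
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n - s):      # s-th upper diagonal: -alpha_i * a_{i,(i+s),(i+s)}
        P[i, i + s] = -alpha[i] * A[i, i + s, i + s]
    for i in range(k, n):       # k-th lower diagonal: -beta_i * a_{i,(i-k),(i-k)}
        P[i, i - k] = -beta[i] * A[i, i - k, i - k]
    return P

n, s, k = 5, 2, 1
rng = np.random.default_rng(4)
A = -rng.random((n, n, n)) * 0.05        # non-positive off-diagonal entries
for i in range(n):
    A[i, i, i] = 1.0                     # unit diagonal: A is a Z-tensor

x_star = rng.random(n) + 0.5             # a positive vector taken as the solution
b = np.einsum('ijk,j,k->i', A, x_star, x_star)   # right-hand side b = A x*^2

P = preconditioner(A, np.full(n, 0.9), np.full(n, 0.9), s, k)
A_pre = np.einsum('il,ljk->ijk', P, A)   # A_ab(s,k) = P A (product of Definition 3)
b_pre = P @ b                            # b_ab(s,k) = P b
# x* still solves the preconditioned system (6):
assert np.allclose(np.einsum('ijk,j,k->i', A_pre, x_star, x_star), b_pre)
```

Note that for a $\mathcal{Z}$-tensor the off-diagonal entries of $P_{\alpha\beta}(s,k)$ are nonnegative, which is what the propositions below exploit.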

Proposition 7

Let $\mathcal{A}$ be a $\mathcal{Z}$-tensor. If $\mathcal{A}$ is a strong $\mathcal{M}$-tensor, then for any $0\le\alpha_i\le1$ and $0\le\beta_i\le1$, $\mathcal{A}_{\alpha\beta}(s,k)$ is a strong $\mathcal{M}$-tensor.

Proof. Without loss of generality, we assume that $s=k=1$. Let $\hat{\mathcal{A}}=\mathcal{A}_{\alpha\beta}(1,1)=(\hat{a}_{ji_2\ldots i_m})$. Then for $j=1,2,\ldots,n$, we have

$\hat{a}_{ji_2\ldots i_m}=\begin{cases}a_{1i_2\ldots i_m}-\alpha_1a_{12\ldots2}\,a_{2i_2\ldots i_m}, & j=1,\\ a_{ji_2\ldots i_m}-\beta_ja_{j(j-1)\ldots(j-1)}\,a_{(j-1)i_2\ldots i_m}-\alpha_ja_{j(j+1)\ldots(j+1)}\,a_{(j+1)i_2\ldots i_m}, & 2\le j\le n-1,\\ a_{ni_2\ldots i_m}-\beta_na_{n(n-1)\ldots(n-1)}\,a_{(n-1)i_2\ldots i_m}, & j=n.\end{cases}$

For $(i_2,\ldots,i_m)\ne(j,j,\ldots,j)$, since the off-diagonal entries of $\mathcal{A}$ are non-positive and $0\le\alpha_j,\beta_j\le1$, we have $\hat{a}_{ji_2\ldots i_m}\le0$, i.e., $\hat{\mathcal{A}}$ is a $\mathcal{Z}$-tensor. According to Lemma 3, there exists a vector $x>0$ such that $\mathcal{A}x^{m-1}>0$. Since $P_{\alpha\beta}(1,1)\ge O$ with unit diagonal, it follows that $\hat{\mathcal{A}}x^{m-1}=P_{\alpha\beta}(1,1)\left(\mathcal{A}x^{m-1}\right)\ge\mathcal{A}x^{m-1}>0$. Thus there exists a vector $x>0$ such that $\hat{\mathcal{A}}x^{m-1}>0$. Therefore, $\hat{\mathcal{A}}$ is a strong $\mathcal{M}$-tensor.

Proposition 8

The preconditioned multilinear system (6) has the same unique positive solution as the multilinear system (1).

Proof. Since $\mathcal{A}_{\alpha\beta}(s,k)$ is a strong $\mathcal{M}$-tensor for any $0\le\alpha_i\le1$ and $0\le\beta_i\le1$ by Proposition 7, the claim follows immediately from Lemma 4.

3 The preconditioned Jacobi, Gauss–Seidel and SOR type iteration schemes

3.1 The preconditioned Jacobi type iterative scheme with the preconditioner $P_{\alpha\beta}(s,k)$

Let $\hat{\mathcal{A}}=\mathcal{A}_{\alpha\beta}(s,k)$. We consider the following five Jacobi type splittings:

Remark 9

When $s=1$ and all $\beta_i=0$, the preconditioner $P_{\alpha\beta}(1,k)$ reduces to $P_\alpha=I+S_\alpha$, and the corresponding splitting is the same as the splitting in A10.

Remark 10

When , we denote by and by . Thus we have the following Jacobi type splitting:
Denote by and by when all and .
Let and , then

Proposition 11

Let $\mathcal{A}$ be a strong $\mathcal{M}$-tensor. Then for any $0\le\alpha_i\le1$ and $0\le\beta_i\le1$, the splittings $\hat{\mathcal{A}}=\mathcal{E}_i-\mathcal{F}_i$, $i=1,\ldots,4$, are convergent. Moreover, if

$\begin{cases}0<\alpha_1a_{12\ldots2}\,a_{21\ldots1}<1,\\ 0<\alpha_ia_{i(i+1)\ldots(i+1)}\,a_{(i+1)i\ldots i}+\beta_ia_{i(i-1)\ldots(i-1)}\,a_{(i-1)i\ldots i}<1, & i=2,\ldots,n-1,\\ 0<\beta_na_{n(n-1)\ldots(n-1)}\,a_{(n-1)n\ldots n}<1,\end{cases}$ (7)

then the tensor splitting $\hat{\mathcal{A}}=\mathcal{E}_5-\mathcal{F}_5$ is convergent.

Proof. Suppose $\hat{\mathcal{A}}=\mathcal{E}_1-\mathcal{F}_1$. Since $\mathcal{A}$ is a strong $\mathcal{M}$-tensor, so is $\hat{\mathcal{A}}$ by Proposition 7. Thus $\rho(M(\mathcal{E}_1)^{-1}\mathcal{F}_1)<1$, and hence $\hat{\mathcal{A}}=\mathcal{E}_1-\mathcal{F}_1$ is a convergent splitting.
Let $\hat{\mathcal{A}}=\mathcal{E}_2-\mathcal{F}_2$. We have $M(\mathcal{E}_2)^{-1}\ge O$ and, since $\mathcal{F}_2\ge\mathcal{O}$, it is easy to see that $\hat{\mathcal{A}}=\mathcal{E}_2-\mathcal{F}_2$ is a regular splitting. By Proposition 7, $\hat{\mathcal{A}}$ is a strong $\mathcal{M}$-tensor, and using Lemma 3, $\hat{\mathcal{A}}=\mathcal{E}_2-\mathcal{F}_2$ is a convergent regular splitting.
When $\hat{\mathcal{A}}=\mathcal{E}_3-\mathcal{F}_3$ and $\hat{\mathcal{A}}=\mathcal{E}_4-\mathcal{F}_4$, the proof is similar to the proof of the case $\hat{\mathcal{A}}=\mathcal{E}_2-\mathcal{F}_2$.
Suppose that $\hat{\mathcal{A}}=\mathcal{E}_5-\mathcal{F}_5$ and Eq. (7) holds. Then $M(\mathcal{E}_5)^{-1}$ exists, and

$\left(M(\mathcal{E}_5)^{-1}\right)_{ii}=\begin{cases}\dfrac{1}{1-\alpha_1a_{12\ldots2}\,a_{21\ldots1}}, & i=1,\\[1mm] \dfrac{1}{1-\alpha_ia_{i(i+1)\ldots(i+1)}\,a_{(i+1)i\ldots i}-\beta_ia_{i(i-1)\ldots(i-1)}\,a_{(i-1)i\ldots i}}, & i=2,\ldots,n-1,\\[1mm] \dfrac{1}{1-\beta_na_{n(n-1)\ldots(n-1)}\,a_{(n-1)n\ldots n}}, & i=n,\end{cases}$ (8)

which implies that $M(\mathcal{E}_5)^{-1}\ge O$. It is not difficult to see that $\mathcal{F}_5\ge\mathcal{O}$. Using Proposition 7, $\hat{\mathcal{A}}$ is a strong $\mathcal{M}$-tensor, and from Lemma 3, $\hat{\mathcal{A}}=\mathcal{E}_5-\mathcal{F}_5$ is a convergent regular splitting.

Proposition 12

Let $\mathcal{A}$ be a strong $\mathcal{M}$-tensor and suppose Eq. (7) holds. There exist $\alpha_i$, $\beta_i$ such that

1. .

2. .

Proof.
1. Since $\mathcal{A}$ is a strong $\mathcal{M}$-tensor, the nonnegative Jacobi iteration tensor $\mathcal{T}$ satisfies $\rho(\mathcal{T})<1$; by the Perron–Frobenius theorem, there exists a nonnegative vector x such that $\mathcal{T}x^{m-1}=\rho(\mathcal{T})x^{[m-1]}$. Thus we have

Thus

due to and .
2. By Proposition 11, we know that the preconditioned splitting is convergent, i.e., its nonnegative Jacobi iteration tensor $\hat{\mathcal{T}}$ satisfies $\rho(\hat{\mathcal{T}})<1$; by the Perron–Frobenius theorem, there exists a nonnegative vector x such that $\hat{\mathcal{T}}x^{m-1}=\rho(\hat{\mathcal{T}})x^{[m-1]}$. Thus we have

3. Since and , thus .
Let be a Perron eigenpair of , then by part 2, we have and by Lemma 6, we have . Now suppose that x is a nonnegative Perron vector of , then by part 1, we have