1 Introduction
In recent decades, tensors (hypermatrices) have been applied in many research and application areas, such as data analysis, psychometrics, chemometrics, image processing, graph theory, Markov chains, and hypergraphs R30. Tensor equations (or multilinear systems R5) involving the Einstein product have been discussed in A3; they have many applications in continuum physics, engineering, and isotropic and anisotropic elastic models R21. Wang and Xu presented some iterative methods for solving several kinds of tensor equations in R38, and Huang and Ma, in R15, proposed Krylov subspace methods for solving a class of tensor equations. In R19, Khosravi Dehdezi and Karimi proposed the extended conjugate gradient squared and conjugate residual squared methods for solving generalized coupled Sylvester tensor equations, in which some coefficient matrices and tensors are known, the solution tensors are unknown, and the products involved are mode products. They also proposed a fast and efficient Newton–Shultz-type iterative method for computing the inverse and Moore–Penrose inverse of tensors in R78.
In recent years, solving the following multilinear system has become a hot topic because of its applications in areas such as data analysis, engineering, and scientific computing A1; A2; A3:
$\mathcal{A}x^{m-1}=b$, (1)
where $\mathcal{A}$ is an order-$m$ dimension-$n$ tensor, and x and b are vectors in $\mathbb{R}^{n}$. The $n$-dimensional vector $\mathcal{A}x^{m-1}$ is defined as A4:
$(\mathcal{A}x^{m-1})_{i}=\sum_{i_{2},\dots,i_{m}=1}^{n}a_{ii_{2}\cdots i_{m}}x_{i_{2}}\cdots x_{i_{m}},\quad i=1,2,\dots,n$, (2)
where $x_{i}$ denotes the $i$th component of x.
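For concreteness, the product $\mathcal{A}x^{m-1}$ in (2) can be computed by contracting the tensor with x over its last $m-1$ modes; a minimal NumPy sketch (the function name is ours, for illustration only):

```python
import numpy as np

def tensor_vector_power(A, x):
    """Compute A x^{m-1}: contract an order-m tensor A with x over modes 2..m."""
    y = A
    for _ in range(A.ndim - 1):
        y = y @ x  # each @ contracts the current last mode with x
    return y

# order-3 example: (A x^2)_i = sum_{j,k} a_{ijk} x_j x_k
A = np.arange(8.0).reshape(2, 2, 2)
x = np.array([1.0, 2.0])
print(tensor_vector_power(A, x))  # -> [18. 54.]
```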
Many theoretical analyses and algorithms for solving (1) have also been studied. Qi in A4 considered an order-$m$ dimension-$n$ supersymmetric tensor and showed that when $m$ is even it has exactly $n(m-1)^{n-1}$ eigenvalues, and that the number of its E-eigenvalues is strictly smaller. Ding and Wei in A3 proved that a nonsingular M-equation with a positive right-hand side always has a unique positive solution; they also applied M-equations to some nonlinear differential equations and to the inverse iteration for spectral radii of nonnegative tensors. In A6, Han proposed a homotopy method for finding the unique positive solution to a multilinear system with a nonsingular M-tensor and a positive right-hand side vector. Li et al., in A10, extended the Jacobi, Gauss–Seidel, and successive over-relaxation (SOR) iterative methods to solve the tensor equation $\mathcal{A}x^{m-1}=b$, where $\mathcal{A}$ is an order-$m$ dimension-$n$ symmetric tensor; under appropriate conditions, they showed that the proposed methods are globally convergent and locally R-linearly convergent. In A7, He et al. proved that solving multilinear systems with M-tensors is equivalent to solving nonlinear systems of equations in which the functions involved are P-functions. Based on this result, they proposed a Newton-type method to solve multilinear systems with M-tensors. For a multilinear system with a nonsingular M-tensor and a positive right-hand side vector, they showed that the sequence generated by the method converges to the unique solution of the multilinear system and that the convergence rate is quadratic. For solving multilinear systems, Liang et al. in A11 transformed the tensor equation equivalently into a consensus constrained optimization problem and then proposed an ADMM-type method for it; they also showed that each limit point of the sequences generated by the method satisfies the Karush–Kuhn–Tucker conditions.
Liu et al., in A8, introduced variant tensor splittings and presented some equivalent conditions for a strong M-tensor based on the tensor splitting. The conditions for existence and uniqueness of the solution of multilinear systems were also given.
Besides, they proposed some tensor splitting algorithms for solving multilinear systems whose coefficient tensor is a strong M-tensor. As an application, a tensor splitting algorithm for solving the multilinear model of higher-order Markov chains was proposed. Li et al., in A9, first derived a necessary and sufficient condition for an M-equation to have nonnegative solutions, and then developed a monotone iterative method to find a nonnegative solution of an M-equation. Under appropriate conditions, they showed that the sequence of iterates generated by the method converges to a nonnegative solution of the M-equation monotonically and linearly. Bai et al. in A4.5 proposed an algorithm that always preserves the nonnegativity of solutions of the multilinear system under consideration, which involves a nonsingular M-tensor and a nonnegative right-hand side vector. They also proved that the sequence generated by the proposed algorithm is nonnegative, componentwise nonincreasing, and convergent to a nonnegative solution of the multilinear system. Cui et al. in A5 solved the multilinear system by preconditioned iterative methods based on tensor splitting, and proposed a preconditioner for this purpose. Lv and Ma in A12 proposed a Levenberg–Marquardt (LM) method for solving tensor equations with a semisymmetric coefficient tensor and proved its global convergence and local quadratic convergence under the local error bound condition, which is weaker than nonsingularity; as an application, they computed H-eigenvalues of real semisymmetric tensors by the LM method. Wang et al., in A13, proposed a continuous-time neural network and modified continuous-time neural networks for solving multilinear systems with M-tensors. They proved that the presented neural networks are stable in the sense of Lyapunov stability theory. For solving the multilinear system $\mathcal{A}x^{m-1}=b$, where $\mathcal{A}$ is a symmetric tensor, Xie et al. in A14 proposed some tensor methods based on the rank-1 approximation of the coefficient tensor. Li et al. in A15 considered tensor equations of order 3, whose solutions are the intersection of a group of quadrics, from a geometric point of view; inspired by the method of alternating projections for set intersection problems, they developed a hybrid alternating projection algorithm for solving these tensor equations, and the local linear convergence of the alternating projection method was established under suitable conditions. Liu et al. in A16 presented a preconditioned SOR method for solving multilinear systems whose coefficient tensor is an M-tensor, and the corresponding comparison of spectral radii of the iterative tensors was given. It is known that preconditioning techniques play an important role in solving multilinear systems; however, when the coefficient tensor is an M-tensor, there has been little research on these techniques so far. Motivated by this, we establish some effective preconditioners and give a theoretical analysis.

The rest of this paper is organized as follows. Section 2 is a preliminary section in which we introduce some related definitions and lemmas. In Section 3, new fast and flexible preconditioners are proposed, and the corresponding theoretical analysis is given. In Section 4, numerical examples are given to show the efficiency of the proposed preconditioned iterative methods. Section 5 contains concluding remarks, and the final section discusses future research.
2 Preliminaries
In this section, we introduce some definitions, notations, and related properties which will be used in the following.
Let 0, $O$, and $\mathcal{O}$ denote the null vector, the null matrix, and the null tensor, respectively. Let $\mathcal{A}$ and $\mathcal{B}$ be tensors (vectors or matrices) of the same size. The order $\mathcal{A}\ge\mathcal{B}$ ($\mathcal{A}>\mathcal{B}$) means that each element of $\mathcal{A}$ is no less than (larger than) the corresponding element of $\mathcal{B}$.
A tensor $\mathcal{A}=(a_{i_{1}i_{2}\cdots i_{m}})$ of order $m$ consists of elements in the complex field $\mathbb{C}$:

$a_{i_{1}i_{2}\cdots i_{m}}\in\mathbb{C},\quad 1\le i_{j}\le n_{j},\quad j=1,2,\dots,m.$

When $m=2$, $\mathcal{A}$ is an $n_{1}\times n_{2}$ matrix. If $n_{1}=n_{2}=\cdots=n_{m}=n$, $\mathcal{A}$ is called an order-$m$ dimension-$n$ tensor. By $\mathbb{C}^{n_{1}\times n_{2}\times\cdots\times n_{m}}$ we denote the set of all order-$m$ tensors with $n_{1}\times n_{2}\times\cdots\times n_{m}$ entries, and by $\mathbb{C}^{[m,n]}$ we denote the set of all order-$m$ dimension-$n$ tensors. When $m=1$, $\mathbb{C}^{[1,n]}$ is simplified as $\mathbb{C}^{n}$, the set of all $n$-dimensional complex vectors. Similarly, the above notions carry over to the real number field $\mathbb{R}$.
Let $\mathcal{A}\in\mathbb{C}^{[m,n]}$. If each entry of $\mathcal{A}$ is nonnegative, then $\mathcal{A}$ is called a nonnegative tensor; the set of all order-$m$ dimension-$n$ nonnegative tensors is denoted by $\mathbb{R}^{[m,n]}_{+}$. The order-$m$ dimension-$n$ identity tensor, denoted by $\mathcal{I}_{m}$, is the tensor whose entries equal 1 when $i_{1}=i_{2}=\cdots=i_{m}$ and 0 otherwise.
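A short sketch of the identity tensor (entry 1 exactly when all indices coincide), together with its defining property $\mathcal{I}x^{m-1}=x^{[m-1]}$; the helper name is ours:

```python
import numpy as np

def identity_tensor(m, n):
    """Order-m dimension-n identity tensor: entry 1 iff i_1 = i_2 = ... = i_m."""
    I = np.zeros((n,) * m)
    for i in range(n):
        I[(i,) * m] = 1.0
    return I

# The identity tensor satisfies I x^{m-1} = x^{[m-1]} (componentwise power).
I3 = identity_tensor(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(I3 @ x @ x)  # -> [ 1.  4.  9. 16.], i.e. x^{[2]}
```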
When $m=2$, the identity tensor reduces to the identity matrix of size $n\times n$, denoted by $I$.

Definition 1
A20 $\mathcal{A}\in\mathbb{C}^{[m,n]}$ is called a reducible tensor if there exists a nonempty proper index subset $J\subset\{1,2,\dots,n\}$ such that

$a_{i_{1}i_{2}\cdots i_{m}}=0,\quad\forall i_{1}\in J,\ \forall i_{2},\dots,i_{m}\notin J;$

otherwise, we say that $\mathcal{A}$ is irreducible.
Definition 2
A17 A tensor $\mathcal{A}$ is called a Z-tensor if its off-diagonal entries are nonpositive. $\mathcal{A}$ is an M-tensor if there exist a nonnegative tensor $\mathcal{B}$ and a positive real number $s\ge\rho(\mathcal{B})$ such that $\mathcal{A}=s\mathcal{I}_{m}-\mathcal{B}$. If $s>\rho(\mathcal{B})$, then $\mathcal{A}$ is called a strong M-tensor.
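In practice, the Z-tensor property can be tested entrywise, and a convenient sufficient check for the strong M-tensor property is to exhibit a positive vector x with $\mathcal{A}x^{m-1}>0$ (one of the equivalent conditions recorded in the lemmas below). The sketch below, with hypothetical helper names, simply tries $x=(1,\dots,1)^{T}$ for an order-3 tensor:

```python
import numpy as np

def is_z_tensor(A):
    """Check that all off-diagonal entries of A are nonpositive."""
    off = A.copy()
    for i in range(A.shape[0]):
        off[(i,) * A.ndim] = 0.0  # ignore the diagonal entries a_{ii...i}
    return bool(np.all(off <= 0))

def strong_m_via_ones(A):
    """Sufficient test: A is a Z-tensor and A x^{m-1} > 0 for x = (1,...,1)."""
    x = np.ones(A.shape[0])
    return is_z_tensor(A) and bool(np.all(A @ x @ x > 0))

# s*I - B with B >= 0 and s large enough is a strong M-tensor
A = np.full((3, 3, 3), -0.1)
for i in range(3):
    A[i, i, i] = 3.0
print(strong_m_via_ones(A))  # -> True
```

Failure of this one-vector test does not disprove the strong M-tensor property; it only means a different positive x would be needed.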
Definition 3
A8 Let $P\in\mathbb{C}^{n\times n}$ (an $n\times n$ matrix) and $\mathcal{A}\in\mathbb{C}^{[m,n]}$. Then the product $\mathcal{C}=P\mathcal{A}\in\mathbb{C}^{[m,n]}$ is defined by

$c_{ii_{2}\cdots i_{m}}=\sum_{j=1}^{n}p_{ij}a_{ji_{2}\cdots i_{m}}$, (3)

which can be written as

$C_{(1)}=PA_{(1)},$

where $C_{(1)}$ and $A_{(1)}$ are the matrices obtained from $\mathcal{C}$ and $\mathcal{A}$ flattened along the first index, respectively.
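This matrix–tensor product amounts to an ordinary matrix product with the mode-1 unfolding; a brief NumPy check (variable names are ours):

```python
import numpy as np

# (P A)_{i i2...im} = sum_j p_{ij} a_{j i2...im}, i.e. C_(1) = P A_(1)
A = np.arange(8.0).reshape(2, 2, 2)
P = np.array([[2.0, 0.0], [1.0, 1.0]])

C = np.einsum('ij,j...->i...', P, A)   # product along the first mode
C_flat = P @ A.reshape(2, -1)          # the same product via the unfolding A_(1)
print(np.allclose(C, C_flat.reshape(A.shape)))  # -> True
```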
Definition 4
A18 Let $\mathcal{A}\in\mathbb{C}^{[m,n]}$. The majorization matrix of $\mathcal{A}$, denoted by $M(\mathcal{A})$, is defined as the $n\times n$ matrix with entries

$M(\mathcal{A})_{ij}=a_{ij\cdots j},\quad i,j=1,2,\dots,n.$

If $M(\mathcal{A})$ is a nonsingular matrix and $\mathcal{A}=M(\mathcal{A})\mathcal{I}_{m}$, then $M(\mathcal{A})^{-1}$ is the order-2 left-inverse of $\mathcal{A}$, i.e., $M(\mathcal{A})^{-1}\mathcal{A}=\mathcal{I}_{m}$, and we then call $\mathcal{A}$ a left-invertible tensor or left-nonsingular tensor.
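Extracting $M(\mathcal{A})$ is straightforward: its $(i,j)$ entry is the tensor entry whose first index is $i$ and whose remaining $m-1$ indices all equal $j$. A sketch (the function name is ours):

```python
import numpy as np

def majorization_matrix(A):
    """M(A)_{ij} = a_{i j j ... j} for an order-m dimension-n tensor A."""
    n, m = A.shape[0], A.ndim
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = A[(i,) + (j,) * (m - 1)]
    return M

A = np.arange(27.0).reshape(3, 3, 3)   # a_{ijk} = 9i + 3j + k
print(majorization_matrix(A))          # here M[i, j] = 9i + 4j
```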
Definition 5
A4 Let $\mathcal{A}\in\mathbb{C}^{[m,n]}$. A pair $(\lambda,x)\in\mathbb{C}\times(\mathbb{C}^{n}\setminus\{0\})$ is called an eigenvalue–eigenvector pair (or simply an eigenpair) of $\mathcal{A}$ if it satisfies the equation

$\mathcal{A}x^{m-1}=\lambda x^{[m-1]}$, (4)

where $x^{[m-1]}=(x_{1}^{m-1},x_{2}^{m-1},\dots,x_{n}^{m-1})^{T}$. We call $(\lambda,x)$ an H-eigenpair if both $\lambda$ and x are real.
Let $\rho(\mathcal{A})=\max\{|\lambda|:\lambda\in\sigma(\mathcal{A})\}$ be the spectral radius of $\mathcal{A}$, where $\sigma(\mathcal{A})$ is the set of all eigenvalues of $\mathcal{A}$.
Lemma 1

A8 If $\mathcal{A}$ is a strong M-tensor, then $M(\mathcal{A})$ is a nonsingular M-matrix.
Lemma 2

A18 If $M(\mathcal{A})$ is an irreducible matrix, then $\mathcal{A}$ is an irreducible tensor.
Definition 6
A8 Let $\mathcal{A},\mathcal{E},\mathcal{F}\in\mathbb{R}^{[m,n]}$. Then $\mathcal{A}=\mathcal{E}-\mathcal{F}$ is said to be a splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular; a regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\ge0$ and $\mathcal{F}\ge\mathcal{O}$; a weak regular splitting of $\mathcal{A}$ if $\mathcal{E}$ is left-nonsingular with $M(\mathcal{E})^{-1}\ge0$ and $M(\mathcal{E})^{-1}\mathcal{F}\ge\mathcal{O}$; and a convergent splitting if $\rho(M(\mathcal{E})^{-1}\mathcal{F})<1$.
Lemma 3

A22 If $\mathcal{A}$ is a Z-tensor, then the following conditions are equivalent:

(1) $\mathcal{A}$ is a strong M-tensor.

(2) $\mathcal{A}$ has a convergent (weak) regular splitting.

(3) All (weak) regular splittings of $\mathcal{A}$ are convergent.

(4) There exists a vector $x>0$ such that $\mathcal{A}x^{m-1}>0$.
Lemma 4

A3 If $\mathcal{A}$ is a strong M-tensor, then for every positive vector b, the multilinear system $\mathcal{A}x^{m-1}=b$ has a unique positive solution.
Lemma 5

A20 Suppose that $\mathcal{A}=\mathcal{E}_{1}-\mathcal{F}_{1}=\mathcal{E}_{2}-\mathcal{F}_{2}$, where $\mathcal{A}=\mathcal{E}_{1}-\mathcal{F}_{1}$ is a weak regular splitting, $\mathcal{A}=\mathcal{E}_{2}-\mathcal{F}_{2}$ is a regular splitting, and $\mathcal{F}_{1}\le\mathcal{F}_{2}$. Then one of the following statements holds:

(1) $\rho(M(\mathcal{E}_{1})^{-1}\mathcal{F}_{1})\le\rho(M(\mathcal{E}_{2})^{-1}\mathcal{F}_{2})<1$.

(2) $\rho(M(\mathcal{E}_{1})^{-1}\mathcal{F}_{1})\ge\rho(M(\mathcal{E}_{2})^{-1}\mathcal{F}_{2})\ge1$.

If $\mathcal{F}_{1}\le\mathcal{F}_{2}$ with $\mathcal{F}_{1}\ne\mathcal{F}_{2}$, the first inequality in part (1) is strict.
Lemma 6

A22 Let $\mathcal{A}$ be a strong M-tensor, and let $\mathcal{A}=\mathcal{E}_{1}-\mathcal{F}_{1}=\mathcal{E}_{2}-\mathcal{F}_{2}$ be two weak regular splittings with $\mathcal{F}_{1}\le\mathcal{F}_{2}$. If the Perron vector x of $M(\mathcal{E}_{2})^{-1}\mathcal{F}_{2}$ satisfies $\mathcal{A}x^{m-1}\ge0$, then $\rho(M(\mathcal{E}_{1})^{-1}\mathcal{F}_{1})\le\rho(M(\mathcal{E}_{2})^{-1}\mathcal{F}_{2})$.
A general tensor splitting iterative method for solving (1) is

$x_{k+1}=\big(M(\mathcal{E})^{-1}\mathcal{F}x_{k}^{m-1}+M(\mathcal{E})^{-1}b\big)^{[\frac{1}{m-1}]},\quad k=0,1,2,\dots$, (5)

where $\mathcal{A}=\mathcal{E}-\mathcal{F}$ is a splitting of $\mathcal{A}$, and $\mathcal{T}=M(\mathcal{E})^{-1}\mathcal{F}$ is called the iterative tensor of the splitting method (5). Taking $M(\mathcal{E})=D$, $M(\mathcal{E})=D-L$, and $M(\mathcal{E})=\frac{1}{\omega}(D-\omega L)$, Liu et al. in A8 obtained the Jacobi, the Gauss–Seidel, and the SOR iterative methods, respectively, where $D$ and $L$ are the positive diagonal matrix and the strictly lower triangular nonnegative matrix of the corresponding decomposition, respectively.
Without loss of generality, we always assume that $a_{ii\cdots i}=1$ for $i=1,2,\dots,n$. Consider the splitting of $\mathcal{A}$ in which $\mathcal{D}$ is the diagonal part and $-\mathcal{L}$ is the strictly lower triangular part of $\mathcal{A}$.
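To illustrate the splitting iteration (5), the sketch below runs a Jacobi-type scheme for an order-3 system $\mathcal{A}x^{2}=b$, taking the diagonal part of $\mathcal{A}$ as $\mathcal{E}$. This is a minimal sketch assuming a strong M-tensor coefficient and a positive right-hand side, not the paper's preconditioned method:

```python
import numpy as np

def jacobi_tensor_solve(A, b, iters=200):
    """x_{k+1} = (M(E)^{-1} F x_k^2 + M(E)^{-1} b)^{[1/2]}, E = diagonal part of A."""
    n = A.shape[0]
    d = np.array([A[i, i, i] for i in range(n)])  # diagonal entries a_{iii}
    F = -A.copy()
    for i in range(n):
        F[i, i, i] = 0.0                          # splitting A = D - F
    x = np.ones(n)
    for _ in range(iters):
        x = ((F @ x @ x + b) / d) ** 0.5          # componentwise square root
    return x

# strong M-tensor (dominant positive diagonal, nonpositive off-diagonal), b > 0
n = 3
A = np.full((n, n, n), -0.1)
for i in range(n):
    A[i, i, i] = 5.0
b = np.ones(n)
x = jacobi_tensor_solve(A, b)
print(np.allclose(A @ x @ x, b))  # residual check -> True
```

The componentwise $(m-1)$-th root in each step is what distinguishes this from the matrix Jacobi iteration; it keeps the iterates positive when the data are as assumed.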
Using iterative methods for solving (1) may lead to poor convergence or even failure to converge. To overcome this problem, it is efficient to combine these methods with preconditioning techniques. The resulting iterative methods involve matrices that transform the iterative tensor into a more favorable one; these transformation matrices are called preconditioners. Li et al. in A10 considered, for the corresponding preconditioned multilinear system, a preconditioner first proposed for matrix systems, and extended the results to the tensor case. In A16, Liu et al. considered a new preconditioned SOR method for solving multilinear systems with another preconditioner.
Here we consider the preconditioner $P=I+S_{k_{1}}+S_{k_{2}}$, where $D$ is the diagonal part of the majorization matrix $M(\mathcal{A})$ (so herein $D=I$) and $S_{k_{1}}$, $S_{k_{2}}$ are square matrices all of whose elements are zero except those on the $k_{1}$th upper and the $k_{2}$th lower diagonals, respectively.
Applying $P$ on the left side of Eq. (1), we get a new preconditioned multilinear system

$\tilde{\mathcal{A}}x^{m-1}=\tilde{b}$, (6)

with $\tilde{\mathcal{A}}=P\mathcal{A}$ and $\tilde{b}=Pb$.
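Forming the preconditioned system (6) only requires the mode-1 product of Definition 3. In the sketch below we use, purely for illustration, a preconditioner $P=I+S$ with a single constant upper diagonal in $S$ (an assumed toy choice, not the paper's specific construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = -0.05 * rng.random((n, n, n))     # nonpositive off-diagonal entries
for i in range(n):
    A[i, i, i] = 1.0                  # unit diagonal, as assumed in the text
b = np.ones(n)

S = np.diag(np.full(n - 1, 0.05), k=1)    # illustrative first upper diagonal
P = np.eye(n) + S
A_tilde = np.einsum('ij,jkl->ikl', P, A)  # A~ = P A along the first mode
b_tilde = P @ b

# Since P acts on the left, any x with A x^2 = b also satisfies A~ x^2 = b~:
x = rng.random(n)
print(np.allclose(P @ (A @ x @ x), A_tilde @ x @ x))  # -> True
```

Because $P$ is nonsingular, (6) has exactly the same solutions as (1); the point of the transformation is to shrink the spectral radius of the iterative tensor.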
Proposition 7
Let $\mathcal{A}\in\mathbb{R}^{[m,n]}$ be a Z-tensor. If $\mathcal{A}$ is a strong M-tensor, then for any admissible choice of the parameters in the preconditioner $P$, the preconditioned tensor $\tilde{\mathcal{A}}=P\mathcal{A}$ is a strong M-tensor.
Proof. A direct computation shows that the off-diagonal entries of $\tilde{\mathcal{A}}$ are nonpositive, i.e., $\tilde{\mathcal{A}}$ is a Z-tensor. According to Lemma 3, there exists a vector $x>0$ such that $\mathcal{A}x^{m-1}>0$. Since $P$ is nonnegative with positive diagonal entries, it follows that $\tilde{\mathcal{A}}x^{m-1}=P(\mathcal{A}x^{m-1})>0$. Thus there exists a vector $x>0$ such that $\tilde{\mathcal{A}}x^{m-1}>0$. Therefore, $\tilde{\mathcal{A}}$ is a strong M-tensor.
Proposition 8
3 The preconditioned Jacobi, Gauss–Seidel and SOR type iteration schemes
3.1 The preconditioned Jacobi type iterative scheme with the preconditioner
Let $\tilde{\mathcal{A}}=P\mathcal{A}$. We consider the following five Jacobi-type splittings:
Remark 9
The splitting above, with the corresponding choice of parameters, is the same as the splitting in A10.
Remark 10
When the parameters coincide, we simplify the notation accordingly, and we have the following Jacobi-type splitting:

The same simplified notation is used when all the parameters coincide. With these notations, we then have
Proposition 11
Let $\tilde{\mathcal{A}}$ be a strong M-tensor. Then the first four Jacobi-type splittings are convergent. Moreover, if

(7)

holds, then the fifth tensor splitting is also convergent.
Proof.
For the first splitting: since $\tilde{\mathcal{A}}$ is a strong M-tensor, by Lemma 3 the spectral radius of the corresponding iterative tensor is less than one. Hence this is a convergent splitting.
For the second splitting, we have $\mathcal{F}\ge\mathcal{O}$, and it is easy to see that $M(\mathcal{E})^{-1}\ge0$; thus it is a regular splitting. By Proposition 7, $\tilde{\mathcal{A}}$ is a strong M-tensor, and using Lemma 3, this is a convergent regular splitting.
For the third and fourth splittings, the proof is similar to that of the previous case.
Suppose now that Eq. (7) holds. Then the inverse involved in the fifth splitting exists, and

(8)

which implies that $M(\mathcal{E})^{-1}\ge0$. It is not difficult to see that $\mathcal{F}\ge\mathcal{O}$. Using Proposition 7, $\tilde{\mathcal{A}}$ is a strong M-tensor and, from Lemma 3, this is a convergent regular splitting.
Proposition 12
Let $\tilde{\mathcal{A}}$ be a strong M-tensor and suppose that Eq. (7) holds. Then there exist parameters such that:

(1)

(2)

(3)
Proof.
1. Since $\mathcal{A}$ is a strong M-tensor, its Jacobi splitting is convergent. Thus, for the nonnegative Jacobi iteration tensor, there exists a nonnegative vector satisfying the corresponding eigenvalue equation, by the Perron–Frobenius theorem. Thus we have
Thus the desired inequality follows, due to the nonnegativity of the quantities involved.
2. By Proposition 11, we know that the splitting is convergent, i.e., the spectral radius of its iterative tensor is less than one. Thus, for the nonnegative Jacobi iteration tensor, there exists a nonnegative vector satisfying the corresponding eigenvalue equation, by the Perron–Frobenius theorem. Thus we have
3. Combining the inequalities of parts 1 and 2, the assertion follows.
Let $(\lambda,x)$ be a Perron eigenpair of the iteration tensor; then, by part 2, the corresponding inequality holds, and by Lemma 6 we obtain the comparison of the spectral radii. Now suppose that x is a nonnegative Perron vector; then, by part 1, we have