In this paper, we are interested in approximating the solution of the following tensor equation
\[
\mathcal{A} *_N \mathcal{X} = \mathcal{B}, \tag{1}
\]
where $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times I_1\times\cdots\times I_N}$ and $\mathcal{B} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ are known and $\mathcal{X} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ is an unknown tensor to be determined. We can also consider the least-squares problem
\[
\min_{\mathcal{X}} \|\mathcal{A} *_N \mathcal{X} - \mathcal{B}\|_F.
\]
In the current paper, we are interested in developing robust and fast iterative Krylov subspace methods via the Einstein product to solve regularized problems originating from color image and video processing applications. Standard and global Krylov subspace methods are suitable when dealing with grayscale images, e.g., [1, 2, 7, 8], while Krylov subspace methods can handle similar applications when the blurring linear operator can be decomposed into a Kronecker product of two matrices; see [1, 2]. However, much work remains to be done to numerically solve problems related to multichannel images (e.g., color images, hyperspectral images and videos). We show that modeling these problems in the form of the tensor equation (1) makes it possible to develop more appealing iterative Krylov subspace methods and allows us to significantly reduce the overall computational complexity.
The remainder of the paper is organized as follows: We begin in Section 2 by presenting some symbols and notations used throughout the paper. Section 3 reviews the adaptation of Tikhonov regularization to the tensor equation (1). We then propose GMRES and global Golub–Kahan methods via the Einstein product in conjunction with Tikhonov regularization. On the basis of the Point Spread Function (PSF), in Section 4 we propose a tensor formulation of the form (1) that describes the blurring arising in color image and video processing. Numerical examples on restoring blurred and noisy color images and videos are reported. Concluding remarks can be found in Section 5.
2 Definitions and Notations
In this section, we briefly review some concepts and notions that are used throughout the paper. A tensor
$\mathcal{X} \in \mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$
is a multidimensional array of data and a natural extension of scalars, vectors and matrices to higher order: a scalar is a zeroth-order tensor, a vector is a first-order tensor and a matrix is a second-order tensor. The order of a tensor is the number of its indices, which are called modes or ways. For a given $N$-mode tensor $\mathcal{X} \in \mathbb{R}^{I_1\times I_2\times\cdots\times I_N}$, the notation $x_{i_1\ldots i_N}$ (with $1 \le i_j \le I_j$) stands for the element $(i_1,\ldots,i_N)$ of the tensor $\mathcal{X}$. Corresponding to a given tensor $\mathcal{X}$, the notation
\[
\mathcal{X}_{:,\ldots,:,k}, \qquad k = 1, 2, \ldots, I_N,
\]
denotes the tensor obtained by fixing the last index to $k$, and is called the $k$-th frontal slice; see [20, 21] for more details. Throughout this work, vectors and matrices are denoted by lowercase and capital letters, respectively, and tensors of higher order are represented by calligraphic letters.
We first recall the definition of the $n$-mode tensor product with a matrix.
The $n$-mode product of the tensor $\mathcal{X} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ and the matrix $U \in \mathbb{R}^{J\times I_n}$ is denoted by $\mathcal{X} \times_n U$; it is a tensor of size $I_1\times\cdots\times I_{n-1}\times J\times I_{n+1}\times\cdots\times I_N$ and its entries are defined by
\[
(\mathcal{X} \times_n U)_{i_1\ldots i_{n-1}\,j\,i_{n+1}\ldots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2\ldots i_N}\, u_{j i_n}.
\]
The $n$-mode product of the tensor $\mathcal{X} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ with the vector $u \in \mathbb{R}^{I_n}$ is an $(N-1)$-mode tensor denoted by $\mathcal{X} \bar{\times}_n u$ whose elements are given by
\[
(\mathcal{X} \bar{\times}_n u)_{i_1\ldots i_{n-1}\, i_{n+1}\ldots i_N} = \sum_{i_n=1}^{I_n} x_{i_1 i_2\ldots i_N}\, u_{i_n}.
\]
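To make the definition concrete, the following is a minimal NumPy sketch of the $n$-mode product; the function name `mode_n_product` is our own illustration and does not appear in the paper.

```python
import numpy as np

def mode_n_product(X, U, n):
    """n-mode product X x_n U (modes are 1-indexed to match the text).

    X has shape (I_1, ..., I_N) and U has shape (J, I_n); the result
    has shape (I_1, ..., I_{n-1}, J, I_{n+1}, ..., I_N).
    """
    # Contract mode n of X with the second mode of U; tensordot places
    # the new mode last, so move it back into position n.
    Y = np.tensordot(X, U, axes=([n - 1], [1]))
    return np.moveaxis(Y, -1, n - 1)

X = np.random.rand(3, 4, 5)
U = np.random.rand(6, 4)
Y = mode_n_product(X, U, 2)   # shape (3, 6, 5)
```

The same contraction can be written as a single `einsum` call; `tensordot` is used here to emphasize that only mode $n$ is contracted.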
Next, we recall the definition and some properties of the tensor Einstein product, which is an extension of the matrix product.
Let $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times K_1\times\cdots\times K_N}$ and $\mathcal{B} \in \mathbb{R}^{K_1\times\cdots\times K_N\times J_1\times\cdots\times J_M}$. The Einstein product of the tensors $\mathcal{A}$ and $\mathcal{B}$ is a tensor of size $I_1\times\cdots\times I_N\times J_1\times\cdots\times J_M$ whose elements are defined by
\[
(\mathcal{A} *_N \mathcal{B})_{i_1\ldots i_N j_1\ldots j_M} = \sum_{k_1=1}^{K_1}\cdots\sum_{k_N=1}^{K_N} a_{i_1\ldots i_N k_1\ldots k_N}\, b_{k_1\ldots k_N j_1\ldots j_M}.
\]
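In code, the Einstein product is exactly a contraction of the trailing $N$ modes of $\mathcal{A}$ with the leading $N$ modes of $\mathcal{B}$, which NumPy's `tensordot` performs directly; this is a sketch with names of our choosing.

```python
import numpy as np

def einstein_product(A, B, N):
    """Einstein product A *_N B: contract the trailing N modes of A
    with the leading N modes of B."""
    return np.tensordot(A, B, axes=N)

# A in R^{2x3x4x5} and B in R^{4x5x6} with N = 2: C lies in R^{2x3x6}.
A = np.random.rand(2, 3, 4, 5)
B = np.random.rand(4, 5, 6)
C = einstein_product(A, B, 2)   # shape (2, 3, 6)
```

With $N = 1$ and matrix arguments, this reduces to the ordinary matrix product, which is the sense in which the Einstein product extends it.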
Given a tensor $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times J_1\times\cdots\times J_M}$, the tensor $\mathcal{B} \in \mathbb{R}^{J_1\times\cdots\times J_M\times I_1\times\cdots\times I_N}$ is the transpose of $\mathcal{A}$ if $b_{j_1\ldots j_M i_1\ldots i_N} = a_{i_1\ldots i_N j_1\ldots j_M}$. We denote the transpose of $\mathcal{A}$ by $\mathcal{A}^T$.
A tensor $\mathcal{D} \in \mathbb{R}^{I_1\times\cdots\times I_N\times I_1\times\cdots\times I_N}$ is said to be diagonal if all of its entries are equal to zero except for the entries $d_{i_1\ldots i_N i_1\ldots i_N}$. In the case where all of these diagonal entries are equal to 1, the tensor is called the identity tensor and denoted by $\mathcal{I}$. We further use the notation $\mathcal{O}$ for the tensor having all its entries equal to zero.
Let $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times I_1\times\cdots\times I_N}$. The tensor $\mathcal{A}$ is invertible if there exists a tensor $\mathcal{B}$ of the same size such that
\[
\mathcal{A} *_N \mathcal{B} = \mathcal{B} *_N \mathcal{A} = \mathcal{I}.
\]
The trace of an even-order tensor $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times I_1\times\cdots\times I_N}$ is given by
\[
\operatorname{tr}(\mathcal{A}) = \sum_{i_1\ldots i_N} a_{i_1\ldots i_N i_1\ldots i_N}.
\]
The inner product of two same-size tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ is defined by
\[
\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1=1}^{I_1}\cdots\sum_{i_N=1}^{I_N} x_{i_1\ldots i_N}\, y_{i_1\ldots i_N}.
\]
Notice that for even-order tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1\times\cdots\times I_N\times J_1\times\cdots\times J_M}$, we have
\[
\langle \mathcal{X}, \mathcal{Y} \rangle = \operatorname{tr}(\mathcal{X}^T *_N \mathcal{Y}),
\]
where $\mathcal{X}^T$ denotes the transpose of $\mathcal{X}$.
The Frobenius norm of the tensor $\mathcal{X} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ is given by
\[
\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}.
\]
The two tensors $\mathcal{X}$ and $\mathcal{Y}$ are orthogonal iff $\langle \mathcal{X}, \mathcal{Y} \rangle = 0$.
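These definitions translate directly into NumPy. The sketch below (helper names are ours) also checks the identity $\langle \mathcal{X}, \mathcal{Y} \rangle = \operatorname{tr}(\mathcal{X}^T *_N \mathcal{Y})$ in the matrix case, where the Einstein product reduces to the ordinary matrix product.

```python
import numpy as np

def inner(X, Y):
    """Inner product <X, Y>: sum of entrywise products."""
    return float(np.sum(X * Y))

def frobenius_norm(X):
    """Frobenius norm ||X||_F = sqrt(<X, X>)."""
    return float(np.sqrt(inner(X, X)))

X = np.random.rand(3, 4)
Y = np.random.rand(3, 4)
# Matrix case of the identity <X, Y> = tr(X^T * Y).
gap = abs(inner(X, Y) - np.trace(X.T @ Y))
```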
The product $\mathcal{X}^T \diamond\, \mathcal{Y}$ between the $(N+1)$-mode tensors $\mathcal{X} \in \mathbb{R}^{I_1\times\cdots\times I_N\times m}$ and $\mathcal{Y} \in \mathbb{R}^{I_1\times\cdots\times I_N\times \ell}$ is defined as the $m\times\ell$ matrix whose $(i,j)$-th entry is $\langle \mathcal{X}_i, \mathcal{Y}_j \rangle$, where $\mathcal{X}_i$ and $\mathcal{Y}_j$ denote the $i$-th and $j$-th frontal slices of $\mathcal{X}$ and $\mathcal{Y}$, respectively.
Basically, the product $\diamond$ is the contracted product of the $(N+1)$-mode tensors $\mathcal{X}$ and $\mathcal{Y}$ along the first $N$ modes.
It is immediate to see that for $m = \ell = 1$, we have $\mathcal{X}^T \diamond\, \mathcal{Y} = \langle \mathcal{X}, \mathcal{Y} \rangle$.
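The contracted product along the first $N$ modes can be sketched as follows; the function name `diamond` is our own, and both arguments are assumed to share their first $N$ dimensions.

```python
import numpy as np

def diamond(X, Y):
    """Contracted product of (N+1)-mode tensors X and Y along the first
    N modes: returns the m x l matrix whose (i, j) entry is the inner
    product of the i-th frontal slice of X with the j-th slice of Y."""
    axes = list(range(X.ndim - 1))          # all modes except the last
    return np.tensordot(X, Y, axes=(axes, axes))

X = np.random.rand(3, 4, 2)   # m = 2 frontal slices of size 3 x 4
Y = np.random.rand(3, 4, 5)   # l = 5 frontal slices of size 3 x 4
M = diamond(X, Y)             # shape (2, 5)
```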
We end the current subsection by recalling the following useful proposition. Suppose that $\mathcal{A}$ is an $(N+1)$-mode tensor with the column tensors $\mathcal{A}_1, \ldots, \mathcal{A}_m$ and let $z = (z_1, \ldots, z_m)^T \in \mathbb{R}^m$. For an arbitrary $(N+1)$-mode tensor $\mathcal{B}$ with $(N+1)$-mode column tensors $\mathcal{B}_1, \ldots, \mathcal{B}_\ell$, the following statement holds:
\[
\mathcal{B}^T \diamond \Big( \sum_{i=1}^{m} z_i\, \mathcal{A}_i \Big) = \big( \mathcal{B}^T \diamond \mathcal{A} \big)\, z.
\]
3 Krylov subspace methods via Einstein product
In this section, we recall the tensor global Arnoldi process and propose iterative methods based on the global Arnoldi and global Golub–Kahan bidiagonalization (GGKB) processes, combined with Tikhonov regularization, that are applicable to the restoration of color images and videos from available blur- and noise-contaminated versions.
3.1 Tikhonov regularization
Many applications require the solution of several ill-conditioned systems of equations of the form (1) with a right-hand side contaminated by an additive error,
\[
\mathcal{A} *_N \mathcal{X} = \mathcal{B}, \qquad \mathcal{B} = \widehat{\mathcal{B}} + \mathcal{E}, \tag{4}
\]
where $\mathcal{E}$ is the tensor of error terms that may stem from measurement and discretization errors. Ill-posed tensor equations may appear in color image restoration, video restoration, and when solving some partial differential equations in several space dimensions. In order to diminish the effect of the noise in the data, we replace the original problem by a stabilized one. One of the most popular regularization methods is due to Tikhonov. The Tikhonov regularization problem associated with (4) is given by
\[
\min_{\mathcal{X}} \left\{ \|\mathcal{A} *_N \mathcal{X} - \mathcal{B}\|_F^2 + \mu \|\mathcal{X}\|_F^2 \right\}, \tag{5}
\]
where $\mu > 0$ is a regularization parameter.
The choice of $\mu$ affects how sensitive the solution of (5) is to the error in the contaminated right-hand side. Many techniques for choosing a suitable value of $\mu$ have been analyzed and illustrated in the literature. In this paper we use the discrepancy principle and the Generalized Cross Validation (GCV) techniques.
3.2 Global GMRES method via Einstein product
Let $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times I_1\times\cdots\times I_N}$ be a square tensor and $\mathcal{V} \in \mathbb{R}^{I_1\times\cdots\times I_N}$. The $m$-th tensor Krylov subspace is defined by
\[
\mathcal{K}_m(\mathcal{A}, \mathcal{V}) = \operatorname{span}\{\mathcal{V},\, \mathcal{A} *_N \mathcal{V},\, \ldots,\, \mathcal{A}^{m-1} *_N \mathcal{V}\},
\]
where $\mathcal{A}^{i} *_N \mathcal{V} = \mathcal{A} *_N (\mathcal{A}^{i-1} *_N \mathcal{V})$.
Let $\bar{H}_m$ be the $(m+1)\times m$ upper Hessenberg matrix whose entries are the scalars $h_{ij}$ from Algorithm 1 and let $H_m$ be the matrix obtained from $\bar{H}_m$ by deleting the last row. Then, it is not difficult to verify that the $\mathcal{V}_i$'s obtained from Algorithm 1 form an orthonormal basis of the tensor Krylov subspace $\mathcal{K}_m(\mathcal{A}, \mathcal{V})$. Analogous to [4, 18], we can prove the following proposition.
Let $\mathbb{V}_m$ be the $(N+1)$-mode tensor with frontal slices $\mathcal{V}_1, \ldots, \mathcal{V}_m$ and let $\mathbb{V}_{m+1}$ be the $(N+1)$-mode tensor with frontal slices $\mathcal{V}_1, \ldots, \mathcal{V}_{m+1}$. Then
\[
\mathcal{A} *_N \mathbb{V}_m = \mathbb{V}_{m+1} \times_{N+1} \bar{H}_m^T = \mathbb{V}_m \times_{N+1} H_m^T + \mathcal{F}_{m+1},
\]
where $\mathcal{F}_{m+1}$ is the $(N+1)$-mode tensor whose frontal slices are all zero except the last one, which equals $h_{m+1,m}\, \mathcal{V}_{m+1}$.
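As a concrete illustration, here is a minimal sketch of the tensor global Arnoldi process, our reading of Algorithm 1 with $\mathcal{A}$ acting through the Einstein product; the orthonormality of the basis and the Arnoldi-type recurrence can be checked numerically on a small random example. All names are ours.

```python
import numpy as np

def global_arnoldi(A, V1, m, N):
    """Tensor global Arnoldi sketch: builds Frobenius-orthonormal tensors
    V_1, ..., V_{m+1} and the (m+1) x m upper Hessenberg matrix H such
    that A *_N V_j = sum_i H[i, j] V_i."""
    Vs = [V1 / np.linalg.norm(V1)]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = np.tensordot(A, Vs[j], axes=N)       # A *_N V_j
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i, j] = np.sum(W * Vs[i])          # Frobenius inner product
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.linalg.norm(W)
        if H[j + 1, j] < 1e-14:                  # breakdown
            break
        Vs.append(W / H[j + 1, j])
    return Vs, H

np.random.seed(0)
A = np.random.rand(2, 3, 2, 3)    # square tensor, N = 2
R0 = np.random.rand(2, 3)
Vs, H = global_arnoldi(A, R0, 3, 2)
```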
Let $\mathcal{X}_0$ be an initial guess and let $\mathcal{R}_0 = \mathcal{B} - \mathcal{A} *_N \mathcal{X}_0$ be its residual. Consider now the linear system of tensor equations
\[
\mathcal{A} *_N \mathcal{X} = \mathcal{B}. \tag{8}
\]
Using Algorithm 1, we can propose the global GMRES method to solve the problem (8). As for the global GMRES method, we seek an approximate solution $\mathcal{X}_m$, starting from $\mathcal{X}_0$, such that $\mathcal{X}_m - \mathcal{X}_0 \in \mathcal{K}_m(\mathcal{A}, \mathcal{R}_0)$, by solving the minimization problem
\[
\min_{\mathcal{X}} \|\mathcal{B} - \mathcal{A} *_N \mathcal{X}\|_F.
\]
Assume that $m$ steps of Algorithm 1 have been performed. Given an initial guess $\mathcal{X}_0$, we set
\[
\mathcal{X}_m = \mathcal{X}_0 + \mathbb{V}_m \times_{N+1} y_m^T, \qquad y_m \in \mathbb{R}^m. \tag{10}
\]
Therefore, $y_m$ is determined as follows:
\[
y_m = \arg\min_{y\in\mathbb{R}^m} \big\| \beta e_1 - \bar{H}_m y \big\|_2, \qquad \beta = \|\mathcal{R}_0\|_F. \tag{11}
\]
The relations (10) and (11) define the tensor global GMRES (TG-GMRES) method. Setting $\beta = \|\mathcal{R}_0\|_F$ and using the relations (9), (10) and (11), it follows that instead of solving the problem (5) we can consider the following low-dimensional Tikhonov regularization problem
\[
\min_{y\in\mathbb{R}^m} \left\{ \|\beta e_1 - \bar{H}_m y\|_2^2 + \mu \|y\|_2^2 \right\}. \tag{12}
\]
The solution $y_{m,\mu}$ of the problem (12) is given by
\[
y_{m,\mu} = \arg\min_{y\in\mathbb{R}^m} \left\| \begin{pmatrix} \bar{H}_m \\ \sqrt{\mu}\, I_m \end{pmatrix} y - \begin{pmatrix} \beta e_1 \\ 0 \end{pmatrix} \right\|_2. \tag{13}
\]
The minimizer of the problem (13) is computed as the solution of the linear system of equations
\[
\big( \bar{H}_m^T \bar{H}_m + \mu I_m \big)\, y_{m,\mu} = \beta\, \bar{H}_m^T e_1. \tag{14}
\]
Notice that the Tikhonov problem (12) is a matrix problem of small dimension, as $m$ is generally small. Hence it can be solved by techniques such as the GCV method or the L-curve criterion [14, 15, 7, 8].
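Since the projected problem is small, it can be solved directly. The following is a sketch of solving the normal-equations form of the projected Tikhonov problem, with variable names of our choosing; it is verified against the equivalent stacked least-squares formulation.

```python
import numpy as np

def projected_tikhonov(H_bar, beta, mu):
    """Solve min_y ||H_bar y - beta*e1||^2 + mu*||y||^2 via the normal
    equations (H_bar^T H_bar + mu I) y = beta * H_bar^T e1."""
    m = H_bar.shape[1]
    rhs = np.zeros(H_bar.shape[0])
    rhs[0] = beta                              # beta * e1
    return np.linalg.solve(H_bar.T @ H_bar + mu * np.eye(m), H_bar.T @ rhs)

np.random.seed(1)
H_bar = np.triu(np.random.rand(5, 4), k=-1)    # a 5 x 4 Hessenberg example
y = projected_tikhonov(H_bar, beta=2.0, mu=1e-2)
```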
An appropriate selection of the regularization parameter $\mu$ is important in Tikhonov regularization. Here we can use the generalized cross-validation (GCV) method [6, 13, 33]. For this method, the regularization parameter is chosen to minimize the GCV function
\[
GCV(\mu) = \frac{\|\bar{H}_m y_{m,\mu} - \beta e_1\|_2^2}{\Big[\operatorname{tr}\big(I_{m+1} - \bar{H}_m (\bar{H}_m^T \bar{H}_m + \mu I_m)^{-1} \bar{H}_m^T\big)\Big]^2},
\]
where $y_{m,\mu}$ is the solution of (14). As the projected problem we are dealing with is of small size, we can use the SVD of $\bar{H}_m$ to obtain a simpler and more easily computable expression of $GCV(\mu)$. Consider the SVD decomposition $\bar{H}_m = U \Sigma V^T$. Then the GCV function can be expressed as
\[
GCV(\mu) = \frac{\displaystyle\sum_{i=1}^{m} \left( \frac{\mu\, \tilde{b}_i}{\sigma_i^2 + \mu} \right)^2 + \tilde{b}_{m+1}^2}{\left( \displaystyle\sum_{i=1}^{m} \frac{\mu}{\sigma_i^2 + \mu} + 1 \right)^2},
\]
where $\sigma_i$ is the $i$-th singular value of the matrix $\bar{H}_m$ and $\tilde{b} = U^T(\beta e_1)$.
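A sketch of evaluating the GCV function through the SVD of the projected matrix follows; the formula is written in one common form, all names are ours, and the test checks it against the direct trace-based definition.

```python
import numpy as np

def gcv(mu, H_bar, b):
    """GCV function for min_y ||H_bar y - b||^2 + mu ||y||^2, evaluated
    via the SVD H_bar = U diag(s) V^T so many mu-values are cheap."""
    U, s, _ = np.linalg.svd(H_bar)            # full SVD: U is (m+1) x (m+1)
    bt = U.T @ b                              # rotate the right-hand side
    filt = mu / (s ** 2 + mu)                 # 1 - s_i^2 / (s_i^2 + mu)
    num = np.sum((filt * bt[: s.size]) ** 2) + np.sum(bt[s.size:] ** 2)
    den = (np.sum(filt) + (b.size - s.size)) ** 2
    return num / den

np.random.seed(2)
H_bar = np.triu(np.random.rand(5, 4), k=-1)
b = np.zeros(5)
b[0] = 1.0
val = gcv(1e-3, H_bar, b)
```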
In a practical implementation, it is more convenient to use a restarted version of the global GMRES method. As the number of outer iterations increases, it is possible to compute the $m$-th residual without forming the solution. This is described in the following theorem. At step $m$, the residual $\mathcal{R}_m = \mathcal{B} - \mathcal{A} *_N \mathcal{X}_m$ produced by the tensor global GMRES method for solving (1) has the following expression
\[
\mathcal{R}_m = \mathbb{V}_{m+1} \times_{N+1} \big( \gamma_{m+1}\, Q_m e_{m+1} \big)^T,
\]
where $Q_m$ is the orthogonal matrix obtained from the QR decomposition of $\bar{H}_m$, $\gamma_{m+1}$ is the last component of the vector $Q_m^T (\beta e_1)$ in which $\beta = \|\mathcal{R}_0\|_F$, and $e_{m+1}$ is the last column of the $(m+1)\times(m+1)$ identity matrix. Furthermore,
\[
\|\mathcal{R}_m\|_F = |\gamma_{m+1}|.
\]
At step $m$, the residual can be expressed as
\[
\mathcal{R}_m = \mathcal{B} - \mathcal{A} *_N \mathcal{X}_m = \mathbb{V}_{m+1} \times_{N+1} \big( \beta e_1 - \bar{H}_m y_m \big)^T.
\]
By considering the QR decomposition $\bar{H}_m = Q_m \bar{R}_m$ of the matrix $\bar{H}_m$, where $\bar{R}_m = \begin{pmatrix} R_m \\ 0 \end{pmatrix}$, we get
\[
\mathcal{R}_m = \mathbb{V}_{m+1} \times_{N+1} \big( \beta e_1 - Q_m \bar{R}_m y_m \big)^T.
\]
Straightforward computations show that
\[
\beta e_1 - Q_m \bar{R}_m y_m = Q_m \begin{pmatrix} g_m - R_m y_m \\ \gamma_{m+1} \end{pmatrix},
\]
where $g_m$ denotes the vector obtained by deleting the last component $\gamma_{m+1}$ of $Q_m^T(\beta e_1)$. Since $y_m$ solves the problem (11), it follows that $y_m$ is the solution of the triangular system $R_m y = g_m$, i.e., $g_m - R_m y_m = 0$. Note that $\mathcal{R}_m$ can therefore be written in the following form
\[
\mathcal{R}_m = \mathbb{V}_{m+1} \times_{N+1} \big( \gamma_{m+1}\, Q_m e_{m+1} \big)^T.
\]
Now the result follows immediately from the above computations.
The tensor form of the global GMRES algorithm for solving (1) is summarized as follows:
3.3 Golub–Kahan method via Einstein product
Instead of finding an orthonormal basis for the Krylov subspace $\mathcal{K}_m(\mathcal{A}, \mathcal{R}_0)$ and using the GMRES method, one can apply oblique projection schemes based on biorthogonal bases for $\mathcal{K}_m(\mathcal{A}, \mathcal{R}_0)$ and $\mathcal{K}_m(\mathcal{A}^T, \mathcal{R}_0)$.
Here, we exploit the tensor Golub–Kahan algorithm via the Einstein product. It should be noted that the Golub–Kahan algorithm has already been examined for solving ill-posed Sylvester and Lyapunov tensor equations with applications to color image restoration.
Let the tensors $\mathcal{A} \in \mathbb{R}^{I_1\times\cdots\times I_N\times I_1\times\cdots\times I_N}$ and $\mathcal{B} \in \mathbb{R}^{I_1\times\cdots\times I_N}$ be given. Then the global Golub–Kahan bidiagonalization (GGKB) algorithm is summarized in Algorithm 3.
Assume that $\ell$ steps of the GGKB process have been performed; we then form the lower bidiagonal matrix
\[
C_\ell = \begin{pmatrix}
\alpha_1 & & & \\
\beta_2 & \alpha_2 & & \\
& \ddots & \ddots & \\
& & \beta_\ell & \alpha_\ell \\
& & & \beta_{\ell+1}
\end{pmatrix} \in \mathbb{R}^{(\ell+1)\times\ell}.
\]
Assume that $\ell$ GGKB steps have been performed and that all non-trivial entries of the matrix $C_\ell$ are positive. Let $\mathbb{U}_\ell$ and $\mathbb{V}_\ell$ be $(N+1)$-mode tensors whose frontal slices are given by $\mathcal{U}_i$ and $\mathcal{V}_i$ for $i = 1, \ldots, \ell$, respectively. Furthermore, suppose that $\mathbb{U}_{\ell+1}$ and $\mathbb{V}_{\ell+1}$ are $(N+1)$-mode tensors having frontal slices $\mathcal{U}_i$ and $\mathcal{V}_i$ for $i = 1, \ldots, \ell+1$, respectively. The following relations hold:
\[
\mathcal{A} *_N \mathbb{V}_\ell = \mathbb{U}_{\ell+1} \times_{N+1} C_\ell^{\,T}, \qquad
\mathcal{A}^T *_N \mathbb{U}_{\ell+1} = \mathbb{V}_{\ell+1} \times_{N+1} L_{\ell+1},
\]
where $L_{\ell+1} \in \mathbb{R}^{(\ell+1)\times(\ell+1)}$ is the lower bidiagonal matrix with diagonal entries $\alpha_1, \ldots, \alpha_{\ell+1}$ and subdiagonal entries $\beta_2, \ldots, \beta_{\ell+1}$.
From Lines 7 and 10 of Algorithm 3, we have
\[
\beta_{k+1}\, \mathcal{U}_{k+1} = \mathcal{A} *_N \mathcal{V}_k - \alpha_k\, \mathcal{U}_k, \qquad
\alpha_{k+1}\, \mathcal{V}_{k+1} = \mathcal{A}^T *_N \mathcal{U}_{k+1} - \beta_{k+1}\, \mathcal{V}_k.
\]
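A minimal sketch of the GGKB recurrences in this tensor setting follows; it reflects our reading of Algorithm 3, and all function and variable names are ours. The test verifies the three-term recurrence and the local orthonormality of the generated tensors.

```python
import numpy as np

def t_transpose(A, N):
    """Transpose of a 2N-mode tensor: swap its first and last N modes."""
    return np.transpose(A, list(range(N, 2 * N)) + list(range(N)))

def ggkb(A, B, ell, N):
    """Global Golub-Kahan bidiagonalization sketch: returns
    Frobenius-orthonormal tensors U_1..U_{ell+1}, V_1..V_ell and the
    entries alpha_k, beta_{k+1} of the lower bidiagonal matrix C_ell."""
    At = t_transpose(A, N)
    Us = [B / np.linalg.norm(B)]              # beta_1 U_1 = B
    Vs, alphas, betas = [], [], []
    V_prev, beta_prev = np.zeros_like(B), 0.0
    for k in range(ell):
        W = np.tensordot(At, Us[k], axes=N) - beta_prev * V_prev
        a = np.linalg.norm(W)
        Vs.append(W / a)
        alphas.append(a)
        W = np.tensordot(A, Vs[k], axes=N) - a * Us[k]
        b = np.linalg.norm(W)
        Us.append(W / b)
        betas.append(b)
        V_prev, beta_prev = Vs[k], b
    return Us, Vs, alphas, betas

np.random.seed(3)
A = np.random.rand(2, 3, 2, 3)    # square tensor, N = 2
B = np.random.rand(2, 3)
Us, Vs, alphas, betas = ggkb(A, B, 3, 2)
```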
Here, we apply the following Tikhonov regularization approach and solve the new problem
\[
\min_{\mathcal{X}} \left\{ \|\mathcal{A} *_N \mathcal{X} - \mathcal{B}\|_F^2 + \mu^{-1} \|\mathcal{X}\|_F^2 \right\}. \tag{20}
\]
We comment on the use of $\mu^{-1}$ in (20) instead of $\mu$ below. As for the iterative tensor global GMRES method discussed in the previous subsection, the computation of an accurate approximation requires that a suitable value of the regularization parameter be used. In this subsection, we use the discrepancy principle to determine a suitable regularization parameter, assuming that an approximation of the norm of the additive error is available, i.e., we have a bound $\varepsilon$ for $\|\mathcal{E}\|_F$. This a priori information suggests that $\mu$ has to be determined such that
\[
\|\mathcal{A} *_N \mathcal{X}_\mu - \mathcal{B}\|_F = \eta\, \varepsilon, \tag{21}
\]
where $\eta \ge 1$ is the safety factor for the discrepancy principle. A zero-finding method can be used to solve (21) in order to find a suitable regularization parameter, which also implies that $\|\mathcal{A} *_N \mathcal{X}_\mu - \mathcal{B}\|_F$ has to be evaluated for several $\mu$-values. When the tensor $\mathcal{A}$ is of moderate size, this quantity can be easily evaluated. The computation becomes expensive when $\mathcal{A}$ is a large tensor, which means that its evaluation by a zero-finding method can be very difficult and computationally expensive. In what follows, it is shown that this difficulty can be remedied by using a connection between the global Golub–Kahan bidiagonalization (GGKB) and Gauss-type quadrature rules. This connection provides approximations of moderate size to the quantity above and therefore yields a method to inexpensively solve (21) by evaluating these small quantities; see [1, 2] for a discussion of this method.
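To illustrate the zero-finding step on a small projected problem, here is a sketch only: it uses simple log-scale bisection on the matrix analogue, exploiting the fact that the residual norm of the Tikhonov solution increases monotonically with the penalty weight. All names are ours, and this stands in for (rather than implements) the Gauss-quadrature-based strategy described in the text.

```python
import numpy as np

def discrepancy_mu(H, b, delta, eta=1.1, lo=1e-12, hi=1e12, iters=200):
    """Find the penalty weight mu for which the Tikhonov residual
    ||H y_mu - b|| equals eta * delta (delta bounds the noise norm),
    by bisection on a log scale; the residual grows with mu."""
    def res(mu):
        y = np.linalg.solve(H.T @ H + mu * np.eye(H.shape[1]), H.T @ b)
        return np.linalg.norm(H @ y - b)
    target = eta * delta
    for _ in range(iters):
        mid = np.sqrt(lo * hi)       # geometric midpoint
        if res(mid) < target:
            lo = mid                 # residual too small: penalize more
        else:
            hi = mid
    return np.sqrt(lo * hi)

np.random.seed(4)
H = np.random.rand(4, 4) + np.eye(4)     # well-conditioned square example
b = np.random.rand(4)
delta = 0.5 * np.linalg.norm(b) / 1.1
mu = discrepancy_mu(H, b, delta)
```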
Let us consider the following function of $\mu$,
\[
\phi(\mu) = \|\mathcal{A} *_N \mathcal{X}_\mu - \mathcal{B}\|_F^2.
\]
The quantities $\mathcal{G}_\ell \phi(\mu)$ and $\mathcal{R}_{\ell+1} \phi(\mu)$ are pairs of Gauss and Gauss–Radau quadrature rules, respectively, and they approximate $\phi(\mu)$ as follows:
\[
\mathcal{G}_\ell \phi(\mu) \le \phi(\mu) \le \mathcal{R}_{\ell+1} \phi(\mu).
\]
If (27) does not hold for the current value of $\ell$, we carry out one more GGKB step, replacing $\ell$ by $\ell + 1$, and solve the nonlinear equation
It can also be computed by solving the least-squares problem
The following result shows an important property of the approximate solution (29). We include a proof for completeness.