Tensor GMRES and Golub-Kahan Bidiagonalization methods via the Einstein product with applications to image and video processing

05/15/2020, by M. El Guide, et al. (ULCO)

In the present paper, we are interested in developing iterative Krylov subspace methods in tensor structure to solve a class of multilinear systems via the Einstein product. In particular, we develop global variants of the GMRES and Golub–Kahan bidiagonalization processes in the tensor framework. We further consider the case in which the mentioned equation corresponds to a discrete ill-posed problem. Applications arising from color image and video restoration are included.




1 Introduction

In this paper, we are interested in approximating the solution of the following tensor equation

A ∗_N X = B, (1)

where the tensors A and B are known, X is an unknown tensor to be determined, and ∗_N denotes the Einstein product defined in Section 2. We can also consider the associated least-squares problem

min_X ‖A ∗_N X − B‖. (2)

Tensor equations arise in many applications of modern science, e.g., engineering [28], signal processing [24], data mining [26], tensor complementarity problems [27] and computer vision [31, 32], and as a result have been extensively studied in the literature [9, 29, 22]. The most recent tensor approaches for numerically solving PDEs have been investigated in [10]. For those applications, we have to take advantage of the multidimensional structure to build fast and robust methods for solving the related problems. For an extensive literature on tensors see, for example, [20, 21]. Over the years, many specialized methods for solving tensor problems of type (1) have been developed; see, e.g., [17] for tensor forms of the Arnoldi and Lanczos processes for well-posed problems. Huang et al. [17] pointed out that tensor equations of the form (1) appear in continuum physics, engineering, and isotropic and anisotropic elastic models. Multilinear systems of the form (1) may also arise from the discretization of the high-dimensional Poisson problem using finite difference approximations [3, 17].

In the current paper, we are interested in developing robust and fast iterative Krylov subspace methods via the Einstein product to solve regularized problems originating from color image and video processing applications. Standard and global Krylov subspace methods are suitable when dealing with grayscale images, e.g., [1, 2, 7, 8], and they can also handle similar applications when the blurring linear operator can be decomposed as a Kronecker product of two matrices; see [1, 2]. However, much work remains to be done to numerically solve problems related to multichannel images (e.g., color images, hyperspectral images and videos). We show that modelling these problems in the form of the tensor equation (1) makes it possible to develop more appealing iterative Krylov subspace methods and allows us to significantly reduce the overall computational complexity.

The remainder of the paper is organized as follows. We begin in Section 2 by presenting some symbols and notations used throughout the paper. Section 3 reviews the adaptation of Tikhonov regularization to the tensor equation (1); we then propose GMRES and global Golub–Kahan methods via the Einstein product in conjunction with Tikhonov regularization. On the basis of the Point Spread Function (PSF), in Section 4 we propose a tensor formulation of the form (1) that describes the blurring in color image and video processing. Numerical examples on restoring blurred and noisy color images and videos are reported. Concluding remarks can be found in Section 5.

2 Definitions and Notations

In this section, we briefly review some concepts and notions that are used throughout the paper. A tensor is a multidimensional array of data and a natural extension of scalars, vectors and matrices to higher order: a scalar is a zeroth-order tensor, a vector is a first-order tensor and a matrix is a second-order tensor. The order of a tensor is the number of its indices, which are called modes or ways. For a given N-mode tensor X ∈ ℝ^{I_1×⋯×I_N}, the notation x_{i_1,…,i_N} (with 1 ≤ i_n ≤ I_n) stands for the (i_1,…,i_N) element of the tensor X. Corresponding to a given tensor X, the notation X_k denotes the tensor obtained by fixing the last index to k and is called a frontal slice; see [20, 21] for more details. Throughout this work, vectors and matrices are respectively denoted by lowercase and capital letters, and tensors of higher order are represented by calligraphic letters.

We first recall the definition of the n-mode product of a tensor with a matrix; see [21].

The n-mode product of the tensor X ∈ ℝ^{I_1×⋯×I_N} and the matrix U ∈ ℝ^{J×I_n}, denoted by X ×_n U, is a tensor of order N and of size I_1×⋯×I_{n−1}×J×I_{n+1}×⋯×I_N whose entries are defined by

(X ×_n U)_{i_1,…,i_{n−1},j,i_{n+1},…,i_N} = Σ_{i_n=1}^{I_n} x_{i_1,…,i_N} u_{j,i_n}.

The n-mode product of the tensor X with the vector v ∈ ℝ^{I_n} is an (N−1)-mode tensor denoted by X ×̄_n v, whose elements are given by

(X ×̄_n v)_{i_1,…,i_{n−1},i_{n+1},…,i_N} = Σ_{i_n=1}^{I_n} x_{i_1,…,i_N} v_{i_n}.
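The n-mode contraction above is easy to check numerically. The following sketch (a plain NumPy illustration, not code from the paper; the helper name `mode_n_product` is ours) implements the definition by moving mode n to the front, contracting, and moving it back:

```python
import numpy as np

def mode_n_product(X, U, n):
    """n-mode product X x_n U: contract mode n of tensor X with the
    rows of the matrix U (shape J x I_n); mode n becomes size J."""
    Xn = np.moveaxis(X, n, 0)                  # bring mode n to the front
    Yn = np.tensordot(U, Xn, axes=([1], [0]))  # sum over i_n
    return np.moveaxis(Yn, 0, n)               # restore the mode order

X = np.random.rand(3, 4, 5)
U = np.random.rand(6, 4)
Y = mode_n_product(X, U, 1)
print(Y.shape)  # (3, 6, 5)
```

The same result can be obtained with a single `np.einsum('ijk,lj->ilk', X, U)` call for this particular shape.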

Next, we recall the definition and some properties of the tensor Einstein product, which is an extension of the matrix product; for more details see [3].

Let A ∈ ℝ^{I_1×⋯×I_L×K_1×⋯×K_N} and B ∈ ℝ^{K_1×⋯×K_N×J_1×⋯×J_M}. The Einstein product of the tensors A and B is the tensor of size I_1×⋯×I_L×J_1×⋯×J_M whose elements are defined by

(A ∗_N B)_{i_1,…,i_L,j_1,…,j_M} = Σ_{k_1,…,k_N} a_{i_1,…,i_L,k_1,…,k_N} b_{k_1,…,k_N,j_1,…,j_M}.
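In NumPy terms, the Einstein product is exactly a contraction of the last N modes of the first tensor with the first N modes of the second, which `np.tensordot` performs when given an integer `axes` argument. A minimal sketch (our naming, not the paper's code):

```python
import numpy as np

def einstein_product(A, B, N):
    """Einstein product A *_N B: contract the last N modes of A with
    the first N modes of B (a direct extension of matrix multiplication)."""
    return np.tensordot(A, B, axes=N)

A = np.random.rand(2, 3, 4, 5)   # modes (2,3) by (4,5)
B = np.random.rand(4, 5, 6)      # modes (4,5) by (6,)
C = einstein_product(A, B, 2)
print(C.shape)  # (2, 3, 6)
```

For order-2 tensors and N = 1 this reduces to the ordinary matrix product.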

Given a tensor A ∈ ℝ^{I_1×⋯×I_N×J_1×⋯×J_M}, the tensor B ∈ ℝ^{J_1×⋯×J_M×I_1×⋯×I_N} is the transpose of A if b_{j_1,…,j_M,i_1,…,i_N} = a_{i_1,…,i_N,j_1,…,j_M}. We denote the transpose of A by Aᵀ.

A tensor D ∈ ℝ^{I_1×⋯×I_N×I_1×⋯×I_N} is said to be diagonal if all of its entries are equal to zero except for the entries d_{i_1,…,i_N,i_1,…,i_N}. In the case where all of these diagonal entries are equal to 1, the tensor is called the identity tensor and denoted by I. We further use the notation O for the tensor having all its entries equal to zero.

Let A ∈ ℝ^{I_1×⋯×I_N×I_1×⋯×I_N}. The tensor A is invertible if there exists a tensor B of the same size such that

A ∗_N B = B ∗_N A = I.

The trace of an even-order tensor A ∈ ℝ^{I_1×⋯×I_N×I_1×⋯×I_N} is given by

tr(A) = Σ_{i_1,…,i_N} a_{i_1,…,i_N,i_1,…,i_N}.

The inner product of two same-size tensors X, Y is defined by

⟨X, Y⟩ = Σ_{i_1,…,i_N} x_{i_1,…,i_N} y_{i_1,…,i_N}.

Notice that for even-order tensors X, Y, we have

⟨X, Y⟩ = tr(Xᵀ ∗_N Y),

where Xᵀ denotes the transpose of X.
The Frobenius norm of the tensor X is given by

‖X‖ = ⟨X, X⟩^{1/2}.

The two tensors X and Y are orthogonal iff ⟨X, Y⟩ = 0.
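Since the inner product and the Frobenius norm treat a tensor simply as a long vector of its entries, they are one-liners in NumPy. A small sketch (our helper names):

```python
import numpy as np

def tensor_inner(X, Y):
    """Inner product <X, Y>: sum of elementwise products of same-size tensors."""
    return float(np.sum(X * Y))

def fro_norm(X):
    """Frobenius norm ||X|| = <X, X>^(1/2)."""
    return float(np.sqrt(tensor_inner(X, X)))

X = np.arange(24.0).reshape(2, 3, 4)
Y = np.ones((2, 3, 4))
print(tensor_inner(X, Y))   # 276.0, i.e. the sum 0 + 1 + ... + 23
print(fro_norm(Y))          # sqrt(24)
```

Two tensors are then orthogonal exactly when `tensor_inner(X, Y)` returns zero.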

In [4], the ⋄ product between two (N+1)-mode tensors 𝕍 ∈ ℝ^{I_1×⋯×I_N×p} and 𝕎 ∈ ℝ^{I_1×⋯×I_N×q} is defined as the p×q matrix whose (i, j)-th entry is

(𝕍 ⋄ 𝕎)_{i,j} = ⟨V_i, W_j⟩,

where V_i and W_j denote the i-th and j-th frontal slices of 𝕍 and 𝕎, respectively.

Basically, the ⋄ product is the contracted product of the (N+1)-mode tensors 𝕍 and 𝕎 along the first N modes.
It is immediate to see that for 𝕍 = 𝕎, we have

(𝕍 ⋄ 𝕍)_{i,i} = ‖V_i‖²

for i = 1, …, p.
We end the current subsection by recalling the following useful proposition from [4]. Suppose that 𝕍 is an (N+1)-mode tensor with frontal slices V_1, …, V_p and let z ∈ ℝ^p. For an arbitrary (N+1)-mode tensor 𝕎 with frontal slices W_1, …, W_q, the following statement holds:

𝕎 ⋄ (𝕍 ×̄_{N+1} z) = (𝕎 ⋄ 𝕍) z.
3 Krylov subspace methods via Einstein product

In this section, we recall the tensor global Arnoldi process and propose iterative methods based on the global Arnoldi and global Golub–Kahan bidiagonalization (GGKB) processes combined with Tikhonov regularization, applicable to the restoration of color images and videos from available blur- and noise-contaminated versions.

3.1 Tikhonov regularization

Many applications require the solution of several ill-conditioned systems of equations of the form (1) with a right-hand side contaminated by an additive error,

B = B̂ + E, (4)

where E is the tensor of error terms that may stem from measurement and discretization errors. An ill-posed tensor equation may appear in color image restoration, video restoration, and when solving some partial differential equations in several space dimensions. In order to diminish the effect of the noise in the data, we replace the original problem by a stabilized one. One of the most popular regularization methods is due to Tikhonov [30]. The Tikhonov regularization problem associated with (4) is given by

min_X { ‖A ∗_N X − B‖² + μ ‖X‖² }, (5)

where μ > 0 is the regularization parameter. The choice of μ affects how sensitive the regularized solution X_μ is to the error E in the contaminated right-hand side. Many techniques for choosing a suitable value of μ have been analyzed and illustrated in the literature; see, e.g., [33] and the references therein. In this paper we use the discrepancy principle and the generalized cross-validation (GCV) techniques.
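The effect of Tikhonov regularization is easy to see on a small matrix analogue of (5). The sketch below is purely illustrative and not from the paper: the Hilbert-like matrix, the noise level and the value of μ are all arbitrary choices standing in for an ill-conditioned blurring operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# An ill-conditioned test matrix (Hilbert-like), standing in for a blur operator.
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy right-hand side

x_naive = np.linalg.solve(A, b)                  # noise is strongly amplified
mu = 1e-8                                        # illustrative value, not tuned
x_tik = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

print("naive error:   ", np.linalg.norm(x_naive - x_true))
print("Tikhonov error:", np.linalg.norm(x_tik - x_true))
```

Even this untuned μ reduces the reconstruction error by many orders of magnitude compared with the naive solve.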

3.2 Global GMRES method via Einstein product

Let A ∈ ℝ^{I_1×⋯×I_N×I_1×⋯×I_N} be a square tensor and V ∈ ℝ^{I_1×⋯×I_N}. The m-th tensor Krylov subspace is defined by

𝒦_m(A, V) = span{V, A ∗_N V, …, A^{m−1} ∗_N V}, (6)

where A^i ∗_N V = A ∗_N (A^{i−1} ∗_N V) and A^0 ∗_N V = V. The global Arnoldi process for the matrix case was proposed in [18]. The algorithm for constructing an orthonormal basis of (6) can be given as follows (see [4, 17, 18]):

  1. Inputs: a tensor A, a tensor V and the integer m.

  2. Set β = ‖V‖ and V_1 = V/β.

  3. For j = 1, 2, …, m

  4. W = A ∗_N V_j; for i = 1, …, j:

    • h_{i,j} = ⟨V_i, W⟩, W = W − h_{i,j} V_i,

  5. endfor

  6. h_{j+1,j} = ‖W‖. If h_{j+1,j} = 0, stop; else

  7. V_{j+1} = W/h_{j+1,j}.

  8. EndFor

Algorithm 1 Global Arnoldi process via the Einstein product
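A minimal NumPy sketch of Algorithm 1 is given below. It assumes the square-tensor case with N = 2, so that A acts on order-2 tensors via `np.tensordot`; the function name `global_arnoldi` and the breakdown tolerance are our choices, not the paper's.

```python
import numpy as np

def global_arnoldi(A, V, m, N=2):
    """Tensor global Arnoldi sketch: build tensors V_1..V_{m+1}, orthonormal
    w.r.t. the tensor inner product, and the (m+1) x m upper Hessenberg
    matrix H. A acts via the Einstein product (contract its last N modes)."""
    inner = lambda X, Y: float(np.sum(X * Y))
    V = V / np.sqrt(inner(V, V))
    Vs = [V]
    H = np.zeros((m + 1, m))
    for j in range(m):
        W = np.tensordot(A, Vs[j], axes=N)   # W = A *_N V_j
        for i in range(j + 1):               # modified Gram-Schmidt sweep
            H[i, j] = inner(Vs[i], W)
            W = W - H[i, j] * Vs[i]
        H[j + 1, j] = np.sqrt(inner(W, W))
        if H[j + 1, j] < 1e-14:              # (lucky) breakdown
            return Vs, H[:j + 1, :j + 1]
        Vs.append(W / H[j + 1, j])
    return Vs, H
```

In exact arithmetic the V_j are orthonormal and satisfy A ∗_N V_j = Σ_i h_{i,j} V_i, which is the tensor analogue of the usual Arnoldi relation.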

Let H̄_m ∈ ℝ^{(m+1)×m} be the upper Hessenberg matrix whose entries are the scalars h_{i,j} from Algorithm 1 and let H_m be the matrix obtained from H̄_m by deleting its last row. Then it is not difficult to verify that the V_j's obtained from Algorithm 1 form an orthonormal basis of the tensor Krylov subspace 𝒦_m(A, V). Analogous to [4, 18], we can prove the following proposition.

Let 𝕍_m be the (N+1)-mode tensor with frontal slices V_1, …, V_m and 𝕍_{m+1} be the (N+1)-mode tensor with frontal slices V_1, …, V_{m+1}. Then

A ∗_N 𝕍_m = 𝕍_{m+1} ×_{N+1} H̄_mᵀ = 𝕍_m ×_{N+1} H_mᵀ + ℰ_m,

where e_m is the m-th column of the identity matrix I_m and ℰ_m is an (N+1)-mode tensor whose frontal slices are all zero except the last one, which is equal to h_{m+1,m} V_{m+1}.

Let A ∈ ℝ^{I_1×⋯×I_N×I_1×⋯×I_N} and B ∈ ℝ^{I_1×⋯×I_N}. Consider now the linear system of tensor equations

A ∗_N X = B. (8)

Using Algorithm 1, we can propose the global GMRES method to solve the problem (8). As for the matrix global GMRES, we seek an approximate solution X_m, starting from an initial guess X_0, such that X_m − X_0 ∈ 𝒦_m(A, R_0), by solving the minimization problem

min_{X − X_0 ∈ 𝒦_m(A, R_0)} ‖B − A ∗_N X‖, (9)

where R_0 = B − A ∗_N X_0.

Assume that m steps of Algorithm 1 have been performed. Given an initial guess X_0, we set

X_m = X_0 + 𝕍_m ×̄_{N+1} y_m, (10)

which results in R_m = R_0 − A ∗_N (𝕍_m ×̄_{N+1} y_m). Using the relations of Proposition 2, it is immediate to observe that

‖B − A ∗_N X_m‖ = ‖β e_1 − H̄_m y_m‖, with β = ‖R_0‖.

Therefore, y_m is determined as follows:

y_m = argmin_y ‖β e_1 − H̄_m y‖. (11)

The relations (10) and (11) define the tensor global GMRES (TG-GMRES) method. Setting β = ‖R_0‖ and using the relations (9), (10) and (11), it follows that instead of solving the problem (5) we can consider the following low-dimensional Tikhonov regularization problem

min_y { ‖β e_1 − H̄_m y‖² + μ ‖y‖² }. (12)

The solution y_{m,μ} of the problem (12) yields the regularized approximation

X_{m,μ} = X_0 + 𝕍_m ×̄_{N+1} y_{m,μ}. (13)

The minimizer y_{m,μ} of the problem (12) is computed as the solution of the linear system of equations

(H̄_mᵀ H̄_m + μ I_m) y = β H̄_mᵀ e_1, (14)

where e_1 = (1, 0, …, 0)ᵀ ∈ ℝ^{m+1}.

Notice that the Tikhonov problem (12) is a matrix problem of small dimension, as m is generally small. Hence it can be solved by techniques such as the GCV method [13] or the L-curve criterion [14, 15, 7, 8].
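Because the projected problem is small, the normal equations (14) can be solved directly. The sketch below (our naming; μ enters linearly, matching (14) — some references use μ² instead) is equivalent to solving the stacked least-squares formulation of (12):

```python
import numpy as np

def projected_tikhonov(H, beta, mu):
    """Solve the projected Tikhonov problem
        min_y ||H y - beta*e1||^2 + mu*||y||^2
    via its normal equations (H^T H + mu*I) y = beta * H^T e1,
    where H is the (m+1) x m Hessenberg matrix from Algorithm 1."""
    m = H.shape[1]
    rhs = beta * H.T[:, 0]          # H^T (beta * e1) is beta times H's first row
    return np.linalg.solve(H.T @ H + mu * np.eye(m), rhs)
```

For a well-scaled small H this direct solve is perfectly adequate; for very small μ one may prefer the stacked least-squares form for numerical stability.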

An appropriate selection of the regularization parameter μ is important in Tikhonov regularization. Here we can use the generalized cross-validation (GCV) method [6, 13, 33]. For this method, the regularization parameter is chosen to minimize the GCV function

GCV(μ) = ‖H̄_m y_{m,μ} − β e_1‖² / [tr(I_{m+1} − H̄_m (H̄_mᵀ H̄_m + μ I_m)⁻¹ H̄_mᵀ)]²,

where y_{m,μ} is the solution of (14). As the projected problem we are dealing with is of small size, we can use the SVD of H̄_m to obtain a simpler, computable expression of GCV(μ). Consider the SVD decomposition H̄_m = U Σ Vᵀ. Then the GCV function can be expressed as (see [33])

GCV(μ) = ( Σ_{i=1}^{m} ( μ b̃_i / (σ_i² + μ) )² + b̃_{m+1}² ) / ( 1 + Σ_{i=1}^{m} μ / (σ_i² + μ) )², (15)

where σ_i is the i-th singular value of the matrix H̄_m and b̃ = β Uᵀ e_1.
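The SVD-based expression (15) is cheap to evaluate for the small projected matrix. The sketch below (our helper name `gcv_function`; μ enters linearly as in (14)) evaluates it and picks μ by a simple logarithmic grid search, which is one straightforward way to carry out the minimization:

```python
import numpy as np

def gcv_function(mu, H, beta):
    """Evaluate the GCV function for min_y ||H y - beta*e1||^2 + mu*||y||^2
    via the SVD of the small (m+1) x m projected matrix H."""
    p, m = H.shape
    U, s, _ = np.linalg.svd(H, full_matrices=True)
    b = np.zeros(p)
    b[0] = beta
    bt = U.T @ b
    f = mu / (s**2 + mu)            # residual filter factors mu/(sigma_i^2 + mu)
    resid2 = np.sum((f * bt[:m])**2) + np.sum(bt[m:]**2)
    trace = (p - m) + np.sum(f)     # tr(I - H (H^T H + mu I)^{-1} H^T)
    return resid2 / trace**2

# Pick mu by evaluating GCV on a logarithmic grid and taking the minimizer.
H = np.triu(np.random.rand(6, 5), -1)    # a random (m+1) x m Hessenberg matrix
grid = np.logspace(-10, 2, 100)
mu_best = grid[np.argmin([gcv_function(mu, H, 2.0) for mu in grid])]
```

The numerator of `gcv_function` is exactly the squared residual ‖H̄_m y_{m,μ} − βe_1‖² of the regularized projected solution.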

In a practical implementation, it is more convenient to use a restarted version of the global GMRES. As the number of outer iterations increases, it is possible to compute the m-th residual without forming the solution. This is described in the following theorem.

At step m, the residual R_m = B − A ∗_N X_m produced by the tensor global GMRES method for solving (1) has the following expression:

R_m = γ_{m+1} 𝕍_{m+1} ×̄_{N+1} (Q_m e_{m+1}), (16)

where Q_m ∈ ℝ^{(m+1)×(m+1)} is the unitary matrix obtained from the QR decomposition of the upper Hessenberg matrix H̄_m, and γ_{m+1} is the last component of the vector g_m = β Q_mᵀ e_1, in which β = ‖R_0‖ and e_{m+1} is the last column of the identity matrix I_{m+1}. Furthermore,

‖R_m‖ = |γ_{m+1}|. (17)

At step m, the residual can be expressed as

R_m = 𝕍_{m+1} ×̄_{N+1} (β e_1 − H̄_m y_m);

by considering the QR decomposition H̄_m = Q_m R̄_m of the matrix H̄_m, we get

R_m = 𝕍_{m+1} ×̄_{N+1} Q_m (g_m − R̄_m y_m).

Straightforward computations show that the last component of g_m − R̄_m y_m is γ_{m+1}, where ĝ_m denotes the vector obtained by deleting the last component of g_m. Since y_m solves problem (11), it follows that y_m is the solution of the triangular system T_m y = ĝ_m, where T_m is obtained by deleting the last (zero) row of R̄_m, i.e.,

g_m − R̄_m y_m = γ_{m+1} e_{m+1}.

Note that R_m can then be written in the following form:

R_m = γ_{m+1} 𝕍_{m+1} ×̄_{N+1} (Q_m e_{m+1}).

Now the result follows immediately from the above computations.

The tensor form of the global GMRES algorithm for solving (1) is summarized as follows:

  1. Inputs: the tensors A, B, an initial guess X_0, a tolerance tol, the number m of iterations between restarts, and Maxit, the maximum number of outer iterations.

  2. Compute R_0 = B − A ∗_N X_0, set β = ‖R_0‖ and k = 1.

  3. Determine the orthonormal frontal slices V_1, …, V_m of 𝕍_m and the upper Hessenberg matrix H̄_m by applying Algorithm 1 to the pair (A, R_0).

  4. Determine μ_k as the parameter minimizing the GCV function given by (15).

  5. Determine y_{m,μ_k} as the solution of the low-dimensional Tikhonov regularization problem (12) and set X_m = X_0 + 𝕍_m ×̄_{N+1} y_{m,μ_k}.

  6. If ‖R_m‖ ≤ tol or k > Maxit, stop;
    else: set X_0 = X_m, k = k + 1, go to step 2.

Algorithm 2 Global GMRES method via the Einstein product for Tikhonov regularization

3.3 Golub–Kahan method via the Einstein product

Instead of finding an orthonormal basis of a single Krylov subspace and using the GMRES method, one can apply oblique projection schemes based on biorthogonal bases of a pair of Krylov subspaces; see [19] for instance.

Here, we exploit the tensor Golub–Kahan algorithm via the Einstein product. It should be commented here that the Golub–Kahan algorithm has already been examined for solving ill-posed Sylvester and Lyapunov tensor equations with applications to color image restoration [5].
Let the tensors A and B and an integer k be given. The global Golub–Kahan bidiagonalization (GGKB) process is summarized in Algorithm 3.

  1. Inputs: the tensors A, B, and an integer k.

  2. Set β_1 = ‖B‖, U_1 = B/β_1 and V_0 = O.

  3. For j = 1, 2, …, k Do

  4. W = Aᵀ ∗_N U_j − β_j V_{j−1}; α_j = ‖W‖; if α_j = 0 stop, else V_j = W/α_j

  5. W = A ∗_N V_j − α_j U_j; β_{j+1} = ‖W‖; if β_{j+1} = 0 stop, else U_{j+1} = W/β_{j+1}

  6. EndDo

Algorithm 3 Global Golub–Kahan algorithm via the Einstein product
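A minimal NumPy sketch of Algorithm 3 is given below, again for the square-tensor case with N = 2 (the function name `ggkb` and breakdown handling are ours). The operator acts via the Einstein product and its transpose acts by contracting the first N modes of A instead of the last N:

```python
import numpy as np

def ggkb(A, B, k, N=2):
    """Global Golub-Kahan bidiagonalization sketch. Returns tensors
    U_1..U_{k+1}, V_1..V_k and the (k+1) x k lower bidiagonal matrix
    with diagonal alpha_j and subdiagonal beta_{j+1}.
    Assumes no breakdown (all computed norms are positive)."""
    inner = lambda X, Y: float(np.sum(X * Y))
    nrm = lambda X: np.sqrt(inner(X, X))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = nrm(B)
    U, V = [B / beta[0]], []
    for j in range(k):
        # W = A^T *_N U_j - beta_j V_{j-1}: contract the FIRST N modes of A
        W = np.tensordot(A, U[j], axes=(list(range(N)), list(range(N))))
        if j > 0:
            W -= beta[j] * V[j - 1]
        alpha[j] = nrm(W)
        V.append(W / alpha[j])
        # W = A *_N V_j - alpha_j U_j: contract the LAST N modes of A
        W = np.tensordot(A, V[j], axes=N) - alpha[j] * U[j]
        beta[j + 1] = nrm(W)
        U.append(W / beta[j + 1])
    C = np.zeros((k + 1, k))
    for j in range(k):
        C[j, j], C[j + 1, j] = alpha[j], beta[j + 1]
    return U, V, C
```

In exact arithmetic the U_j and V_j are orthonormal and A ∗_N V_j = α_j U_j + β_{j+1} U_{j+1}, which is the slice-wise form of relation (18).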

Assume that k steps of the GGKB process have been performed. We form the lower bidiagonal matrix C̄_k ∈ ℝ^{(k+1)×k} with diagonal entries α_1, …, α_k and subdiagonal entries β_2, …, β_{k+1}, and denote by C_k its leading k×k submatrix.

Assume that k steps have been performed and that all non-trivial entries of C̄_k are positive. Let 𝕌_k and 𝕌_{k+1} be the (N+1)-mode tensors whose frontal slices are U_1, …, U_k and U_1, …, U_{k+1}, respectively. Furthermore, suppose that 𝕍_k is the (N+1)-mode tensor having frontal slices V_1, …, V_k. The following relations hold:

A ∗_N 𝕍_k = 𝕌_{k+1} ×_{N+1} C̄_kᵀ, (18)

Aᵀ ∗_N 𝕌_k = 𝕍_k ×_{N+1} C_k. (19)

From the recurrence defining U_{j+1} in Algorithm 3, we have

A ∗_N V_j = α_j U_j + β_{j+1} U_{j+1}, j = 1, …, k,

which yields (18) from the definition of the (N+1)-mode product. Similarly, Eq. (19) follows from the recurrence defining V_j in Algorithm 3.

Here, we apply the following Tikhonov regularization approach and solve the new problem

min_X { ‖A ∗_N X − B‖² + μ⁻¹ ‖X‖² }. (20)

We comment below on the use of μ⁻¹ in (20) instead of μ. As for the iterative tensor global GMRES method discussed in the previous subsection, the computation of an accurate approximation requires that a suitable value of the regularization parameter be used. In this subsection, we use the discrepancy principle to determine a suitable regularization parameter, assuming that an approximation ε of the norm of the additive error is available, i.e., ‖E‖ ≤ ε. This a priori information suggests that μ has to be determined such that

φ(μ) := ‖A ∗_N X_μ − B‖² = η² ε², (21)

where η ≥ 1 is a safety factor for the discrepancy principle. A zero-finding method can be used to solve (21) for a suitable regularization parameter, which implies that φ(μ) has to be evaluated for several μ-values. When the tensor A is of moderate size, the quantity φ(μ) can be easily evaluated. This computation becomes expensive when A is a large tensor, which means that its evaluation by a zero-finding method can be very difficult and computationally expensive. In what follows, it is shown that this difficulty can be remedied by using a connection between the global Golub–Kahan bidiagonalization (GGKB) and Gauss-type quadrature rules. This connection provides approximations of moderate size to the quantity φ(μ) and therefore gives a method to inexpensively solve (21) by evaluating these small quantities; see [1, 2] for a discussion of this method.
Let us consider the following functions of μ,

G_k(μ) and R_{k+1}(μ),

which are pairs of Gauss and Gauss–Radau quadrature rules, respectively; they can be evaluated from the bidiagonal matrices produced by the GGKB process and approximate φ(μ) as follows:

G_k(μ) ≤ φ(μ) ≤ R_{k+1}(μ).

As shown in [1, 2], for a given value of k, we solve the nonlinear equation

R_{k+1}(μ) = η² ε² (26)

by using Newton's method.
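As a simplified stand-in for the quadrature-based zero-finder, the sketch below applies the discrepancy principle directly to a small projected problem. It is not the paper's method: it uses the μ-convention of Section 3.2 (where the residual increases with μ) and geometric bisection instead of Newton's method, and the function name and bracket are our choices.

```python
import numpy as np

def discrepancy_parameter(H, beta, eta_eps, lo=1e-10, hi=1e10, iters=200):
    """Find mu such that the projected residual ||H y_mu - beta*e1||
    equals eta_eps, where y_mu solves (H^T H + mu*I) y = beta*H^T e1.
    With this parameterization the residual increases monotonically
    with mu, so bisection in log(mu) brackets the root."""
    b = np.zeros(H.shape[0])
    b[0] = beta
    def gap(mu):
        y = np.linalg.solve(H.T @ H + mu * np.eye(H.shape[1]), H.T @ b)
        return np.linalg.norm(H @ y - b) - eta_eps
    if gap(lo) > 0:     # even the (nearly) unregularized residual is too large
        return lo
    for _ in range(iters):
        mid = np.sqrt(lo * hi)      # geometric bisection step
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

Newton's method on the quadrature bounds, as used in [1, 2], converges much faster per step; the bisection above merely illustrates the zero-finding task in (21).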
The use of the parameter μ⁻¹ in (20) instead of μ implies that the left-hand side of (21) is a decreasing convex function of μ. Therefore, there is a unique solution, denoted by μ_ε, of

φ(μ) = η² ε²

for almost all values of ε of practical interest, and therefore also of (26) for k sufficiently large; see [1, 2] for analyses. We accept the solution μ_k of (26) as an approximation of μ_ε whenever we have

G_k(μ_k) ≥ ε². (27)


If (27) does not hold for μ_k, we carry out one more GGKB step, replacing k by k + 1, and solve the nonlinear equation

R_{k+2}(μ) = η² ε²; (28)

see [1, 2] for more details. Assume now that (27) holds for some μ_k. The corresponding regularized solution is then computed by

X_{k,μ_k} = 𝕍_k ×̄_{N+1} y_{k,μ_k}, (29)

where y_{k,μ_k} solves

(μ_k C̄_kᵀ C̄_k + I_k) y = μ_k β_1 C̄_kᵀ e_1. (30)

It can also be computed by solving the least-squares problem

min_y { ‖C̄_k y − β_1 e_1‖² + μ_k⁻¹ ‖y‖² }. (31)
The following result shows an important property of the approximate solution (29). We include a proof for completeness.

Under the assumptions of Proposition 3.3, let μ_k solve (26) and let y_{k,μ_k} solve (31). Then the associated approximate solution (29) of (20) satisfies

‖A ∗_N X_{k,μ_k} − B‖² = R_{k+1}(μ_k).

By Eq. (18), we have