Stochastic Iterative Hard Thresholding for Low-Tucker-Rank Tensor Recovery

09/23/2019 · by Rachel Grotheer, et al.

Low-rank tensor recovery problems have been widely studied in many applications of signal processing and machine learning. The Tucker decomposition is one of the most popular decompositions in the tensor framework, and in recent years researchers have developed many state-of-the-art algorithms for low-Tucker-rank tensor recovery. Motivated by the favorable properties of stochastic algorithms, such as stochastic gradient descent and stochastic iterative hard thresholding, we extend the well-known stochastic iterative hard thresholding algorithm to the tensor framework in order to recover a low-Tucker-rank tensor from its linear measurements. We also develop a linear convergence analysis for the proposed method and conduct a series of experiments with both synthetic and real data to illustrate its performance.


1 Introduction

Tensors are high-dimensional extensions of vectors and matrices. There are many different kinds of tensor decompositions, among which the Canonical Polyadic (CP) decomposition and the Tucker decomposition are the most popular Sokolnikoff (1951); Zhang (2017). In recent years, low-rank tensor recovery problems have gained a great amount of attention in various applications, including hyperspectral image restoration Fan et al. (2017), video processing Bengua et al. (2017), signal processing Li et al. (2015, 2017), and simultaneous blind deconvolution and phase retrieval Li et al. (2019). Unlike in low-rank matrix recovery, where nuclear norm minimization is a popular heuristic for rank minimization, the computation of the nuclear norm for higher-order tensors is NP-hard Hillar and Lim (2013); Friedland and Lim (2018).

Over the decades, the iterative hard thresholding (IHT) algorithm has been widely used in compressive sensing Blumensath and Davies (2009); Blanchard et al. (2015); Carrillo and Barner (2013) and low-rank matrix recovery Tanner and Wei (2013); Chunikhina et al. (2014); Geng et al. (2015). It has many extensions, such as the stochastic variant proposed in Nguyen et al. (2017), which was further extended to the multiple measurement vector framework in Qin et al. (2017). Inspired by the idea of using the IHT algorithm in low-rank matrix recovery problems, the authors in Rauhut et al. (2017) extended the IHT algorithm to the tensor framework and proposed the Tensor IHT (TIHT) algorithm as an alternative to tensor nuclear norm minimization. The authors of Li et al. (2019) then combined this TIHT algorithm with the higher-order singular value decomposition (HOSVD), a type of Tucker decomposition, to solve a low-rank tensor recovery problem formulated from a simultaneous blind deconvolution and phase retrieval problem. Another recent work, de Morais Goulart and Favier (2017), also extends the IHT algorithm to the problem of low-rank tensor recovery based on a low-Tucker-rank approximation technique named sequentially optimal modal projections.

The stochastic versions of gradient descent and IHT algorithms usually have many favorable properties. For example, these algorithms do not need to compute the full gradient, which makes them well suited to large-scale problems where computing the full gradient is very expensive. These properties inspired us to extend the stochastic IHT algorithm to the tensor framework and introduce the Stochastic Tensor IHT (StoTIHT) algorithm to recover a low-Tucker-rank tensor from its linear measurements. In this work, we provide a convergence analysis for the proposed StoTIHT algorithm, based on a Tucker decomposition of the tensor, under the assumption that the linear operator used to obtain the measurements satisfies a tensor restricted isometry property (TRIP). Our simulations also indicate that the proposed StoTIHT algorithm converges much faster than the original TIHT algorithm in a large-scale setting.

The remainder of this work is organized as follows. In Section 2, we briefly review some fundamental concepts and definitions used in the tensor framework. We formulate our low-rank tensor recovery problem in Section 3 and present the proposed StoTIHT algorithm in Section 4. We then introduce the linear convergence analysis for our proposed StoTIHT algorithm in Section 5 and illustrate its performance with both synthetic and real data in Section 6. Finally, we conclude our work in Section 7.

2 Preliminaries

In this section, we briefly review some fundamental concepts and definitions used in the tensor framework Sokolnikoff (1951); Zhang (2017); Kolda and Bader (2009). We denote a $d$-th order tensor as $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$. Vectors and matrices can be viewed as low-dimensional tensors with $d = 1$ and $d = 2$, respectively. Denote $\mathbf{X}_{(k)}$ as the mode-$k$ matricization, or $k$-th unfolding, of a tensor $\mathcal{X}$; one can refer to Kolda and Bader (2009) for a more detailed definition and some easily understandable examples. Similar to the matrix case, it is possible to vectorize a tensor, resulting in a column vector containing all of the elements of the tensor. Note that the ordering of the elements is not uniquely defined; in this work, the ordering is kept consistent. In particular, denote $\mathrm{vec}(\cdot)$ as the operator used to vectorize a matrix or a tensor. For a tensor $\mathcal{X}$, we choose the vectorization $\mathrm{vec}(\mathcal{X}) = \mathrm{vec}(\mathbf{X}_{(1)})$, that is, we vectorize the mode-1 matricization of the tensor. The inner product of two tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is then defined as

$\langle \mathcal{X}, \mathcal{Y} \rangle := \sum_{i_1=1}^{n_1} \cdots \sum_{i_d=1}^{n_d} \mathcal{X}_{i_1 \cdots i_d}\, \mathcal{Y}_{i_1 \cdots i_d} = \mathrm{vec}(\mathcal{X})^\top \mathrm{vec}(\mathcal{Y}).$

The induced Frobenius norm is then defined as $\|\mathcal{X}\|_F := \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$.

The Tucker rank of a tensor $\mathcal{X}$ is then defined as the tuple $\mathbf{r} = (r_1, \ldots, r_d)$ with $r_k = \mathrm{rank}(\mathbf{X}_{(k)})$. The Tucker decomposition is one of the most popular tensor decompositions, and one can find more details in Tucker (1963, 1964). The family of Tucker decompositions includes the HOSVD of a tensor $\mathcal{X}$, which is given as

$\mathcal{X} = \mathcal{C} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \cdots \times_d \mathbf{U}_d. \qquad (2.1)$

Here, $\mathcal{C} \in \mathbb{R}^{r_1 \times \cdots \times r_d}$ and $\mathbf{U}_k \in \mathbb{R}^{n_k \times r_k}$ denote the core tensor and the mode-$k$ basis, respectively. One can refer to Sokolnikoff (1951); Zhang (2017) for more details about the properties of the core tensor and the bases. The product $\times_k$ is the mode-$k$ (matrix) product of the tensor, that is, the product of a tensor and a matrix along the $k$-th mode of the tensor.
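To make these operations concrete, the following Python sketch implements a mode-$k$ unfolding, the mode-$k$ product, and a truncated HOSVD used as a Tucker-rank-$\mathbf{r}$ approximation. The function names (`unfold`, `mode_k_product`, `hosvd_truncate`) are our own illustrative choices rather than notation from the paper, and the unfolding fixes one consistent column ordering.

```python
import numpy as np

def unfold(X, k):
    """Mode-k matricization: move axis k to the front and flatten the remaining axes."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold: reshape a mode-k unfolding back into a tensor of the given shape."""
    full_shape = (shape[k],) + tuple(s for i, s in enumerate(shape) if i != k)
    return np.moveaxis(M.reshape(full_shape), 0, k)

def mode_k_product(X, U, k):
    """Mode-k product X x_k U: multiply the mode-k unfolding by U on the left."""
    shape = list(X.shape)
    shape[k] = U.shape[0]
    return fold(U @ unfold(X, k), k, tuple(shape))

def hosvd_truncate(X, ranks):
    """Truncated HOSVD: a quasi-optimal Tucker-rank-`ranks` approximation of X."""
    d = X.ndim
    bases = []
    for k in range(d):
        # Leading left singular vectors of the mode-k unfolding give the mode-k basis.
        Uk, _, _ = np.linalg.svd(unfold(X, k), full_matrices=False)
        bases.append(Uk[:, :ranks[k]])
    core = X
    for k in range(d):
        core = mode_k_product(core, bases[k].T, k)   # project onto the bases
    approx = core
    for k in range(d):
        approx = mode_k_product(approx, bases[k], k)  # reconstruct the approximation
    return approx, core, bases
```

For a tensor whose Tucker rank is exactly $\mathbf{r}$, this truncation recovers the tensor exactly; in general the truncated HOSVD is known to be within a factor of $\sqrt{d}$ of the best rank-$\mathbf{r}$ approximation, which is the kind of quasi-optimality assumed for the truncation operator in (3.6) below.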

3 Problem Formulation

In this work, we consider the recovery of a rank-$\mathbf{r}$ tensor $\mathcal{X}^\star \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ from its linear measurements $\mathbf{y} = \mathcal{A}(\mathcal{X}^\star) \in \mathbb{R}^m$, where $\mathcal{A}: \mathbb{R}^{n_1 \times \cdots \times n_d} \rightarrow \mathbb{R}^m$ is a linear operator used to obtain the measurements. In particular, the $i$-th element of $\mathbf{y}$ is given as

$y_i = \langle \mathcal{A}_i, \mathcal{X}^\star \rangle, \quad i = 1, \ldots, m, \qquad (3.1)$

where $\mathcal{A}_i \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is a sensing tensor. We observe that the cost function $f(\mathcal{X}) := \frac{1}{2}\|\mathbf{y} - \mathcal{A}(\mathcal{X})\|_2^2$ can be rewritten as

$f(\mathcal{X}) = \frac{1}{M} \sum_{i=1}^{M} f_{b_i}(\mathcal{X}) \quad \text{with} \quad f_{b_i}(\mathcal{X}) := \frac{M}{2}\|\mathbf{y}_{b_i} - \mathcal{A}_{b_i}(\mathcal{X})\|_2^2, \qquad (3.2)$

where we decompose the measurement vector $\mathbf{y}$ into $M$ non-overlapping vectors $\mathbf{y}_{b_i} \in \mathbb{R}^{m/M}$, $i = 1, \ldots, M$. Note that we can choose $M$ such that $b := m/M$ is an integer. We denote $\mathcal{A}_{b_i}: \mathbb{R}^{n_1 \times \cdots \times n_d} \rightarrow \mathbb{R}^{b}$ as a linear operator with the $j$-th entry of $\mathcal{A}_{b_i}(\mathcal{X})$ being $\langle \mathcal{A}_{(i-1)b + j}, \mathcal{X} \rangle$, $j = 1, \ldots, b$. It can be seen that each function $f_{b_i}$ is associated with a collection of measurements $\mathbf{y}_{b_i}$.
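As a concrete, hypothetical instantiation of this setup, the sketch below builds a random Gaussian sensing operator as a matrix acting on the vectorized tensor, splits the $m$ measurements into $M$ equal blocks, and evaluates the block cost $f_{b_i}$ as written in (3.2). The $1/\sqrt{m}$ scaling of the sensing tensors is a common normalization convention, not something dictated by the formulation above, and the code uses NumPy's default flattening as its consistent choice of vectorization.

```python
import numpy as np

def make_gaussian_operator(shape, m, M, seed=None):
    """Random Gaussian sensing operator: row i of A is vec(A_i).
    The m measurements are split into M non-overlapping blocks of size m/M."""
    assert m % M == 0, "choose M so that the block size m/M is an integer"
    rng = np.random.default_rng(seed)
    n = int(np.prod(shape))
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # 1/sqrt(m) scaling: a common convention
    blocks = np.split(np.arange(m), M)             # index sets for y_{b_1}, ..., y_{b_M}
    return A, blocks

def measure(A, X):
    """Apply the linear operator: y = A vec(X)."""
    return A @ X.ravel()

def block_cost(A, y, blocks, i, X):
    """Block cost f_{b_i}(X) = (M/2) ||y_{b_i} - A_{b_i} vec(X)||_2^2, cf. (3.2)."""
    M = len(blocks)
    r = y[blocks[i]] - A[blocks[i]] @ X.ravel()
    return 0.5 * M * float(r @ r)
```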

Due to the low-Tucker-rankness of $\mathcal{X}^\star$, a standard approach to recover $\mathcal{X}^\star$ is to solve the following rank-constrained minimization

$\min_{\mathcal{X}} \; f(\mathcal{X}) \quad \text{subject to} \quad \mathrm{rank}(\mathbf{X}_{(k)}) \le r_k, \; k = 1, \ldots, d, \qquad (3.3)$

where the cost function $f$ is defined in (3.2). The above optimization problem has been heavily studied in the existing literature. The authors in Gandy et al. (2011); Mu et al. (2014) relax the above optimization by minimizing the sum of the nuclear norms of the tensor matricizations. However, this kind of relaxation is actually non-optimal Recht et al. (2010). Inspired by the idea of using the IHT algorithm for compressive sensing and low-rank matrix recovery problems, the authors in Rauhut et al. (2017) extend the IHT algorithm Blumensath and Davies (2009) to the tensor framework and propose the TIHT algorithm as an alternative to tensor nuclear norm minimization. In particular, to recover $\mathcal{X}^\star$, the TIHT algorithm consists of the following two steps at iteration $t$:

$\mathcal{Y}^t = \mathcal{X}^t + \mu\, \mathcal{A}^*\big(\mathbf{y} - \mathcal{A}(\mathcal{X}^t)\big), \qquad (3.4)$
$\mathcal{X}^{t+1} = \mathcal{H}_{\mathbf{r}}(\mathcal{Y}^t). \qquad (3.5)$

Here, $\mu$ is the stepsize and $\mathcal{A}^*$ is the adjoint operator of $\mathcal{A}$. That is, $\langle \mathcal{A}(\mathcal{X}), \mathbf{b} \rangle = \langle \mathcal{X}, \mathcal{A}^*(\mathbf{b}) \rangle$ for any tensor $\mathcal{X}$ and vector $\mathbf{b} \in \mathbb{R}^m$. The operator $\mathcal{H}_{\mathbf{r}}$ computes a rank-$\mathbf{r}$ approximation of a tensor with the HOSVD. Note that the second step (3.5) is not straightforward, and the authors in Rauhut et al. (2017) required that the assumption

$\|\mathcal{Y}^t - \mathcal{H}_{\mathbf{r}}(\mathcal{Y}^t)\|_F \le (1 + \eta)\, \|\mathcal{Y}^t - \mathcal{Y}^t_{\mathrm{best}}\|_F \qquad (3.6)$

hold for all iterations $t$ with some $\eta > 0$. Here, $\mathcal{Y}^t_{\mathrm{best}}$ is the best rank-$\mathbf{r}$ approximation of $\mathcal{Y}^t$ with respect to the Tucker decomposition (given by the HOSVD), namely, $\mathcal{Y}^t_{\mathrm{best}} = \arg\min_{\mathrm{rank}(\mathcal{Z}) \le \mathbf{r}} \|\mathcal{Y}^t - \mathcal{Z}\|_F$. We will also assume such an approximation exists in our theorem.
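For reference, here is a minimal sketch of the two TIHT steps (3.4)-(3.5), reusing `measure`-style matrices from the previous sketch and the `hosvd_truncate` truncation from Section 2. The default stepsize and stopping rule are illustrative only and would need tuning in practice.

```python
import numpy as np

def tiht(A, y, shape, ranks, mu=1.0, iters=500, tol=1e-6):
    """Tensor IHT: full gradient step (3.4) followed by Tucker-rank truncation (3.5)."""
    X = np.zeros(shape)
    for _ in range(iters):
        residual = y - A @ X.ravel()                     # y - A(X^t)
        Y = X + mu * (A.T @ residual).reshape(shape)     # X^t + mu * A^*(y - A(X^t))
        X_new, _, _ = hosvd_truncate(Y, ranks)           # X^{t+1} = H_r(Y^t), from the Section 2 sketch
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X_new), 1.0):
            return X_new
        X = X_new
    return X
```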

4 The Proposed StoTIHT algorithm

We observe that the linear measurements can be rewritten as

$\mathbf{y} = \mathbf{A}\, \mathrm{vec}(\mathcal{X}^\star),$

where $\mathbf{A} \in \mathbb{R}^{m \times n_1 n_2 \cdots n_d}$ is a matrix with the $i$-th row being the vectorized version of the sensing tensor $\mathcal{A}_i$, and $\mathrm{vec}(\mathcal{X}^\star)$ is the vectorized version of $\mathcal{X}^\star$. Then, we can update $\mathcal{Y}^t$ in (3.4) with

$\mathcal{Y}^t = \mathcal{X}^t + \mu\, \mathrm{ten}\big(\mathbf{A}^\top(\mathbf{y} - \mathbf{A}\, \mathrm{vec}(\mathcal{X}^t))\big), \qquad (4.1)$

where $\mathrm{ten}(\cdot)$ denotes the inverse of $\mathrm{vec}(\cdot)$, which reshapes a vector back into a tensor, and $\mathrm{vec}(\mathcal{X}^t)$ is the vectorized version of $\mathcal{X}^t$.

As previously stated, stochastic variants of such algorithms do not require computation of the full gradient and thus can be much more efficient in large-scale settings, especially when the computation and/or storage of the full gradient is very expensive. We therefore propose a stochastic variant of the TIHT algorithm (StoTIHT) by replacing (4.1) with

$\mathcal{Y}^t = \mathcal{X}^t + \frac{\mu}{p(i_t)}\, \mathrm{ten}\big(\mathbf{A}_{b_{i_t}}^\top(\mathbf{y}_{b_{i_t}} - \mathbf{A}_{b_{i_t}}\, \mathrm{vec}(\mathcal{X}^t))\big), \qquad (4.2)$

where $i_t$ is an index randomly selected from $\{1, \ldots, M\}$ with probability $p(i_t)$, and $\mathbf{A}_{b_{i_t}} \in \mathbb{R}^{(m/M) \times n_1 n_2 \cdots n_d}$ denotes the $i_t$-th block of rows of $\mathbf{A}$. This updating step is equivalent to

$\mathcal{Y}^t = \mathcal{X}^t - \frac{\mu}{M\, p(i_t)}\, \nabla f_{b_{i_t}}(\mathcal{X}^t), \qquad (4.3)$

with $f_{b_{i_t}}$ as in (3.2). Based on the above analysis, we summarize the algorithm below.

1:Input: $\mathbf{y}$, $\mathcal{A}$, the Tucker rank $\mathbf{r}$, the stepsize $\mu$, and the sampling probabilities $p(i)$, $i = 1, \ldots, M$.
2:Output: the recovered tensor $\widehat{\mathcal{X}}$.
3:Initialize: $\mathcal{X}^0$, $t = 0$.
4:for $t = 0, 1, 2, \ldots$ do
5:     Randomly select a batch index $i_t \in \{1, \ldots, M\}$ with probability $p(i_t)$
6:     Compute the gradient $\nabla f_{b_{i_t}}(\mathcal{X}^t)$ as given in (4.3)
7:     $\mathcal{Y}^t = \mathcal{X}^t - \frac{\mu}{M\, p(i_t)}\, \nabla f_{b_{i_t}}(\mathcal{X}^t)$
8:     $\mathcal{X}^{t+1} = \mathcal{H}_{\mathbf{r}}(\mathcal{Y}^t)$
9:     If the stopping criteria are met, exit.
10:end for
Algorithm 1 Stochastic Tensor Iterative Hard Thresholding (StoTIHT)
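Below is a minimal sketch of Algorithm 1 with uniform sampling probabilities $p(i) = 1/M$, reusing the helpers defined in the earlier sketches (`make_gaussian_operator`, `measure`, `hosvd_truncate`). The stepsize, iteration budget, and stopping rule are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def stotiht(A, y, blocks, shape, ranks, mu=1.0, iters=3000, tol=1e-6, seed=None):
    """Stochastic Tensor IHT (sketch of Algorithm 1) with uniform block sampling."""
    rng = np.random.default_rng(seed)
    M = len(blocks)
    p = np.full(M, 1.0 / M)                 # sampling probabilities p(i)
    X = np.zeros(shape)
    for _ in range(iters):
        i = rng.choice(M, p=p)              # line 5: randomly select a batch index
        rows = blocks[i]
        res = y[rows] - A[rows] @ X.ravel()                        # block residual
        Y = X + (mu / p[i]) * (A[rows].T @ res).reshape(shape)     # stochastic step (4.2)
        X_new, _, _ = hosvd_truncate(Y, ranks)                     # line 8: H_r(Y^t)
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X_new), 1.0):
            return X_new
        X = X_new
    return X
```

A hypothetical end-to-end usage, with sizes chosen only for illustration:

```python
shape, ranks = (10, 10, 10), (2, 2, 2)
rng = np.random.default_rng(0)
X_true, _, _ = hosvd_truncate(rng.standard_normal(shape), ranks)  # a random low-Tucker-rank tensor
A, blocks = make_gaussian_operator(shape, m=600, M=10, seed=1)
X_hat = stotiht(A, measure(A, X_true), blocks, shape, ranks)
rel_err = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
```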

5 Linear Convergence for StoTIHT

We present a linear convergence analysis for the proposed StoTIHT algorithm in this section. We first introduce the tensor restricted isometry property (TRIP) in the following definition.

Definition 5.1.

(TRIP) Rauhut et al. (2017) Let $\mathcal{A}$ and $\mathcal{A}_{b_i}$, $i = 1, \ldots, M$, be the linear operators defined in Section 3. For a fixed Tucker decomposition and a corresponding Tucker rank $\mathbf{r}$, we say $\mathcal{A}$ and $\mathcal{A}_{b_i}$ satisfy the TRIP if there exists a tensor restricted isometry constant $\delta_{\mathbf{r}} \in (0, 1)$ such that

$(1 - \delta_{\mathbf{r}})\, \|\mathcal{X}\|_F^2 \le \|\mathcal{A}(\mathcal{X})\|_2^2 \le (1 + \delta_{\mathbf{r}})\, \|\mathcal{X}\|_F^2, \qquad (5.1)$
$(1 - \delta_{\mathbf{r}})\, \|\mathcal{X}\|_F^2 \le M\, \|\mathcal{A}_{b_i}(\mathcal{X})\|_2^2 \le (1 + \delta_{\mathbf{r}})\, \|\mathcal{X}\|_F^2, \quad i = 1, \ldots, M, \qquad (5.2)$

hold for all tensors $\mathcal{X}$ of Tucker rank at most $\mathbf{r}$.

Note that (5.2) is stronger than (5.1), but we will need (5.2) in the proof of the main theorem. We call the linear operator $\mathcal{A}$ defined in Section 3 a random Gaussian linear operator if the entries of all of the sensing tensors $\mathcal{A}_i$ are random Gaussian variables. It is shown in Rauhut et al. (2017) that random Gaussian linear operators $\mathcal{A}$ and $\mathcal{A}_{b_i}$ satisfy the TRIP with high probability as long as the number of measurements is sufficiently large relative to the Tucker rank and the tensor dimensions (the constants involved are not necessarily the same, according to Rauhut et al. (2017)). See Rauhut et al. (2017) for the precise sample-complexity bounds and for a more detailed description of the construction of the corresponding partition tree of the vertices used in their analysis.
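Since the TRIP cannot be certified efficiently in general, a simple Monte-Carlo sanity check is sometimes useful in experiments: sample random Tucker-rank-$\mathbf{r}$ tensors and see how far $\|\mathcal{A}(\mathcal{X})\|_2^2 / \|\mathcal{X}\|_F^2$ deviates from one. The sketch below is our own illustration (it reuses `mode_k_product` from the Section 2 sketch) and only lower-bounds the true restricted isometry constant; it is not a proof that the TRIP holds.

```python
import numpy as np

def empirical_trip_deviation(A, shape, ranks, trials=200, seed=None):
    """Sample random Tucker-rank-`ranks` tensors and record the worst observed
    deviation of ||A(X)||_2^2 / ||X||_F^2 from 1 (a lower bound on the RIP constant)."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        # Random Tucker-rank-`ranks` tensor: random core times random orthonormal bases.
        X = rng.standard_normal(ranks)
        for k in range(len(shape)):
            U, _ = np.linalg.qr(rng.standard_normal((shape[k], ranks[k])))
            X = mode_k_product(X, U, k)
        ratio = float(np.sum((A @ X.ravel()) ** 2) / np.sum(X ** 2))
        worst = max(worst, abs(ratio - 1.0))
    return worst
```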

Now, we are in a position to state our main result in the following theorem.

Theorem 5.1.

Assume that the operators $\mathcal{A}$ and $\mathcal{A}_{b_i}$ used to obtain the linear measurements satisfy the TRIP defined in Definition 5.1. Let $\mathcal{X}^\star$ be a feasible solution of the optimization problem (3.3), and denote $\mathcal{X}^0$ as the initial tensor. We also assume that the Tucker-rank-$\mathbf{r}$ approximation operator $\mathcal{H}_{\mathbf{r}}$ satisfies (3.6) for all $t$ with some $\eta > 0$. Using Algorithm 1, one can guarantee that the expectation of the recovery error at the $t$-th iteration is bounded by

$\mathbb{E}_{I_t}\, \|\mathcal{X}^{t} - \mathcal{X}^\star\|_F \le \kappa^{t}\, \|\mathcal{X}^{0} - \mathcal{X}^\star\|_F + \frac{\sigma_{\mathcal{X}^\star}}{1 - \kappa},$

where $I_t$ denotes the set containing all indices randomly selected at and before iteration $t$. Here, $\kappa$ and $\sigma_{\mathcal{X}^\star}$ are the contraction coefficient and tolerance parameter, respectively; their precise expressions depend on the TRIP constant, the stepsize $\mu$, the approximation parameter $\eta$, and the sampling probabilities $p(i)$, with $i$ being an index selected from $\{1, \ldots, M\}$ with probability $p(i)$. In the definition of $\sigma_{\mathcal{X}^\star}$, $\mathcal{U}$ is a subspace of $\mathbb{R}^{n_1 \times \cdots \times n_d}$ spanned by low-Tucker-rank tensors including $\mathcal{X}^\star$, and $\mathcal{P}_{\mathcal{U}}$ is the orthogonal projection onto $\mathcal{U}$.

The above theorem indicates that the proposed StoTIHT algorithm is guaranteed to linearly converge to the true solution even when the full gradient computation is not available. Therefore, the proposed StoTIHT algorithm has a significant computational advantage in large-scale settings where the number of measurements is very large and the computation of the full gradient can be very expensive, since it only needs to compute a partial gradient.
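To see where the two terms in the bound come from, it helps to unroll a generic per-iteration contraction of the kind established in the proofs below. This is a sketch of the mechanism only, assuming a contraction coefficient $\kappa < 1$, and not the paper's exact constants:

```latex
% One-step contraction in expectation (generic form):
%   E || X^{t+1} - X* ||_F  <=  kappa * E || X^t - X* ||_F  +  sigma .
% Unrolling t times and summing the geometric series gives
\mathbb{E}\,\|\mathcal{X}^{t} - \mathcal{X}^\star\|_F
  \;\le\; \kappa^{t}\,\|\mathcal{X}^{0} - \mathcal{X}^\star\|_F
  \;+\; \sigma \sum_{j=0}^{t-1} \kappa^{j}
  \;\le\; \kappa^{t}\,\|\mathcal{X}^{0} - \mathcal{X}^\star\|_F \;+\; \frac{\sigma}{1-\kappa}.
```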

We adapt the proofs of StoIHT in Nguyen et al. (2017) and TIHT in Rauhut et al. (2017) to show the linear convergence of our proposed StoTIHT algorithm. Note that in what follows, “rank” denotes Tucker rank. We first present a key lemma that will be used in the proof of Theorem 5.1.

Lemma 5.1.

Denote $i$ as the index randomly selected from $\{1, \ldots, M\}$ with probability $p(i)$. For any fixed low-rank tensors $\mathcal{X}$ and $\mathcal{Y}$, let $\mathcal{U}$ be a space that contains the space spanned by $\mathcal{X}$ and $\mathcal{Y}$. Assume that the rank of any tensor in $\mathcal{U}$ is at most $\mathbf{r}_{\mathcal{U}}$. Then, we have

(5.3)
(5.4)

To prove Lemma 5.1, we need the following two lemmas.

Lemma 5.2.

For any two low-rank tensors $\mathcal{X}$ and $\mathcal{Y}$, let $\mathcal{U}$ be a space that contains the space spanned by $\mathcal{X}$ and $\mathcal{Y}$. Assume that the rank of any tensor in $\mathcal{U}$ is at most $\mathbf{r}_{\mathcal{U}}$. Suppose that the two linear operators $\mathcal{A}$ and $\mathcal{A}_{b_i}$ satisfy the TRIP defined in Definition 5.1. Then, the functions $f$ and $f_{b_i}$ defined in (3.2) satisfy

(5.5)
(5.6)

for all low-rank tensors $\mathcal{X}$ and $\mathcal{Y}$.

Proof.

Recall that

With some fundamental calculations, we obtain their gradients

It follows from the TRIP (5.1) that

Thus, we finish the proof of (5.5).

Note that

Without loss of generality, we assume that

Then, it follows from the TRIP (5.2) that

(5.7)

Noting that $f_{b_i}$ is a convex function with respect to $\mathcal{X}$ and applying Lemma 4 in Zhou (2018), we then get (5.6).

Lemma 5.3.

(Co-coercivity) For any two low-rank tensors $\mathcal{X}$ and $\mathcal{Y}$, let $\mathcal{U}$ be a space that contains the space spanned by $\mathcal{X}$ and $\mathcal{Y}$. Assume that the rank of any tensor in $\mathcal{U}$ is at most $\mathbf{r}_{\mathcal{U}}$. Then, we have

(5.8)
Proof.

Exchanging the roles of $\mathcal{X}$ and $\mathcal{Y}$ in (5.7), we get

Adding the above inequality and (5.7) together, we obtain

Define a function

Then, we have

holds for any $\mathcal{X}$ and $\mathcal{Y}$ with their span belonging to $\mathcal{U}$. This implies that we can get an inequality similar to (5.7), namely

(5.9)

Note that

(5.10)

holds for all tensors in $\mathcal{U}$. Here, the second equality follows by plugging in the expressions above. It can be seen that the tensor defined in this way belongs to $\mathcal{U}$, since both $\mathcal{X}$ and $\mathcal{Y}$ belong to $\mathcal{U}$. Then, applying (5.9) and (5.10), we have

Plugging the definition of and into the above inequality gives

Finally, we can obtain (5.8) by summing the two inequalities with $\mathcal{X}$ and $\mathcal{Y}$ exchanged.

Proof of Lemma 5.1. With the above two lemmas, we obtain

Here, the first inequality follows from (5.8) and the definition of $\mathcal{P}_{\mathcal{U}}$. The second equality follows from the identities above. The last inequality follows from (5.5). Then, we complete the proof of (5.3) by applying a well-known elementary inequality.

Similar to the proof of (5.3), we also have

where the first inequality follows from (5.5) and (5.6), and the last inequality follows from

Therefore, by applying the same elementary inequality, we finish the proof of (5.4).

We are now prepared to prove Theorem 5.1.

Proof.

Denote $\mathcal{Y}^t_{\mathrm{best}}$ as the best rank-$\mathbf{r}$ approximation of $\mathcal{Y}^t$. Assume that the rank-$\mathbf{r}$ approximation operator $\mathcal{H}_{\mathbf{r}}$ satisfies

$\|\mathcal{Y}^t - \mathcal{H}_{\mathbf{r}}(\mathcal{Y}^t)\|_F \le (1 + \eta)\, \|\mathcal{Y}^t - \mathcal{Y}^t_{\mathrm{best}}\|_F$

for all $t$ with some $\eta > 0$. Note that Grasedyck (2010) demonstrated a method to compute the HOSVD which attains a bound of this form, as well as related methods achieving other values of the approximation constant. We refer the reader to Grasedyck (2010) for a description of these methods. Then, we have

It follows that

Simplifying, we get

Plugging in the updating step (4.3) gives