Guarantees of Augmented Trace Norm Models in Tensor Recovery

07/23/2012 · Ziqiang Shi et al., Harbin Institute of Technology

This paper studies the recovery guarantees of models that minimize $\|\mathcal{X}\|_* + \frac{1}{2\alpha}\|\mathcal{X}\|_F^2$, where $\mathcal{X}$ is a tensor and $\|\mathcal{X}\|_*$ and $\|\mathcal{X}\|_F$ are its trace norm and Frobenius norm, respectively. We show that these models can efficiently recover low-rank tensors. In particular, they enjoy exact recovery guarantees similar to those known for minimizing $\|\mathcal{X}\|_*$ under conditions on the sensing operator such as its null-space property, restricted isometry property, or spherical section property. To recover a low-rank tensor $\mathcal{X}^0$, minimizing $\|\mathcal{X}\|_* + \frac{1}{2\alpha}\|\mathcal{X}\|_F^2$ returns the same solution as minimizing $\|\mathcal{X}\|_*$ almost whenever $\alpha \geq 10\max_i \|X^0_{(i)}\|_2$.

I Introduction

The low-rank tensor recovery problem is the generalization of sparse vector recovery and low-rank matrix recovery to tensor data [1, 2, 3]. It has drawn much attention from researchers in different fields in the past several years, with wide applications in data mining, computer vision, signal/image processing, machine learning, etc. The fundamental problem of low-rank tensor recovery is to find a tensor of (nearly) lowest rank from an underdetermined linear system $b = \mathcal{A}(\mathcal{X})$, where $\mathcal{A}$ is a linear operator and $\mathcal{X}$ is an $N$-way tensor.

To recover a low-rank tensor $\mathcal{X}^0$ from linear measurements $b = \mathcal{A}(\mathcal{X}^0)$, a powerful approach is the convex model [1, 3]

$\min_{\mathcal{X}}\ \|\mathcal{X}\|_* \quad \mathrm{s.t.}\quad \mathcal{A}(\mathcal{X}) = b,$   (1)

where

$\|\mathcal{X}\|_* := \sum_{i=1}^{N} \|X_{(i)}\|_*,$   (2)

$X_{(i)}$ is the mode-$i$ unfolding of $\mathcal{X}$, and $\|X_{(i)}\|_*$ is the trace norm of the matrix $X_{(i)}$, i.e. the sum of the singular values of $X_{(i)}$. For $b$ contaminated with noise or generated by an approximately low-rank tensor, a variant of (1) is [3]

$\min_{\mathcal{X}}\ \|\mathcal{X}\|_* + \frac{1}{2\mu}\|\mathcal{A}(\mathcal{X}) - b\|_2^2.$   (3)
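To make the objective in (1)–(2) concrete, here is a minimal numpy sketch that evaluates the tensor trace norm as the sum of the matrix trace (nuclear) norms of the mode-$i$ unfoldings; the helper names unfold and tensor_trace_norm are ours, and the unweighted sum in (2) is an assumption of this sketch.

    import numpy as np

    def unfold(X, mode):
        # mode-n unfolding: mode-n fibers become the columns of a matrix
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def tensor_trace_norm(X):
        # sum over modes of the nuclear norms of the unfoldings, as assumed in (2)
        return sum(np.linalg.norm(unfold(X, n), ord='nuc') for n in range(X.ndim))

    X = np.random.randn(5, 6, 7)        # a random 3-way tensor
    print(tensor_trace_norm(X))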

Despite empirical success, the recovery guarantees of tensor recovery algorithms have not been fully elucidated. Recently, several authors [4, 5, 6] have obtained excellent results on the guarantees of sparse vector recovery and low-rank matrix recovery. In this paper, we generalize these results to low-rank tensor recovery. To the best of our knowledge, this is the first paper that studies the guarantees of low-rank tensor recovery algorithms.

This paper mainly studies the guarantees of minimizing the augmented objective $\|\mathcal{X}\|_* + \frac{1}{2\alpha}\|\mathcal{X}\|_F^2$. The augmented models for (1) and (3) are

$\min_{\mathcal{X}}\ \|\mathcal{X}\|_* + \frac{1}{2\alpha}\|\mathcal{X}\|_F^2 \quad \mathrm{s.t.}\quad \mathcal{A}(\mathcal{X}) = b$   (4)

and

$\min_{\mathcal{X}}\ \|\mathcal{X}\|_* + \frac{1}{2\alpha}\|\mathcal{X}\|_F^2 + \frac{1}{2\mu}\|\mathcal{A}(\mathcal{X}) - b\|_2^2,$   (5)

respectively. These are natural generalizations of the augmented models for vector and matrix data [4] to the tensor case.
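As a quick illustration of the augmented objective in (4) and (5), the sketch below (our own helper names, same unweighted trace norm assumption as above) adds the Frobenius-norm term $\frac{1}{2\alpha}\|\mathcal{X}\|_F^2$ to the tensor trace norm.

    import numpy as np

    def unfold(X, mode):
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def augmented_objective(X, alpha):
        # objective of the augmented models (4)/(5): ||X||_* + ||X||_F^2 / (2*alpha)
        trace_norm = sum(np.linalg.norm(unfold(X, n), ord='nuc') for n in range(X.ndim))
        return trace_norm + np.linalg.norm(X) ** 2 / (2.0 * alpha)

    X = np.random.randn(5, 6, 7)
    print(augmented_objective(X, alpha=10.0))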

II Notations

We adopt the nomenclature mainly used by Kolda and Bader on tensor decompositions and applications [7], and also a few symbols of [8, 9].

The order of a tensor is the number of dimensions, also known as ways or modes. Matrices (tensors of order two) are denoted by upper-case letters, e.g. $X$, and lower-case letters are used for their elements, e.g. $x_{ij}$. Higher-order tensors (order three or higher) are denoted by Euler script letters, e.g. $\mathcal{X}$, and an element of an $N$-order tensor is denoted by $x_{i_1 i_2 \cdots i_N}$. Fibers are the higher-order analogue of matrix rows and columns. A fiber is defined by fixing every index but one. The mode-$n$ fibers are all vectors obtained by fixing the values of every index except $i_n$. The mode-$n$ unfolding, also known as matricization, of a tensor $\mathcal{X}$ is denoted by $X_{(n)}$ and arranges the mode-$n$ fibers to be the columns of the resulting matrix. The unfolding operator is denoted as $\mathrm{unfold}_n(\mathcal{X}) := X_{(n)}$. The opposite operation is $\mathrm{refold}_n$, which denotes the refolding of the matrix into a tensor, i.e. $\mathrm{refold}_n(X_{(n)}) = \mathcal{X}$. The tensor element $(i_1, i_2, \ldots, i_N)$ is mapped to the matrix element $(i_n, j)$, where

$j = 1 + \sum_{k=1, k \neq n}^{N} (i_k - 1) J_k \quad \text{with} \quad J_k = \prod_{m=1, m \neq n}^{k-1} I_m.$

Therefore, $\|\mathcal{X}\|_F = \|X_{(n)}\|_F$ for every $n$. The $n$-rank of an $N$-dimensional tensor $\mathcal{X}$, denoted as $\mathrm{rank}_n(\mathcal{X})$, is the column rank of $X_{(n)}$, i.e. the dimension of the vector space spanned by the mode-$n$ fibers. We say a tensor $\mathcal{X}$ is of rank $(r_1, \ldots, r_N)$ when $\mathrm{rank}_n(\mathcal{X}) = r_n$ for $n = 1, \ldots, N$, denoted as $\mathrm{rank}(\mathcal{X}) = (r_1, \ldots, r_N)$. We introduce an ordering among tensor ranks: $\mathrm{rank}(\mathcal{X}) \leq (r_1, \ldots, r_N)$ means $\mathrm{rank}_n(\mathcal{X}) \leq r_n$ for all $n$. The inner product of two same-size tensors is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle := \mathrm{vec}(\mathcal{X})^{\mathsf{T}} \mathrm{vec}(\mathcal{Y})$, where $\mathrm{vec}(\cdot)$ is the vectorization. The corresponding norm is $\|\mathcal{X}\|_F := \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$, which is often called the Frobenius norm.
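The following sketch illustrates the unfolding/refolding notation and the two identities used later, $\|\mathcal{X}\|_F = \|X_{(n)}\|_F$ and $\mathrm{rank}_n(\mathcal{X}) = \mathrm{rank}(X_{(n)})$. It flattens in C order, which may differ from the exact column ordering of [7] but preserves these identities; all function names are ours.

    import numpy as np

    def unfold(X, mode):
        # mode-n unfolding X_(n)
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def refold(M, mode, shape):
        # inverse of unfold: rebuild a tensor of the given shape from X_(n)
        full_shape = (shape[mode],) + tuple(s for i, s in enumerate(shape) if i != mode)
        return np.moveaxis(np.reshape(M, full_shape), 0, mode)

    X = np.random.randn(3, 4, 5)
    for n in range(X.ndim):
        X_n = unfold(X, n)
        assert np.allclose(refold(X_n, n, X.shape), X)                    # refold inverts unfold
        assert np.isclose(np.linalg.norm(X), np.linalg.norm(X_n, 'fro'))  # ||X||_F = ||X_(n)||_F
        print("mode", n, "n-rank:", np.linalg.matrix_rank(X_n))           # rank_n(X) = rank(X_(n))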

The $n$-mode product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ with a matrix $U \in \mathbb{R}^{J \times I_n}$ is denoted by $\mathcal{X} \times_n U$ and is of size $I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N$. Elementwise, we have

$(\mathcal{X} \times_n U)_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} x_{i_1 i_2 \cdots i_N}\, u_{j i_n}.$   (6)
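A common way to compute the $n$-mode product in (6) is through the unfolding identity $(\mathcal{X} \times_n U)_{(n)} = U X_{(n)}$; the sketch below (our helper names, same C-order unfolding as above) uses it.

    import numpy as np

    def unfold(X, mode):
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def refold(M, mode, shape):
        full_shape = (shape[mode],) + tuple(s for i, s in enumerate(shape) if i != mode)
        return np.moveaxis(np.reshape(M, full_shape), 0, mode)

    def mode_n_product(X, U, mode):
        # Y = X x_n U, computed via the identity Y_(n) = U @ X_(n)
        new_shape = list(X.shape)
        new_shape[mode] = U.shape[0]
        return refold(U @ unfold(X, mode), mode, tuple(new_shape))

    X = np.random.randn(3, 4, 5)
    U = np.random.randn(6, 4)               # maps the mode-1 dimension from 4 to 6
    print(mode_n_product(X, U, 1).shape)    # (3, 6, 5)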

Every tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ can be written as the product [8]

$\mathcal{A} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)},$   (7)

in which:

  • $U^{(n)} = [u_1^{(n)}\, u_2^{(n)} \cdots u_{I_n}^{(n)}]$ is a unitary $I_n \times I_n$ matrix,

  • $\mathcal{S}$ is an $I_1 \times I_2 \times \cdots \times I_N$ tensor of which the subtensors $\mathcal{S}_{i_n = \alpha}$, obtained by fixing the $n$-th index to $\alpha$, have the properties of:

    1) all-orthogonality: two subtensors $\mathcal{S}_{i_n = \alpha}$ and $\mathcal{S}_{i_n = \beta}$ are orthogonal for all possible values of $n$, $\alpha$ and $\beta$ subject to $\alpha \neq \beta$: $\langle \mathcal{S}_{i_n = \alpha}, \mathcal{S}_{i_n = \beta} \rangle = 0$ when $\alpha \neq \beta$,

    2) ordering: for all possible values of $n$, one has $\|\mathcal{S}_{i_n = 1}\|_F \geq \|\mathcal{S}_{i_n = 2}\|_F \geq \cdots \geq \|\mathcal{S}_{i_n = I_n}\|_F \geq 0$.

    The Frobenius norms $\|\mathcal{S}_{i_n = i}\|_F$, symbolized by $\sigma_i^{(n)}$, are the mode-$n$ singular values of $\mathcal{A}$, that is, the singular values of $A_{(n)}$.

This is called the higher-order singular value decomposition (HOSVD) of a tensor in [8]. Some properties of this HOSVD that will be used in this paper are listed below as lemmas:

Lemma 1

([8], Property 6). Let the HOSVD of $\mathcal{A}$ be given as in (7), and let $R_n$ be equal to the highest index for which $\sigma_{R_n}^{(n)} > 0$; then one has $\mathrm{rank}_n(\mathcal{A}) = R_n$.

Lemma 2

([8], Property 8). Let the HOSVD of $\mathcal{A}$ be given as in (7); due to the unitary invariance of the Frobenius norm, one has $\|\mathcal{A}\|_F^2 = \sum_{i=1}^{I_1} (\sigma_i^{(1)})^2 = \cdots = \sum_{i=1}^{I_N} (\sigma_i^{(N)})^2 = \|\mathcal{S}\|_F^2$.
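A minimal numerical sketch of the HOSVD in (7): each factor $U^{(n)}$ is taken as the left singular vectors of the unfolding $A_{(n)}$, the mode-$n$ singular values are the singular values of $A_{(n)}$, and the core is formed by mode products with the transposed factors. Function names and the C-order unfolding are our own choices; Lemma 2 is checked numerically at the end.

    import numpy as np

    def unfold(X, mode):
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def mode_n_product(X, U, mode):
        return np.moveaxis(np.tensordot(U, X, axes=(1, mode)), 0, mode)

    def hosvd(A):
        # factors U^(n) from the SVD of each unfolding; core S = A x_1 U1^T x_2 U2^T ...
        factors, mode_sv = [], []
        for n in range(A.ndim):
            U, s, _ = np.linalg.svd(unfold(A, n), full_matrices=False)
            factors.append(U)
            mode_sv.append(s)          # mode-n singular values = singular values of A_(n)
        S = A
        for n, U in enumerate(factors):
            S = mode_n_product(S, U.T, n)
        return S, factors, mode_sv

    A = np.random.randn(3, 4, 5)
    S, factors, mode_sv = hosvd(A)
    A_rec = S
    for n, U in enumerate(factors):
        A_rec = mode_n_product(A_rec, U, n)
    assert np.allclose(A_rec, A)                                    # reconstruction as in (7)
    for s in mode_sv:
        assert np.isclose(np.linalg.norm(A) ** 2, np.sum(s ** 2))   # Lemma 2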

III Motivations and contributions

To explain why model (4) is interesting, we conducted the following tensor completion simulations

$\min_{\mathcal{X}}\ \|\mathcal{X}\|_* + \frac{1}{2\alpha}\|\mathcal{X}\|_F^2 \quad \mathrm{s.t.}\quad \mathcal{X}_\Omega = \mathcal{T}_\Omega$   (8)

to compare it with model (1) based tensor completion

$\min_{\mathcal{X}}\ \|\mathcal{X}\|_* \quad \mathrm{s.t.}\quad \mathcal{X}_\Omega = \mathcal{T}_\Omega,$   (9)

where $\mathcal{T}$ is the underlying image tensor and $\Omega$ is the set of observed entries. The facade image data of [3] was used here as an example. Models (8) and (9) were solved to high accuracy by the solver LRTC [3]. For each model, we measured and recorded the relative error of the iterate $\mathcal{X}^k$,

$\mathrm{relerr}(\mathcal{X}^k) := \|\mathcal{X}^k - \mathcal{T}\|_F / \|\mathcal{T}\|_F.$   (10)

The relative errors are depicted as functions of the number of iterations in Figure 1(a).
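The LRTC solver and the facade data are not reproduced here; the snippet below only illustrates, on synthetic data, the completion setup (observing a subset $\Omega$ of entries) and the relative error recorded in (10). The mask, the stand-in data, and the placeholder "recovery" are all illustrative assumptions.

    import numpy as np

    T = np.random.randn(20, 20, 3)            # stand-in for the ground-truth image tensor
    omega = np.random.rand(*T.shape) < 0.4    # observe roughly 40% of the entries
    X_rec = np.where(omega, T, T.mean())      # placeholder for a recovered tensor
    rel_err = np.linalg.norm(X_rec - T) / np.linalg.norm(T)   # relative error as in (10)
    print(rel_err)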

Fig. 1: Facade in-painting. (a) Recovery relative errors; (b) original image; (c) image recovered via (1); (d)–(f) images recovered via (4) with different values of $\alpha$.

Motivated by the above example, we show in this paper that problem (4) either recovers $\mathcal{X}^0$ exactly or returns an approximation of it nearly as good as the solution of problem (1). Specifically, we show that several properties of $\mathcal{A}$, such as the null-space property (a simple condition used in, e.g., [3, 10, 11, 12, 13]), the restricted isometry property [14], and the spherical section property [15], which have been used in the recovery guarantees for vectors and matrices, can also guarantee tensor recovery by models (1) and (4).

Even though $\mathcal{X}^0$ is not known when $\alpha$ is set, $\max_i \|X^0_{(i)}\|_2$ is often easy to estimate. When such an estimate is not available, using the inequalities $\|X^0_{(i)}\|_2 \leq \|X^0_{(i)}\|_F = \|\mathcal{X}^0\|_F$, one gets the more conservative formula $\alpha \geq 10\|\mathcal{X}^0\|_F$. Furthermore, when $\mathcal{A}$ satisfies the RIP, one has $\|\mathcal{X}^0\|_F \leq \|b\|_2/\sqrt{1-\delta}$ for some $\delta < 1$; hence, one has the option to use the even more conservative formula $\alpha \geq 10\|b\|_2/\sqrt{1-\delta}$.
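A small sketch of the rule quoted in the abstract, $\alpha \geq 10 \max_i \|X^0_{(i)}\|_2$, together with the more conservative RIP-based fallback reconstructed above; the fallback formula and the function name are assumptions for illustration, not part of the original text.

    import numpy as np

    def unfold(X, mode):
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def choose_alpha(X0=None, b=None, delta=None):
        if X0 is not None:
            # rule from the abstract: alpha >= 10 * max_i ||X0_(i)||_2 (spectral norm)
            return 10.0 * max(np.linalg.norm(unfold(X0, n), 2) for n in range(X0.ndim))
        if b is not None and delta is not None:
            # conservative fallback, assuming ||X0||_F <= ||b||_2 / sqrt(1 - delta) under the RIP
            return 10.0 * np.linalg.norm(b) / np.sqrt(1.0 - delta)
        raise ValueError("provide either X0 or both b and delta")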

IV Tensor recovery guarantees

This section establishes recovery guarantees for the original and augmented trace norm models (1) and (4). The results are based on properties of $\mathcal{A}$, including the null-space property (NSP) in Theorems 5 and 6, the restricted isometry property (RIP) [14] in Theorems 8 and 9, and the spherical section property (SSP) [15] in Theorem 10. These results adapt and generalize the work in [4].

IV-A Null space property

The wide use of the NSP for recovering sparse vectors and low-rank matrices can be found in, e.g., [10, 11, 12, 13]. In this subsection, we extend the NSP conditions on $\mathcal{A}$ to tensor recovery. Throughout this subsection, we let $\sigma_i(X)$ denote the $i$-th largest singular value of a matrix $X$, $\Sigma(X)$ denote the diagonal matrix of its singular values, and $\|X\|_2 := \sigma_1(X)$ denote the spectral norm of $X$.

We will need the following two technical lemmas for the introduction of the tensor NSP conditions.

Lemma 3

([8], Theorem 7.4.51). Let $A$ and $B$ be two matrices of the same size. Then we have

(11)

and

(12)
Lemma 4

([4], Equation (19)). Let $x$ and $y$ be two vectors of the same size. Then we have

(13)

and

(14)

Now we give an NSP-type sufficient condition for problem (1).

Theorem 5

(Tensor NSP condition for (1)). Assume $r = (r_1, \ldots, r_N)$ is fixed. Problem (1) uniquely recovers all tensors $\mathcal{X}^0$ of rank $r$ or less from the measurements $b = \mathcal{A}(\mathcal{X}^0)$ if all nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$ satisfy

(15)

Proof: Pick any tensor $\mathcal{X}^0$ of rank $r$ or less and let $b = \mathcal{A}(\mathcal{X}^0)$. For any $\mathcal{H} \in \mathrm{null}(\mathcal{A})$, we have $\mathcal{A}(\mathcal{X}^0 + \mathcal{H}) = b$. By using (11), we have

(16)

where the first inequality follows from (13). Hence, from (15) and (16), it follows that every nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$ strictly increases the objective, i.e. $\mathcal{X}^0$ is the unique minimizer of (1).

We can extend this result to problem (4) as follows.

Theorem 6

(Tensor NSP condition for (4)). Assume $r = (r_1, \ldots, r_N)$ is fixed. Problem (4) uniquely recovers all tensors $\mathcal{X}^0$ of rank $r$ or less from the measurements $b = \mathcal{A}(\mathcal{X}^0)$ if all nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$ satisfy

(17)

Proof: Pick any tensor $\mathcal{X}^0$ of rank $r$ or less and let $b = \mathcal{A}(\mathcal{X}^0)$. For any nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$, we have $\mathcal{A}(\mathcal{X}^0 + \mathcal{H}) = b$. Thus

(18)

where the first inequality follows from (11) and (12), and the second inequality follows from (13) and (14). Hence, from (18) and (17), it follows that any nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$ leads to a strictly worse objective than $\mathcal{X}^0$. That is, $\mathcal{X}^0$ is the unique solution to problem (4).

Remark 1

For any finite $\alpha$, (17) is stronger than (15) due to the extra term. Since various uniform recovery results establish conditions that guarantee (15), one can tighten these conditions so that they guarantee (17) and thus uniform recovery by problem (4). How much tighter these conditions have to be depends on the value of $\alpha$.

IV-B Tensor restricted isometry property

In this subsection, we generalize the RIP-based guarantees to the tensor case and show that the RIP of $\mathcal{A}$ guarantees exact recovery by (4).

Definition 1

(Tensor RIP). Let $K_r := \{\mathcal{X} : \mathrm{rank}(\mathcal{X}) \leq r\}$. The RIP constant $\delta_r$ of a linear operator $\mathcal{A}$ is the smallest value such that

$(1 - \delta_r)\|\mathcal{X}\|_F^2 \leq \|\mathcal{A}(\mathcal{X})\|_2^2 \leq (1 + \delta_r)\|\mathcal{X}\|_F^2$   (19)

holds for all $\mathcal{X} \in K_r$.
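The definition can be probed numerically: for a Gaussian measurement operator, the ratios $\|\mathcal{A}(\mathcal{X})\|_2^2 / \|\mathcal{X}\|_F^2$ over random low-rank tensors stay close to 1, and the RIP constant $\delta_r$ bounds their worst-case deviation. The sketch below only samples random rank-$(2,2,2)$ tensors, so it gives an empirical spread rather than $\delta_r$ itself; all names are ours.

    import numpy as np

    def random_low_rank_tensor(shape, ranks):
        # Tucker-form sample: every mode-n rank is at most ranks[n]
        X = np.random.randn(*ranks)
        for n, (In, rn) in enumerate(zip(shape, ranks)):
            U = np.random.randn(In, rn)
            X = np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)
        return X

    shape, ranks, m = (8, 8, 8), (2, 2, 2), 300
    M = np.random.randn(m, int(np.prod(shape))) / np.sqrt(m)   # A(X) = M @ vec(X)

    ratios = []
    for _ in range(200):
        X = random_low_rank_tensor(shape, ranks)
        ratios.append(np.linalg.norm(M @ X.ravel()) ** 2 / np.linalg.norm(X) ** 2)
    print(min(ratios), max(ratios))    # deviation of these ratios from 1 is bounded by delta_r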

The following recovery theorems will characterize the power of the tensor restricted isometry constants. The first theorem generalizes Lemma 1.3 in [14] and Theorem 3.2 in [5] to low-rank tensor recovery.

Theorem 7

Suppose $\delta_{2r} < 1$ for some $r = (r_1, \ldots, r_N)$, where $2r := (2r_1, \ldots, 2r_N)$. Then $\mathcal{X}^0$ is the only tensor of rank at most $r$ satisfying $\mathcal{A}(\mathcal{X}) = \mathcal{A}(\mathcal{X}^0)$.

Proof: Assume, on the contrary, that there exists a tensor $\mathcal{X} \neq \mathcal{X}^0$ of rank $r$ or less satisfying $\mathcal{A}(\mathcal{X}) = \mathcal{A}(\mathcal{X}^0)$. Then $\mathcal{Z} := \mathcal{X} - \mathcal{X}^0$ is a nonzero tensor of rank at most $2r$, and $\mathcal{A}(\mathcal{Z}) = 0$. But then we would have $0 = \|\mathcal{A}(\mathcal{Z})\|_2^2 \geq (1 - \delta_{2r})\|\mathcal{Z}\|_F^2 > 0$, which is a contradiction.

The proof of the preceding theorem is identical to the argument given by Candès and Tao and is an immediate consequence of our definition of the RIP constant $\delta_{2r}$. No adjustment is necessary in the transition from sparse vectors and low-rank matrices to low-rank tensors. The key property used is the sub-additivity of the rank (checked numerically in the sketch below). Adapting the results in [4, 6], we give the uniform recovery conditions for (1) below.
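The sub-additivity used here, $\mathrm{rank}_n(\mathcal{X} + \mathcal{Y}) \leq \mathrm{rank}_n(\mathcal{X}) + \mathrm{rank}_n(\mathcal{Y})$ for every mode $n$, follows from the matrix case because the unfolding of a sum is the sum of the unfoldings. A quick numerical check (our helper names, three-way tensors built as sums of rank-one terms):

    import numpy as np

    def unfold(X, mode):
        return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

    def n_ranks(X):
        return [np.linalg.matrix_rank(unfold(X, n)) for n in range(X.ndim)]

    def random_cp(shape, r):
        # sum of r rank-one tensors, so every mode-n rank is at most r
        return sum(np.einsum('i,j,k->ijk', *[np.random.randn(s) for s in shape])
                   for _ in range(r))

    X, Y = random_cp((6, 6, 6), 2), random_cp((6, 6, 6), 3)
    for rx, ry, rs in zip(n_ranks(X), n_ranks(Y), n_ranks(X + Y)):
        assert rs <= rx + ry    # sub-additivity of the n-rank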

Theorem 8

(RIP condition for exact recovery by (1)). Let $\mathcal{X}^0$ be a tensor of rank $r$ or less. Problem (1) exactly recovers $\mathcal{X}^0$ from the measurements $b = \mathcal{A}(\mathcal{X}^0)$ if $\mathcal{A}$ satisfies the RIP with , for .

Proof: For any nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$, let the HOSVD of $\mathcal{H}$ be given as in (7). From Proposition 3.7 of [16] we have , and thus . We decompose $\mathcal{H}$ as , where

(20)

and is the tensor obtained by fixing the $n$-th index to the index set , and the others to zero. Similarly we have . Due to the unitary invariance of the Frobenius norm, we have , , …. Let and , , ,…, where $\sigma_i^{(n)}$ is the $i$-th largest mode-$n$ singular value. From the definition of the HOSVD and Lemma 2, we have

(21)

By the mean inequality, one has . Assume that with some . Then we have .

First, from Lemmas 2.1 and 2.2 in [6] and (20), we have

(22)

and

(23)

From (23) we have

(24)

So we have

(25)

and

(26)

Furthermore, by (22) and (24), we have

(27)

Since , we have . By the above equations, we have

(28)

Hence, let

(29)

We have a quadratic polynomial of in the right-hand side of the above inequality. By calculus, this quadratic polynomial achieves its maximal value at . Therefore we obtain , where

(30)

, then , we get , which is

(31)

If for all , we have (31), then we get (15).

Next we carry out a similar study for the augmented model (4).

Theorem 9

(RIP condition for exact recovery by (4)). Let $\mathcal{X}^0$ be a tensor of rank $r$ or less. The augmented model (4) exactly recovers $\mathcal{X}^0$ from the measurements $b = \mathcal{A}(\mathcal{X}^0)$ if $\mathcal{A}$ satisfies the RIP with and .

The proof of Theorem 8 establishes that any nonzero $\mathcal{H} \in \mathrm{null}(\mathcal{A})$ satisfies . Hence, if , noticing , we have

(32)

For , we obtain , which proves the theorem.

Remark 2

Different values of