I Introduction
Completion is a technique for filling in the missing elements of incomplete data using the values of the reference (available) elements and structural assumptions (priors) about the data. We consider a general exact matrix/tensor completion problem as follows:

(1) $\min_{\mathcal{X}} \ f(\mathcal{X})$ s.t. $\mathcal{X} \circledast \mathcal{Q} = \mathcal{T} \circledast \mathcal{Q}$,

where $\mathcal{T}$ and $\mathcal{X}$ are the input and output $N$th-order tensors, respectively, a cost function, $f$, is used to evaluate (prior) structural assumptions, and $\mathcal{Q} \in \{0,1\}^{I_1 \times \cdots \times I_N}$ is an index tensor that represents the missing and available elements of $\mathcal{T}$ as $q_i = 0$ and $q_i = 1$, respectively. A support set, $\Omega$, is defined as $\Omega = \{ i \mid q_i = 1 \}$. When the missing and available elements are independent, completion is impossible. However, most real-world data have redundant properties that can be exploited for completion, such as symmetry, repetition, and sparsity. When the cost function is convex and proximable, Problem (1) can be solved by convex optimization methods such as the alternating direction method of multipliers (ADMM) [3] and the primal-dual splitting (hybrid gradient) method (PDS/PDHG) [10]. In this paper, we refer to PDS/PDHG as PDS for simplicity. For example, Problem (1) with the matrix/tensor nuclear norm [6, 7, 4, 24, 25], total variation (TV) [16, 11], and both cost functions [18] has been studied.

Next, we consider an 'inexact' matrix/tensor completion problem as follows:
(2) $\min_{\mathcal{X}} \ f(\mathcal{X}) + \lambda \, d(\mathcal{T} \circledast \mathcal{Q}, \mathcal{X} \circledast \mathcal{Q})$,

where $d(\cdot,\cdot)$ is a distance measure between $\mathcal{T} \circledast \mathcal{Q}$ and $\mathcal{X} \circledast \mathcal{Q}$, and $\lambda > 0$ is a tradeoff parameter between the prior term and the distance term. We refer to this as the 'Lagrange form'. When we assume a Gaussian distribution for the noise model, Problem (2) with $d(\mathcal{A},\mathcal{B}) = \frac{1}{2}\|\mathcal{A}-\mathcal{B}\|_F^2$ can be considered for completion and denoising. In a similar way, Problem (2) with $d(\mathcal{A},\mathcal{B}) = \|\mathcal{A}-\mathcal{B}\|_1$ assumes a Laplace distribution for the noise model. When both functions $f$ and $d$ are convex and proximable, it can be solved by convex optimization. For example, Problem (2) with the matrix/tensor nuclear norm [26, 13, 20], TV [29, 30, 17], and both cost functions [31] has been studied.

Here, we consider the 'inequality form' of Problem (2) as follows:
(3) $\min_{\mathcal{X}} \ f(\mathcal{X})$ s.t. $d(\mathcal{T} \circledast \mathcal{Q}, \mathcal{X} \circledast \mathcal{Q}) \leq \delta$,

where $\delta \geq 0$ is a noise threshold parameter. Figure 1 illustrates Problems (2) and (3) and their solutions for different values of $\lambda$ and $\delta$. In the Lagrange form, the minimal point of the convex penalty $f$ is the solution for $\lambda \rightarrow 0$, the exact-completion solution is obtained for $\lambda \rightarrow \infty$, and the solutions for intermediate values of $\lambda$ draw a curve connecting these two solution points. Generally, a good tradeoff exists on the curve, and it may be the point projected onto the curve from the unknown original tensor. However, the optimal value of $\lambda$ is unknown and has to be tuned over the wide range $(0, \infty)$. On the other hand, the range of $\delta$ can be much narrower in the inequality form when we assume the noise standard deviation $\sigma$ is known. Let $\delta_\epsilon$ denote the expected noise magnitude (e.g., $\delta_\epsilon = \sigma\sqrt{|\Omega|}$ for Gaussian noise, and the expected $\ell_1$-norm of the noise for Laplace noise); then the optimal value of $\delta$ may exist in $(0, \delta_\epsilon]$. If the directions of the additive noise tensor and the gradient of $f$ at the original tensor are similar, then the optimal $\delta$ would be near $\delta_\epsilon$. By contrast, if these directions are very different, the optimal $\delta$ would be small and far from $\delta_\epsilon$.
Problems (2) and (3) are convertible with corresponding values of $\lambda$ and $\delta$; however, these corresponding values are difficult to know in advance. Figure 2 shows the relationship between $\lambda$ and $\delta$ in a tensor completion problem with nuclear-norm and Frobenius-norm minimization. Both problems are linked by values of $\lambda$ and $\delta$ that have a one-to-one correspondence.
The problem here is that the inequality form, unlike the Lagrange form, is not easy to solve directly. In [5], the inequality form is solved by iteratively optimizing the Lagrange form to tune the optimal tradeoff parameter $\lambda$ corresponding to the given $\delta$. Thus, it is a critical issue to solve the inequality-constrained optimization problem by running convex optimization only once.
In this study, we propose a new optimization algorithm based on PDS [10] for convex optimization problems consisting of proximable functions and noise inequality constraints. For this purpose, we show that the proximal mappings of the noise inequality constraints based on Gaussian and Laplace distributions, which are not trivial, can be obtained by analytical calculation. Furthermore, to accelerate the optimization, we propose a new stepsize adaptation method for the PDS algorithm. As an application, we define the cost function as a composition of the tensor nuclear norm and generalized TV, and conduct extensive experiments to show the advantages of the proposed methods.
Note that this work is an extension of prior studies presented at conferences [36, 35]. The new contributions of this study are as follows: a generalized formulation and detailed explanation of the proposed models, the derivation of a new proximal mapping for the Laplace noise inequality, a new stepsize adaptation method for acceleration, and additional experiments.
The remainder of this paper is organized as follows: In Section II, prior studies on matrix and tensor completion methods are reviewed. In Sections III and IV, we propose a new model for tensor completion based on low-rank and TV priors, and its optimization algorithm using a PDS approach. In Section V, we demonstrate the advantages of the proposed method over selected state-of-the-art methods using color images, movies, and 3D volumetric images. Lastly, we state the conclusions in Section VI.
I-A Notations
The notations used in this paper follow several rules. A vector, a matrix, and a tensor are denoted by a bold lowercase letter, $\boldsymbol{a}$, a bold uppercase letter, $\boldsymbol{A}$, and a bold calligraphic letter, $\boldsymbol{\mathcal{A}}$, respectively. An $N$th-order tensor, $\boldsymbol{\mathcal{A}} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, can be transformed into vector and matrix forms, which are denoted using the same character, $\boldsymbol{a}$ and $\boldsymbol{A}_{(n)}$ for $n \in \{1, \dots, N\}$, respectively. An element of $\boldsymbol{\mathcal{A}}$ is denoted by $a_{i_1 i_2 \cdots i_N}$ or $a_i$. The operator $\circledast$ represents the Hadamard (elementwise) product, defined as $[\boldsymbol{\mathcal{A}} \circledast \boldsymbol{\mathcal{B}}]_i = a_i b_i$.

II Review of prior works in matrix and tensor completion
II-A Matrix completion models
First, we consider a matrix completion problem as follows:

(4) $\min_{\boldsymbol{X}} \ f(\boldsymbol{X})$ s.t. $\boldsymbol{X} \circledast \boldsymbol{Q} = \boldsymbol{T} \circledast \boldsymbol{Q}$,

which is the case of $N = 2$ in (1). For low-rank matrix completion, ideally, we want to set $f(\boldsymbol{X}) = \mathrm{rank}(\boldsymbol{X})$; however, this is NP-hard [14]. Thus, its convex relaxation, i.e., the nuclear norm, is used [28]:

(5) $\|\boldsymbol{X}\|_* = \sum_r \sigma_r(\boldsymbol{X})$,

where $\sigma_r(\boldsymbol{X})$ is the $r$th largest singular value of $\boldsymbol{X}$.

In [5], a noisy case has been discussed as
(6) $\min_{\boldsymbol{X}} \ \|\boldsymbol{X}\|_*$ s.t. $\|(\boldsymbol{X} - \boldsymbol{T}) \circledast \boldsymbol{Q}\|_F \leq \delta$.

To solve Problem (6), an algorithm has been proposed that requires solving

(7) $\min_{\boldsymbol{X}} \ \lambda \|\boldsymbol{X}\|_* + \frac{1}{2}\|(\boldsymbol{X} - \boldsymbol{T}) \circledast \boldsymbol{Q}\|_F^2$

multiple times to determine an appropriate value of $\lambda$ such that $\|(\boldsymbol{X} - \boldsymbol{T}) \circledast \boldsymbol{Q}\|_F \approx \delta$. As iterative calculations of the singular value decomposition are required to solve Problem (7) [26], the algorithm is computationally expensive. We refer to this algorithm as low-rank matrix completion with noise (LRMCn).

There are several studies on applications to image deblurring, denoising, and interpolation [29, 34, 16], where the cost function is given by TV. The standard (isotropic) TV for a matrix is defined by

(8) $\|\boldsymbol{X}\|_{\mathrm{TV}} = \sum_{i,j} \|\boldsymbol{g}_{ij}\|_2$,

(9) $\boldsymbol{g}_{ij} = [x_{i+1,j} - x_{i,j}, \ x_{i,j+1} - x_{i,j}]^T$.
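A minimal numerical sketch of the isotropic matrix TV in (8)-(9), using forward differences with zero padding at the boundary (the function name and the padding convention are our own assumptions):

```python
import numpy as np

def matrix_tv(X):
    """Isotropic matrix TV: sum over pixels of the l2-norm of the
    forward-difference (gradient) vector, zero at the border."""
    dv = np.diff(X, axis=0)                       # vertical differences
    dh = np.diff(X, axis=1)                       # horizontal differences
    dv = np.vstack([dv, np.zeros((1, X.shape[1]))])
    dh = np.hstack([dh, np.zeros((X.shape[0], 1))])
    return np.sum(np.sqrt(dv**2 + dh**2))

# a single impulse: two unit differences plus one diagonal sqrt(2) term
X = np.array([[0., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])
print(matrix_tv(X))
```

For the impulse image above, the TV value is $2 + \sqrt{2}$, which matches the elementwise $\ell_2$ aggregation of the two forward differences.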
Problem (4) with the nuclear norm and TV is discussed in [31], in which it was proposed to alternately minimize the nuclear norm using singular value thresholding and the TV term using gradient descent. However, standard gradient-based optimization is not appropriate here because the nuclear norm and TV are not differentiable functions. An alternative efficient optimization approach, referred to as 'proximal splitting', is gaining attention [10, 2].
II-B Tensor completion models
When $N \geq 3$ in (1), the problem is not a simple extension of matrix completion because of special properties of tensors. For example, there are two types of tensor ranks, i.e., the canonical polyadic (CP) rank and the Tucker rank [22]. As the CP rank has several difficult properties, low-Tucker-rank based completion is relatively well studied.
In [24, 25], a case of exact tensor completion, i.e., Problem (1) where the cost function is given by a tensor nuclear norm, has been discussed, in which the tensor nuclear norm, $\|\boldsymbol{\mathcal{X}}\|_*$, is defined by

(10) $\|\boldsymbol{\mathcal{X}}\|_* = \sum_{n=1}^N \alpha_n \|\boldsymbol{X}_{(n)}\|_*$,

where $\alpha_n \geq 0$ represents the weight parameter for the individual tensor modes, and $\boldsymbol{X}_{(n)}$ is the $n$th-mode unfolded matrix of tensor $\boldsymbol{\mathcal{X}}$. ADMM [3] has been employed for its minimization problem. Furthermore, its noisy scenario has been discussed in [13], which is formulated as Problem (2) with the tensor nuclear norm. We refer to this method as low-$n$-rank tensor completion (LNRTC).
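The weighted sum of nuclear norms of the mode-$n$ unfoldings in (10) can be sketched as follows (`unfold` and `tensor_nuclear_norm` are our own helper names, not from the cited works):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_nuclear_norm(T, alpha):
    """Weighted sum of nuclear norms of the mode-n unfoldings (Eq. (10))."""
    return sum(a * np.linalg.norm(unfold(T, n), 'nuc')
               for n, a in enumerate(alpha))

# a tensor with one nonzero entry: every unfolding is rank-1 with
# singular value 3, so the weighted sum with weights summing to 1 is 3
T = np.zeros((2, 3, 4))
T[0, 0, 0] = 3.0
print(tensor_nuclear_norm(T, alpha=[1/3, 1/3, 1/3]))
```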
In [17], a case of Problem (2) in which the cost function is given by a generalized TV (GTV) has been discussed, where GTV is defined by the sum of the generalized matrix TVs of the individual mode-unfolded matrices of a tensor as

(11) $\|\boldsymbol{\mathcal{X}}\|_{\mathrm{GTV}} = \sum_{n=1}^N \beta_n \|\boldsymbol{X}_{(n)}\|_{\mathrm{GTV}}$,

where $\beta_n \geq 0$ represents the weight parameter for the individual tensor modes, and the GTV norm for a matrix $\boldsymbol{X}$ is defined by

(12) $\|\boldsymbol{X}\|_{\mathrm{GTV}} = \sum_{d \in \mathcal{D}} w_d \|\boldsymbol{D}_d \boldsymbol{x}\|_1$,

where $w_d \geq 0$ represents the weight parameters, and $\boldsymbol{D}_d$ is the differential operator for direction $d$, e.g., the vertical, horizontal, and diagonal directions. This convex optimization problem is solved using ADMM in [17]. We typically consider only the vertical and horizontal directions for the standard matrix TV (8). In contrast, a larger set of directions is considered for GTV. When we consider only the vertical and horizontal directions with equal weights for all $d$ in GTV, it reduces to the anisotropic ($\ell_1$-based) matrix TV. In this case, GTV is anisotropic with respect to the modes of the tensor, which leads to corruption of diagonal edges.
Note that we can consider a simpler, straightforward, and isotropic tensorial extension of the matrix TV, which is defined by

(13) $\|\boldsymbol{\mathcal{X}}\|_{\mathrm{TV}} = \sum_i \|\boldsymbol{g}_i\|_2^{(\boldsymbol{w})}$,

(14) $\boldsymbol{g}_i = \left[ [\boldsymbol{\mathcal{D}}_1 \boldsymbol{\mathcal{X}}]_i, \dots, [\boldsymbol{\mathcal{D}}_N \boldsymbol{\mathcal{X}}]_i \right]^T$,

where $\|\boldsymbol{g}\|_2^{(\boldsymbol{w})} = \sqrt{\sum_n w_n^2 g_n^2}$ is a weighted $\ell_2$-norm, and the $n$th-mode partial differential operator $\boldsymbol{\mathcal{D}}_n$ is defined by the forward difference along the $n$th mode, $[\boldsymbol{\mathcal{D}}_n \boldsymbol{\mathcal{X}}]_{i_1 \cdots i_N} = x_{i_1 \cdots (i_n+1) \cdots i_N} - x_{i_1 \cdots i_n \cdots i_N}$. Instead of $\|\boldsymbol{\mathcal{X}}\|_{\mathrm{GTV}}$, we consider $\|\boldsymbol{\mathcal{X}}\|_{\mathrm{TV}}$ in this paper.
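A numerical sketch of the isotropic tensorial TV in (13)-(14): at every element, the $N$ forward differences (one per mode, zero at the boundary) are aggregated by a weighted $\ell_2$-norm and then summed. The function name and the zero-padding convention are our own assumptions:

```python
import numpy as np

def tensor_tv(T, w):
    """Isotropic tensorial TV (Eqs. (13)-(14) style): weighted l2-norm of
    the N mode-wise forward differences at each element, summed over all
    elements; boundary differences are taken as zero."""
    sq = np.zeros_like(T)
    for n, wn in enumerate(w):
        d = np.diff(T, axis=n)                       # forward difference, mode n
        pad = [(0, 1) if k == n else (0, 0) for k in range(T.ndim)]
        sq += (wn * np.pad(d, pad)) ** 2             # zero at the boundary
    return np.sum(np.sqrt(sq))

# 3D impulse: three unit gradient magnitudes plus one sqrt(3) term
T = np.zeros((3, 3, 3))
T[1, 1, 1] = 1.0
print(tensor_tv(T, w=[1.0, 1.0, 1.0]))
```

For the impulse tensor above the value is $3 + \sqrt{3}$, reflecting the isotropic coupling of the three mode-wise differences at the impulse location.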
III Proposed model
In this section, we propose a new model for tensor completion and denoising that uses a tensor nuclear norm and TV simultaneously. The proposed optimization problem is given by

(15) $\min_{\boldsymbol{\mathcal{X}}} \ \|\boldsymbol{\mathcal{X}}\|_{\mathrm{TV}} + \lambda \sum_{n=1}^N \alpha_n \|\boldsymbol{X}_{(n)}\|_*$

s.t. $\boldsymbol{\mathcal{X}} \in [x_{\min}, x_{\max}]^{I_1 \times \cdots \times I_N}, \quad d(\boldsymbol{\mathcal{T}} \circledast \boldsymbol{\mathcal{Q}}, \boldsymbol{\mathcal{X}} \circledast \boldsymbol{\mathcal{Q}}) \leq \delta$,

where $\lambda$ and $\alpha_n$ are the weight parameters between the TV and nuclear norm terms, and the first constraint in (15) imposes that all values of the output tensor be included in a range, $[x_{\min}, x_{\max}]$. The first and second constraints are convex, and their indicator functions are given by

(16) $i_R(\boldsymbol{\mathcal{X}}) = \begin{cases} 0 & \boldsymbol{\mathcal{X}} \in [x_{\min}, x_{\max}]^{I_1 \times \cdots \times I_N}, \\ \infty & \text{otherwise}, \end{cases}$

(17) $i_\delta(\boldsymbol{\mathcal{X}}) = \begin{cases} 0 & d(\boldsymbol{\mathcal{T}} \circledast \boldsymbol{\mathcal{Q}}, \boldsymbol{\mathcal{X}} \circledast \boldsymbol{\mathcal{Q}}) \leq \delta, \\ \infty & \text{otherwise}. \end{cases}$
Using $i_R$ and $i_\delta$, tensor completion problem (15) can be rewritten as

(18) $\min_{\boldsymbol{\mathcal{X}}} \ \|\boldsymbol{\mathcal{X}}\|_{\mathrm{TV}} + \lambda \sum_{n=1}^N \alpha_n \|\boldsymbol{X}_{(n)}\|_* + i_R(\boldsymbol{\mathcal{X}}) + i_\delta(\boldsymbol{\mathcal{X}})$.
As these four functions are not differentiable, traditional gradient-based optimization algorithms, e.g., Newton's method, cannot be applied. In Section IV, we introduce and apply an efficient approach, referred to as PDS, to solve the proposed optimization problem (15).
III-A Characterization of the proposed model
In this section, we explain the relationship between the proposed model and prior works introduced in Section II. There are three characterizations of the proposed model.
First, when the TV term is removed, $N = 2$, and the Gaussian noise inequality is used, the proposed model can be characterized as LRMCn [5]. In contrast with LRMCn, which solves several convex optimization problems to tune $\lambda$, the proposed method can obtain its solution by solving only one convex optimization problem, and it provides a tensorial extension. Moreover, the proposed model includes the Laplace distribution as a noise model, unlike LRMCn.
Second, when the TV term is removed and the Gaussian noise inequality is used for general $N$, the proposed method can be characterized as LNRTC [13]. In contrast with LNRTC, which employs ADMM for solving a problem of type (2), the proposed method employs the PDS algorithm for solving a problem of type (3).
Third, when the nuclear norm term is removed (i.e., $\lambda = 0$), the proposed model can be characterized as an isotropic version of GTV [17]. In contrast with GTV, in which the problem is solved using ADMM, which requires matrix inversion through the fast Fourier transform (FFT) and inverse FFT, the proposed method does not need to consider matrix inversion. Furthermore, the proposed method tunes the value of $\delta$ instead of $\lambda$.

Additionally, our model differs from a recent work proposed in [21], which applies constrained fixed-rank matrix factorization models to the individual mode-matricizations of the same tensor in a noiseless scenario. That problem is nonconvex, and it is not designed as a noise reduction model.
IV Optimization
Currently, two optimization methods, ADMM and PDS, have attracted attention in signal/image processing [3, 12, 10]. The two methods can be used selectively: for example, ADMM can be efficiently used for low-rank matrix/tensor completion [23, 9], whereas PDS can be efficiently used for TV regularization [38, 8]. The main difference between them is that TV regularization may require a large matrix inversion in ADMM; by contrast, this inversion can be avoided in PDS. Thus, PDS has been used for many TV regularization methods, such as TV denoising/deconvolution [38], vectorial TV regularization [27], and total generalized variation in diffusion tensor imaging [33].
IV-A Primal-dual splitting algorithm
In this section, we introduce the basics of the PDS algorithm. The PDS algorithm [10] is a framework used to split an optimization problem that includes non-differentiable functions into several sub-optimization processes using proximal operators. First, we consider the following convex optimization problem:

(19) $\min_{\boldsymbol{x}} \ g(\boldsymbol{x}) + h(\boldsymbol{A}\boldsymbol{x})$,

where $g$ and $h$ are general convex functions, and $\boldsymbol{A}$ is a linear operator (matrix). From the definition of the convex conjugate,

(20) $h^*(\boldsymbol{y}) = \max_{\boldsymbol{z}} \ \boldsymbol{z}^T \boldsymbol{y} - h(\boldsymbol{z})$,

the following saddle-point problem can be derived:

(21) $\min_{\boldsymbol{x}} \max_{\boldsymbol{y}} \ g(\boldsymbol{x}) + \boldsymbol{y}^T \boldsymbol{A} \boldsymbol{x} - h^*(\boldsymbol{y})$,

where $h^*$ is the convex conjugate of $h$. In the PDS method, we focus on solving (21) instead of (19).
For optimality of $(\boldsymbol{x}, \boldsymbol{y})$, at least the following conditions must be satisfied:

(22) $\boldsymbol{0} \in \partial g(\boldsymbol{x}) + \boldsymbol{A}^T \boldsymbol{y}$,

(23) $\boldsymbol{0} \in \partial h^*(\boldsymbol{y}) - \boldsymbol{A} \boldsymbol{x}$,

where $\partial$ stands for the subgradient of a function, and we consider primal and dual residual vectors as some $\boldsymbol{p} \in \partial g(\boldsymbol{x}) + \boldsymbol{A}^T \boldsymbol{y}$ and $\boldsymbol{d} \in \partial h^*(\boldsymbol{y}) - \boldsymbol{A} \boldsymbol{x}$, respectively.
Based on the subgradient descent/ascent method, natural update rules are given by

(24) $\boldsymbol{x} \leftarrow \boldsymbol{x} - \gamma_1 \boldsymbol{p}$,

(25) $\boldsymbol{y} \leftarrow \boldsymbol{y} - \gamma_2 \boldsymbol{d}$,

where $\boldsymbol{p}$ and $\boldsymbol{d}$ are the update directions, and $\gamma_1 > 0$ and $\gamma_2 > 0$ are stepsize parameters.
When $g$ and $h^*$ are proximable functions, the proximal gradient method can be introduced as

(26) $\boldsymbol{x}^{k+1} = \mathrm{prox}_{\gamma_1 g}(\boldsymbol{x}^k - \gamma_1 \boldsymbol{A}^T \boldsymbol{y}^k)$,

(27) $\boldsymbol{y}^{k+1} = \mathrm{prox}_{\gamma_2 h^*}(\boldsymbol{y}^k + \gamma_2 \boldsymbol{A} \boldsymbol{x}^{k+1})$,

where the proximal mapping is defined by

(28) $\mathrm{prox}_{\gamma f}(\boldsymbol{x}) = \arg\min_{\boldsymbol{z}} \ f(\boldsymbol{z}) + \frac{1}{2\gamma}\|\boldsymbol{z} - \boldsymbol{x}\|_2^2$.

Note that if we put $\hat{\boldsymbol{x}} = \mathrm{prox}_{\gamma f}(\boldsymbol{x})$, then we have

(29) $\frac{1}{\gamma}(\boldsymbol{x} - \hat{\boldsymbol{x}}) \in \partial f(\hat{\boldsymbol{x}})$.
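As a concrete instance of the proximal mapping (28): for $f = \gamma\|\cdot\|_1$, the minimizer is elementwise soft-thresholding. A minimal numerical check against the definition (the helper names are ours):

```python
import numpy as np

def prox_l1(x, gamma):
    """Proximal mapping (28) of f = gamma*||.||_1: soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

x = np.array([3.0, -0.2, 1.0])
p = prox_l1(x, 0.5)

# sanity check: p should beat random perturbations on the objective in (28),
# since the objective is strictly convex
obj = lambda z: 0.5 * np.abs(z).sum() + 0.5 * np.sum((z - x) ** 2)
rng = np.random.default_rng(1)
assert all(obj(p) <= obj(p + 0.01 * rng.standard_normal(3)) for _ in range(100))
print(p)
```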
The optimization algorithm using (26) and (27) is called the "Arrow-Hurwicz method", which is the original version of the PDS method. A generalization of the PDS method is given by

(30) $\boldsymbol{x}^{k+1} = \mathrm{prox}_{\gamma_1 g}(\boldsymbol{x}^k - \gamma_1 \boldsymbol{A}^T \boldsymbol{y}^k)$,

(31) $\boldsymbol{y}^{k+1} = \mathrm{prox}_{\gamma_2 h^*}\!\left(\boldsymbol{y}^k + \gamma_2 \boldsymbol{A}(\boldsymbol{x}^{k+1} + \theta(\boldsymbol{x}^{k+1} - \boldsymbol{x}^k))\right)$,

where $\theta \in [0, 1]$ is a scalar. Usually, $\theta = 1$ is chosen because fast convergence is theoretically proven for $\theta = 1$, in contrast to $\theta = 0$ [8]. Therefore, formulations (30)-(31) are currently recognized as the standard version of the PDS method. Nonetheless, the Arrow-Hurwicz method is practically competitive with the standard PDS method; an experimental report on this exists in [8].
Using the Moreau decomposition rule,

(32) $\boldsymbol{x} = \mathrm{prox}_{\gamma h}(\boldsymbol{x}) + \gamma \, \mathrm{prox}_{h^*/\gamma}(\boldsymbol{x}/\gamma)$,

update rule (31) can be rewritten as

(33) $\tilde{\boldsymbol{y}} = \boldsymbol{y}^k + \gamma_2 \boldsymbol{A}(\boldsymbol{x}^{k+1} + \theta(\boldsymbol{x}^{k+1} - \boldsymbol{x}^k))$,

(34) $\boldsymbol{y}^{k+1} = \tilde{\boldsymbol{y}} - \gamma_2 \, \mathrm{prox}_{h/\gamma_2}(\tilde{\boldsymbol{y}}/\gamma_2)$.
If the proximal mapping of $h^*$ is more difficult to calculate or derive than that of $h$, then update rule (33)-(34) is a convenient choice for implementation.
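The Moreau decomposition (32) can be verified numerically. Taking $h = \|\cdot\|_1$, whose conjugate $h^*$ is the indicator of the unit $\ell_\infty$-ball, the rule yields the prox of $h^*$ (a clipping) purely from the prox of $h$ (soft-thresholding); the function names are our own:

```python
import numpy as np

def soft(x, t):
    """Prox of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_conj_l1(x, gamma):
    """Prox of gamma*h^* obtained from the prox of h via Eq. (32):
    prox_{gamma h*}(x) = x - gamma * prox_{h/gamma}(x/gamma), with h = ||.||_1.
    Since h^* is the indicator of the unit l-inf ball, this is clipping."""
    return x - gamma * soft(x / gamma, 1.0 / gamma)

x = np.array([-3.0, 0.4, 2.0])
print(prox_conj_l1(x, 0.7))   # mathematically equals clip(x, -1, 1)
```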
Thus, PDS can be applied to many convex optimization problems in which the proximal mappings of $g$ and $h$ are given as analytically computable operations. Furthermore, the above formulation can be easily extended to the composite optimization of multiple convex functions [10].
IV-B Proposed algorithm
In this section, we apply the PDS algorithm to the proposed optimization problem. Introducing dual variables $\{\boldsymbol{Y}_n\}_{n=1}^N$ for the nuclear norm terms and $\boldsymbol{W}$ for the TV term, Problem (18) can be rewritten as

(35) $\min_{\boldsymbol{x}} \ i_\delta(\boldsymbol{x}) + i_R(\boldsymbol{x}) + \lambda \sum_{n=1}^N \alpha_n \|\boldsymbol{Y}_n\|_* + \|\boldsymbol{W}\|_{1,2}$

s.t. $\boldsymbol{Y}_n = \boldsymbol{X}_{(n)} \ (n = 1, \dots, N), \quad \boldsymbol{W} = [w_1 \boldsymbol{D}_1 \boldsymbol{x}, \dots, w_N \boldsymbol{D}_N \boldsymbol{x}]$,

where $\boldsymbol{x}$ is the vectorized form of $\boldsymbol{\mathcal{X}}$, and $\boldsymbol{D}_n$ is the linear differential operator of the $n$th mode of the tensor. $\|\cdot\|_{1,2}$ is the $\ell_{1,2}$-norm of a matrix, defined as $\|\boldsymbol{W}\|_{1,2} = \sum_i \|\boldsymbol{w}_{i:}\|_2$ for a matrix $\boldsymbol{W}$ with rows $\boldsymbol{w}_{i:}$. Algorithm 1 can be derived by applying the PDS framework to Problem (35). We refer to this algorithm as the "low-rank and TV (LRTV)-PDS" algorithm.
Note that the $\ell_{1,2}$-norm, the nuclear norm, and $i_R$ are clearly proximable functions, whose calculations are given by

(36) $[\mathrm{prox}_{\gamma \|\cdot\|_{1,2}}(\boldsymbol{W})]_{i:} = \max\!\left(0, \ 1 - \frac{\gamma}{\|\boldsymbol{w}_{i:}\|_2}\right) \boldsymbol{w}_{i:}$,

(37) $\mathrm{prox}_{\gamma \|\cdot\|_*}(\boldsymbol{X}) = \boldsymbol{U} \, \mathrm{soft}_{\gamma}(\boldsymbol{\Sigma}) \, \boldsymbol{V}^T$,

(38) $\mathrm{soft}_{\gamma}(\boldsymbol{\Sigma}) = \mathrm{diag}\!\left(\max(\sigma_1 - \gamma, 0), \dots, \max(\sigma_R - \gamma, 0)\right)$,

(39) $[\mathrm{prox}_{\gamma i_R}(\boldsymbol{\mathcal{X}})]_i = \min(\max(x_i, x_{\min}), x_{\max})$,

where $\boldsymbol{X} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^T$ with singular values $\sigma_r$ for $r = 1, \dots, R$, and $\boldsymbol{U}$, $\boldsymbol{\Sigma}$, and $\boldsymbol{V}$ are the left, center-diagonal, and right matrices, respectively, of the singular value decomposition of $\boldsymbol{X}$. The proximal mappings of $i_\delta$ with the Gaussian and Laplace noise models are provided in Sections IV-C and IV-D.
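The singular value thresholding in (37)-(38) and the clipping in (39) can be sketched as follows (function names are our own):

```python
import numpy as np

def svt(X, gamma):
    """Prox of gamma*||.||_* (Eqs. (37)-(38)): soft-threshold the
    singular values and reassemble the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - gamma, 0.0)) @ Vt

def prox_range(X, xmin, xmax):
    """Prox of the range indicator i_R (Eq. (39)): elementwise clipping."""
    return np.clip(X, xmin, xmax)

X = np.diag([3.0, 1.0, 0.2])
print(svt(X, 0.5))                 # singular values 3, 1, 0.2 -> 2.5, 0.5, 0
print(prox_range(X, 0.0, 1.0))     # diagonal clipped into [0, 1]
```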
IV-C Proximal mapping of the Gaussian noise inequality
In this section, we consider the following problem:

(40) $\mathrm{prox}_{i_\delta}(\boldsymbol{\mathcal{Y}}) = \arg\min_{\boldsymbol{\mathcal{X}}} \ \frac{1}{2}\|\boldsymbol{\mathcal{X}} - \boldsymbol{\mathcal{Y}}\|_F^2$ s.t. $\|(\boldsymbol{\mathcal{X}} - \boldsymbol{\mathcal{T}}) \circledast \boldsymbol{\mathcal{Q}}\|_F \leq \delta$.

Focusing on the elements with $q_i = 0$, these elements are independent of the inequality. Thus, minimizing each element cost $\frac{1}{2}(x_i - y_i)^2$, we obtain

(41) $x_i = y_i \quad (q_i = 0)$.

Focusing on the elements with $q_i = 1$, the optimization problem is given as

(42) $\min_{\boldsymbol{x}_\Omega} \ \frac{1}{2}\|\boldsymbol{x}_\Omega - \boldsymbol{y}_\Omega\|_2^2$ s.t. $\|\boldsymbol{x}_\Omega - \boldsymbol{t}_\Omega\|_2 \leq \delta$,

where $\boldsymbol{x}_\Omega$, $\boldsymbol{y}_\Omega$, and $\boldsymbol{t}_\Omega$ are vectors consisting of all elements of $\boldsymbol{\mathcal{X}}$, $\boldsymbol{\mathcal{Y}}$, and $\boldsymbol{\mathcal{T}}$, respectively, that satisfy $q_i = 1$. The solution of (42) is given by the projection of $\boldsymbol{y}_\Omega$ onto the sphere with center $\boldsymbol{t}_\Omega$ and radius $\delta$, or by $\boldsymbol{y}_\Omega$ itself when it is inside that sphere (see Fig. 3). We can consider two cases: (a) $\|\boldsymbol{y}_\Omega - \boldsymbol{t}_\Omega\|_2 \leq \delta$, and (b) $\|\boldsymbol{y}_\Omega - \boldsymbol{t}_\Omega\|_2 > \delta$. Thus, we obtain

(43) $\boldsymbol{x}_\Omega = \begin{cases} \boldsymbol{y}_\Omega & \text{(a)}, \\ \boldsymbol{t}_\Omega + \delta \dfrac{\boldsymbol{y}_\Omega - \boldsymbol{t}_\Omega}{\|\boldsymbol{y}_\Omega - \boldsymbol{t}_\Omega\|_2} & \text{(b)}. \end{cases}$

Combining (41) and (43), we obtain

(44) $\boldsymbol{\mathcal{X}} = \boldsymbol{\mathcal{Y}} \circledast (\boldsymbol{1} - \boldsymbol{\mathcal{Q}}) + \left[ \boldsymbol{\mathcal{T}} + \frac{1}{\eta}(\boldsymbol{\mathcal{Y}} - \boldsymbol{\mathcal{T}}) \right] \circledast \boldsymbol{\mathcal{Q}}$,

and the proximal mapping of $i_\delta$ with the Gaussian noise model is given by

(45) $\mathrm{prox}_{i_\delta}(\boldsymbol{\mathcal{Y}}) = \boldsymbol{\mathcal{Y}} \circledast (\boldsymbol{1} - \boldsymbol{\mathcal{Q}}) + \left[ \boldsymbol{\mathcal{T}} + \frac{1}{\eta}(\boldsymbol{\mathcal{Y}} - \boldsymbol{\mathcal{T}}) \right] \circledast \boldsymbol{\mathcal{Q}}$,

where $\eta = \max\!\left(1, \ \|(\boldsymbol{\mathcal{Y}} - \boldsymbol{\mathcal{T}}) \circledast \boldsymbol{\mathcal{Q}}\|_F / \delta\right)$. Clearly, the computational complexity is linear with respect to the size of tensor $\boldsymbol{\mathcal{Y}}$.
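A compact sketch of the Gaussian inequality prox (41)-(45): unobserved entries pass through, and observed entries are radially projected onto the Frobenius ball around the observations (function name is our own):

```python
import numpy as np

def prox_gauss_ineq(Y, T, Q, delta):
    """Prox of the Gaussian noise inequality i_delta, Eq. (45):
    q=0 entries pass through (41); q=1 entries are projected onto the
    ball {x : ||x - t||_2 <= delta} (43)."""
    R = (Y - T) * Q
    eta = max(1.0, np.linalg.norm(R) / delta)    # = 1 when already feasible
    return Y * (1 - Q) + (T + R / eta) * Q

Y = np.array([[2.0, 5.0], [0.0, 0.0]])
T = np.zeros((2, 2))
Q = np.array([[1.0, 0.0], [1.0, 1.0]])           # entry (0,1) is missing
X = prox_gauss_ineq(Y, T, Q, delta=1.0)
print(X)    # observed part scaled onto the ball; missing entry kept at 5.0
```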
IV-D Proximal mapping of the Laplace noise inequality
In this section, we consider the following problem:

(46) $\mathrm{prox}_{i_\delta}(\boldsymbol{\mathcal{Y}}) = \arg\min_{\boldsymbol{\mathcal{X}}} \ \frac{1}{2}\|\boldsymbol{\mathcal{X}} - \boldsymbol{\mathcal{Y}}\|_F^2$ s.t. $\|(\boldsymbol{\mathcal{X}} - \boldsymbol{\mathcal{T}}) \circledast \boldsymbol{\mathcal{Q}}\|_1 \leq \delta$.

In the same way as (41), focusing on the elements with $q_i = 0$, we obtain

(47) $x_i = y_i \quad (q_i = 0)$.

Focusing on the elements with $q_i = 1$, the optimization problem is given as

(48) $\min_{\boldsymbol{x}_\Omega} \ \frac{1}{2}\|\boldsymbol{x}_\Omega - \boldsymbol{y}_\Omega\|_2^2$ s.t. $\|\boldsymbol{x}_\Omega - \boldsymbol{t}_\Omega\|_1 \leq \delta$.

The solution of (48) is given by the projection of $\boldsymbol{y}_\Omega$ onto the $\ell_1$-ball (a polyhedron) with center $\boldsymbol{t}_\Omega$, or by $\boldsymbol{y}_\Omega$ itself when it is inside that polyhedron (see Fig. 4). However, unlike the Gaussian noise model, it cannot be calculated analytically in closed form; generally, it is resolved by a line search. Let us consider the Lagrange form of (48) as

(49) $\min_{\boldsymbol{x}_\Omega} \ \frac{1}{2}\|\boldsymbol{x}_\Omega - \boldsymbol{y}_\Omega\|_2^2 + \mu \|\boldsymbol{x}_\Omega - \boldsymbol{t}_\Omega\|_1$;

then the solution of (49) can be given by soft-thresholding:

(50) $\boldsymbol{z}(\mu) = \mathrm{sign}(\boldsymbol{u}) \circledast \max(|\boldsymbol{u}| - \mu, 0)$,

where $\boldsymbol{u} = \boldsymbol{y}_\Omega - \boldsymbol{t}_\Omega$, and the sign, absolute value, and maximum operate elementwise. The line search problem can be given by

(51) find $\mu \geq 0$ s.t. $\|\boldsymbol{z}(\mu)\|_1 = \delta$.

Finally, the solution of (48) is given as $\boldsymbol{x}_\Omega = \boldsymbol{t}_\Omega + \boldsymbol{z}(\mu^*)$.
Next, we show an efficient algorithm to solve (51); its computational complexity is only $O(M \log M)$ for sorting, where $M = |\Omega|$. First, the constraint in (51) can be rewritten as

(52) $\sum_{m=1}^M \max(|u_m| - \mu, 0) = \delta$.

Next, we consider sorting the entries of $|\boldsymbol{u}|$ in descending order as

(53) $|u|_{(1)} \geq |u|_{(2)} \geq \cdots \geq |u|_{(M)}$.

The left part of (52) is illustrated in Fig. 6. Obviously, the left part is $0$ for $\mu \geq |u|_{(1)}$. Next, when $|u|_{(2)} \leq \mu \leq |u|_{(1)}$, the left part can be obtained by

(54) $|u|_{(1)} - \mu$.

In general, for $|u|_{(k+1)} \leq \mu \leq |u|_{(k)}$, the left part can be calculated by

(55) $\sum_{j=1}^{k} \left(|u|_{(j)} - \mu\right) = \sum_{j=1}^{k} |u|_{(j)} - k\mu$.

Fig. 6 helps in understanding the above formulations: the left part of (52) is a continuous, piecewise-linear, non-increasing function of $\mu$. Thus, we can find the index $k^*$ whose segment contains the value $\delta$ with linear computational complexity $O(M)$. Finally, the optimal value of $\mu$ can be given by

(56) $\mu^* = \frac{1}{k^*}\left( \sum_{j=1}^{k^*} |u|_{(j)} - \delta \right)$.
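The sort-and-threshold procedure in (48)-(56) can be sketched as follows; it is the standard sorting-based $\ell_1$-ball projection, with our own function name and a vectorized search for $k^*$:

```python
import numpy as np

def proj_l1_ball(y, t, delta):
    """Projection of y onto {x : ||x - t||_1 <= delta} (Eqs. (48)-(56)):
    sort |u| in descending order, locate the active set size k*, compute
    mu* = (sum_{j<=k*} |u|_(j) - delta) / k*, then soft-threshold."""
    u = y - t
    if np.abs(u).sum() <= delta:
        return y.copy()                     # already inside the polyhedron
    s = np.sort(np.abs(u))[::-1]            # |u|_(1) >= |u|_(2) >= ...
    css = np.cumsum(s)
    k = np.arange(1, len(s) + 1)
    mu_k = (css - delta) / k                # candidate mu per segment
    k_star = np.max(np.nonzero(s > mu_k)[0]) + 1
    mu_star = (css[k_star - 1] - delta) / k_star
    return t + np.sign(u) * np.maximum(np.abs(u) - mu_star, 0.0)

y = np.array([3.0, -1.0, 0.5])
t = np.zeros(3)
x = proj_l1_ball(y, t, delta=2.0)
print(x, np.abs(x - t).sum())   # the l1 distance from t equals delta
```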
IV-E Stepsize adaptation
From the theory of fixed-point algorithms, the global convergence of the PDS method with sufficiently small stepsizes has been proven [12, 8, 19]. However, small stepsizes usually lead to slow convergence, and the optimal stepsize may not be constant, i.e., it may change adaptively during the optimization process. Furthermore, an appropriate balance between the primal and dual stepsize parameters is not trivial. To tackle this issue, Goldstein et al. proposed an adaptation rule for the primal-dual stepsize parameters in 2015 [15]. By inheriting and improving Goldstein's rule, we propose a new stepsize adaptation rule.
In this section, we consider applying the above stepsize adaptation rules to our new tensor completion model. For stepsize adaptation, it is very important to consider the primal and dual residual vectors:

(57) $\boldsymbol{p}^{k+1} = \frac{1}{\gamma_1}(\boldsymbol{x}^k - \boldsymbol{x}^{k+1}) - \boldsymbol{A}^T(\boldsymbol{y}^k - \boldsymbol{y}^{k+1})$,

(58) $\boldsymbol{d}^{k+1} = \frac{1}{\gamma_2}(\boldsymbol{y}^k - \boldsymbol{y}^{k+1}) + \boldsymbol{A}(\boldsymbol{x}^{k+1} - \boldsymbol{x}^k)$.

The primal and dual residual vectors can be derived based on Eqs. (22), (23), and (29).
IV-E1 Goldstein's rule [15]
Here, we introduce the adaptation rule for the primal-dual stepsizes of PDS proposed by Goldstein et al. in 2015. In order to balance $\|\boldsymbol{p}^{k+1}\|$ and $\|\boldsymbol{d}^{k+1}\|$, the primal and dual stepsizes are adjusted as follows:

- If $\|\boldsymbol{p}^{k+1}\| > \Delta \|\boldsymbol{d}^{k+1}\|$, then $\gamma_1 \leftarrow \gamma_1/(1-\alpha)$, $\gamma_2 \leftarrow \gamma_2(1-\alpha)$, and $\alpha \leftarrow \alpha\eta$,

- If $\|\boldsymbol{d}^{k+1}\| > \Delta \|\boldsymbol{p}^{k+1}\|$, then $\gamma_1 \leftarrow \gamma_1(1-\alpha)$, $\gamma_2 \leftarrow \gamma_2/(1-\alpha)$, and $\alpha \leftarrow \alpha\eta$,

where $\Delta > 1$ and $\eta \in (0, 1)$. Moreover, to prevent too large stepsizes, a backtracking condition is defined as

(59) $(\boldsymbol{y}^{k+1} - \boldsymbol{y}^k)^T \boldsymbol{A}(\boldsymbol{x}^{k+1} - \boldsymbol{x}^k) \leq c \left( e_p^{k+1} + e_d^{k+1} \right)$,

where $c$ is a constant (typically $c = 0.9$), and

(60) $e_p^{k+1} = \frac{1}{2\gamma_1}\|\boldsymbol{x}^{k+1} - \boldsymbol{x}^k\|_2^2$,

(61) $e_d^{k+1} = \frac{1}{2\gamma_2}\|\boldsymbol{y}^{k+1} - \boldsymbol{y}^k\|_2^2$,

and if (59) is violated, then $\gamma_1 \leftarrow \gamma_1/2$ and $\gamma_2 \leftarrow \gamma_2/2$.
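A minimal sketch of the residual-balancing part of the rule above (the constants $\Delta$, $\eta$, and the multiplicative adjustment by $1-\alpha$ follow the description of [15]; the function name and default values are our own assumptions):

```python
def adapt_stepsizes(p_norm, d_norm, g1, g2, alpha, Delta=1.5, eta=0.95):
    """Residual-balancing sketch of Goldstein's rule: enlarge the stepsize
    whose residual is lagging, shrink the other by the same factor (keeping
    the product g1*g2 fixed), and decay alpha so the adjustments settle."""
    if p_norm > Delta * d_norm:        # primal residual dominates
        g1, g2, alpha = g1 / (1 - alpha), g2 * (1 - alpha), alpha * eta
    elif d_norm > Delta * p_norm:      # dual residual dominates
        g1, g2, alpha = g1 * (1 - alpha), g2 / (1 - alpha), alpha * eta
    return g1, g2, alpha

g1, g2, a = adapt_stepsizes(p_norm=10.0, d_norm=1.0, g1=0.5, g2=0.5, alpha=0.5)
print(g1, g2, a)   # 1.0 0.25 0.475
```

Note that the product $\gamma_1 \gamma_2$ is preserved by the balancing step; only the backtracking condition (59) shrinks both stepsizes.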
IV-E2 Proposed rule
In contrast to Goldstein's rule, which balances the primal and dual stepsizes based on an inequality condition, the proposed rule is based on the primal-dual ratio