Completion is a technique for filling in the missing elements of incomplete data using the values of the available (reference) elements and structural assumptions (priors) about the data. We consider a general exact matrix/tensor completion problem as follows:
where T and X are the input and output N-th order tensors, respectively; f is a cost function used to evaluate (prior) structural assumptions; and Q is a binary index tensor that represents the missing and available elements of T as 0 and 1, respectively. A support set Ω is defined as the set of indices of the available elements, i.e., those marked 1 in Q. When the missing and available elements are independent, completion is impossible. However, most real-world data have redundant properties that can be exploited for completion, such as symmetry, repetition, and sparsity. When the cost function is convex and proximable, Problem (1) can be solved by convex optimization methods such as the alternating direction method of multipliers (ADMM) and the primal-dual splitting (hybrid gradient) method (PDS/PDHG). In this paper, we refer to PDS/PDHG as PDS for simplicity. For example, Problem (1) with the matrix/tensor nuclear norm [6, 7, 4, 24, 25], total variation (TV) [16, 11], and both cost functions together has been studied.
Next, we consider an ‘inexact’ matrix/tensor completion problem as follows:
where d(X, T) is a distance measure between X and T, and λ > 0 is a trade-off parameter between the prior term and the distance term. We refer to this as the ‘Lagrange form’. When we assume a Gaussian noise model, Problem (2) with the squared Frobenius distance can be considered for simultaneous completion and denoising. In a similar way, Problem (2) with the l1 distance assumes a Laplace noise model. When both functions f and d are convex and proximable, the problem can be solved by convex optimization. For example, Problem (2) with the matrix/tensor nuclear norm [26, 13, 20], TV [29, 30, 17], and both cost functions together has been studied.
Here, we consider the ‘inequality form’ of Problem (2) as follows:
where δ ≥ 0 is a noise threshold parameter. Figure 1 illustrates Problems (2) and (3) and their solutions for different values of λ and δ. In the Lagrange form, the minimal point of the convex penalty f is the solution for λ → 0, the input T is the solution for λ → ∞, and the solutions for intermediate values of λ draw a curve connecting these two solution points. Generally, a good trade-off exists on this curve, and it may be the point obtained by projecting the unknown original tensor onto the curve. However, the optimal value of λ is unknown and has to be tuned over the wide range (0, ∞). On the other hand, the range of δ can be much narrower in the inequality form when we assume the noise standard deviation σ is known. Let δmax denote the expected noise magnitude (e.g., proportional to σ and to the number of available elements |Ω|, for the Gaussian and Laplace cases); then the optimal value of δ may exist in [0, δmax]. If the direction of the additive noise tensor and the gradient of f at T are similar, then the optimal δ would be near δmax and the solution would also be near the minimal point of f. By contrast, if these directions are very different, the optimal δ would be small and the solution would be far from the minimal point of f.
Problems (2) and (3) are convertible with corresponding values of λ and δ; however, these corresponding values are difficult to know in advance. Figure 2 shows the relationship between λ and δ in a tensor completion problem with nuclear-norm and Frobenius-norm minimization. The two problems are linked by values of λ and δ that have a one-to-one correspondence.
The problem here is that, unlike the Lagrange form, the inequality form is not easy to solve directly. In prior work, the inequality form with a given δ is solved by iteratively optimizing the Lagrange form over λ to find the trade-off corresponding to δ. A key challenge is therefore to solve the inequality-constrained optimization problem with a single run of convex optimization.
In this study, we propose a new optimization algorithm based on PDS for convex optimization problems consisting of proximable functions and noise inequality constraints. For this purpose, we show that the proximal mappings of the noise inequality constraints based on Gaussian and Laplace distributions, which are not trivial, can be obtained by analytical calculation. Furthermore, to accelerate the optimization, we propose a new step-size adaptation method for the PDS algorithm. As an application, we define the cost function as a combination of the tensor nuclear norm and a generalized TV, and conduct extensive experiments to show the advantages of the proposed methods.
Note that this work is an extension of prior studies presented at conferences [36, 35]. The new contributions of this study are as follows: a generalized formulation and detailed explanation of the proposed models, the derivation of a new proximal mapping for the Laplace noise inequality, the application of a new step-size adaptation method for acceleration, and additional experiments.
The remainder of this paper is organized as follows: In Section II, prior studies on matrix and tensor completion methods are reviewed. In Sections III and IV, we propose a new model for tensor completion based on low rank and TV, and its optimization algorithm using a PDS approach. In Section V, we demonstrate the advantages of the proposed method over selected state-of-the-art methods using color images, movies, and 3D-volumetric images. Lastly, we state the conclusions in Section VI.
The notations used in this paper follow several rules. A vector, a matrix, and a tensor are denoted by a bold lowercase letter, a bold uppercase letter, and a bold calligraphic letter, respectively. An N-th-order tensor X can be transformed into vector and matrix forms, which are denoted using the same character as x and X(n) for mode n, respectively. An (i1, ..., iN)-th element of X is denoted using the corresponding lowercase character. The Hadamard product of two tensors of the same size is defined as their element-wise product.
II Review of prior works in matrix and tensor completion
II-A Matrix completion models
First, we consider a matrix completion problem as follows:
where σk(X) is the k-th largest singular value of X.
In a prior study, a noisy case has been discussed as
To solve Problem (6), an algorithm has been proposed that requires solving Problem (7) multiple times to determine an appropriate value of the trade-off parameter such that the noise constraint is satisfied. As iterative calculations of the singular value decomposition are required to solve Problem (7), the algorithm is computationally expensive. We refer to this algorithm as low-rank matrix completion with noise (LRMCn).
There are several studies on applications to image deblurring, denoising, and interpolation [29, 34, 16] in which the cost function is given by TV. The standard TV for a matrix is defined by
Problem (4) with the nuclear norm and TV has been discussed in a prior work, in which it was proposed to alternately minimize the nuclear norm using singular value thresholding and the TV using gradient descent. However, standard gradient-based optimization is not appropriate because the nuclear norm and TV are not differentiable. An alternative efficient optimization approach, referred to as ‘proximal splitting’, is gaining attention [10, 2].
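To make the TV terms concrete, the anisotropic and isotropic variants of the matrix TV can be evaluated with a few lines of NumPy. This is an illustrative sketch; the function names and the unit weights are our own, not taken from the cited works.

```python
import numpy as np

def tv_aniso(X, gamma=(1.0, 1.0)):
    """Anisotropic matrix TV (sketch): weighted sum of the l1-norms
    of the vertical and horizontal forward differences."""
    dv = np.abs(np.diff(X, axis=0)).sum()  # vertical differences
    dh = np.abs(np.diff(X, axis=1)).sum()  # horizontal differences
    return gamma[0] * dv + gamma[1] * dh

def tv_iso(X):
    """Isotropic matrix TV (sketch): sum over pixels of the l2-norm
    of the local gradient vector (zero-padded at the boundaries)."""
    dv = np.zeros_like(X); dv[:-1, :] = np.diff(X, axis=0)
    dh = np.zeros_like(X); dh[:, :-1] = np.diff(X, axis=1)
    return np.sqrt(dv**2 + dh**2).sum()
```

Both functions return a scalar and can serve as a reference when checking a TV-regularized solver.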
II-B Tensor completion models
When the tensor order N is three or greater in (1), the problem is not a simple extension of matrix completion because of special properties of tensors. For example, there are two types of tensor rank, i.e., the canonical polyadic (CP) rank and the Tucker rank. As the CP rank has several difficult properties, completion based on a low Tucker rank is relatively well studied.
where wn represents the weight parameter for the n-th tensor mode, and X(n) is the n-th-mode unfolded matrix of the tensor X. ADMM has been employed for this minimization problem. Furthermore, its noisy scenario has been discussed in a prior work, formulated as Problem (2) with the tensor nuclear norm. We refer to this method as low-n-rank tensor completion (LNRTC).
A case of Problem (2) in which the cost function is given by a generalized TV (GTV) has also been discussed, where the GTV is defined as the weighted sum of the generalized matrix TVs of the individual mode-unfolded matrices of a tensor:
where wn represents the weight parameter for the n-th tensor mode, and the GTV-norm for a matrix X is defined by
where the γd ≥ 0 are weight parameters and Dd is the finite-difference operator for direction d (e.g., the vertical, horizontal, and two diagonal directions). This convex optimization problem is solved using ADMM in the original work. For the standard matrix TV (8), we typically consider only the vertical and horizontal directions, whereas additional diagonal directions can be considered for the GTV. When a single differential direction is used for every mode-unfolded matrix in the GTV, it reduces to a sum of mode-wise one-directional TVs. In this case, the GTV is anisotropic with respect to the modes of the tensor, which leads to corruption of diagonal edges.
Note that we can consider a simpler, more straightforward, and isotropic tensorial extension of the matrix TV, which is defined by
where the inner norm is a weighted l2-norm, and the n-th-mode partial differential operator takes the forward difference along the n-th mode of the tensor. Instead of the GTV, we consider this isotropic tensorial TV in this paper.
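The isotropic tensorial TV can be sketched directly from its definition: take the forward difference along each mode, stack the mode-wise differences per entry, and sum the weighted l2-norms. The function name and the zero-padding at the boundaries are our assumptions.

```python
import numpy as np

def tensor_tv_iso(X, weights=None):
    """Isotropic tensorial TV (sketch): sum over entries of the weighted
    l2-norm of the vector of all mode-wise forward differences."""
    N = X.ndim
    if weights is None:
        weights = np.ones(N)
    sq = np.zeros(X.shape, dtype=float)
    for n in range(N):
        d = np.zeros(X.shape, dtype=float)
        sl = [slice(None)] * N
        sl[n] = slice(0, X.shape[n] - 1)     # zero-pad the last slice
        d[tuple(sl)] = np.diff(X, axis=n)    # forward difference, mode n
        sq += weights[n] * d**2
    return np.sqrt(sq).sum()
```

Unlike the mode-wise anisotropic GTV, the mode differences are coupled inside a single square root, which treats all modes isotropically.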
III Proposed model
In this section, we propose a new model for tensor completion and denoising using a tensor nuclear norm and TV simultaneously. The proposed optimization problem is given by
where the weight parameters balance the TV and nuclear-norm terms, and the first constraint in (15) imposes that all values of the output tensor lie in a given range [xmin, xmax]. The first and second constraints are convex, and their indicator functions are given by
Using these indicator functions, the tensor completion problem (15) can be rewritten as
As these four functions are not differentiable, traditional gradient-based optimization algorithms, e.g., Newton's method, cannot be applied. In Section IV, we introduce and apply an efficient approach, referred to as PDS, to solve the proposed optimization problem (15).
III-A Characterization of the proposed model
In this section, we explain the relationship between the proposed model and prior works introduced in Section II. There are three characterizations of the proposed model.
First, with an appropriate choice of the parameters, the proposed model can be characterized as LRMCn. In contrast to LRMCn, which solves several convex optimization problems to tune the trade-off parameter, the proposed method obtains its solution by solving only one convex optimization problem, and it provides a tensorial extension. Moreover, unlike LRMCn, the proposed model includes the Laplace distribution as a noise model.
Second, with an appropriate choice of the parameters, the proposed method can be characterized as LNRTC. In contrast to LNRTC, which employs ADMM to solve a form of Problem (2), the proposed method employs the PDS algorithm to solve a form of Problem (3).
Third, with an appropriate choice of the parameters, the proposed model can be characterized as an isotropic version of the GTV. In contrast to the GTV, for which the problem is solved using ADMM and requires matrix inversion via the fast Fourier transform (FFT) and inverse FFT, the proposed method does not need matrix inversion. Furthermore, the proposed method tunes the value of the noise threshold δ instead of the trade-off parameter λ.
Additionally, our model differs from a recent work that applies constrained fixed-rank matrix factorization models to the individual mode-matricizations of the same tensor in a noiseless scenario. That problem is non-convex, and it is not designed as a noise reduction model.
Currently, two optimization methods, ADMM and PDS, have attracted attention in signal/image processing [3, 12, 10]. The two methods can be used complementarily: for example, ADMM is efficient for low-rank matrix/tensor completion [23, 9], whereas PDS is efficient for TV regularization [38, 8]. The main difference between them is that TV regularization may require a large matrix inversion in ADMM, whereas this inversion can be avoided in PDS. Thus, PDS has been used for many TV regularization methods, such as TV denoising/deconvolution, vectorial TV regularization, and total generalized variation in diffusion tensor imaging.
IV-A Primal-dual splitting algorithm
In this section, we introduce the basics of the PDS algorithm. The PDS algorithm is a framework used to split an optimization problem that includes non-differentiable functions into several sub-optimization steps using proximal operators. First, we consider the following convex optimization problem:
where f and g are general convex functions, and A is a linear operator (matrix). From the definition of the convex conjugate:
the following saddle-point problem can be derived
For optimality of the primal-dual pair, at least the following conditions must be satisfied:
where ∂ stands for the sub-gradient of a function, and we regard elements of the sets on the primal and dual sides of the optimality conditions as primal and dual residual vectors, respectively.
Based on the sub-gradient descent/ascent method, natural update rules are given by
where the two terms are sub-gradient update directions, and γ1 > 0 and γ2 > 0 are the primal and dual step-size parameters.
When f and the conjugate of g are proximable functions, the proximal gradient method can be introduced as
where the proximal mapping of a function h with step size γ is defined by prox_{γh}(v) = argmin_x { h(x) + (1/2γ)||x − v||² }.
Note that if we put the over-relaxed point x̄ = x_new + θ(x_new − x_old), then we obtain the relaxed update, where θ is some scalar. Usually, θ = 1 is chosen because fast convergence is theoretically proven for θ = 1, in contrast to θ = 0. Therefore, Formulations (30)-(31) with θ = 1 are currently recognized as the standard version of the PDS method. Nonetheless, the Arrow-Hurwicz method (θ = 0) is practically competitive with the standard PDS method, as shown in an existing experimental report.
Using the Moreau decomposition rule, which relates the proximal mapping of a function to that of its convex conjugate:
update rule (31) can be rewritten as
Thus, PDS can be applied to many convex optimization problems in which the proximal mappings of f and of the conjugate of g are given as analytically computable operations. Furthermore, the above formulation can easily be extended to the composite optimization of multiple convex functions.
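As a minimal illustration of the whole scheme, the following sketch applies a PDS (Chambolle-Pock-type) iteration with θ = 1 to a toy 1-D problem, minimizing (1/2)||x − b||² + λ||Dx||₁ with D the forward-difference operator. The proximal mapping of the conjugate of λ||·||₁ is a clip to [−λ, λ], which follows from the Moreau decomposition. All names and parameter values here are ours, not from the paper.

```python
import numpy as np

def pds_tv_denoise_1d(b, lam=1.0, n_iter=500, g1=0.3, g2=0.3):
    """Toy PDS solver (sketch) for min_x 0.5*||x - b||^2 + lam*||D x||_1,
    where D is the 1-D forward-difference operator."""
    n = len(b)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    x, xbar, y = b.copy(), b.copy(), np.zeros(n - 1)
    for _ in range(n_iter):
        # dual ascent: prox of the conjugate of lam*||.||_1 is a clip
        y = np.clip(y + g2 * (D @ xbar), -lam, lam)
        # primal descent: prox of f(x) = 0.5*||x - b||^2 in closed form
        x_new = (x - g1 * (D.T @ y) + g1 * b) / (1.0 + g1)
        xbar = 2.0 * x_new - x          # over-relaxation with theta = 1
        x = x_new
    return x
```

The step sizes satisfy g1 * g2 * ||D||² < 1, which is the usual sufficient condition for convergence of this scheme.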
IV-B Proposed algorithm
In this section, we apply the PDS algorithm to the proposed optimization problem. Introducing dual variables, Problem (18) can be rewritten as
where x is the vectorized form of the tensor X and Dn is the linear differential operator for the n-th mode of the tensor. The matrix norm used here is defined as the sum of the l2-norms of the columns of a matrix. Algorithm 1 can be derived by applying the PDS framework to Problem (35). We refer to this algorithm as the “low-rank and TV (LRTV)-PDS” algorithm.
Note that the column-wise l2-norm, the nuclear norm, and the box-constraint indicator are clearly proximable functions whose calculations are given by thresholding-type operations, where the left, center-diagonal, and right matrices of the singular value decomposition of the argument are used for the nuclear norm. The proximal mappings of the noise inequality constraints with Gaussian and Laplace noise models are provided in Sections IV-C and IV-D.
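For reference, the two classical proximal mappings involved here, element-wise soft-thresholding for the l1-type terms and singular value thresholding for the nuclear norm, can be sketched as follows (function names are ours):

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal mapping of tau*||.||_1: element-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_nuclear(V, tau):
    """Proximal mapping of tau*(nuclear norm): soft-threshold the
    singular values of V (singular value thresholding)."""
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

The group (column-wise l2) soft-thresholding used for the TV term follows the same pattern, with the threshold applied to each column's norm instead of each entry.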
IV-C Proximal mapping of the Gaussian noise inequality
In this section, we consider the following problem:
Focusing on the missing elements (those not in the support set Ω), these elements are independent of the inequality constraint. Thus, minimizing each element-wise cost, we obtain
Focusing on the available elements (those in Ω), the optimization problem is given as
where vΩ, xΩ, and tΩ are vectors consisting of all elements of the corresponding tensors that belong to the support set Ω. The solution of (42) is given by the projection of vΩ onto the sphere with center tΩ and radius δ, or by vΩ itself when it already lies inside that sphere (see Fig. 3). We can consider two cases: (a) the distance between vΩ and tΩ is at most δ, and (b) it exceeds δ. Thus, we obtain
and the proximal mapping of the Gaussian noise inequality indicator is given by
Clearly, the computational complexity of this mapping is linear with respect to the size of the tensor.
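The resulting mapping is easy to implement: it is the identity on the missing elements and an l2-ball projection on the available ones. A sketch follows, in which the function name and the 0/1 mask convention are ours:

```python
import numpy as np

def prox_gaussian_ineq(V, T, Q, delta):
    """Proximal mapping (sketch) of the indicator of the Gaussian noise
    inequality: observed entries (Q == 1) are projected onto the l2-ball
    of radius delta centered at T; missing entries pass through."""
    X = V.copy()
    mask = Q.astype(bool)
    r = V[mask] - T[mask]
    nr = np.linalg.norm(r)
    if nr > delta:                 # outside the ball: project onto sphere
        X[mask] = T[mask] + (delta / nr) * r
    return X
```

The cost is dominated by one pass over the observed entries, consistent with the linear complexity stated above.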
IV-D Proximal mapping of the Laplace noise inequality
In this section, we consider the following problem:
In the same way as (41), focusing on the missing elements, we obtain
Focusing on the available elements (those in Ω), the optimization problem is given as
The solution of (48) is given by the projection of vΩ onto the polyhedron (l1-ball) with center tΩ, or by vΩ itself when it already lies inside that polyhedron (see Fig. 4). However, unlike the Gaussian noise model, it cannot be calculated analytically in closed form. Generally, it is resolved by a search for the optimal threshold. Let us consider the Lagrange form of (48) as
then the solution of (49) can be given by soft-thresholding:
where the threshold corresponds to the Lagrange multiplier η ≥ 0. The search problem for the optimal η can be given by
Finally, the solution of (48) is given by applying the soft-thresholding with the optimal η.
Next, we consider sorting the absolute residuals between vΩ and tΩ in descending order as
In general, for a candidate threshold lying between two consecutive sorted values, the left-hand side can be calculated by
Fig. 6 helps in understanding the above formulations. The left-hand side is a piecewise-linear, monotonically decreasing function of the threshold. Thus, we can find the sorted position that satisfies the condition with computational complexity linear in the number of available elements. Finally, the optimal value of η can be given by
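The sort-then-search procedure above can be sketched as follows, written here as a projection onto the l1-ball of radius δ centered at tΩ; the vectorized form and the function name are ours:

```python
import numpy as np

def project_l1_ball(v, t, delta):
    """Projection of v onto {x : ||x - t||_1 <= delta} (sketch) via the
    sort-based search for the soft-threshold level eta described above."""
    u = v - t
    a = np.abs(u)
    if a.sum() <= delta:
        return v.copy()                 # already inside the polyhedron
    s = np.sort(a)[::-1]                # sort residuals in descending order
    css = np.cumsum(s)
    ks = np.arange(1, len(s) + 1)
    cond = s - (css - delta) / ks > 0   # piecewise-linear condition on eta
    k = ks[cond][-1]                    # largest k satisfying the condition
    eta = (css[k - 1] - delta) / k      # optimal soft-threshold level
    return t + np.sign(u) * np.maximum(a - eta, 0.0)
```

After the sort, the search for the optimal level is a single linear pass over the cumulative sums.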
IV-E Step-size adaptation
From the theory of fixed-point algorithms, the global convergence of the PDS method with sufficiently small step sizes has been proven [12, 8, 19]. However, small step sizes usually lead to slow convergence, and the optimal step size may not be constant; i.e., it may change adaptively during the optimization process. Furthermore, an appropriate balance between the primal and dual step-size parameters is not trivial. To tackle this issue, Goldstein et al. proposed an effective adaptation rule for the primal-dual step-size parameters in 2015. By inheriting and improving Goldstein's rule, we propose a new step-size adaptation rule.
In this section, we consider applying the above step-size adaptation rules to our new tensor completion model. For step-size adaptation, it is important to consider the primal and dual residual vectors:
IV-E1 Goldstein's rule
Here, we introduce the adaptation rule for the primal and dual step sizes of PDS proposed by Goldstein et al. in 2015. In order to balance the primal and dual residuals, the primal and dual step sizes are adjusted as follows:
If the norm of the primal residual exceeds that of the dual residual by a chosen factor, then the primal step size γ1 is increased, the dual step size γ2 is decreased, and the adaptation level α is decayed; if the dual residual dominates instead, then γ1 is decreased, γ2 is increased, and α is decayed in the same way,
where the decay keeps the adaptation level shrinking over the iterations. Moreover, to prevent too-large step sizes, a backtracking condition is defined as
where c ∈ (0, 1) is a constant, and
and if the backtracking condition is violated, then the step sizes γ1 and γ2 are reduced.
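A sketch of a Goldstein-style balancing step (one iteration) may look as follows; the symbols Delta (balancing factor) and eta (decay factor of the adaptation level alpha) are our assumptions based on the description above:

```python
def adapt_step_sizes(p_norm, d_norm, g1, g2, alpha, Delta=1.5, eta=0.95):
    """One step of a Goldstein-style primal-dual step-size balancing rule
    (sketch). If the primal residual dominates, enlarge the primal step
    size and shrink the dual one, and vice versa; alpha decays so that
    the adaptation eventually stops and convergence is preserved."""
    if p_norm > Delta * d_norm:      # primal residual too large
        g1, g2, alpha = g1 / (1 - alpha), g2 * (1 - alpha), alpha * eta
    elif d_norm > Delta * p_norm:    # dual residual too large
        g1, g2, alpha = g1 * (1 - alpha), g2 / (1 - alpha), alpha * eta
    return g1, g2, alpha
```

When the residuals are already balanced, the step sizes are left unchanged.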
IV-E2 Proposed rule
In contrast to Goldstein's rule, which balances the primal and dual step sizes based on an inequality condition, the proposed rule is based on the primal-dual ratio