CURE: Curvature Regularization For Missing Data Recovery

01/28/2019 · by Bin Dong et al. · Tsinghua University, Peking University

Missing data recovery is an important and yet challenging problem in imaging and data science. Successful models often adopt certain carefully chosen regularization. Recently, the low dimension manifold model (LDMM) was introduced by S. Osher et al. and shown to be effective in image inpainting. They observed that enforcing low dimensionality on the image patch manifold serves as a good image regularizer. In this paper, we observe that having only the low dimension manifold regularization is sometimes not enough, and we need smoothness as well. For that, we introduce a new regularization by combining the low dimension manifold regularization with a higher order curvature regularization, and we call this new regularization CURE for short. The key step of solving CURE is to solve a biharmonic equation on a manifold. We further introduce a weighted version of CURE, called WeCURE, in a manner similar to the weighted nonlocal Laplacian (WNLL) method. Numerical experiments on image inpainting and semi-supervised learning show that the proposed CURE and WeCURE significantly outperform LDMM and WNLL, respectively.


1 Introduction

Missing data recovery is a fundamental problem in imaging science and data analysis. It can be formulated as function interpolation in a high dimensional space. Let $u$ be an unknown function. We would like to acquire its values on a set of points $P \subset \mathbb{R}^d$. However, due to practical limitations, we are only able to observe its values on a subset $S \subset P$. The goal of missing data recovery is to reconstruct the missing values of $u$ on $P \setminus S$ based on the observed values on $S$. In this paper, we focus on two typical and important tasks of missing data recovery, i.e. semi-supervised learning and image inpainting, though the proposed method can be applied to other related tasks as well.

Since the problem of missing data recovery is an under-determined inverse problem, we can only hope to recover the missing values of $u$ if we have certain prior knowledge on $u$, e.g. that it belongs to a certain function class or has certain mathematical or statistical properties. Successful models include the Rudin–Osher–Fatemi (ROF) model [34] and its variants [23, 4, 12], applied harmonic analysis models such as wavelets [39, 17], curvelets [38], shearlets [20, 29] and wavelet frames [2, 9, 11, 10, 42, 19], the Bayesian statistics based methods [33, 35, 43], and the list goes on.

More recently, people started to use low dimension manifolds to describe the underlying relationship between data points, which serves as an effective geometric prior on the interpolant. For example, [31, 32] observed that image patches, regarded as data points in a high dimension space, often lie on a low dimension manifold; and [14, 44] allowed the data to lie near (but not necessarily on) a certain low dimension manifold.

To harvest the low dimension property of data, [31] applied the following Dirichlet energy [45] to regularize the dimension of the embedded manifold $\mathcal{M}$:

$$\mathcal{E}_{\mathrm{LDMM}}(u) = \int_{\mathcal{M}} \|\nabla_{\mathcal{M}}\, u(x)\|^2 \,\mathrm{d}x. \qquad (1)$$

In [31], the authors gave a geometric view of the Dirichlet regularizer. They showed that when $u$ ranges over the coordinate functions $\alpha_1, \dots, \alpha_d$ of the manifold $\mathcal{M} \subset \mathbb{R}^d$, we have $\sum_{i=1}^{d} \|\nabla_{\mathcal{M}}\, \alpha_i(x)\|^2 = \dim(\mathcal{M})$. This means that minimizing the Dirichlet energy enforces a penalty on the (local) dimension of the underlying manifold. As a result, the authors referred to their method as the low dimension manifold model (LDMM). To recover missing data, they proposed to minimize the Dirichlet energy subject to the constraints $u(x) = b(x)$, $x \in S$, where $b$ denotes the observed part of the underlying function $u$.
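To spell out the dimension identity (a standard computation we add for completeness, with $P_{T_x\mathcal{M}}$ denoting the orthogonal projection onto the tangent space): since $\nabla_{\mathcal{M}}\, \alpha_i(x) = P_{T_x\mathcal{M}}\, e_i$ for the Euclidean basis vectors $e_i$,

$$\sum_{i=1}^{d} \|\nabla_{\mathcal{M}}\, \alpha_i(x)\|^2 = \sum_{i=1}^{d} \|P_{T_x\mathcal{M}}\, e_i\|^2 = \sum_{i=1}^{d} e_i^\top P_{T_x\mathcal{M}}\, e_i = \operatorname{tr}\big(P_{T_x\mathcal{M}}\big) = \dim(\mathcal{M}),$$

because an orthogonal projection is symmetric and idempotent, and its trace equals the dimension of its range.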

1.1 Higher Order Regularization

However, low dimensionality of the manifold alone does not ensure smoothness of the reconstructed manifold, which can lead to unsatisfactory results. As a simple demonstration, we show in Figure 1 a degenerate interpolation of the two data points labeled in red. Although the interpolated surface is a low dimension manifold, it is certainly not a smooth interpolation.

Figure 1: A low dimension manifold without curvature regularization.

In this paper, we overcome this problem by assuming not only low dimensionality but also smoothness of the manifold. For that, in addition to the Dirichlet energy, we further introduce a CUrvature REgularization (CURE) term via biharmonic extension. The proposed CURE energy reads as follows:

$$\mathcal{E}_{\mathrm{CURE}}(u) = \mathcal{E}_{\mathrm{LDMM}}(u) + \lambda \int_{\mathcal{M}} |\Delta_{\mathcal{M}}\, u(x)|^2 \,\mathrm{d}x,$$

where $\mathcal{E}_{\mathrm{LDMM}}$ is given by (1) and $\lambda > 0$ balances the two terms. Note that regularizing curvature by introducing a higher order energy term has already been proposed in image processing [36]. However, to the best of our knowledge, we are the first to promote curvature-like regularization for nonlocal image processing. Furthermore, inspired by the weighted nonlocal Laplacian (WNLL) method proposed by [37], which preserves the symmetry of the Laplace operator, we propose a weighted CURE (WeCURE) model which can significantly improve the results over the CURE model. To demonstrate the effectiveness of CURE and WeCURE, we test our models on semi-supervised learning and image inpainting tasks. Numerical results show that CURE/WeCURE produces significantly better results than LDMM/WNLL in both tasks. A glimpse of the results for image inpainting is shown in Figure 2, where we can see the significant improvement of CURE over LDMM and of WeCURE over WNLL. More details and numerical results can be found in later sections.

LDMM: PSNR=26.81dB, SSIM=0.68
WNLL: PSNR=28.73dB, SSIM=0.73
CURE: PSNR=28.97dB, SSIM=0.75
WeCURE: PSNR=29.78dB, SSIM=0.77
Figure 2: First row: original image, subsampled image, zoomed-in ground truth. Second row: LDMM, WNLL, CURE, WeCURE.

1.2 Other Related Works

Nonlocal patch-based image restoration methods [15, 16, 7, 6, 23] have achieved great success in the literature. In addition, [21, 3, 18] introduced different graph Laplacian based regularizations on manifolds and graphs. Our method, however, focuses on both the smoothness and the low dimensionality of the underlying data manifold. The work most similar to ours is [1], where the authors also introduced a higher order regularization for semi-supervised learning. The difference is threefold. First, we extend higher order regularization to image inpainting beyond semi-supervised learning. Secondly, we introduce a curvature perspective on the higher order regularization. Last but not least, the proposed weighted version of CURE, i.e. WeCURE, yields a significant performance boost over CURE in both image inpainting and semi-supervised learning.

Another approach to regularizing the dimension of the manifold is through low rank matrix completion [24, 25]. The basic idea is to group patches by similarity and penalize the rank/nuclear norm of the matrix obtained by stacking the similar patches. The work in this paper reveals a benefit of PDE-based approaches: higher order information, such as curvature, can be naturally incorporated into the model.

1.3 Organization of the Paper

The paper is organized as follows. The proposed CURE and WeCURE models are introduced in Section 2. Numerical comparisons of CURE and WeCURE with LDMM and WNLL for semi-supervised learning and image inpainting are presented in Section 3 and Section 4, respectively. Conclusions and summary are given in Section 5.

2 Curvature Regularization (CURE): Model and General Algorithm

In this section, we first propose the CURE model and a weighted version of CURE. Then, we will discuss how (We)CURE can be applied to missing data recovery in general.

2.1 CURE

Let $\mathcal{M}$ be a smooth manifold embedded in $\mathbb{R}^d$ and locally parameterized as

$$\alpha: \Omega \subset \mathbb{R}^k \to \mathcal{M} \subset \mathbb{R}^d,$$

where $k$ is the local dimension of $\mathcal{M}$ at $x = \alpha(\gamma)$, $\gamma \in \Omega$. Let $\alpha_i$ be the $i$-th coordinate function on $\mathcal{M}$, i.e. $\alpha_i(x) = x_i$ for $x = (x_1, \dots, x_d) \in \mathcal{M}$.

To enforce smoothness of the underlying manifold, we further regularize the curvature of the manifold. Recall that the mean curvature of a manifold is defined as the trace of the second fundamental form $\mathrm{II}$ [30], i.e. $H = \mathrm{tr}_g(\mathrm{II})$. Here, $g$ is the metric tensor defined by $g_{ij} = \langle \partial_i \alpha, \partial_j \alpha \rangle$. If the coordinate function is an isometric immersion, the mean curvature can be calculated as $H = \Delta_{\mathcal{M}}\, x$ (see [30] for details).

Now, we are ready to introduce the CURE energy in the continuum setting:

$$\mathcal{E}_{\mathrm{CURE}}(u) = \mathcal{E}_{\mathrm{LDMM}}(u) + \lambda \int_{\mathcal{M}} |\Delta_{\mathcal{M}}\, u(x)|^2 \,\mathrm{d}x,$$

where $\mathcal{E}_{\mathrm{LDMM}}$ is given by (1). The gradient $\nabla_{\mathcal{M}}\, u$ is commonly approximated by the nonlocal gradient in the discrete setting:

$$\nabla_w u(x, y) = \sqrt{w(x, y)}\,\big(u(y) - u(x)\big), \quad x, y \in P,$$

where $P$ is a set with a finite collection of points on the manifold $\mathcal{M}$. Then,

$$\|\nabla_w u(x)\|^2 = \sum_{y \in P} w(x, y)\,\big(u(y) - u(x)\big)^2.$$

Here, $w(x, y)$ is a given symmetric weight function which is often chosen to be a Gaussian weight $w(x, y) = \exp(-\|x - y\|^2/\sigma^2)$, where $\sigma$ is a parameter and $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^d$. The first variation of the discrete Dirichlet energy $\sum_{x \in P} \|\nabla_w u(x)\|^2$ gives rise to the nonlocal Laplacian that has been used in image processing [5, 6, 21, 22]. It is also called the graph Laplacian in the spectral graph theory and machine learning literature [13, 45]. To simplify the notation, we use $L$ to denote the graph Laplacian [28, 40, 41]:

$$Lu(x) = \sum_{y \in P} w(x, y)\,\big(u(x) - u(y)\big), \quad x \in P.$$
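The discrete objects above are straightforward to assemble. Below is a small NumPy sketch of ours (dense matrices for clarity; realistic point sets call for sparse truncated weights, as in the later sections) of the Gaussian weight matrix, the graph Laplacian, and the two terms of the discrete CURE energy:

```python
import numpy as np

def gaussian_weights(P, sigma):
    """P: (n, d) array of points; returns the (n, n) symmetric Gaussian weight matrix."""
    sq_dists = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

def graph_laplacian(W):
    """Graph Laplacian L = D - W, with D the diagonal degree matrix."""
    return np.diag(W.sum(axis=1)) - W

def cure_energy(u, W, L, lam):
    """Discrete Dirichlet term plus lam times the curvature (biharmonic) term."""
    dirichlet = np.sum(W * (u[:, None] - u[None, :]) ** 2)  # sum_{x,y} w(x,y)(u(x)-u(y))^2
    curvature = np.sum((L @ u) ** 2)                        # sum_x |Lu(x)|^2
    return dirichlet + lam * curvature
```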

Now, the proposed CURE model can be cast as the following optimization problem in the discrete setting:

$$\min_{u}\;\; \sum_{x \in P} \|\nabla_w u(x)\|^2 + \lambda \sum_{x \in P} |Lu(x)|^2. \qquad (2)$$

In [37], a weighted nonlocal Laplacian (WNLL) method was introduced to balance the loss at both labeled and unlabeled points and to preserve the symmetry of the Laplace operator at the same time. Let $S \subset P$ be the set of labeled points. The WNLL model in the discrete setting is given by

$$\min_{u}\;\; \sum_{x \in P \setminus S} \|\nabla_w u(x)\|^2 + \frac{|P|}{|S|} \sum_{x \in S} \|\nabla_w u(x)\|^2,$$

where $|P|$ denotes the number of points in $P$, and similarly for $|S|$.

Following a similar idea as WNLL, we propose the weighted CURE model (WeCURE) in the discrete setting:

$$\min_{u}\;\; \sum_{x \in P} \mu(x)\, \|\nabla_w u(x)\|^2 + \lambda \sum_{x \in P} \mu(x)\, |Lu(x)|^2, \qquad (3)$$

where

$$\mu(x) = \begin{cases} |P|/|S|, & x \in S, \\ 1, & x \in P \setminus S, \end{cases}$$

and $|P|$ and $|S|$ denote the cardinalities of $P$ and $S$, respectively.

2.2 CURE for Missing Data Recovery

For missing data recovery, we can simply minimize the CURE or WeCURE energy subject to the constraints $u(x) = b(x)$, $x \in S$, where $b$ is the observed part of the underlying function to be recovered. We discuss in detail how this can be done for WeCURE; the algorithm for CURE is just a special case.

Recall the definition of the energy function of WeCURE (3) and notice that both terms of the energy are quadratic forms in $u$. Then, the WeCURE model for missing data recovery can be rewritten as

$$\min_{u}\;\; u^\top L_\mu\, u + \lambda\, u^\top L \Lambda L\, u \quad \text{s.t.}\;\; u(x) = b(x),\; x \in S, \qquad (4)$$

where $\Lambda = \mathrm{diag}(\mu(x))$ with $\mu(x) = |P|/|S|$ for $x \in S$ and $\mu(x) = 1$ for $x \in P \setminus S$, $L$ is the matrix of the graph Laplacian, and $L_\mu$ is the symmetric matrix of the quadratic form $\sum_{x \in P} \mu(x)\|\nabla_w u(x)\|^2$. The first variation of (4) is

$$\frac{\delta}{\delta u}\Big(u^\top L_\mu\, u + \lambda\, u^\top L \Lambda L\, u\Big) = 2\big(L_\mu + \lambda\, L \Lambda L\big)\, u.$$

Note that $L$ is symmetric since $w$ is, so $L \Lambda L$ is symmetric positive semi-definite, and so is $L_\mu$. Thus, the problem (4) can be solved by solving the following Euler–Lagrange equation:

$$\big(L_\mu + \lambda\, L \Lambda L\big)\, u(x) = 0, \;\; x \in P \setminus S, \qquad u(x) = b(x), \;\; x \in S, \qquad (5)$$

which is a biharmonic-type equation on the point cloud. After the labeled variables are eliminated using the constraints, the resulting linear system of equations is symmetric positive definite, and can be solved by an iterative solver such as the conjugate gradient method. We note that, for the (non-weighted) CURE method, we only need to replace the matrix $\Lambda$ above with the identity matrix (so that $L_\mu = L$). We summarize the (We)CURE algorithm for missing data recovery in Algorithm 1.

Input: a point set $P$, a partially labeled set $S \subset P$, and the function values of $u$ on $S$, i.e. $b(x)$ for $x \in S$.
Output: a recovered function $u$ on $P$.
1: Calculate the weight matrix $w(x, y)$ and the graph Laplacian $L$. Set $\mu = |P|/|S|$.
2: Solve the linear system (5) for $u$.
Algorithm 1 (We)CURE for Missing Data Recovery
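As a concrete illustration, here is a minimal SciPy sketch of Algorithm 1 for the non-weighted CURE case, following our reconstruction of (5) above (the operator $L + \lambda L L$ with labeled variables eliminated); for WeCURE, one would insert the diagonal matrix $\Lambda$ and the weighted Laplacian $L_\mu$ accordingly. The function name `cure_recover` and all defaults are ours, not from the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def cure_recover(W, labeled_idx, b, lam=1.0):
    """Minimize u^T L u + lam * ||L u||^2 subject to u = b on labeled_idx
    (the non-weighted CURE case, i.e. Lambda = I in (5))."""
    n = W.shape[0]
    degrees = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(degrees) - W                       # graph Laplacian L = D - W
    A = (L + lam * (L @ L)).tocsr()                 # biharmonic-type operator from (5)
    lab = np.asarray(labeled_idx)
    free = np.setdiff1d(np.arange(n), lab)          # unlabeled indices
    # Eliminate the labeled variables: A_ff u_f = -A_fl b (SPD on the free block)
    A_ff = A[free][:, free]
    rhs = -(A[free][:, lab] @ np.asarray(b, dtype=float))
    u_free, info = cg(A_ff, rhs)                    # conjugate gradient solve
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    u = np.empty(n)
    u[lab] = b
    u[free] = u_free
    return u
```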

3 CURE for Semi-Supervised Learning

Semi-supervised learning is a challenging yet frequently encountered machine learning task. It can be formulated as a missing data recovery problem [45]. Given a data set $P$, we assume there are in total $l$ different classes. Let $S$ be the subset of $P$ with labels, i.e.

$$S = \bigcup_{c=1}^{l} S_c,$$

where $S_c$ is the subset with label $c$. It is typical for semi-supervised learning that $|S|$ is far less than $|P|$. The objective of semi-supervised learning is to extend the labels to the entire data set $P$. Our algorithm is summarized in Algorithm 2.

Figure 3: Some images in the MNIST dataset. The whole dataset contains 70,000 28×28 gray scale images.
Input: a point set $P$ and a partially labeled set $S = \cup_{c=1}^{l} S_c$.
Output: a complete label assignment on $P$.
for $c = 1, \dots, l$ do
 Compute $u_c$ on $P$ with the known observation $u_c(x) = \mathbb{1}_{S_c}(x)$, $x \in S$, by Algorithm 1.
end for
for $x \in P$ do
 Label $x$ as $\arg\max_{1 \le c \le l} u_c(x)$.
end for
Algorithm 2 (We)CURE for Semi-supervised Learning
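A short sketch of Algorithm 2 built on top of the hypothetical `cure_recover` from the sketch after Algorithm 1; the one-indicator-function-per-class loop and the final argmax follow the algorithm box above.

```python
import numpy as np

def cure_classify(W, labeled_idx, labels, n_classes, lam=1.0):
    """labels: class ids (0..n_classes-1) of the labeled points, aligned with labeled_idx.
    Returns a class id for every point in the data set."""
    scores = np.zeros((W.shape[0], n_classes))
    for c in range(n_classes):
        b = (labels == c).astype(float)             # indicator of class c on the labeled set
        scores[:, c] = cure_recover(W, labeled_idx, b, lam)
    return scores.argmax(axis=1)                    # label x as argmax_c u_c(x)
```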

We test LDMM, WNLL, CURE, and WeCURE on the MNIST dataset [26] of handwritten digits [8]. Some sample images from the dataset are shown in Figure 3. The MNIST dataset contains 70,000 gray scale images of size 28×28, with 10 classes of digits going from 0 to 9. Each class contains 7,000 images. Each image can be seen as a point in a 784-dimensional Euclidean space.

The weight function is constructed as

$$w(x, y) = \exp\left(-\frac{\|x - y\|^2}{\sigma(x)^2}\right), \qquad (6)$$

where $\sigma(x)$ is chosen to be the distance between $x$ and its 20th nearest neighbor. To make the weight matrix sparse, the weight is truncated to the 50 nearest neighbors.
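The construction (6) can be sketched with scikit-learn's nearest-neighbor search. The symmetrization at the end is our own convention (the truncated matrix is not symmetric in general), and whether the "20th neighbor" counts the point itself is a detail we assume:

```python
import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import NearestNeighbors

def knn_gaussian_weights(X, k_sigma=20, k_trunc=50):
    """X: (n, d) data array. Sparse Gaussian weights truncated to k_trunc neighbors,
    with per-point bandwidth sigma(x) = distance to the k_sigma-th neighbor."""
    n = X.shape[0]
    nn = NearestNeighbors(n_neighbors=k_trunc).fit(X)
    dist, idx = nn.kneighbors(X)                  # (n, k_trunc); column 0 is the point itself
    sigma = dist[:, k_sigma - 1]                  # 20th-nearest-neighbor distance (our indexing)
    vals = np.exp(-(dist ** 2) / (sigma[:, None] ** 2))
    rows = np.repeat(np.arange(n), k_trunc)
    W = sp.csr_matrix((vals.ravel(), (rows, idx.ravel())), shape=(n, n))
    return W.maximum(W.T)                         # enforce symmetry of the weight matrix
```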

In our test, we choose five different sampling rates to form the training set: labeling 700, 100, 70, 50, and 35 images in each class at random. For each sampling rate, we repeat the test 10 times. Figure 4 shows the success rates of the WNLL, CURE, and WeCURE methods. The first five images of Figure 4 show the success rates for each sampling rate, while the last image shows the average success rate for each of the five sampling rates. It can be clearly observed that the proposed CURE and WeCURE outperform WNLL in all the tested cases. At high sampling rates, WeCURE and CURE have comparable performance, whereas WeCURE outperforms CURE at lower sampling rates. In terms of average success rate, both CURE and WeCURE outperform WNLL. We also compare (We)CURE with WNLL and the Weighted Nonlocal Total Variation (WNTV) [27] in Table 1. It can be seen that (We)CURE significantly outperforms both WNLL and WNTV at lower sample rates (50/70000, 100/70000).

Figure 4: Comparisons of success rates by WNLL, CURE and WeCURE.
Method 50/70000 100/70000 700/70000
WNLL[37] 73.60 87.84 93.25
WNTV[27] 78.35 89.86 94.08
CURE 88.40 92.42 96.13
WeCURE 90.48 93.49 96.12
Table 1: Classification accuracy in percentage for MNIST at different numbers of labeled points.

4 CURE for Image Inpainting

In this section, we apply the CURE method to the reconstruction of images from partially observed pixels. To apply (We)CURE, we adopt the assumption that image patches lie on a low dimension and smooth manifold. Given an image $f \in \mathbb{R}^{m \times n}$, for any pixel $(i, j)$, we define an image patch as

$$p_{ij}(f) = \Big( f(i + i', j + j') \Big)_{|i'| \le (s_1 - 1)/2,\; |j'| \le (s_2 - 1)/2} \in \mathbb{R}^{s_1 \times s_2},$$

where we assume $s_1$ and $s_2$ are odd integers, and we adopt reflective boundary conditions for patches near the image boundary. Define the patch set $P(f)$ as the collection of all patches:

$$P(f) = \big\{ p_{ij}(f) : 1 \le i \le m,\; 1 \le j \le n \big\}.$$

Define a function $u$ on $P(f)$ as

$$u\big(p_{ij}(f)\big) = f(i, j),$$

where $f(i, j)$ is the intensity of the image $f$ at pixel $(i, j)$.
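A NumPy sketch of the patch set construction above, assuming odd patch sizes and reflective padding; the default size 11×11 is a placeholder of ours, not the paper's setting.

```python
import numpy as np

def patch_set(img, s1=11, s2=11):
    """img: (m, n) array. Returns an (m*n, s1*s2) array; row i*n + j is the
    patch centered at pixel (i, j), with reflective boundary conditions."""
    r1, r2 = s1 // 2, s2 // 2
    padded = np.pad(img, ((r1, r1), (r2, r2)), mode="reflect")
    m, n = img.shape
    patches = np.empty((m * n, s1 * s2))
    for i in range(m):
        for j in range(n):
            # padded[i:i+s1, j:j+s2] covers rows i-r1..i+r1 of the original image
            patches[i * n + j] = padded[i:i + s1, j:j + s2].ravel()
    return patches
```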

Now, suppose we only observe the image on a subset of pixels $\Omega$. We would like to recover the entire image $f$ from the observed data $f|_\Omega$. This problem can be recast as the interpolation of the function $u$ on the patch set $P(f)$, with $u(p_{ij}(f)) = f(i, j)$ given for $(i, j) \in \Omega$. This falls into the general algorithmic framework of (We)CURE for missing data recovery (Algorithm 1). Notice, however, that the patch set $P(f)$ is itself unknown since $f$ is unknown. Thus, we need to update the patch set iteratively. We summarize the (We)CURE algorithm for this problem in Algorithm 3.

Input: a subsampled image $f|_\Omega$.
Output: a recovered image $f$.
Generate an initial image $f^0$.
while not converged do
1: Generate the semi-local patch set $P(f^t)$ from the current image $f^t$ and get the corresponding labeled set $S^t$.
2: Update the image by computing $u$ on $P(f^t)$, with the known observation $f|_\Omega$, by Algorithm 1.
3: Set $f^{t+1}(i, j) = u\big(p_{ij}(f^t)\big)$ for all pixels $(i, j)$.
end while

Algorithm 3 Subsampled image restoration by WeCURE

The weight function is chosen as (6). Here, $x$ and $y$ are semi-local patches, and $\sigma(x)$ is chosen to be the distance between $x$ and its 20th nearest neighbor. To make the weight matrix sparse, the weight is truncated to the 50 nearest neighbors. In a semi-local patch, the local coordinates are normalized to have the same amplitude as the image intensity: each patch $p_{ij}(f)$ is augmented with the scaled coordinates

$$\Big( \lambda\, \frac{i}{m},\; \lambda\, \frac{j}{n} \Big),$$

where $m$ and $n$ are the size of the image. The purpose of introducing semi-local patches is to constrain the search space to a local area. A larger $\lambda$ leads to a smaller search space and makes the search quicker, while a smaller $\lambda$ leads to a more global search and more accurate results. Thus, following [37], we gradually reduce $\lambda$ from its initial value as the iteration proceeds (see the sketch after this paragraph).
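As referenced above, here is a sketch of the semi-local augmentation: each patch is extended with its pixel coordinates, normalized by the image size, scaled to the intensity range, and multiplied by the current $\lambda$. The exact normalization is our guess at the convention described above.

```python
import numpy as np

def semi_local_patches(patches, m, n, lam, intensity_range=255.0):
    """patches: (m*n, s1*s2) array in the row ordering of patch_set().
    Appends lam-scaled pixel coordinates normalized to the intensity range."""
    ii, jj = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
    # Coordinates in [0, intensity_range], matching the amplitude of the intensities.
    coords = np.stack([ii.ravel() / m, jj.ravel() / n], axis=1) * intensity_range
    return np.concatenate([patches, lam * coords], axis=1)
```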

We apply our algorithm to the 12 widely used testing images shown in Figure 5, with a fixed patch size for all images. For each patch, the nearest neighbors are obtained by using an approximate nearest neighbor (ANN) search algorithm; we use a k-d tree together with the ANN search to reduce the computational cost. The linear systems of the weighted nonlocal Laplacian and the graph Laplacian are solved by the conjugate gradient method. We use the solution of WNLL after 6 steps as the initialization of our algorithm, to get a proper initial guess of the similarity relationships between different patch groups. The initial image of WNLL is obtained by filling the missing pixels with random numbers drawn from a Gaussian distribution whose mean and standard deviation are those of the observed data $f|_\Omega$.

Figure 5: Set12: 12 widely used testing images.

PSNR, defined as follows, is used to measure the accuracy of the results:

$$\mathrm{PSNR}(f, \hat{f}) = -10 \log_{10} \left( \frac{\|f - \hat{f}\|_2^2}{255^2\, mn} \right), \qquad (7)$$

where $f$ is the ground truth and $\hat{f}$ is the restored image. SSIM is based on the computation of three terms, namely the luminance term, the contrast term, and the structural term. The overall index is a multiplicative combination of the three terms:

$$\mathrm{SSIM}(x, y) = \big[l(x, y)\big]^{\alpha} \big[c(x, y)\big]^{\beta} \big[s(x, y)\big]^{\gamma}, \qquad (8)$$

where

$$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \quad c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \quad s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}, \qquad (9)$$

where $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\sigma_{xy}$ are the local means, standard deviations, and cross-covariance for images $x$ and $y$, and $C_1$, $C_2$, $C_3$ are small stabilizing constants.
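For evaluation, one can use scikit-image's reference implementations, which compute the standard PSNR and the mean local SSIM described by (7)-(9) (with the usual default constants); intensities are assumed to be in $[0, 255]$:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored, ground_truth):
    """Returns (PSNR in dB, SSIM) for 8-bit grayscale images."""
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=255)
    ssim = structural_similarity(ground_truth, restored, data_range=255)
    return psnr, ssim
```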

The numerical results are shown in Table 2 and Table 3. For qualitative comparisons, Figures 6 and 7 show the inpainting results of 3 images from the Set12 dataset at two different sample rates. As we can see, WeCURE gives much better results than WNLL both visually and numerically in PSNR and SSIM. The proposed method can enhance the recovered image to obtain better texture, although it may generate some artifacts that break smooth regions. At the same time, (We)CURE also makes the restored images sharper at the edges. Thus, (We)CURE consistently outperforms WNLL significantly in the SSIM sense.

Images C.man House Peppers Starfish Monarch Airplane Parrot Lena Barbara Boat Man Couple Average
Sample Rate 10%
LDMM 19.9329 24.8723 20.6103 19.9285 19.3395 19.9612 19.5449 26.1005 23.3176 22.6681 23.9415 22.7225 21.9117
WNLL 21.9993 28.3325 23.3210 22.2705 22.4218 21.7954 21.6121 28.5089 26.3732 24.8116 25.8126 25.0263 24.3571
CURE 21.7095 28.3023 23.3315 22.0185 22.0650 21.4078 21.5080 28.3013 26.3031 24.6798 25.7207 24.9033 24.1876
WeCURE 21.8571 28.7967 23.7416 22.3540 22.5829 21.4335 21.7753 28.7926 26.7155 25.0060 25.7145 25.1940 24.4970
Sample Rate 15%
LDMM 21.0948 26.4075 21.6434 20.9887 20.9843 21.0712 21.3412 27.7591 25.6175 23.8791 25.1269 24.0065 23.3267
WNLL 23.3052 29.1647 25.0635 23.5147 23.7171 22.7292 22.5851 29.5856 27.7837 25.8633 26.9433 26.2245 25.5400
CURE 22.8514 29.5745 25.1007 23.4509 23.8326 22.5211 22.4579 29.6253 27.7315 25.7653 26.9278 26.1798 25.5016
WeCURE 23.0993 30.9540 25.7840 24.0722 24.2587 22.8246 22.8708 30.1331 28.5615 26.2943 27.3484 26.7266 26.0773
Sample Rate 20%
LDMM 21.9057 28.2924 22.7767 22.6264 22.4175 22.1073 21.9409 28.9160 26.8121 24.8777 26.2350 25.0044 24.4927
WNLL 23.9478 30.8222 25.8068 24.5382 24.6738 23.8359 23.2844 30.5140 28.7357 26.6614 27.7806 26.7532 26.4462
CURE 23.7846 31.4606 25.7513 24.7232 24.8360 23.7147 23.5282 30.6271 28.9715 26.6736 27.8198 26.8165 26.5589
WeCURE 24.5007 32.1789 26.6428 25.3982 25.5151 24.1406 24.0625 31.3711 29.7794 27.3033 28.3473 27.4934 27.2278
Table 2: The PSNR (dB) results of different methods on the Set12 dataset with sample rates 10%, 15%, and 20%.
Images C.man House Peppers Starfish Monarch Airplane Parrot Lena Barbara Boat Man Couple Average
Sample Rate 10%
LDMM 0.2677 0.3406 0.4406 0.3856 0.4870 0.3338 0.4560 0.4508 0.4881 0.3121 0.3469 0.3389 0.3874
WNLL 0.3557 0.4236 0.5681 0.5415 0.6523 0.4352 0.5680 0.5316 0.6308 0.4383 0.4787 0.5123 0.5113
CURE 0.3591 0.4337 0.5849 0.5382 0.6537 0.4324 0.5733 0.5356 0.6392 0.4409 0.4817 0.5240 0.5164
WeCURE 0.3726 0.4397 0.6042 0.5721 0.6842 0.4448 0.5953 0.5402 0.6572 0.4628 0.5051 0.5476 0.5355
Sample Rate 15%
LDMM 0.3622 0.4288 0.5308 0.4848 0.5986 0.4252 0.5464 0.5382 0.6164 0.4187 0.4483 0.4619 0.4884
WNLL 0.4456 0.5053 0.6380 0.6196 0.7076 0.5052 0.6247 0.5931 0.6964 0.5130 0.5544 0.5911 0.5828
CURE 0.4464 0.5294 0.6610 0.6294 0.7299 0.5115 0.6435 0.5994 0.7068 0.5226 0.5637 0.6067 0.5959
WeCURE 0.4577 0.5459 0.6766 0.6658 0.7473 0.5273 0.6621 0.6102 0.7275 0.5462 0.5939 0.6308 0.6159
Sample Rate 20%
LDMM 0.4385 0.5148 0.5980 0.5783 0.6692 0.5003 0.6074 0.5997 0.6840 0.5003 0.5295 0.5501 0.5642
WNLL 0.4970 0.5735 0.6856 0.6691 0.7439 0.5684 0.6673 0.6376 0.7373 0.5722 0.6062 0.6364 0.6329
CURE 0.5063 0.6044 0.7051 0.6889 0.7687 0.5847 0.6850 0.6457 0.7515 0.5882 0.6203 0.6571 0.6505
WeCURE 0.5270 0.6167 0.7241 0.7214 0.7859 0.6009 0.7017 0.6570 0.7683 0.6093 0.6492 0.6806 0.6702
Table 3: The SSIM results of different methods on the Set12 dataset with sample rates 10%, 15%, and 20%.

5 Conclusion and Future Work

In this paper, we proposed to use both the low dimensionality and the smoothness of the underlying data manifold as regularizers for missing data recovery. For that, we introduced curvature regularization (CURE) and a weighted version of it (WeCURE). Compared to related models such as LDMM, WNLL, and WNTV, the new regularization proved more effective on several datasets for semi-supervised learning and image inpainting.

There is still much to be studied further. On the modeling side, a natural question is whether other curvatures can also serve as smoothness regularizers for data manifolds, and how they differ from the one we chose for CURE. Can these curvatures be easily computed? How does CURE work for other missing data recovery tasks? Furthermore, the convergence of solvers for the biharmonic equation (5) on manifolds also needs to be studied; our current lack of understanding of numerical methods for the biharmonic equation prevents us from generalizing CURE to generic inverse problems.

Acknowledgments

Bin Dong is supported in part by NSFC 11671022 and the Beijing Natural Science Foundation (Z180001). Yiping Lu is supported by the Elite Undergraduate Training Program of the School of Mathematical Sciences at Peking University. We thank Dr. Wei Zhu for kindly sharing his valuable comments and the codes of both LDMM and LDMM+WGL for comparison.

References

  • [1] S. Agarwal, K. Branson, and S. Belongie, Higher order learning with graphs, in Proceedings of the 23rd international conference on Machine learning, ACM, 2006, pp. 17–24.
  • [2] C. Bao, B. Dong, L. Hou, Z. Shen, X. Zhang, and X. Zhang, Image restoration by minimizing zero norm of wavelet frame coefficients, Inverse problems, 32 (2016), p. 115004.
  • [3] A. L. Bertozzi and A. Flenner, Diffuse interface models on graphs for classification of high dimensional data, Multiscale Modeling & Simulation, 10 (2012), pp. 1090–1118.
  • [4] K. Bredies, K. Kunisch, and T. Pock, Total generalized variation, SIAM Journal on Imaging Sciences, 3 (2010), pp. 492–526.
  • [5] A. Buades, B. Coll, and J.-M. Morel, Neighborhood filters and PDE's, Numerische Mathematik, 105 (2006), pp. 1–34.
  • [6] A. Buades, B. Coll, and J.-M. Morel, A review of image denoising algorithms, with a new one, Multiscale Modeling & Simulation, 4 (2005), pp. 490–530.
  • [7] A. Buades, B. Coll, and J.-M. Morel, A non-local algorithm for image denoising, in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, IEEE, 2005, pp. 60–65.

  • [8] C. Burges, Y. LeCun, and C. Cortes, The MNIST database of handwritten digits.
  • [9] J.-F. Cai, R. H. Chan, and Z. Shen, Simultaneous cartoon and texture inpainting, Inverse Probl. Imaging, 4 (2010), pp. 379–395.
  • [10] J.-F. Cai, S. Osher, and Z. Shen, Split bregman methods and frame based image restoration, Multiscale modeling & simulation, 8 (2009), pp. 337–369.
  • [11] R. H. Chan, T. F. Chan, L. Shen, and Z. Shen, Wavelet algorithms for high-resolution image reconstruction, SIAM Journal on Scientific Computing, 24 (2003), pp. 1408–1432.
  • [12] T. Chan, A. Marquina, and P. Mulet, High-order total variation-based image restoration, SIAM Journal on Scientific Computing, 22 (2000), pp. 503–516.
  • [13] F. Chung, Spectral graph theory, American Mathematical Society, 1997.
  • [14] R. R. Coifman and S. Lafon, Diffusion maps, Applied and computational harmonic analysis, 21 (2006), pp. 5–30.
  • [15] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Image denoising with block-matching and 3D filtering, in Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, vol. 6064, International Society for Optics and Photonics, 2006, p. 606414.

  • [16] A. Danielyan, V. Katkovnik, and K. Egiazarian, Bm3d frames and variational image deblurring, IEEE Transactions on Image Processing, 21 (2012), pp. 1715–1728.
  • [17] I. Daubechies, Ten lectures on wavelets, vol. 61, SIAM, 1992.
  • [18] B. Dong, Sparse representation on graphs by tight wavelet frames and applications, Applied and Computational Harmonic Analysis, 42 (2017), pp. 452–479.
  • [19] B. Dong and Z. Shen, Mra-based wavelet frames and applications: Image segmentation and surface reconstruction, in Independent Component Analyses, Compressive Sampling, Wavelets, Neural Net, Biosystems, and Nanoengineering X, vol. 8401, International Society for Optics and Photonics, 2012, p. 840102.
  • [20] G. Easley, D. Labate, and W.-Q. Lim, Sparse directional image representations using the discrete shearlet transform, Applied and Computational Harmonic Analysis, 25 (2008), pp. 25–46.
  • [21] G. Gilboa and S. Osher, Nonlocal linear image regularization and supervised segmentation, Multiscale Modeling & Simulation, 6 (2007), pp. 595–630.
  • [22] G. Gilboa and S. Osher, Nonlocal operators with applications to image processing, Multiscale Modeling & Simulation, 7 (2008), pp. 1005–1028.
  • [23] G. Gilboa and S. Osher, Nonlocal operators with applications to image processing, Multiscale Modeling & Simulation, 7 (2008), pp. 1005–1028.
  • [24] S. Gu, L. Zhang, W. Zuo, and X. Feng, Weighted nuclear norm minimization with application to image denoising, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 2862–2869.
  • [25] R. Lai and J. Li, Manifold based low-rank regularization for image restoration and semi-supervised learning, Journal of Scientific Computing, 74 (2018), pp. 1241–1263.
  • [26] Y. LeCun, The MNIST database of handwritten digits, http://yann.lecun.com/exdb/mnist/, 1998.
  • [27] H. Li, Z. Shi, and X.-P. Wang, Weighted nonlocal total variation in image processing, arXiv preprint arXiv:1801.10441, 2018.
  • [28] Z. Li and Z. Shi, A convergent point integral method for isotropic elliptic equations on point cloud, Multiscale Modeling & Simulation, 14 (2016), pp. 874–905.
  • [29] W.-Q. Lim, The discrete shearlet transform: a new directional transform and compactly supported shearlet frames., IEEE Trans. Image Processing, 19 (2010), pp. 1166–1180.
  • [30] F. Manfio and F. Vitório, Minimal immersions of riemannian manifolds in products of space forms, Journal of Mathematical Analysis and Applications, 424 (2015), pp. 260–268.
  • [31] S. Osher, Z. Shi, and W. Zhu, Low dimensional manifold model for image processing, Technical Report, CAM Report 16-04, UCLA, 2016.
  • [32] G. Peyré, Manifold models for signals and images, Computer Vision and Image Understanding, 113 (2009), pp. 249–260.
  • [33] S. Roth and M. J. Black, Fields of experts: A framework for learning image priors, in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, IEEE, 2005, pp. 860–867.
  • [34] L. I. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: nonlinear phenomena, 60 (1992), pp. 259–268.
  • [35] Q. Shan, J. Jia, and A. Agarwala, High-quality motion deblurring from a single image, in ACM Transactions on Graphics (TOG), vol. 27, ACM, 2008, p. 73.
  • [36] J. Shen, S. H. Kang, and T. F. Chan, Euler’s elastica and curvature-based inpainting, SIAM journal on Applied Mathematics, 63 (2003), pp. 564–592.
  • [37] Z. Shi, S. Osher, and W. Zhu, Weighted nonlocal laplacian on interpolation from sparse data, Journal of Scientific Computing, 73 (2017), pp. 1164–1177.
  • [38] J.-L. Starck, E. J. Candès, and D. L. Donoho, The curvelet transform for image denoising, IEEE Transactions on image processing, 11 (2002), pp. 670–684.
  • [39] S. Mallat, A wavelet tour of signal processing: the sparse way, 1999.
  • [40] N. G. Trillos and D. Slepčev, Continuum limit of total variation on point clouds, Archive for rational mechanics and analysis, 220 (2016), pp. 193–241.
  • [41] N. G. Trillos and D. Slepčev, A variational approach to the consistency of spectral clustering, Applied and Computational Harmonic Analysis, 45 (2018), pp. 239–281.
  • [42] Y. Zhang, B. Dong, and Z. Lu, ℓ0 minimization for wavelet frame based image restoration, Mathematics of Computation, 82 (2013), pp. 995–1015.
  • [43] S. C. Zhu and D. Mumford, Prior learning and gibbs reaction-diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19 (1997), pp. 1236–1250.
  • [44] W. Zhu, Q. Qiu, J. Huang, R. Calderbank, G. Sapiro, and I. Daubechies, Ldmnet: Low dimensional manifold regularized neural networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [45] X. Zhu, Z. Ghahramani, and J. Lafferty, Semi-supervised learning using Gaussian fields and harmonic functions, in Proceedings of the 20th International Conference on Machine Learning (ICML), vol. 3, 2003, pp. 912–919.