Recovery of Noisy Points on Band-limited Surfaces: Kernel Methods Re-explained

01/03/2018 ∙ by Sunrita Poddar, et al.

We introduce a continuous domain framework for the recovery of points on a surface in high dimensional space, represented as the zero-level set of a bandlimited function. We show that the exponential maps of the points on the surface satisfy annihilation relations, implying that they lie in a finite dimensional subspace. The subspace properties are used to derive sampling conditions that guarantee the perfect recovery of the surface from a finite number of points. We rely on nuclear norm minimization to exploit the low-rank structure of the maps and recover the points from noisy measurements. Since the direct estimation of the surface is computationally prohibitive in very high dimensions, we propose an iterative reweighted algorithm using the "kernel trick". The iterative algorithm reveals deep links to Laplacian based algorithms widely used in graph signal processing; the theory and the sampling conditions can serve as a basis for discrete-continuous domain processing of signals on a graph.




1 Introduction

The recovery of signals that lie on a manifold/surface has received extensive attention in recent years. For example, patch-based image processing methods such as BM3D model the patches in an image as points on a manifold [1, 2], while we [3, 4] have recently used the manifold structure of images in dynamic time series. Another area that has witnessed extensive research in recent years is the processing of signals on a graph [5]; these methods rely on the graph Laplacian operator to denoise and post-process signals on a graph.

The main focus of this paper is to introduce a continuous domain perspective on the recovery of points drawn from a smooth surface in very high dimensions. This work reveals fundamental links between recent advances in superresolution theory [6, 7, 8], kernel based machine learning methods [9], and graph signal processing [10]. We assume that the high dimensional points live on a smooth surface, which is the zero level set of a bandlimited function. Each such point thus satisfies an annihilation relation, which can be expressed as a weighted linear combination of the exponential features of the point; the dimension of the feature maps is equal to the bandwidth of the potential function. These properties enable us to determine sampling conditions that guarantee the recovery of the surface from a finite number of points. Our analysis also shows that when the bandwidth is overestimated, there are multiple such annihilation relations, implying that the exponential feature maps of the points on the surface live in a finite dimensional space. Note that similar non-linear maps are widely used in kernel methods; our results show that these maps can be approximated by a few basis functions when the points are restricted to a bandlimited surface.

The finite dimensional nature of the maps translates to a low-rank kernel matrix, computed from the points using a shift invariant kernel such as the Dirichlet function. We minimize the nuclear norm of the feature maps of the points to recover them from noisy data. Since the direct estimation of the surface in higher dimensions suffers from the curse of dimensionality, we use the "kernel trick" to keep the computational complexity manageable. We rely on an iterative reweighted algorithm to recover the denoised points. The resulting algorithm has similarities to iterative non-local methods [11, 2, 12, 13, 5] that are widely used in image processing and graph signal processing. Specifically, it alternates between the estimation of a graph Laplacian, which specifies the connectivity of the points, and the smoothing of the points guided by the graph Laplacian. Our experiments show that the Laplacian matrix obtained by solving the proposed optimization problem is more representative of the graph structure than classical estimates, when it is estimated from noisy data.

This work builds upon our prior work [14, 7, 15, 16, 17, 18] and the recent work by Ongie et al., which considered polynomial kernels [19]. Our main focus is to generalize [19] to shift invariant kernels, which are more widely used. We also introduce sampling conditions and algorithms to determine the surface when the dimension is low. In addition, the iterative algorithm using the kernel trick reveals connections with graph Laplacian based methods used in graph signal processing.

2 Bandlimited surfaces & annihilation

We assume the point cloud to be supported on a surface in [0,1]^n, which is the zero level-set of a bandlimited potential function ψ:

ψ(x) = Σ_{k∈Λ} c_k e^{j2π⟨k,x⟩} = 0.   (1)

Here, {c_k} is the smallest set of coefficients (minimal set) that satisfies the above relation. Λ ⊂ Z^n is a set of contiguous locations that indicates the support of the Fourier series coefficients of ψ. Consider an arbitrary point x on the above surface (1). By definition (1), we have the annihilation relation ψ(x) = 0. We re-express the annihilation relation as c^T φ_Λ(x) = 0 using a non-linear mapping φ_Λ:

φ_Λ(x) = [e^{j2π⟨k,x⟩}]_{k∈Λ}.   (2)
This annihilation relation is illustrated in Fig 1.

Figure 1: Illustration of the annihilation relations in 2-D. We assume that the curve is the zero-level set of a bandlimited function ψ. Each point x on the curve satisfies ψ(x) = 0, which can be seen as an annihilation relation c^T φ_Λ(x) = 0 in the non-linear feature space. Specifically, the maps φ_Λ(x) of the points lie on a plane orthogonal to the coefficient vector c.

2.1 Curve recovery: sampling conditions

The annihilation relation introduced in the previous sub-section can be used to estimate the surface, or equivalently the coefficients c, from a small number of points. The least squares estimation of the coefficients from the data points x_1, …, x_N can be posed as the minimization of the criterion:

C(c) = Σ_{i=1}^{N} |c^T φ_Λ(x_i)|² = c^H Q c,   (3)

where Q = Σ_i φ_Λ(x_i) φ_Λ(x_i)^H. The coefficients can be estimated as:

ĉ = arg min_{‖c‖=1} c^H Q c.   (4)

The solution is the minimum eigenvector of Q.
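As a sanity check, the minimum-eigenvector estimate above can be reproduced numerically. The sketch below is not from the paper: it uses a hypothetical example curve cos(2πx) + cos(2πy) = 1, whose Fourier coefficients are supported on the 3×3 set Λ = {−1, 0, 1}², and recovers those coefficients from samples of the curve.

```python
import numpy as np

# Hypothetical bandlimited curve: psi(x, y) = cos(2*pi*x) + cos(2*pi*y) - 1 = 0,
# whose Fourier coefficients live in Lambda = {-1, 0, 1}^2.
t = np.linspace(0.01, 0.24, 40)
u = np.arccos(1.0 - np.cos(2 * np.pi * t)) / (2 * np.pi)  # solves psi(t, u) = 0
# The zero set consists of four symmetric arcs; sample all of them.
x = np.concatenate([t, 1 - t, t, 1 - t])
y = np.concatenate([u, u, 1 - u, 1 - u])
pts = np.stack([x, y])                                    # 2 x 160 points on the curve

# Exponential feature maps phi_Lambda(x) = [exp(j 2 pi <k, x>)] for k in Lambda
ks = np.array([(kx, ky) for kx in (-1, 0, 1) for ky in (-1, 0, 1)])  # 9 x 2
Phi = np.exp(2j * np.pi * ks @ pts)                       # 9 x 160 feature matrix

# Least squares criterion c^H Q c with Q = sum_i phi(x_i) phi(x_i)^H;
# the estimate is the eigenvector of Q with the smallest eigenvalue.
Q = Phi @ Phi.conj().T
evals, evecs = np.linalg.eigh(Q)                          # ascending eigenvalues
c_hat = evecs[:, 0]

# Ground-truth coefficients of psi in the exponential basis:
# 0.5 at (+-1, 0) and (0, +-1), -1 at (0, 0), zero elsewhere.
c_true = np.array([0.5 if abs(kx) + abs(ky) == 1 else
                   (-1.0 if (kx, ky) == (0, 0) else 0.0) for kx, ky in ks])
c_true = c_true / np.linalg.norm(c_true)
match = abs(np.vdot(c_true, c_hat))   # ~1 when recovered up to scale/phase
```

The smallest eigenvalue of Q is numerically zero, and, up to scale and global phase, the minimum eigenvector matches the true coefficient vector.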
In the remainder of the section, we restrict our attention to 2-D for simplicity, even though the results in this section can be generalized to arbitrary dimensions. We will now determine the sampling conditions for the perfect recovery of the curve using (4). Specifically, we will determine the minimum number of samples for the successful recovery of the curve, when Λ is a rectangular neighborhood in Z² of size k₁ × k₂. In addition, we assume that ψ is the function with the smallest Fourier support (minimal polynomial) whose zeros define the curve. We first focus on the case where Λ is known.

Proposition 1.

Let x_1, …, x_N be points on the zero-level set of a band-limited function ψ, where the bandwidth of the surface is specified by k₁ × k₂ and ψ has R irreducible factors. If N_i points are sampled on the i-th irreducible factor, then the curve can be uniquely recovered by (4) when:


for i = 1, …, R.

Thus, the total number of points required is Σ_i N_i. We compare this setting with the sampling conditions for the recovery of a piecewise constant image, whose gradients vanish on a bandlimited curve [7]; the measurements in that setting are complex Fourier samples of the image, whereas here we only need real samples of points on the curve. When the true support Λ is not known, it is common practice to overestimate it as Γ ⊇ Λ. In this case, Q will have multiple null space vectors, as shown below.

Proposition 2.

We consider the polynomial ψ described in Proposition 1, with the support overestimated as Γ ⊇ Λ. Let, for i = 1, …, R,


points be sampled on the i-th irreducible factor of ψ. Then all nullspace vectors of the matrix Q will correspond to functions of the form:

μ = ψ η,   (7)

where η is an arbitrary bandlimited function such that the Fourier support of ψη is contained in Γ.

Thus, the total number of points required is Σ_i N_i. Since ψ is the common factor of all the annihilating functions, all of them satisfy μ(x) = 0 for any point x on the original curve. Depending on the specific η, they will have additional zeros. Hence, the above result provides a means to compute the original curve, even when the original bandwidth/support of the function is unknown.
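Proposition 2 can likewise be probed numerically. With the same hypothetical curve cos(2πx) + cos(2πy) = 1 (minimal 3×3 support) and an overestimated 5×5 support, the matrix Q acquires a nullspace of dimension at least 3·3 = 9, and the sum of squares of the nullspace filters vanishes only on the curve; all curve and support choices below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Points on the hypothetical curve cos(2*pi*x) + cos(2*pi*y) = 1 (minimal 3x3
# support), sampled with an overestimated 5x5 support Gamma = {-2,...,2}^2.
t = np.linspace(0.01, 0.24, 40)
u = np.arccos(1.0 - np.cos(2 * np.pi * t)) / (2 * np.pi)
x = np.concatenate([t, 1 - t, t, 1 - t])
y = np.concatenate([u, u, 1 - u, 1 - u])
pts = np.stack([x, y])                                            # 2 x 160

ks = np.array([(kx, ky) for kx in range(-2, 3) for ky in range(-2, 3)])
Phi = np.exp(2j * np.pi * ks @ pts)                               # 25 x 160
evals, evecs = np.linalg.eigh(Phi @ Phi.conj().T)                 # ascending

# Proposition 2 predicts multiple annihilating filters: at least one per valid
# shift of the 3x3 minimal support inside the 5x5 support, i.e. 3*3 = 9.
null_dim = int(np.sum(evals < 1e-8 * evals[-1]))

# Each nullspace filter annihilates any point on the curve; their sum of
# squares is ~0 on the curve and clearly nonzero away from it.
def sos(px, py):
    phi = np.exp(2j * np.pi * ks @ np.array([px, py]))
    return float(np.sum(np.abs(evecs[:, :9].conj().T @ phi) ** 2))

on_curve = sos(0.1, float(np.arccos(1 - np.cos(0.2 * np.pi)) / (2 * np.pi)))
off_curve = sos(0.1, 0.1)   # psi(0.1, 0.1) != 0
```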

We now consider a collection of N points on the curve, stacked into a matrix X = [x_1, …, x_N]. Let the feature matrix of size |Γ| × N be denoted by:

Φ_Γ(X) = [φ_Γ(x_1), …, φ_Γ(x_N)].   (9)
We state a result about the rank of the above feature matrix.

Proposition 3.

We consider the polynomial ψ described in Proposition 1 and an overestimated support Γ ⊇ Λ. Then:

rank(Φ_Γ(X)) ≤ |Γ| − s,

with equality if the sampling conditions of Proposition 2 are satisfied.

Here, s denotes the number of valid shifts of the set Λ within Γ, as shown in Fig 2 (a). Note that as Λ gets smaller, the number of shifts of it within Γ increases, and hence the rank decreases. The rank of the feature matrix can thus be used as a surrogate for the bandwidth of ψ, or equivalently the complexity of the curve. Note that Λ may be an irregular shape in Z^n. For example, if the points lie on a line in R^n, then Λ could be concentrated along a line in Z^n, resulting in a small rank, even when the number of features in Γ may be considerably high. The low-rank structure of the feature maps can be used to denoise the original points, while the sum of squares function obtained from the nullspace filters can be used to estimate the surface in low dimensions when (6) is satisfied, as illustrated in Fig 2.
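The rank bound is easy to check numerically. For the hypothetical curve cos(2πx) + cos(2πy) = 1 (3×3 support Λ) evaluated with a 5×5 support Γ, the bound is |Γ| − s = 25 − 9 = 16, while generic scattered points give a full-rank feature matrix; the curve and sizes below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_matrix(pts, M=2):
    # phi_Gamma with Gamma = {-M,...,M}^2, evaluated at the given 2 x N points
    ks = np.array([(kx, ky) for kx in range(-M, M + 1) for ky in range(-M, M + 1)])
    return np.exp(2j * np.pi * ks @ pts)

def num_rank(A, tol=1e-8):
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Points on the hypothetical curve cos(2*pi*x) + cos(2*pi*y) = 1 (3x3 support)
t = np.linspace(0.01, 0.24, 50)
u = np.arccos(1.0 - np.cos(2 * np.pi * t)) / (2 * np.pi)
curve = np.stack([np.concatenate([t, 1 - t, t, 1 - t]),
                  np.concatenate([u, u, 1 - u, 1 - u])])          # 2 x 200

scatter = rng.random((2, 200))                                    # generic points

# Proposition 3: rank <= |Gamma| - (# shifts of Lambda in Gamma) = 25 - 9 = 16
rank_curve = num_rank(feature_matrix(curve))
rank_scatter = num_rank(feature_matrix(scatter))
```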

Figure 2: Sampling conditions: The Fourier support Λ of the minimal function ψ, the overestimated support Γ used to evaluate the maps, and the possible shifts of Λ within Γ are shown in (a). (b) shows the phase transition plots, where the red curve is the one predicted by the theory and the blue curve is the empirically observed transition. Here, black indicates perfect recovery and white denotes poor recovery. The recovery of the curve in (c) from its samples, denoted by red points, is illustrated in (d)-(e). We assumed Γ to be a rectangular region. (e) shows one null space filter. The sum of squares of the null space filters in (d) uniquely identifies the curve.

2.2 Recovery of noisy point clouds in high dimensions

The explicit approach of estimating the surface is feasible when the dimension of the points is small. However, this approach suffers from the curse of dimensionality. Since the shape of the data, or equivalently the shape of the support Λ, is not known, one needs to use a large Γ to ensure that Λ ⊆ Γ. Note that the dimension of the feature space, specified by |Γ|, grows exponentially with the ambient dimension, making this approach impractical in applications involving point clouds of images or patches.

We hence rely on the right nullspace relations of the feature matrix to recover the points from their noisy and undersampled measurements. Specifically, we are interested in the null space relations

Φ_Γ(X) v = 0 ⟺ K v = 0,   (10)

where the entries of the Gram matrix K are

K_{i,j} = κ(x_i − x_j).   (11)

The function κ in (11) is shift invariant and is dependent on the shape of Γ. For example, when Γ is a centered cube in Z^n, κ is a Dirichlet function. The kernel matrix satisfies K = Φ_Γ(X)^H Φ_Γ(X), where Φ_Γ(X) is given by (9).
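This identity is the "kernel trick" that avoids forming the |Γ|-dimensional features: for a centered cube Γ in 2-D, κ separates into a product of 1-D Dirichlet kernels. A small check (the points and sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 2                                    # Gamma = {-M,...,M}^2, |Gamma| = 25
pts = rng.random((2, 40))                # arbitrary 2-D points

# Explicit feature maps: |Gamma| x N
ks = np.array([(kx, ky) for kx in range(-M, M + 1) for ky in range(-M, M + 1)])
Phi = np.exp(2j * np.pi * ks @ pts)
K_features = Phi.conj().T @ Phi          # N x N Gram matrix via the features

# Kernel trick: the same Gram matrix from the shift-invariant Dirichlet kernel,
# without ever forming the feature maps.
def dirichlet(t, M):
    # sum_{k=-M}^{M} exp(j 2 pi k t) = sin((2M+1) pi t) / sin(pi t)
    num = np.sin((2 * M + 1) * np.pi * t)
    den = np.sin(np.pi * t)
    safe = np.abs(den) > 1e-12
    return np.where(safe, num / np.where(safe, den, 1.0), 2 * M + 1)

diff = pts[:, None, :] - pts[:, :, None]          # 2 x N x N pairwise differences
K_kernel = dirichlet(diff[0], M) * dirichlet(diff[1], M)
```

The two N × N matrices agree to machine precision; the kernel route costs O(|Γ| N²) without the |Γ| × N feature matrix, which is what makes high dimensions tractable.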

2.3 Dirichlet and Gaussian surface representation

The bandlimited function ψ in (1) can equivalently be expressed as:

ψ(x) = Σ_i a_i κ(x − y_i),   (12)

where κ is the Dirichlet function dependent on Γ and {y_i} is the set of sampled locations on the curve. Using reciprocity, the non-linear maps in this case can be shown to be:

φ(x) = [κ(x − y_1), …, κ(x − y_M)]^T.   (13)

Since the implicit curve is the zero level set of a linear combination of Dirichlet functions, it may be highly oscillatory. An alternative would be to use a level set expansion in terms of weighted exponentials, where the exponential features are weighted by the Fourier coefficients of a Gaussian; this could give smoother surfaces. In this case, κ approaches a periodized Gaussian function as the bandwidth increases, and the Gaussian kernel matrix is theoretically full rank. However, we observe that the Fourier series coefficients of a Gaussian function can be safely approximated to be zero outside a finite set Γ, which translates to an approximately low-rank K; i.e., the rank will be small for high values of the kernel width σ. We choose Gaussian kernels since they are more isotropic and less oscillatory than the Dirichlet kernel.
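The claimed approximate low-rank structure of the Gaussian kernel matrix is easy to probe numerically: for points confined to a curve, the spectrum of K decays much faster than for generic planar scatter. A sketch, where the curve, the kernel width, and the tolerance are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_kernel(pts, sigma):
    # K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((pts[:, :, None] - pts[:, None, :]) ** 2).sum(axis=0)
    return np.exp(-d2 / (2 * sigma ** 2))

def effective_rank(K, tol=1e-6):
    # number of eigenvalues above tol * (largest eigenvalue)
    w = np.linalg.eigvalsh(K)
    return int(np.sum(w > tol * w[-1]))

# Points on the hypothetical curve cos(2*pi*x) + cos(2*pi*y) = 1 ...
t = np.linspace(0.01, 0.24, 40)
u = np.arccos(1.0 - np.cos(2 * np.pi * t)) / (2 * np.pi)
curve = np.stack([np.concatenate([t, 1 - t, t, 1 - t]),
                  np.concatenate([u, u, 1 - u, 1 - u])])   # 2 x 160 on a curve
# ... versus generic 2-D scatter with the same number of points.
scatter = rng.random((2, 160))

sigma = 0.2
r_curve = effective_rank(gaussian_kernel(curve, sigma))
r_scatter = effective_rank(gaussian_kernel(scatter, sigma))
```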

Figure 3: Illustration of denoising of 2-D points on a curve using (14): The top row shows the noisy data, the first iterate of (15), and the final iterate, respectively. Note that the kernel low-rank algorithm provides good recovery of the points within 50 iterations. The algorithm also provides a robust approach to estimating the Laplacian from noisy data. The three columns correspond to the eigenvectors of the Laplacians (analogous to Fourier exponentials) estimated from the noisy data using Gaussian kernels, the first iterate of the algorithm, and the final iterate, respectively. We observe that all Laplacian estimation schemes provide good estimates of the second eigenvector, while only the iterative strategy provides good estimates of the higher ones (e.g., bottom row), demonstrating the benefit of the proposed scheme.

2.4 Denoising using nuclear norm minimization

We rely on the low-rank structure of the kernel matrix to recover the noisy points. Specifically, with the addition of noise, the points deviate from the zero set of ψ; a high bandwidth potential function is needed to represent the noisy surface. We propose to use the nuclear norm of the feature matrix as a regularizer in the recovery of the points X from their noisy measurements Z:

X* = arg min_X ‖X − Z‖² + λ ‖Φ_Γ(X)‖_*.   (14)

We use the IRLS algorithm, where X is updated as:

X_{n+1} = arg min_X ‖X − Z‖² + λ trace(K(X) W_n),   (15)

and W_n = (K(X_n) + εI)^{−1/2}. Note that the solution of (15) involves a system of non-linear equations. Instead, we use gradient linearization to simplify our computations, where K is a Gaussian kernel matrix:

X_{n+1} = arg min_X ‖X − Z‖² + λ trace(X L_n X^T),   (16)

with L_n = D_n − A_n, [D_n]_{ii} = Σ_j [A_n]_{ij}, and

[A_n]_{i,j} = −[W_n]_{i,j} [K(X_n)]_{i,j}.   (17)
We note the equivalence of the above optimization strategy with widely used non-local means and graph optimization schemes. These schemes estimate a Laplacian matrix from the data, followed by the minimization of a cost function of the form (16). These approaches can thus be seen as fitting a smooth bandlimited surface to the point cloud of patches or signals that are assumed to lie on the graph.
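The alternation described above can be sketched in a few lines. The code below is not the authors' implementation: the test curve, the Gaussian kernel width, the rule for λ, and the clipping of the affinities to be nonnegative are all illustrative choices made to keep the sketch stable.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples Z of points on a circle (a simple smooth curve); the ground
# truth `clean` is kept only to measure the error.
N = 200
th = np.linspace(0, 2 * np.pi, N, endpoint=False)
clean = np.stack([np.cos(th), np.sin(th)])
Z = clean + 0.1 * rng.standard_normal((2, N))

sigma = 0.25        # Gaussian kernel width (a hand-picked value)
X = Z.copy()
for _ in range(10):
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    K = np.exp(-d2 / (2 * sigma ** 2))

    # IRLS weight W = (K + eps I)^(-1/2), via an eigendecomposition of K
    w, V = np.linalg.eigh(K)
    eps = 1e-2 * w[-1]
    W = (V * (np.maximum(w, 0) + eps) ** -0.5) @ V.T

    # Linearized surrogate: graph affinities A ~ -(W o K), kept nonnegative
    # here (an implementation choice), and graph Laplacian L = D - A
    A = np.maximum(-W * K, 0.0)
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A

    # Quadratic update: X = argmin ||X - Z||^2 + lam * tr(X L X^T)
    lam = 2.0 / max(A.sum(axis=1).max(), 1e-12)   # heuristic step scaling
    X = np.linalg.solve(np.eye(N) + lam * L, Z.T).T

err_noisy = np.linalg.norm(Z - clean)
err_denoised = np.linalg.norm(X - clean)
```

Each pass re-estimates the Laplacian from the current points and then smooths the noisy data along the estimated graph, mirroring the alternation between Laplacian estimation and Laplacian-guided smoothing described in the text.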

3 Results

We demonstrate the utility of (14) in a simple 2-D denoising example in Fig 3. Specifically, we consider the recovery of points on the TigerHawk logo from its noisy samples; see the caption for details. The top row shows that the proposed algorithm provides good denoising of the data. The bottom three rows show that the Laplacian estimated using (17) at the final iteration is more representative of the shape.

The utility of the proposed method in denoising free breathing and ungated MRI data is shown in Fig 4. Since MRI is a slow imaging modality, several rapid imaging techniques have been introduced to accelerate the acquisition. All of these methods trade SNR for speed, resulting in noisy images. The proposed scheme exploits the manifold structure of the data to reduce noise; see the caption for details.

Figure 4: Denoising a free breathing and ungated cardiac MRI image series: (a), (b) and (c) show the ground-truth, noisy data and denoised data respectively. Out of the 200 frames, two selected image frames are shown along with the temporal profile along the blue line.

4 Conclusion

We introduce a continuous domain framework for the recovery of points on a bandlimited surface. We show that the exponential maps of the points lie in a lower dimensional subspace, which translates to a low-rank kernel matrix. We introduce a nuclear norm minimization algorithm to recover the points. The proposed framework connects the continuous domain surface recovery problem with kernel methods and approaches in graph signal processing. The application of the algorithm to noisy data demonstrates its utility in practical problems.


  • [1] A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. Image Processing, vol. 21, no. 4, pp. 1715–1728, April 2012.
  • [2] Yasir Q. Mohsin, Sajan Goud Lingala, Edward DiBella, and Mathews Jacob, “Accelerated dynamic MRI using patch regularization for implicit motion compensation,” Magnetic Resonance in Medicine, vol. 77, no. 3, pp. 1238–1248, 2017.
  • [3] S. Poddar and M. Jacob, “Dynamic MRI using smoothness regularization on manifolds (SToRM),” IEEE Trans. Medical Imaging, vol. 35, no. 4, pp. 1106–1115, April 2016.
  • [4] Sunrita Poddar, Sajan Goud Lingala, and Mathews Jacob, “Joint recovery of undersampled signals on a manifold: Application to free breathing cardiac MRI,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 6904–6908.
  • [5] Vassilis Kalofolias, “How to learn a graph from smooth signals,” in Proc. Int. Conf. Artificial Intelligence and Statistics, 2016, vol. 51, pp. 920–929.
  • [6] Emmanuel J. Candès and Carlos Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Communications on Pure and Applied Mathematics, vol. 67, no. 6, pp. 906–956, 2014.
  • [7] Greg Ongie and Mathews Jacob, “Off-the-grid recovery of piecewise constant images from few Fourier samples,” SIAM Journal on Imaging Sciences, vol. 9, no. 3, pp. 1004–1041, 2016.
  • [8] Geoffrey Schiebinger, Elina Robeva, and Benjamin Recht, “Superresolution without separation,” in Proc. IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015, pp. 45–48.
  • [9] Bernhard Schölkopf and Alexander J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2002.
  • [10] David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83–98, 2013.
  • [11] Z. Yang and M. Jacob, “Nonlocal regularization of inverse problems: A unified variational framework,” IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3192–3203, Aug 2013.
  • [12] Yasir Q. Mohsin, Gregory Ongie, and Mathews Jacob, “Iterative shrinkage algorithm for patch-smoothness regularized medical image recovery,” IEEE Transactions on Medical Imaging, vol. 34, no. 12, pp. 2417–2428, 2015.
  • [13] Guy Gilboa and Stanley Osher, “Nonlocal operators with applications to image processing,” Multiscale Modeling & Simulation, vol. 7, no. 3, pp. 1005–1028, 2008.
  • [14] Greg Ongie and Mathews Jacob, “Recovery of piecewise smooth images from few Fourier samples,” in Proc. International Conference on Sampling Theory and Applications (SampTA), 2015, pp. 543–547.
  • [15] Greg Ongie, Sampurna Biswas, and Mathews Jacob, “Convex recovery of continuous domain piecewise constant images from nonuniform Fourier samples,” IEEE Transactions on Signal Processing, vol. 66, no. 1, pp. 236–250, 2017.
  • [16] Arvind Balachandrasekaran, Vincent Magnotta, and Mathews Jacob, “Recovery of damped exponentials using structured low rank matrix completion,” IEEE Transactions on Medical Imaging, vol. 36, no. 10, pp. 2087–2098, 2017.
  • [17] Arvind Balachandrasekaran and Mathews Jacob, “Novel structured low-rank algorithm to recover spatially smooth exponential image time series,” in Proc. IEEE International Symposium on Biomedical Imaging (ISBI), 2017, pp. 1–4.
  • [18] Gregory Ongie and Mathews Jacob, “A fast algorithm for convolutional structured low-rank matrix recovery,” IEEE Transactions on Computational Imaging, 2017.
  • [19] Greg Ongie, Rebecca Willett, Robert D. Nowak, and Laura Balzano, “Algebraic variety models for high-rank matrix completion,” in Proceedings of the 34th International Conference on Machine Learning, 2017, pp. 2691–2700.