Simultaneous Inpainting and Denoising by Directional Global Three-part Decomposition: Connecting Variational and Fourier Domain Based Image Processing

by Duy Hoang Thai, et al.
The University of Göttingen

We consider the very challenging task of restoring images (i) which have a large number of missing pixels, (ii) whose existing pixels are corrupted by noise and (iii) whose ideal versions contain both cartoon and texture elements. The combination of these three properties makes this inverse problem very difficult. The solution proposed in this manuscript is based on the directional global three-part decomposition (DG3PD) [ThaiGottschlich2016] with the directional total variation norm, the directional G-norm and the ℓ_∞-norm in the curvelet domain as key ingredients of the model. Image decomposition by DG3PD enables a decoupled inpainting and denoising of the cartoon and texture components. A comparison to existing approaches for inpainting and denoising shows the advantages of the proposed method. Moreover, we regard the image restoration problem from the viewpoint of a Bayesian framework and discuss the connections between the proposed function-space solution and related image representations based on harmonic analysis and pyramid decomposition.







Keywords: Image decomposition, variational calculus, inverse problems, image inpainting, image denoising, cartoon image, texture image, noise, residual image, feature extraction.

1 Introduction and Related Work

Image enhancement and image restoration are two superordinate concepts in image processing which encompass a plethora of methods to solve a multitude of important real-world problems [2, 3]

. Image enhancement has the goal of improving an input image for a specific application, e.g. in areas such as medical image processing, biometric recognition, computer vision, optical character recognition, texture recognition or machine inspection of surfaces

[4, 5, 6]. Methods for image enhancement can be grouped by the domain in which they perform their operations: images are processed in the spatial domain or Fourier domain, or modified e.g. in the wavelet or curvelet domain [7]. Types of enhancement methods include contextual filtering, e.g. for fingerprint image enhancement [8, 9, 10], contrast enhancement, e.g. by histogram equalization [11], and image superresolution [12]. Image restoration is connected to the notion that a given input image suffers from degradation and the goal is to restore an ideal version of it. Degradations are caused by various types of noise, missing pixels or blurring, and their countermeasures are denoising, inpainting and deblurring. In general, one has to solve a linear or nonlinear inverse problem to reconstruct the ideal image from its given degraded version. Denoising aims to remove noise from an image, and denoising methods include total variation minimization based approaches [13], the application of nonlocal means (NL-Means) [14] or other dictionaries of image patches for smoothing, and adaptive thresholding in the wavelet domain [15]. Inpainting [16] is the filling-in of missing pixels from the available information in the image; it is applied for scratch removal from scanned photographs, for occlusion filling, for removing objects or persons from images (in image forgery [17] or for special effects), and for filling-in of pixels which were lost during the transmission of an image or left out on purpose for image compression [18]. Deblurring [19] addresses the removal of blurring artifacts and is not in the focus of this paper.

Rudin, Osher, and Fatemi [20] pioneered two-part image decomposition by total variation (TV) regularization for denoising. Shen and Chan [21] applied TV regularization to image inpainting, called the TV inpainting model, and they also suggested image inpainting by curvature-driven diffusions (CDD), see [22]. Starck et al. [23] defined a model for two-part decomposition based on a dictionary approach. Then, Elad et al. [24] applied this decomposition idea to image inpainting by introducing an indicator function in the norm of the residual, see Eq. (6) in [24]. Esedoglu and Shen [25] introduced two inpainting models based on the Mumford-Shah model [26] and its higher order correction, the Mumford-Shah-Euler image model. They also presented numerical computations based on Γ-convergence approximations [27, 28]. Shen et al. [29] proposed image inpainting based on bounded variation and elastica models for non-textured images.

Image inpainting can be an easy or difficult problem depending on the amount of missing pixels [22], the complexity of the image content and whether prior knowledge about the image content is available. Methods have been proposed which perform only cartoon inpainting (also referred to as structure inpainting) [21, 29, 30] or only texture inpainting [31]. Images which consist of both cartoon (structure) and texture components are more challenging to inpaint. Bertalmio et al. [32], Elad et al. [24] and Cai et al. [33] have proposed methods for inpainting which can handle images with both cartoon (structure) and texture components.

In this paper, we tackle an even more challenging problem. Consider an input image which has the following three properties:

  • a large percentage of pixels in the image are missing and shall be inpainted,

  • the known pixels in the image are corrupted by noise,

  • the image contains both cartoon and texture elements.

The co-occurrence of noise and missing pixels in an image with cartoon and texture components increases the difficulty of both the inpainting and the denoising problem. A multitude of methods has been proposed for inpainting and denoising. Existing inpainting methods in the literature typically assume that the non-missing pixels in a given image contain only a small amount of noise or are noise-free, and existing methods for denoising typically assume that all pixels of the noisy image are known. The proposed method for solving this challenging problem is inspired by the works of Efros and Leung [31], Bertalmio et al. [32], Vese and Osher [34], Aujol and Chambolle [35], Buades et al. [14] and Elad et al. [24], and it is based on the directional global three-part decomposition (DG3PD) [1]. The DG3PD method decomposes an image into three parts: a cartoon image, a texture image and a residual image. Advantages of the DG3PD model lie in the properties which are enforced on the cartoon and texture images. The geometric objects in the cartoon image have a very smooth surface and sharp edges. The texture image contains oscillating patterns on a defined scale and is both smooth and sparse. Recently, the texture image has been applied as a very useful feature for fingerprint segmentation [1, 36, 37].

Figure 1: Overview of the DG3PD image inpainting and denoising process.

We address the challenging task of simultaneous inpainting and denoising in the following way: The advanced DG3PD model introduced in the next section decomposes a noisy input image (with missing regions) into cartoon, texture and residual components. At the same time, the missing regions of the cartoon component are interpolated and its available regions are denoised, thanks to the multi-directional bounded variation. This effect benefits from the indicator function in the measurement of the residual, i.e. in (1). However, texture is not interpolated, due to the "cancelling" effect of this supremum norm on the residual in unknown regions. Therefore, the texture component is inpainted and denoised by a dictionary based approach instead. The DG3PD decomposition drives noise into the residual component, which is discarded. The reconstruction of the ideal version of the image is obtained by summation of the inpainted and denoised cartoon and texture components (see Figure 1 for a visual overview).

Moreover, we uncover the link between the calculus of variations [38, 39, 40] and filtering in the Fourier domain [41] by analyzing the solution of the convex minimization in Eq. (2). Roughly speaking, the solution of the DG3PD inpainting model can be understood as the response of a lowpass filter, a highpass filter and a bandpass filter whose transfer functions satisfy the unity condition, i.e. they sum to one at every coordinate in the Fourier domain. We observe that this decomposition is similar to a wavelet or pyramidal decomposition scheme [42, 43, 44]. However, the basis elements obtaining the decomposition, i.e. the scaling function and the frame (or wavelet-like) function, are constructed by discrete differential operators (due to the discrete setting in minimizing (1)) which are referred to as wavelet-like operators in [45]. In particular,

  • the scaling function and wavelet-like function for the cartoon result from the effect of the multi-directional TV norm,

  • the scaling function and wavelet-like function to extract the texture are reconstructed by the effect of the multi-directional G-norm,

  • the effect of the ℓ_∞-norm in the curvelet domain is to remove the remaining signal in the known regions of the residual (due to its duality property).

We also describe flowcharts to show that the method of variational calculus (i.e. the DG3PD inpainting) is a closed loop pyramidal decomposition, in contrast to an open loop one, e.g. wavelet [46] or curvelet [47], see Figure 12. Numerically, we observe that the closed loop filter design by the calculus of variations results in lowpass, highpass and bandpass filters which are "unique" for different images, see Figure 15. We also analyze the DG3PD inpainting model from the viewpoint of the Bayesian framework and then define a discrete innovation model for this inverse problem.
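The unity condition can be checked numerically for filters built from the symbol of a discrete difference operator. The following is an illustrative construction only (the weights β, γ and the specific filter shapes are our assumptions, not the paper's actual DG3PD filters): splitting the constant 1 over a common denominator yields lowpass, bandpass and highpass responses that sum to one at every frequency.

```python
import numpy as np

# Illustrative sketch: transfer functions built from the symbol of the 1-D
# discrete derivative, s(w) = 4 sin^2(w/2). Splitting
#   1 = (1 + beta*s + gamma*s^2) / (1 + beta*s + gamma*s^2)
# gives three filters satisfying the unity condition by construction.
w = np.linspace(-np.pi, np.pi, 512)
s = 4.0 * np.sin(w / 2.0) ** 2            # |D_hat(w)|^2 of the difference op
beta, gamma = 2.0, 0.5                    # hypothetical smoothing weights
den = 1.0 + beta * s + gamma * s ** 2
lowpass = 1.0 / den                       # keeps cartoon-scale content
bandpass = beta * s / den                 # keeps texture-scale oscillations
highpass = gamma * s ** 2 / den           # keeps residual/noise content
assert np.allclose(lowpass + bandpass + highpass, 1.0)  # unity condition
```

Changing β and γ reshapes all three responses jointly, which mirrors the observation that the closed loop filter design adapts to each image.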

The setup of the paper is as follows. In Section 2, we describe the DG3PD model for image inpainting and denoising. In Section 3, we show how to compute the solution of the convex minimization in the DG3PD inpainting problem by the augmented Lagrangian method. In Section 4, we describe the proposed method for texture inpainting and denoising. In Section 5, we compare the proposed method to existing ones (TVL2 inpainting, Telea [48] and Navier Stokes [49]). In Section 6, we consider our inverse problem from a statistical point of view, i.e. the Bayesian framework, to describe how to select priors for the cartoon and texture components. We analyze the relation between the calculus of variations and the traditional pyramid decomposition scheme, e.g. the Gaussian pyramid, in Section 7. We conclude the study in Section 8. For more detailed notation and mathematical preliminaries, we refer the reader to [36, 50].

2 Inpainting by DG3PD

We define a method for the restoration of an original noisy or noise-free image with a set of known missing regions. The proposed model is a generalized version of DG3PD [1] for the inpainting and denoising problem. This modification for our DG3PD-inpainting is inspired by [24].

In particular, the discrete image with missing regions is simultaneously decomposed into cartoon, texture and noise components; notably, the cartoon is interpolated on the missing regions due to the indicator function in the modified model.

A set of missing regions on a bounded domain is defined by an indicator function; its complement marks the known pixels.

Instead of placing the indicator function on the norm of the residual (see Eqs. (4) and (6) in [24]), we introduce it for the residual in the DG3PD model via the point-wise multiplication operator. We propose the DG3PD-inpainting as follows


Note that if there are no missing regions, i.e. , the minimization problem (2) becomes the DG3PD model [1]. Next, we discuss how to solve the DG3PD-inpainting model (2).
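The role of the indicator function in the data term can be illustrated with a small sketch. The array names `f`, `u`, `v` and `chi` below are ours for illustration, not the paper's notation: `chi` is 1 on known pixels and 0 on the missing region, so the residual is measured only where data exist and missing pixels impose no fidelity penalty.

```python
import numpy as np

# Sketch of the masked residual term: chi is the indicator of the KNOWN
# pixels (0 on the missing regions, 1 elsewhere).
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))           # observed image with gaps
u = np.zeros_like(f)                      # cartoon estimate
v = np.zeros_like(f)                      # texture estimate
chi = np.ones_like(f)
chi[2:5, 2:5] = 0                         # mark a missing region
residual = chi * (f - u - v)              # point-wise multiplication
assert np.all(residual[2:5, 2:5] == 0)    # no penalty on missing pixels
```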

3 Solution of the DG3PD Inpainting Model

In this section we present a numerical algorithm for obtaining the solution of the DG3PD-inpainting model stated in (2). To simplify the calculation, we introduce three new variables:

and we denote the indicator function of the feasible convex set of (2), i.e.

Since (2) is a constrained minimization problem, the augmented Lagrangian method is applied to turn it into an unconstrained one as


where the Lagrange function is

Similar to [1], the alternating direction method of multipliers (ADMM) is applied to solve the unconstrained minimization problem (2). Its minimizer is numerically computed through iterations

and the Lagrange multipliers are updated after every step. We also initialize the variables with zero matrices.

In each iteration, we first solve the seven subproblems in the listed order: “-problem”, “-problem”, “-problem”, “-problem”, “-problem”, “-problem”, “-problem”, and then we update the five Lagrange multipliers, namely , see Algorithm 1.
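The alternating structure (solve each subproblem in turn, then update the multipliers) can be illustrated on a toy analogue, not the paper's actual subproblems: ADMM for a masked ℓ1 + quadratic model, where the mask `chi` plays the same role as the indicator function above.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage), the proximal map of the l1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_masked_l1(f, chi, lam=10.0, rho=1.0, n_iter=100):
    # Toy analogue of the alternating scheme:
    #   min_z ||z||_1 + lam/2 ||chi * (x - f)||^2   s.t.  x = z,
    # solved by two subproblems per iteration plus a dual (multiplier) update.
    x = np.zeros_like(f)
    z = np.zeros_like(f)
    w = np.zeros_like(f)                                     # scaled multiplier
    for _ in range(n_iter):
        x = (lam * chi * f + rho * (z - w)) / (lam * chi + rho)  # x-subproblem
        z = soft(x + w, 1.0 / rho)                               # z-subproblem
        w = w + (x - z)                                          # dual ascent
    return z
```

On known pixels the fixed point is the soft-thresholded datum; on missing pixels (`chi = 0`) the data term vanishes and the ℓ1 term drives the estimate to zero, mirroring the "cancelling" effect discussed above.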

In this section, we only present the solutions of the "-problem" and "-problem" as well as the updated Lagrange multiplier. We refer the reader to [1] for a detailed explanation of the other subproblems and Lagrange multipliers.

The -problem: Fix , , , , , and


The solution of (4) is defined as a projection operator on a convex set , i.e. :


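Projection onto a convex ℓ∞-ball reduces to coefficient-wise clipping. The sketch below illustrates this in a plain coefficient array; in the paper the constraint is imposed on curvelet coefficients of the residual, which we do not reproduce here.

```python
import numpy as np

# Projection onto the l-infinity ball {x : ||x||_inf <= delta} is just
# element-wise clipping of the coefficients to [-delta, delta].
def project_linf_ball(coeffs, delta):
    return np.clip(coeffs, -delta, delta)

c = np.array([-3.0, -0.2, 0.5, 2.4])
assert np.array_equal(project_linf_ball(c, 1.0), [-1.0, -0.2, 0.5, 1.0])
```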
The -problem: Fix , , , , , and


The solution of (6), with the point-wise division operator, is


where the denominator involves a matrix of ones.

Updated Lagrange Multiplier :

Choice of Parameters

To adapt to specific images and to balance the smoothing terms and update terms in the solutions of the above subproblems, the parameters are selected as described in [1] and defined as

Figure 2: An ideal image (a) with cartoon and texture components. Missing pixels (b) are shown in white. Image (c) combines missing pixels and noise.

The term in (2) serves as a good measurement of the amount of noise due to the subtraction in Eq. (5), see [51, 36, 1] (thanks to the multiscale and multidirection properties of the curvelet transform and its duality). However, due to the indicator function in the noise measurement, the texture interpolated by DG3PD inpainting ends up almost entirely in the residual, see Figure 3 (a), and its smoothed version by curvelet shrinkage (b). Because of the subtraction operator in Eq. (5), these interpolated textures are canceled out, see (c). Figure 3 (d) illustrates the estimated texture before the subtraction in Eq. (5) as

where ROI is the region of interest obtained by the morphological operator for the texture image in [37].

Figure 3: The “canceling effect” is caused by the noise measurement .

In order to analyze the effects of the proposed model in terms of interpolation and decomposition, we consider a 1-dimensional signal (extracted along the red line in Figure 4 (a)-(d)). Under DG3PD inpainting, the mean values of the cartoon in (e) remain "nearly" unchanged. The homogeneous regions have "sharp" edges and a "staircase" effect does not occur (thanks to the directional TV). Texture in (f) is extracted in areas which contain repeated patterns in (b). Moreover, small-scale objects, e.g. noise, are removed.

However, the term causes a "canceling effect" which removes the interpolated texture in the unknown area during the decomposition process. Therefore, texture inpainting and denoising are tackled separately, as described in the next section, by a generalized version of dictionary learning for texture synthesis [31].

Figure 4: Interpolation of a 1-dimensional signal after DG3PD inpainting (without texture inpainting): (c) is a 1-dimensional signal extracted along the red line of the original image (a). (d) is the signal corrupted by Gaussian noise and missing regions, see (b). Thanks to the directional TV norm, the DG3PD inpainting produces "sharp" edges without a "staircase" effect in the cartoon (e), and the mean values remain "nearly" unchanged. Due to noise and the shrinkage soft-thresholding operator in (2), texture is reconstructed with a "sharp" transition in areas where texture is present in the original signal (c). However, due to "heavy" noise on the known domains in (e), the DG3PD fails to reconstruct the "full" texture. In general, it is a challenging problem to recover an oscillating (or weak) signal corrupted by heavy noise. (h) is the reconstructed image (without the texture inpainting step) and (i) is the quantized signal, i.e. truncated to [0, 255] and quantized.
  Initialization: set all variables and Lagrange multipliers to zero.
  for each iteration do
      1. Compute the solutions of the seven subproblems (in the order listed above).
      2. Update the five Lagrange multipliers.
  end for
Algorithm 1 The Discrete DG3PD Inpainting Model

4 Texture Inpainting and Denoising

Figure 5: All coefficients of the obtained texture component equal to zero are visualized by gray pixels in (a), positive coefficients are indicated by white pixels and negative coefficients by black pixels. (b) depicts the resulting segmentation. The mask of missing pixels shown in (c) is dilated and then intersected with (b), leading to the (white) pixels which are to be inpainted with texture in (d).
Figure 6: Images (a) to (e) visualize the inpainting progress (added to the cartoon component) from 0 to 100% in steps of 25%. Images (f) to (h) display the denoising results after one, five and 20 iterations, respectively.

The proposed method for reconstructing the texture component combines the following ideas in five subsequent steps.

  • Texture extraction by DG3PD [1].

  • Morphological processing [36, 37].

  • Texture inpainting inspired by the work of Efros and Leung [31].

  • Texture denoising by nonlocal means [14].

  • Image synthesis by summation with the cartoon component .

The DG3PD model enforces sparsity and smoothness on the obtained texture component. Sparsity means that the vast majority of its coefficients are equal to zero (the percentage of zero coefficients depends on how much texture a specific image contains). We make good use of this property to answer the question: which of the missing pixels shall be inpainted with texture? First, the texture component is segmented into zero (gray pixels in Figure 5(a)) and nonzero coefficients (black and white pixels in Figure 5(a)). Morphological processing (as described in [36, 37] for fingerprint segmentation) yields the texture regions (see Figure 5(b)). The mask of missing pixels shown in (c) is dilated (we used a circular structuring element with a 5 pixel radius in order to avoid border effects on the margin between existing and missing pixels) and then intersected with the texture region segmentation to obtain the mask shown in Figure 5(d), which marks the pixels to be inpainted with texture.
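The mask construction above (dilate the missing-pixel mask with a disc, then intersect with the texture segmentation) can be sketched as follows; the function names are ours, and the naive dilation stands in for a standard morphological routine.

```python
import numpy as np

def dilate(mask, radius):
    """Naive binary dilation with a disc-shaped structuring element."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disc = (yy ** 2 + xx ** 2 <= radius ** 2)   # circular structuring element
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
        x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
        out[y0:y1, x0:x1] |= disc[y0 - y + radius:y1 - y + radius,
                                  x0 - x + radius:x1 - x + radius]
    return out

def texture_inpainting_mask(missing, texture_region, radius=5):
    # dilate the missing mask, then intersect with the texture segmentation
    return dilate(missing, radius) & texture_region
```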

The proposed texture inpainting proceeds in two phases: first, we build a dictionary of texture patches, and second, we inpaint all pixels of the mask in a specific order. For the dictionary, we select all patches of size pixels (we used in our experiments) from the texture component which meet the following two criteria: at least percent of the pixels are known (additionally, the central pixel of the patch has to exist) and at least percent of the coefficients are nonzero (we used and in our experiments). The first criterion excludes patches which contain too many missing (unknown) pixels and the second criterion excludes patches without texture from the dictionary. Next, we iterate over all pixels of the mask and consider the pixel to be inpainted as the central pixel of an image patch of size . We count the number of known pixels inside the patch and compute the percentage of known pixels. Pixels are inpainted in the following order: we start with a threshold percentage which is decreased in steps of per iteration, and in each iteration we inpaint all pixels of the mask for which at least percent are known. The rationale behind this ordering is that the more pixels are known (or already inpainted) in the neighborhood around a missing pixel, the better it can be inpainted, because this additional information improves the chances of finding a good match in the texture dictionary. A third constraint ensures that the overlap of known pixels between a dictionary patch and the patch around the missing pixel contains at least percent of known pixels (we used in our experiments). Inpainting a pixel means that we find the best fitting patch in our texture dictionary and set the pixel value to the central pixel of that patch. Best fitting is defined as the minimum sum of squared differences, divided by the number of overlapping pixels.
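A minimal sketch of the matching rule just described (the function and parameter names are ours, and the patch bookkeeping is simplified): the best dictionary patch minimizes the sum of squared differences over the known-pixel overlap, normalized by the overlap size, and its central value fills the missing pixel.

```python
import numpy as np

def best_patch_center(target, known, dictionary, min_overlap_frac=0.5):
    """Return the central value of the best-matching dictionary patch.

    target     -- patch around the missing pixel (unknown entries arbitrary)
    known      -- boolean mask of known/already-inpainted pixels in target
    dictionary -- list of fully known texture patches of the same size
    """
    best, best_cost = None, np.inf
    overlap = known.sum()
    if overlap < min_overlap_frac * known.size:
        raise ValueError("too few known pixels to match reliably")
    for patch in dictionary:
        diff = (target - patch)[known]
        cost = np.sum(diff ** 2) / overlap        # per-pixel SSD on overlap
        if cost < best_cost:
            best, best_cost = patch, cost
    center = best.shape[0] // 2
    return best[center, center]                   # value for the missing pixel
```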

After all missing texture pixels have been inpainted, the texture region is denoised by iterations of nonlocal means. In each iteration, we first construct a dictionary considering all patches in the texture region with known and previously inpainted pixels. Next, for each pixel to be denoised, we find the top fitting patches in the dictionary using the same distance function (sum of squared pixelwise differences) and set the denoised pixel to the average value of the top central pixel values. In our experiments, we used and we have observed that after about iterations the image reaches a steady state. See Figure 6 for a visualization of several intermediate steps during the inpainting and denoising process.
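The denoising step above can be sketched in a few lines (names are ours; patch extraction and iteration over the image are omitted): each pixel is replaced by the average of the central pixels of its K best-matching patches, using the same SSD distance as the inpainting step.

```python
import numpy as np

def nl_means_pixel(target_patch, dictionary, k=10):
    """Denoise the central pixel of target_patch by top-k patch averaging."""
    dists = [np.sum((target_patch - p) ** 2) for p in dictionary]
    top = np.argsort(dists)[:k]                   # indices of the k closest
    center = target_patch.shape[0] // 2
    return np.mean([dictionary[i][center, center] for i in top])
```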

Finally, the full image is reconstructed by summation of the inpainted cartoon component with the inpainted and denoised texture component , see Figure 9.

5 Comparison of DG3PD to Further Inpainting Methods

Figure 7 shows inpainting results for the considered challenging problem (see Figure 2 (c)) obtained by well-known inpainting methods from the literature: an approach based on Navier-Stokes equations from fluid dynamics proposed by Bertalmio et al. [49], an inpainting method suggested by Telea [48], and a well-known TVL2 inpainting approach, with results shown in Figure 7 (c) and (d).

The image shown in Figure 2(c) has the three properties discussed in Section 1:

  • a large percentage of pixels in the image are missing and shall be inpainted,

  • the known pixels in the image are corrupted by noise,

  • the image contains both cartoon and texture elements.

Furthermore, a noise-free, ideal version depicted in Figure 2(a) is available, which serves as ground truth for evaluating the quality of the inpainted and denoised images produced by the different methods. The availability of a noise-free image is an advantage for comparing and evaluating different inpainting and denoising methods. In contrast, images taken by a digital camera contain a certain amount of noise and for them an ideal version is not available, see e.g. the Barbara image used in previous comparisons [1]. The choice of the ideal image (shown in Figure 2) makes it possible to clearly see and understand the advantages and limitations of the compared methods (see Figure 7).

Due to the fidelity term, the ℓ2-norm, in the TVL2 inpainting model, noise is reduced and the homogeneous areas are well interpolated. However, the method is known to cause the "staircase" effect on homogeneous regions and texture, while small-scale objects tend to be eliminated. If the parameter is chosen smaller (e.g. ), the resulting inpainted image is smoother, i.e. the "staircase" effect is reduced, but more texture parts are removed as well.

DG3PD inpainting produces a smooth result without the staircase effect. In comparison to the other approaches, it is the only method which reconstructs the texture regions in a satisfactory way.

Figure 7: Comparison of inpainting results obtained by (a) Navier Stokes [49], (b) Telea [48], (c, d) TVL2 and (e) DG3PD.

6 From the Bayesian Framework to the DG3PD Inpainting Model

In this section, we describe the DG3PD inpainting model from the viewpoint of the Bayesian framework through the maximum a posteriori (MAP) estimator. We assume that the cartoon and texture components are independent random variables (r.v.).

  • Prior of the cartoon image: A cartoon consists of pixels, each of which is considered as an independent r.v. with a prior.

    To describe the distribution of a single r.v., we first define the distribution of the corresponding multi-dimensional r.v., which we assume to have a multi-dimensional Laplace distribution, a special case of the multi-dimensional power exponential distribution [52, 53]:

    Thus, the distribution of the r.v. is a multi-dimensional Laplace distribution under the operator:

    The joint probability density function (p.d.f.) of the prior is

    We choose . The potential function in matrix form is

    Note that the original r.v. does not have a Laplace distribution, but its transform under the operator has an independent multi-dimensional Laplace distribution.

  • Prior of the texture image: Under a transform operator, the texture image is decomposed into different orientations. The operator is suitably chosen to capture the texture components in the original image. Following the definition of the discrete multi-dimensional G-norm [1], the transform (in a given direction) is defined by its inverse:

    We assume that a texture is sparse in the transform domain and sparse in the spatial domain (nonzero coefficients arise only from texture). To satisfy these conditions, we assume that the r.v. follows a mixture of a Laplace distribution in the spatial domain (w.r.t. sparsity in the spatial domain) and a multi-dimensional Laplace distribution under the operator (w.r.t. sparsity in the transform domain). Thus, the p.d.f. of the r.v. is as follows

    Since , we have

    To define the distribution of the r.v., we define the distribution of the corresponding multi-dimensional r.v. Similar to the cartoon, we assume it has a multi-dimensional Laplace distribution. It is easy to see that

    Substituting (9) and (10) into (8), we obtain the p.d.f. of the r.v. as

    The joint p.d.f. of the texture image is

    We choose and . The potential function of the texture, in an anisotropic version in matrix form, is defined as

  • Likelihood (or the joint p.d.f. of the observation): If we assume that the residual in an image has a power exponential distribution [52, 53], e.g. Gaussian or Laplacian, its density function is

    Due to the inpainting problem with a missing region, the likelihood is defined on the known domain as

  • Posterior: Since the cartoon and texture priors are independent of each other, the posterior is written as

The MAP estimator is then defined as

However, in practice we do not know which type of noise is observed in a signal. Instead of using a norm of the residual on the known domain to characterize the properties of the noise, we control the smoothness of the solution by bounding the maximum curvelet coefficient of the residual on the known domain by a constant.

In [54], if the noise in the image is Gaussian, then, by extreme value theory, this maximum has a Gumbel distribution, which yields a condition for choosing the bound. However, in practice, if the noise cannot be identified, the bound is chosen as a quantile. Note that this condition is similar to the Dantzig selector [55].
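The quantile rule can be sketched in a few lines (an illustration under assumed names: plain array coefficients stand in for curvelet coefficients of the residual, and 0.99 is an arbitrary example quantile level).

```python
import numpy as np

# Sketch: when the noise type is unknown, the l-infinity bound delta can be
# set as an upper quantile of the absolute transform coefficients.
rng = np.random.default_rng(1)
coeffs = rng.standard_normal(10_000)        # stand-in residual coefficients
delta = np.quantile(np.abs(coeffs), 0.99)   # 99%-quantile as the bound
assert np.mean(np.abs(coeffs) <= delta) >= 0.99
```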

Figure 8 illustrates the empirical density functions of the solution of the minimization (3). The statistical properties of the solution can be characterized in the Bayesian framework with the priors, likelihood and posterior as mentioned above.
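In generic notation (the symbols below are illustrative, not the paper's exact ones), the link between the Laplace priors above and the ℓ1 terms of the variational model can be summarized by taking the negative logarithm of the posterior:

```latex
% Laplace prior on a transform Du of the cartoon u (generic notation):
%   p(u) \propto \exp\!\big(-\mu \, \|Du\|_1\big)
% The MAP estimate
%   \hat{u} = \arg\max_u \; p(f \mid u)\, p(u)
% then becomes a variational problem with an l1 (TV-like) regularizer:
\hat{u} \;=\; \arg\min_u \;
\underbrace{-\log p(f \mid u)}_{\text{fidelity term}}
\;+\; \mu \, \|Du\|_1 \;+\; \mathrm{const}.
```

This is the standard correspondence by which each prior in the list above contributes one regularization term of the DG3PD inpainting functional.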

Figure 8: The figure depicts the empirical density functions of the solution of (3) by ADMM: (a), (b), (c), (d), (e), (f) and (g) are the empirical density functions of the components and the QQ-plot of the residual, respectively. We assume that the prior of the cartoon has a multi-Laplacian distribution, i.e. a changed variable has a multi-Laplace distribution, or, equivalently, a univariate Laplace distribution. This causes the sparseness due to the shrinkage operator, see (d). This sparsity in turn causes the smoothness of the objects in the cartoon, see (a). The smooth and sparse texture (due to the expansion of the whole range of intensity values of its non-zero coefficients caused by the supremum norm in (2)) is illustrated in (b). Because additive white Gaussian noise was used in the simulation, the distribution of the residual is approximately normal, see plot (c) and its QQ-plot (g). The differences near the boundary in (g) may be caused by: (1) the simulation of Gaussian noise, i.e. the tail of the simulated noise does not extend to infinity; (2) texture remaining in the residual (due to the selection of the bound). Changing variables causes non-sparsity in one component and sparsity in the shrunk one, cf. (e) and (f).
Figure 9: Overview of the discrete innovation model for the DG3PD based image inpainting and denoising. Note that the ideal image does not exist in general, but the reconstructed image is obtained via the function-space formulation of the inverse problem.

Note that the DG3PD inpainting model (2) can be described as an inverse problem in image restoration (e.g. image inpainting and denoising), see Figure 9 for the discrete innovation model.

7 Relation Between Variational Analysis and Pyramid Decomposition

In this section, we discuss connections and similarities between the proposed DG3PD inpainting model and existing work in the area of pyramid representations, multiresolution analysis, and scale-space representation. Historically, Gaussian pyramids and Laplacian pyramids have been developed for applications like texture synthesis, image compression and denoising. Early works in this area include a 1983 paper by Burt and Adelson [42] applying Laplacian pyramids for image compression. Pyramidal decomposition schemes [42, 43, 44] can be grouped into two categories, whose main property is either lowpass or bandpass filtering. Closely related are transforms from multiresolution analysis, including wavelets, curvelets [47, 51], contourlets [57], and steerable wavelets [58]. A difference of the proposed DG3PD inpainting model is that its basis elements for obtaining the decomposition are constructed by discrete differential operators, and the decomposition is solved in a non-linear way which adapts to each image (enabled by update iterations, corresponding to the loop in Figure 12). The commonalities and differences are discussed in more detail in the rest of this section.

Let denote a discrete Fourier transform pair. Given and the Dirac delta function, the impulse response of the discrete directional derivative operators (