Optical Fringe Patterns Filtering Based on Multi-Stage Convolution Neural Network

01/02/2019 ∙ by Bowen Lin, et al. ∙ NetEase, Inc

Optical fringe patterns are often contaminated by speckle noise, making it difficult to accurately and robustly extract their phase fields. We therefore propose a filtering method based on deep learning, called the optical fringe pattern denoising convolutional neural network (FPD-CNN), which directly removes speckle from the input noisy fringe patterns. The FPD-CNN is divided into multiple stages, each consisting of a set of convolutional layers along with batch normalization and the leaky rectified linear unit (Leaky ReLU) activation function. End-to-end joint training is carried out using the Euclidean loss. Extensive experiments on simulated and experimentally obtained optical fringe patterns, especially finer ones with high density, show that the proposed method is superior to several state-of-the-art denoising techniques in the spatial and transform domains, efficiently preserving the main fringe features at a fairly fast speed.




1 Introduction

Optical interferometric techniques, such as electronic speckle pattern interferometry (ESPI), have been widely used in scientific research and engineering for their simple optical setups and their ability to provide high-resolution, full-field measurements in a non-contact mode [1, 2]. Noise unavoidably appears during the formation and acquisition of optical fringe patterns. In principle, a noisy fringe pattern can be modeled as

    f(u) = a(u) + b(u) cos(φ(u)) + n(u),    (1)

where u is the coordinate of a pixel, and a(u), b(u), φ(u) and n(u) are the background intensity, fringe amplitude, phase distribution and image noise, respectively [2, 5]. The existence of speckle noise seriously affects accurate phase extraction, which is crucial for successful fringe pattern demodulation. Since the frequency spectra of fringes and noise usually overlap and cannot be separated cleanly, simple filters strongly blur fringe features, especially fine ones with high density. It is therefore challenging to suppress the complicated speckle noise in an optical fringe pattern while preserving its features.
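As an illustration, the observation model of Eq. (1) can be used to simulate a noisy fringe pattern. The quadratic (chirped) phase, the constant background and amplitude, and the additive Gaussian noise standing in for the speckle statistics are all illustrative assumptions, not the paper's exact simulation protocol:

```python
import numpy as np

def simulate_fringe(size=128, peak=255.0, freq=0.15, noise_std=20.0, seed=0):
    """Simulate a clean/noisy fringe pair per the observation model
    f(u) = a(u) + b(u) * cos(phi(u)) + n(u)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    a = np.full((size, size), 0.50 * peak)       # background intensity a(u)
    b = np.full((size, size), 0.45 * peak)       # fringe amplitude b(u)
    # chirped phase so the fringe density varies across the image
    phi = 2.0 * np.pi * freq * xx + 1e-3 * (xx ** 2 + yy ** 2)
    clean = a + b * np.cos(phi)
    noisy = clean + rng.normal(0.0, noise_std, (size, size))  # noise term n(u)
    return clean, noisy
```

A multiplicative speckle model would be closer to real ESPI statistics; the additive term above only mirrors the structure of Eq. (1).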

Figure 1: Network architecture of proposed FPD-CNN for fringe pattern denoising.

Generally, the algorithms previously suggested for speckle noise reduction in optical fringe patterns can be categorized as spatial-domain or transform-domain methods. Among the transform-domain techniques, the Fourier transform [3], the windowed Fourier transform (WFF) [4], and the wavelet transform [6] have been applied with inspiring performance. Among the spatial-domain techniques, filtering along the fringe orientation [7] is an effective option: Yu et al. proposed spin filters [8] to smooth fringe patterns along the local tangent orientation, and Tang et al. proposed a technique using second-order oriented partial differential equations for diffusion along the fringe orientation [9]. Wang et al. proposed coherence-enhancing diffusion (CED) for optical fringe pattern denoising [10]; this technique controls the smoothing both along and perpendicular to the fringe orientation, a principle later developed for phase-map denoising [11]. In [12], Fu et al. developed a nonlocal self-similarity filter, which averages similar pixels searched over the whole image instead of along a local fringe direction as the spin filters do. By involving more pixels with higher similarity levels and being free of fringe orientation estimation, this simple algorithm is more robust against noise and yields better filtering results, especially for fine optical fringe patterns. Optical fringe pattern denoising algorithms based on variational image decomposition have also been proposed in the past few years, such as those in [13] and [14]. The basic idea of these methods is to decompose an image into low-density fringes, high-density fringes and a noise part, where each part is modeled by a different function space; an optimization functional combining the corresponding norms is established, and each part is recovered by minimizing this functional. However, for finer high-density fringe patterns contaminated by complicated speckle noise, such as fringes only a few pixels wide (sometimes no more than 5 pixels) in high-density/high-frequency regions, most existing methods struggle to balance noise removal against feature preservation. Although methods based on diffusion equations in the spatial domain are faster, they produce severe blurring in high-density regions, as the CED method [10] does because of its inaccurate estimation of the fringe orientation; another important issue is that the fringe geometry near boundaries is easily distorted. Transform-domain methods such as WFF [4] often have high computational cost and easily produce artifacts, and their parameter adjustment is also complicated.

Deep-learning-based methods have produced state-of-the-art results on various image processing tasks such as image restoration [15, 16, 17, 18, 19, 20] and super-resolution [21]. However, to the best of our knowledge, deep-learning-based denoising of optical fringe patterns has not been extensively studied in the literature. Motivated by recent advances in deep learning, we propose an optical fringe pattern denoising method based on a convolutional neural network (FPD-CNN) that directly estimates the speckle noise from the input images according to the observation model (1). The proposed network is divided into multiple stages, each of which consists of several convolutional layers along with batch normalization [22] and the Leaky ReLU [23] activation function (see Fig. 1). We use the efficient residual learning technique [24] to estimate the noise directly from the noisy image, and the network is trained end-to-end using the Euclidean loss function. One of the main advantages of deep-learning-based fringe denoising is that the restoration parameters are learned directly from training data rather than relying on predefined image priors or filters; the proposed CNN-based method can therefore be regarded as an adaptive method driven by large-scale data. The remainder of the article introduces the proposed method, the experimental results with analysis, and the conclusion.

2 Proposed Method

2.1 Architecture of FPD-CNN

In the design of our network architecture, the FPD-CNN contains T stages of noise estimation, and each stage has D layers, which makes the proposed architecture deeper. The denoised fringe pattern is then obtained by simply subtracting the estimated noise from the noisy fringe pattern. Because the noise estimated by a single stage is not accurate and often still contains some structural details of the fringe, a multi-stage strategy, similar to an iterative mechanism, is adopted to further reduce the error: the noise estimated at each stage serves as the input of the next stage. As shown in Fig. 1, the architecture of each stage is described as follows:
 (i) The first layer (light blue) applies spatial template convolution filtering (Conv) followed by the Leaky ReLU operation, where 64 filters are used to generate 64 feature maps and the Leaky ReLU is then utilized for nonlinearity; the channel dimension of 1 indicates that the data to be processed is single-channel.
 (ii) The middle layers (blue) each apply the convolution, batch normalization (BN) and Leaky ReLU operations, where 64 filters are used and batch normalization is inserted between the convolution and the Leaky ReLU.
 (iii) The last layer (light blue) includes only a convolution operation, where a single filter is used to reconstruct the output.

Furthermore, the convolutions use appropriate zero padding to ensure that the output of each layer has the same dimension as the input image. Batch normalization is added to alleviate the internal covariate shift by incorporating a normalization step and a scale-and-shift step before the Leaky ReLU operation in each layer. Finally, the Leaky ReLU function is defined as

    f(x) = max(x, 0) + α · min(x, 0),    (2)

which is a pixel-by-pixel operation with a small slope α for negative inputs. When α = 0, it degenerates into the ReLU function. The Leaky ReLU activation function inherits the advantages of ReLU in two respects [23]: first, it effectively alleviates the so-called “exploding/vanishing gradient” problem; second, it accelerates convergence. At the same time, it propagates gradients for negative inputs, which reduces the risk of falling into a poor local optimum.
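As an illustration, one middle layer of a stage (Conv → BN → Leaky ReLU, as described above) can be sketched in plain numpy. The 3×3 kernel used in the usage check and the single feature map are illustrative assumptions, not the trained network's actual parameters:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Sliding-window filtering (cross-correlation, as is conventional in
    CNN implementations) with zero padding, so the output keeps the
    spatial size of the input."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalization step followed by the learnable scale-and-shift step."""
    return gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

def leaky_relu(x, alpha=0.05):
    """Pixel-by-pixel Leaky ReLU: f(x) = max(x, 0) + alpha * min(x, 0).
    Setting alpha = 0 recovers the plain ReLU."""
    return np.maximum(x, 0) + alpha * np.minimum(x, 0)

def middle_layer(x, kernel, alpha=0.05):
    """One 'blue' layer of a stage: Conv -> BN -> Leaky ReLU."""
    return leaky_relu(batch_norm(conv2d_same(x, kernel)), alpha)
```

In the actual network the convolution has 64 filters per layer and learned BN parameters; the single-map version above only shows the order of the operations.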

2.2 Mathematical Principle of FPD-CNN

The purpose of training the network in each stage is actually to solve the following regularization model, as mentioned in [16]:

    min_x  (λ/2) ||y − x||_F^2 + Σ_{k=1}^K ρ_k(f_k ∗ x).    (3)

Here, y is a noisy observation, x is the ground-truth fringe image, ||·||_F is the Frobenius norm, N denotes the image size, λ is a regularization parameter, f_k ∗ x stands for the convolution of the image with the k-th filter kernel, and ρ_k(·) represents the k-th penalty function. The solution of Eq. (3) can be interpreted as performing one gradient descent step at the starting point y, given by

    x̂ = y − α Σ_{k=1}^K f̄_k ∗ ρ′_k(f_k ∗ y),    (4)

where f̄_k is the adjoint filter of f_k (i.e., f̄_k is obtained by rotating the filter f_k by 180 degrees), α is a step size, and ρ′_k is the derivative of ρ_k with respect to its argument; this influence function can be regarded as a pointwise nonlinearity applied to the convolution feature maps, and it is replaced by the Leaky ReLU activation function in our model. Eq. (4) is equivalent to the following formula:

    v = y − x̂ = α Σ_{k=1}^K f̄_k ∗ ρ′_k(f_k ∗ y),    (5)

where v is the estimated residual of y with respect to x̂. Hence, we adopt the residual learning formulation to train a residual mapping defined as

    R(y; Θ) ≈ v,    (6)

where Θ denotes the network parameters, including the filters and batch normalization parameters in each layer, which are updated during training. The residual R(y; Θ) is regarded as the noise estimated from y, and the approximate clean fringe image is then obtained by

    x̂ = y − R(y; Θ).    (7)
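The multi-stage residual scheme can be sketched as follows: each stage refines the noise estimate produced by the previous one (the first stage takes the noisy observation itself), and the clean image is finally recovered by subtracting the last estimate from the noisy input. The stage functions here are stand-ins for the trained convolutional sub-networks:

```python
import numpy as np

def fpd_denoise(y, stages):
    """Multi-stage residual denoising sketch: `stages` is a list of
    callables, each mapping its input to a refined noise estimate.
    The recovered image is x_hat = y - v, with v the final estimate."""
    v = y                    # stage 1 operates on the noisy observation
    for stage in stages:
        v = stage(v)         # each refined estimate feeds the next stage
    return y - v
```

With an oracle stage that outputs the true noise, the clean image is recovered exactly; in practice each stage is a trained CNN and the refinement only reduces, rather than eliminates, the estimation error.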
Suppose that {(y_i, x_i)}_{i=1}^P represents the P clean–noisy training image pairs; then the Euclidean loss function is defined as

    ℓ(Θ) = (1/(2P)) Σ_{i=1}^P ||R(y_i; Θ) − (y_i − x_i)||_F^2.    (8)
Therefore, the training task of learning the parameters Θ can be considered as solving the optimization problem

    Θ* = arg min_Θ ℓ(Θ).    (9)
The gradient descent step of Eq. (4) can be regarded as a two-layer feed-forward CNN, and we apply the back-propagation algorithm to solve problem (9). Essentially, according to the theory of numerical approximation, the proposed method trains a nonlinear function R(·; Θ) to approximate the intensity statistical distribution of noisy pixels.
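The Euclidean loss described above compares the predicted residual R(y_i; Θ) against the true residual y_i − x_i over the training pairs. A minimal numpy version, with the predicted residuals passed in as arrays, is:

```python
import numpy as np

def euclidean_loss(pred_residuals, noisy, clean):
    """Sum-of-squares loss between predicted residuals R(y_i) and the
    true residuals y_i - x_i, averaged over the P training pairs."""
    p = len(noisy)
    total = 0.0
    for r, y, x in zip(pred_residuals, noisy, clean):
        total += np.sum((r - (y - x)) ** 2)
    return total / (2.0 * p)
```

A perfect residual prediction drives the loss to zero; during training the gradient of this loss with respect to Θ is computed by back-propagation.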

3 Experimental Results With Analysis

In order to train our model, we use Eq. (1) to simulate 200 clean–noisy image pairs; the detailed process of simulating such images can be found in references [2, 5]. Some of the sample image pairs used for training our network are shown in Fig. 2. All simulated images share the same size and the same fixed pixel-intensity range, although different peak values could also be chosen. To avoid over-fitting to some extent, data augmentation is used, such as horizontal inversion, rotation transformation and other geometric

Figure 2: Partial sample image pairs simulated by the optical fringe pattern model ( Eq.(1) ) for training our network.

transformation methods. Then, 230400 clean–noisy patch pairs are cropped to train the model. The weights are initialized by the method in [25], and the whole network is trained using the adaptive moment estimation (ADAM) optimizer [26]. With a batch size of 64 and a learning rate of 1e-3, we train our FPD-CNN for only 30 epochs. For the light blue layer in Fig. 1, namely the first layer of each stage, the slope of the Leaky ReLU is set to 0.05; for the remaining layers it is 0.5. The number of stages T and the number of layers per stage D are set to 3 and 8, respectively, so the total number of layers is 24. Experiments are carried out in MATLAB R2017b using the MatConvNet toolbox, with an Intel Xeon E5-2670 2.6 GHz CPU and an Nvidia GTX 1080 GPU. Training FPD-CNN takes about 24 hours. We introduce the experimental results from three aspects as follows.
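The augmentation and patch-cropping step described above can be sketched as follows; the patch size and stride in the usage check are illustrative choices, since the exact values used in training are not reproduced here:

```python
import numpy as np

def augment(img, mode):
    """Geometric augmentation: identity, horizontal inversion,
    or rotation by a multiple of 90 degrees."""
    if mode == 0:
        return img
    if mode == 1:
        return np.fliplr(img)          # horizontal inversion
    return np.rot90(img, k=mode - 1)   # 90/180/270 degree rotation

def extract_patches(img, patch=40, stride=20):
    """Crop overlapping square patches from one image for training."""
    h, w = img.shape
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]
```

Applying the augmentations to each simulated pair before cropping multiplies the number of available patch pairs without new simulations.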

3.1 Ablation Study

According to the experimental results reported in [23], on small datasets the Leaky ReLU activation function and its variants consistently outperform the ReLU function in

Figure 3: Comparison of different activation functions (from left to right): the experimentally obtained optical fringe pattern, results of proposed FPD-CNN using ReLU and Leaky ReLU, respectively.

CNNs. Therefore, we adopt the Leaky ReLU to enhance the generalization performance of our method on a relatively small dataset. An ablation study is performed to demonstrate the effect of the Leaky ReLU in comparison with the ReLU: the same training data is used to train the proposed network, and all other configuration parameters are also kept the same.

Figure 4: Denoising one simulated fringe pattern (the 1st row, from left to right): noise-free image, noisy image, WFF, CED, FPD-CNN, respectively. The 2nd row shows the corresponding skeletons.

Then, we perform a test on a finer, high-density, experimentally obtained optical fringe pattern to verify the generalization ability of the proposed model. As shown in Fig. 3, the result obtained with the ReLU is seriously blurred. With the Leaky ReLU, by contrast, not only is the noise effectively filtered out, but the geometry of the fringe pattern is also well preserved and enhanced. The superiority of the Leaky ReLU activation function can be attributed to its ability to pass negative values through with a small slope, avoiding the problem of so-called “neuronal death”.

3.2 Results on Simulated Images

We compare the proposed FPD-CNN with two representative methods, WFF [4] and CED [10], which belong to the transform domain and the spatial domain, respectively. Ten simulated fringe patterns are tested, and the peak signal-to-noise ratio (PSNR), the mean structural similarity index (SSIM) [27], the mean absolute error (MAE), and the running time (in seconds) are calculated to evaluate the proposed method. The skeletons are obtained by the parallel thinning algorithm of [28]. Following the principle of minimizing the MAE, we adjust the parameters of both WFF and CED to be optimal. Average quantitative results are shown in Table 1. The following analysis illustrates why the proposed method achieves significant advantages in the quantitative comparisons.

Method            WFF       CED      FPD-CNN
PSNR              15.13     12.22    24.48
SSIM              0.68      0.554    0.97
MAE               30.86     42.94    9.87
Running time (s)  222.17    2.29     0.57
Table 1: Average quantitative results on ten simulated fringe patterns (best values in the FPD-CNN column).
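For reference, the PSNR and MAE figures of Table 1 can be computed as below (SSIM is omitted here, since it involves local statistics and stabilization constants); the peak value of 255 is an assumption about the image range:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference
    and a denoised image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mae(ref, img):
    """Mean absolute error between reference and denoised image."""
    return np.mean(np.abs(ref.astype(float) - img.astype(float)))
```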

The denoised results of the different methods for one fine simulated fringe pattern with greatly varying density are shown in Fig. 4. The results obtained with WFF and CED are not satisfactory. Since the focus here is on finer fringe patterns with high density, WFF must adopt a wider frequency band in order not to destroy the signal, which lets part of the noise survive the threshold processing and consequently reduces the filtering performance. This problem is particularly noticeable when dealing with speckle noise, which is often superimposed on the frequency spectrum of the fringe pattern, so simple hard-thresholding cannot completely separate them. The WFF results in Fig. 4 reflect exactly this problem: artifacts and noise clearly remain in the denoised images, blurring appears in the high-density areas, and the extracted skeleton is broken in the corresponding regions. For CED, performance is closely tied to accurate estimation of the fringe orientation. When the frequency increases gradually, CED can still produce very good results as long as the noise does not distort the structure of the fringe pattern, as shown in the 4th column of Fig. 4. Conversely, when the noise has destroyed the structure of the finer high-density regions, the inaccurate calculation of the fringe orientation in CED leads to damaged results with severely blurred details; consequently, the skeletons obtained by CED are either severely broken or exhibit spurious branches. Finally, through large-scale learning, the proposed FPD-CNN approximates an accurate nonlinear mapping from noisy data to clean data and achieves the best results among the three compared methods: noise is efficiently filtered out while the main fringe features are well preserved, and the corresponding skeleton is also quite accurate, apart from a few small branches.

3.3 Results on Experimental Optical Fringe Patterns

In Fig. 5 and Fig. 6, two experimentally obtained digital interferograms are shown, one with high fringe density and one with general density. When a clean reference is missing, visual inspection is a way to qualitatively evaluate the performance of the different methods. In Fig. 5, the proposed FPD-CNN and WFF achieve better filtering results while preserving the fringe structures. Furthermore, the FPD-CNN is

Figure 5: Denoising an experimentally obtained optical fringe pattern with high density (the 1st row, from left to right): noisy image, results by WFF, CED, and FPD-CNN, respectively. The 2nd row shows the corresponding skeletons.
Figure 6: Denoising an experimentally obtained optical fringe pattern with general density (the 1st row, from left to right): noisy image, results by WFF, CED, and FPD-CNN, respectively. The 2nd row shows the corresponding skeletons.

superior to WFF in preserving image boundary features; visually, WFF produces a slightly blurred result. The extracted skeletons also show that the proposed method is superior to WFF in preserving fine fringe features. CED's results, however, are not satisfactory, yielding serious discontinuities in the skeleton. In Fig. 6, all three methods obtain nice filtering results for the real digital interferogram with general density, though the boundary geometry of the fringes processed by WFF and CED is obviously not as clear as that of the proposed FPD-CNN. The skeleton corresponding to WFF shows a slight loss at the boundary, the center of the skeleton corresponding to CED contains some small spurious branches, and the edge positions of the skeletons extracted from the CED-filtered images are also shifted.

4 Conclusion

We have proposed a new optical fringe pattern denoising method based on a multi-stage convolutional neural network. Residual learning is used to separate the noise directly from the noisy image, and the multiple stages of the FPD-CNN are jointly trained in an end-to-end fashion for improved removal of fringe noise. As an attempt at optical fringe pattern filtering using a CNN, our method performs significantly better at preserving the main fringe features, with smaller quantitative errors and a fairly fast speed, in experiments on both simulated and experimentally obtained optical fringe patterns. Further optimization of the network architecture and augmentation of the corresponding training data are left as future work.

5 Acknowledgment

The research has been supported in part by the National Natural Science Foundation of China (61272239, 61671276) and the Science and Technology Development Project of Shandong Province of China (2014GGX101024).


  • (1) V. Bavigadda, R. Jallapuram, E. Mihaylova, and V. Toal, Electronic speckle-pattern interferometer using holographic optical elements for vibration measurements, Opt. Lett. 35 (11) (2010), 3273–3275.
  • (2) D. W. Robinson and G. T. Reid, Interferogram Analysis–Digital Fringe Pattern Measurement Techniques, Philadelphia, PA: Inst. Phys. (1993).
  • (3) A. Shulev, I. Russev, and V. C. Sainov, New automatic FFT filtration method for phase maps and its application in speckle interferometry, Proc. SPIE. 4933 (2003), 323–327.
  • (4) K. Qian, Windowed Fourier transform for fringe pattern analysis, Appl. Opt. 43 (13) (2004) 2695–2702.
  • (5) K. Qian, Windowed fringe pattern analysis, Bellingham, Wash, USA: SPIE Press. (2013).
  • (6) A. Federico and G. H. Kaufmann, Comparative study of wavelet thresholding methods for denoising electronic speckle pattern interferometry fringes, Opt. Eng. 40 (11) (2001) 2598–2605.
  • (7) Q. Sun and S. Fu, Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing, Appl. Opt. 56 (27) (2017) 7708–7717.

  • (8) Q. Yu, X. Liu, and K. Andresen, New spin filters for interferometric fringe patterns and grating patterns, Appl. Opt. 33 (17) (1994) 3705–3711.
  • (9) C. Tang, H. Lin, H. Ren, D. Zhou, Y. Chang, X. Wang, and X. Cui, Second-order oriented partial-differential equations for denoising in electronic-speckle-pattern interferometry fringes, Opt. Lett. 33 (19) (2008) 2179–2181.
  • (10) H. Wang, K. Qian, W. Gao, H. S. Seah, and F. Lin, Fringe pattern denoising using coherence-enhancing diffusion, Opt. Lett. 34 (8) (2009) 1141–1143.
  • (11) C. Tang, T. Gao, S. Yan, L. Wang, and J. Wu, The oriented spatial filter masks for electronic speckle pattern interferometry phase patterns, Opt. Express 18 (9) (2010) 8942–8947.
  • (12) S. Fu and C. Zhang, Fringe pattern denoising using averaging based on nonlocal self-similarity, Opt. Commun. 285 (10–11) (2012) 2541–2544.
  • (13) S. Fu and C. Zhang, Fringe pattern denoising via image decomposition, Opt. Lett. 37 (3) (2012) 422–424.
  • (14) W. Xu, C. Tang, Y. Su, B. Li, and Z. Lei, Image decomposition model Shearlet-Hilbert-L with better performance for denoising in ESPI fringe patterns, Appl. Opt. 57 (4) (2018) 861–871.
  • (15) J. Xie, L. Xu, and E. Chen, Image denoising and inpainting with deep neural networks, in Proc. Advances in Neural Inform. Processing Syst., Lake Tahoe, NV, (2012) 350–358.
  • (16) Y. Chen and T. Pock, Trainable nonlinear reaction diffusion: a flexible framework for fast and effective image restoration, IEEE Trans. Pattern Anal. Mach. Intell. 39 (6) (2017) 1256–1272.
  • (17) K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising, IEEE Trans. Image Process. 26 (7) (2017) 3142–3155.
  • (18) X. Fu, J. Huang, Y. Liao, X. Ding, and J. Paisley, Clearing the skies: a deep network architecture for single-image rain streaks removal, IEEE Trans. Image Process. 26 (6) (2017) 2944–2956.
  • (19) P. Wang, H. Zhang, and V. M. Patel, SAR image despeckling using a convolutional neural network, IEEE Signal Process. Lett. 24 (12) (2017) 1763–1767.
  • (20) C. Cruz, A. Foi, V. Katkovnik, and K. Egiazarian, Nonlocality-reinforced convolutional neural networks for image denoising, IEEE Signal Process. Lett. 25 (8) (2018) 1216–1220.
  • (21) C. Dong, C. L. Chen, K. He, and X. Tang, Learning a deep convolutional network for image super-resolution, in Proc. Eur. Conf. Comput. Vis. Springer, (2014) 184–199.
  • (22) S. Ioffe and C. Szegedy, Batch normalization: accelerating deep network training by reducing internal covariate shift, in Proc. 32nd Int. Conf. Machine Learn. (ICML-15) (2015) 448–456.

  • (23) B. Xu, N. Wang, T. Chen, and M. Li, Empirical evaluation of rectified activations in convolutional network, in Int. Conf. Machine Learn. Workshops, Deep Learn. (2015) 1–5.
  • (24) K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in IEEE Conf. Comput. Vis. Pattern Recognit. (2016) 770–778.

  • (25) A. Levin and B. Nadler, Natural image denoising: Optimality and inherent bounds, in IEEE Conf. Comput. Vis. Pattern Recognit. (2011) 2833–2840.
  • (26) D. P. Kingma and J. Ba, Adam: a method for stochastic optimization, in Proc. Int. Conf. Learn. Represent. (2015).
  • (27) Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process. 13 (4) (2004) 600–612.
  • (28) L. Lam, S. W. Lee, and C. Suen, Thinning methodologies: a comprehensive survey, IEEE Trans. Pattern Anal. Mach. Intell. 14 (9) (1992) 869–885.