References
 [1] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178–1181 (1991).
 [2] J. M. Schmitt, “Optical coherence tomography (OCT): a review,” IEEE J. Sel. Top. Quant. 5, 1205–1215 (1999).
 [3] J. Welzel, “Optical coherence tomography in dermatology: a review,” Skin Res. Technol. 7, 1–9 (2001).
 [4] M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3, 572–589 (2012).
 [5] J. M. Schmitt, S. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4, 95–105 (1999).
 [6] M. Wojtkowski, V. Srinivasan, T. Ko, J. Fujimoto, A. Kowalczyk, and J. Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation,” Opt. Express 12, 2404–2422 (2004).
 [7] A. Fercher, C. Hitzenberger, G. Kamp, and S. El-Zaiat, “Measurement of intraocular distances by backscattering spectral interferometry,” Opt. Commun. 117, 43–48 (1995).
 [8] M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski, and A. F. Fercher, “In vivo human retinal imaging by Fourier domain optical coherence tomography,” J. Biomed. Opt. 7, 457–463 (2002).
 [9] J. Schmitt and A. Knüttel, “Model of optical coherence tomography of heterogeneous tissue,” J. Opt. Soc. Am. A 14, 1231–1242 (1997).
 [10] M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25, 545–547 (2000).
 [11] B. Karamata, K. Hassler, M. Laubscher, and T. Lasser, “Speckle statistics in optical coherence tomography,” J. Opt. Soc. Am. A 22, 593–596 (2005).
 [12] A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24, 1901–1910 (2007).
 [13] Y. Yue, M. M. Croitoru, A. Bidani, J. B. Zwischenberger, and J. W. Clark Jr, “Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images,” IEEE T. Med. Imaging 25, 297–311 (2006).
 [14] F. Zhang, Y. M. Yoo, L. M. Koh, and Y. Kim, “Nonlinear diffusion in Laplacian pyramid domain for ultrasonic speckle reduction,” IEEE T. Med. Imaging 26, 200–211 (2007).
 [15] M. Gargesha, M. W. Jenkins, A. M. Rollins, and D. L. Wilson, “Denoising and 4D visualization of OCT images,” Opt. Express 16, 12313–12333 (2008).
 [16] D. L. Marks, T. S. Ralston, and S. A. Boppart, “Speckle reduction by I-divergence regularization in optical coherence tomography,” J. Opt. Soc. Am. A 22, 2366–2371 (2005).

 [17] A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, “General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery,” Opt. Express 18, 8338–8352 (2010).
 [18] A. Cameron, D. Lui, A. Boroomand, J. Glaister, A. Wong, and K. Bizheva, “Stochastic speckle noise compensation in optical coherence tomography using non-stationary spline-based speckle noise modelling,” Biomed. Opt. Express 4, 1769–1785 (2013).
 [19] J. Xie, Y. Jiang, H.T. Tsui, and P.A. Heng, “Boundary enhancement and speckle reduction for ultrasound images via salient structure extraction,” IEEE T. Biomed. Eng. 53, 2300–2309 (2006).
 [20] L. Fang, S. Li, Q. Nie, J. A. Izatt, C. A. Toth, and S. Farsiu, “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3, 927–942 (2012).
 [21] N. Iftimia, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography by path length encoded angular compounding,” J. Biomed. Opt. 8, 260–263 (2003).
 [22] L. Ramrath, G. Moreno, H. Mueller, T. Bonin, G. Huettmann, and A. Schweikard, “Towards multidirectional OCT for speckle noise reduction,” in Medical Image Computing and Computer-Assisted Intervention, (Springer, 2008), pp. 815–823.
 [23] M. Hughes, M. Spring, and A. Podoleanu, “Speckle noise reduction in optical coherence tomography of paint layers,” Appl. Optics 49, 99–107 (2010).
 [24] A. Desjardins, B. Vakoc, G. Tearney, and B. Bouma, “Speckle reduction in OCT using massively parallel detection and frequency-domain ranging,” Opt. Express 14, 4736–4745 (2006).
 [25] A. Desjardins, B. Vakoc, W. Oh, S. Motaghiannezam, G. Tearney, and B. Bouma, “Angle-resolved optical coherence tomography with sequential angular selectivity for speckle reduction,” Opt. Express 15, 6200–6209 (2007).
 [26] M. Pircher, E. Go, R. Leitgeb, A. F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8, 565–569 (2003).
 [27] Z. Jian, L. Yu, B. Rao, B. J. Tromberg, and Z. Chen, “Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform,” Opt. Express 18, 1024–1032 (2010).
 [28] E. Candès and B. Recht, “Exact matrix completion via convex optimization,” Commun. ACM 55, 111–119 (2012).
 [29] Y. Deng, Y. Liu, Q. Dai, Z. Zhang, and Y. Wang, “Noisy depth maps fusion for multiview stereo via matrix completion,” IEEE J. Sel. Top. Sign. 6, 566–582 (2012).

 [30] H. Ji, C. Liu, Z. Shen, and Y. Xu, “Robust video denoising using low rank matrix completion,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (IEEE, 2010), pp. 1791–1798.
 [31] J. Suo, Y. Deng, L. Bian, and Q. Dai, “Joint non-Gaussian denoising and super-resolving of raw high frame rate videos,” IEEE T. Image Process. 23, 1154–1168 (2014).
 [32] D. P. Bertsekas, Constrained optimization and Lagrange multiplier methods (Academic Press, 1982).
 [33] Z. Lin, M. Chen, and Y. Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” arXiv preprint arXiv:1009.5055 (2010).
 [34] A. Leigh, A. Wong, D. A. Clausi, and P. Fieguth, “Comprehensive analysis on the effects of noise estimation strategies on image noise artifact suppression performance,” in IEEE International Symposium on Multimedia (ISM), (IEEE, 2011), pp. 97–104.
 [35] T. S. Cho, N. Joshi, C. L. Zitnick, S. B. Kang, R. Szeliski, and W. T. Freeman, “A content-aware image prior,” in IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2010), pp. 169–176.
 [36] T. S. Cho, C. L. Zitnick, N. Joshi, S. B. Kang, R. Szeliski, and W. T. Freeman, “Image restoration by matching gradient distributions,” IEEE T. Pattern Anal. 34, 683–694 (2012).
 [37] F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb, “High-quality computational imaging through simple lenses,” ACM T. Graphic 32, 149:1–149:14 (2013).

 [38] Y. Deng, Q. Dai, and Z. Zhang, “An overview of computational sparse models and their applications in artificial intelligence,” in Artif. Intell. Evol. Comput. and Metaheuristics, (Springer Berlin Heidelberg, 2013), pp. 345–369.
 [39] H. Lee, A. Battle, R. Raina, and A. Ng, “Efficient sparse coding algorithms,” in Advances in Neural Information Processing Systems, (NIPS, 2006), pp. 801–808.
 [40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE T. Image Process. 13, 600–612 (2004).
 [41] Y. Yu and S. Acton, “Speckle reducing anisotropic diffusion,” IEEE T. Image Process. 11, 1260–1270 (2002).
 [42] Y. Yue, M. Croitoru, A. Bidani, J. Zwischenberger, and J. W. Clark, “Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images,” IEEE T. Med. Imaging 25, 297–311 (2006).
 [43] M. A. Mayer, A. Borsdorf, M. Wagner, J. Hornegger, C. Y. Mardin, and R. P. Tornow, “Image denoising algorithms archive,” http://www5.cs.fau.de/en/research/software/idaa/ (2012).
 [44] R. Bernardes, C. Maduro, P. Serranho, A. Araújo, S. Barbeiro, and J. Cunha-Vaz, “Improved adaptive complex diffusion despeckling filter,” Opt. Express 18, 24048–24059 (2010).
 [45] L. Fang, S. Li, R. McNabb, Q. Nie, A. Kuo, C. Toth, J. Izatt, and S. Farsiu, “Fast acquisition and reconstruction of optical coherence tomography images via sparse representation,” IEEE T. Med. Imaging 32, 2034–2049 (2013).
1 Introduction
Optical coherence tomography (OCT), which dates back to 1991 [1], provides cross-sectional views of the subsurface microstructure of biological tissues [2, 3], and has become a widely used diagnostic technique in medicine due to its noninvasive nature. Ophthalmology in particular has benefited from OCT, which can image the retina and aid in diagnosis [4]. Different from common CCD imaging, OCT is an interferometric technique: it typically uses near-infrared laser light to penetrate the scattering medium, and captures the backscattered optical waves to form the image [5]. With the development of ultrahigh-resolution OCT (UHR-OCT) [6] and Fourier-domain OCT (FD-OCT) [7, 8], it is feasible to visualize biological tissues at the cellular level, down to a depth of nearly 1 mm below the surface, with high sensitivity and image quality. Furthermore, the image acquisition speed of OCT systems has greatly improved with the development of high-speed sensors and tunable lasers with MHz scanning rates, which allows real-time in vivo imaging of tissues [8].
One of the important challenges limiting the development of high-speed OCT is its unsatisfactory image quality caused by speckle noise. Due to the coherence of the optical waves, speckle noise arises from the limited spatial-frequency bandwidth of the interference signals [5]. Because of the image formation mechanism of OCT, the properties of speckle noise are related not only to the laser source but also to the tissue's structural properties [9, 10, 11], which results in nonuniform speckle noise over the entire image. Given the importance of high precision in medical diagnosis, it is vital to remove speckle noise from OCT images to enhance image quality.
Large efforts have been devoted to denoising OCT images, and various approaches have been reported. These methods mainly fall into two categories: single-frame methods and multiframe methods.
Single-frame methods often assume a prior model (either parametric or nonparametric) for the latent signal and noise, and then remove the noise deterministically or probabilistically from a single input image. Filtering is a widely used strategy, and a number of OCT denoising filters have been proposed. For example, Ozcan et al. [12] apply various digital filters for denoising OCT images, and their results indicate that the nonorthogonal wavelet filter, together with the enhanced Lee and the adaptive Wiener filters, can significantly reduce speckle noise. Building on wavelet filtering, Yue et al. [13] utilize the iterative edge-enhancement feature of nonlinear diffusion to improve the denoising results. Similarly, [14] uses nonlinear diffusion in the Laplacian pyramid domain to filter ultrasonic images. The Kovesi filtering technique and the Laplacian pyramid nonlinear diffusion (LPND) technique are unified in [15] to remove both shot noise and speckle noise from OCT images. Besides filtering techniques, there are also other approaches such as regularization and Bayesian inference. The work in [16] uses a regularization method to minimize Csiszár's I-divergence measure, which extrapolates additional details from the input noisy images to improve visual quality. Wong et al. [17] and Cameron et al. [18] use a Bayesian least-squares model to reduce OCT speckle noise in logarithmic space. Xie et al. [19] take image content into consideration and propose a salient structure extraction algorithm combined with an adaptive speckle suppression term to enhance ultrasound images. Different from the above filtering or statistical methods, the work in [20], based on sparse coding, learns an overcomplete dictionary from high signal-to-noise-ratio (SNR) images, and then utilizes this dictionary to reconstruct low-SNR OCT images, achieving significant noise suppression. In all, exploiting the intrinsic redundancy of a single OCT frame can help noise removal to a large extent.
Benefiting from the development of high-speed OCT imaging systems, the correlation between adjacent frames has increased, and researchers have begun to use multiple OCT frames to attenuate speckle noise, proposing various multiframe methods. Technically, these methods can be further classified into hardware methods and algorithmic methods. The general idea of hardware methods is to change the parameters of the OCT imaging system to decorrelate the speckle noise in different frames, and then directly average these frames after image registration to obtain a noise-free OCT image. For example, [10] and [21, 22, 23] alter the angle of incident light while capturing different frames, [24, 25] change the detection angle of back-reflected light, and [26] changes the frequency of the laser beam. The main disadvantage of these hardware methods is the complex data acquisition procedure, which greatly increases the design complexity of OCT imaging systems [18]. Several recently reported algorithmic methods denoise multiple OCT images by exploiting the redundancy of the latent sharp OCT images in a transform domain. For example, the work in [27] applies the 3D curvelet transform to the volume data, thresholds the coefficients, and then applies the inverse 3D curvelet transform to realize noise removal. Under the same framework, Mayer et al. [4] choose the wavelet domain for coefficient thresholding. In spite of their promising performance, these denoising algorithms run the risk of losing crucial details by directly truncating the coefficients.
Making use of both the intraframe and interframe redundancy of OCT volume data, this paper proposes an algorithmic multiframe optimization method to denoise OCT images. Within each single frame, an OCT image is statistically similar to a natural image, and its pixel gradient map tends to be sparse; this serves as the intraframe prior. To avoid the piecewise-constant artifacts of a plain total variation constraint, and considering the excellent detail-preservation performance of the low-rank prior in matrix completion [28, 29] and image reconstruction [30, 31], we exploit the interframe redundancy by registering the OCT images along the frame dimension to form a low-rank volume. Subject to both the image formation model and a nonparametric bound constraint on the nonuniform speckle noise, we build a preliminary nonconvex optimization model which jointly minimizes the rank of the temporally registered OCT volume and forces the sparsity of its spatial gradients.
To solve the above model, we first perform mathematical transformations and approximations for convexification. Then, considering the superior convergence of the augmented Lagrange multiplier (ALM) method [32] for constrained optimization problems, as utilized in [33], we derive a numerical algorithm based on the ALM method to solve the convexified optimization model. Experiments on both pig-eye and human retina OCT data show that our denoising technique effectively reduces speckle noise while preserving crucial details, and exhibits superior performance compared to other popular methods.
The remainder of this paper is organized as follows: Sec. 2 sequentially describes the preprocessing operations (frame registration and noise variance estimation), the model construction, and the algorithm derivation. In Sec. 3, we apply our method to real captured OCT images, including pig-eye and human retinal data, and compare our approach with several other popular methods in terms of both visual quality and quantitative evaluation. Finally, Sec. 4 concludes the paper with discussions.
2 Method
In this section, we explain the overall framework of our approach, as diagrammed in Fig. 1. Our method includes three steps: (1) a preprocessing step comprising pixel-wise logarithm, frame registration and noise variance estimation; (2) a modeling step; and (3) a solving step based on the ALM method. The implementation of each step is detailed in the following subsections.
2.1 Preprocessing
In this subsection, we conduct three preprocessing operations on the captured OCT frames: pixel-wise logarithm, frame registration and noise variance estimation. Due to the scattering of the laser source, the speckle noise in OCT images is multiplicative [9, 10], and can be described as
(1)  m(x, y) = l(x, y) · n(x, y),
where m(x, y) denotes the captured data at location (x, y), while l(x, y) and n(x, y) respectively denote the ground truth data and measurement noise at the same location. To convert the relation between signal and noise from multiplication to addition, we take the logarithm of both sides of Eq. (1) and get
(2)  log m(x, y) = log l(x, y) + log n(x, y).
In the following, we assume that all the variables have been logarithmically transformed.
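As a quick illustration, the logarithmic transform of Eq. (2) can be sketched in a few lines of NumPy; the function names and the small `eps` guard are our own additions, not from the paper:

```python
import numpy as np

def to_log_domain(frames, eps=1e-8):
    """Convert multiplicative speckle (Eq. 1) into additive noise (Eq. 2)
    by taking the pixelwise logarithm of each captured frame."""
    frames = np.asarray(frames, dtype=np.float64)
    return np.log(frames + eps)  # eps guards against log(0)

def from_log_domain(log_frames):
    """Invert the transform after denoising in log space."""
    return np.exp(log_frames)
```

Because log(l · n) = log l + log n, any additive-noise denoiser can then be applied in the log domain and the result mapped back with the exponential.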
The second preprocessing operation is frame registration. Although OCT systems usually capture at high speed, there tends to be slight misalignment among OCT image sequences in in vivo imaging, due to object motion and other system factors [4]. Here we adopt the registration method of [4], where a Powell optimizer minimizes the sum of squared distances (SSD) among multiple registered images. Specifically, the approach applies translations and rotations to warp the pixels describing the same tissue position in different frames to the same image coordinates.
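A minimal sketch of this registration idea, using SciPy's Powell optimizer and restricting the warp to pure translation for brevity (the method described above also estimates rotations; all names here are illustrative):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def register_to_reference(frame, reference):
    """Estimate a translation aligning `frame` to `reference` by
    minimizing the sum of squared distances (SSD) with a Powell
    optimizer. Returns the warped frame and the estimated shift."""
    def ssd(t):
        warped = nd_shift(frame, t, order=1, mode='nearest')
        return np.sum((warped - reference) ** 2)
    res = minimize(ssd, x0=np.zeros(2), method='Powell')
    return nd_shift(frame, res.x, order=1, mode='nearest'), res.x
```

In practice one frame of the sequence is chosen as the reference and all others are warped onto it before the stack is processed.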
The third requisite preprocessing operation is speckle noise variance estimation. Considering the satisfactory performance of the median absolute deviation (MAD) method for noise estimation [34], we utilize the MAD method as described in [18]. Because tissue properties and light directions are similar within a small neighborhood, we assume a uniform noise variance for each pixel within a small patch. Therefore, the MAD within a small neighborhood Ω_p of pixel p is first computed in logarithmic space as [18]
(3)  MAD(p) = median_{p_k ∈ Ω_p} | m(p_k) − med(Ω_p) |,
where med(Ω_p) denotes the median value over p's neighborhood Ω_p, and p_k is the k-th neighboring pixel of p. To make the estimated deviation more precise, we choose a larger neighborhood Ω̃_p, and calculate the local standard deviation σ_k of each subneighborhood Ω_p ⊂ Ω̃_p. Then we regard the mode of these σ_k as the preliminary noise deviation at position p:
(4)  σ̂(p) = mode{σ_k}.
Finally, to enforce smoothness of the noise variance among adjacent pixels, we apply cubic spline fitting to amend σ̂ and obtain the final standard deviation estimation of the OCT noise (the corresponding noise variance is the square of the estimated standard deviation). Empirical studies in [18] identify the neighborhood sizes of Ω_p and Ω̃_p at which this noise estimation performs best.
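A rough single-window variant of MAD-based noise estimation might look as follows. This sketch uses a sliding median filter and the Gaussian consistency constant 1.4826 to convert MAD to a standard deviation; the mode-of-subneighborhoods step and the cubic-spline smoothing described above are omitted, and the window size `win` is an assumption:

```python
import numpy as np
from scipy.ndimage import median_filter

def local_noise_std(log_img, win=7):
    """Rough local noise standard-deviation map via the median absolute
    deviation (MAD) in the log domain, in the spirit of Eq. (3)."""
    med = median_filter(log_img, size=win)              # local medians
    mad = median_filter(np.abs(log_img - med), size=win)  # local MAD
    return 1.4826 * mad  # MAD -> std under a Gaussian assumption
```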
2.2 Modeling
In this subsection, we build our optimization model incorporating both the interframe and intraframe priors. Suppose that there are K frames in the OCT volume, and the pixel number of each frame is n. We denote the temporally registered noisy OCT images, their latent sharp version, and the measurement noise as M, L and N, respectively. Mathematically, the image formation equation can be written as
(5)  M = L + N.
By representing each frame as a column vector, the dimensions of M, L, and N are all n × K. After frame registration, theoretically the entries in one specific row of L should be exactly the same, as shown in Fig. 1. Therefore, we treat L as a low-rank matrix, and use the minimization of its nuclear norm ‖L‖_*, which is the sum of L's singular values [33], as the interframe prior constraint.
According to statistical studies [35, 36], adjacent pixels in natural images have similar intensities. Thus the image gradient centers around zero and follows a heavy-tailed distribution, i.e., the gradient of natural images is sparse. Although captured via a different imaging mechanism from usual CCD imaging, OCT images still follow similar statistics, and we impose the gradient sparsity of the latent OCT images as the intraframe prior. Specifically, the ℓ0 norm ‖·‖_0, which counts the number of nonzero entries in a matrix, models sparseness well, so we minimize ‖∇L‖_0 as the intraframe constraint. Adopting the same representation as [37], we use matrix multiplication for gradient calculation, namely ∇L = [D_h L; D_v L], where D_h and D_v are respectively the horizontal and vertical gradient operators, defined as the matrix forms of the corresponding high-pass filters.
As mentioned before, speckle noise includes both image information and zero-mean noise. Since our main concern is the removal of the latter component, we treat the first component as part of the latent sharp image, and concentrate on attenuating the zero-mean noise. According to the three-sigma rule, which indicates that nearly all (99.73%) instances of a Gaussian random variable lie within 3 standard deviations of its mean, we can approximately formulate the noise constraint as
(6)  N ∘ N ≤ 9 Σ ∘ Σ,
where Σ is the standard deviation matrix whose dimension is n × K. By introducing a nonnegative matrix variable W, we can transform the above inequality into an equality:
(7)  N ∘ N + W = 9 Σ ∘ Σ,
in which ∘ is the entrywise product, i.e., for two matrices X and Y, (X ∘ Y)_{ij} = X_{ij} Y_{ij}.
Based on the above notations, the optimization model for denoising is defined as
(8)  min_{L, N, W} ‖L‖_* + λ ‖∇L‖_0   s.t.  M = L + N,  N ∘ N + W = 9 Σ ∘ Σ,  W ≥ 0,
with λ being a positive weighting parameter to balance the different objective regularization terms.
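To make the two priors concrete, the sketch below evaluates the regularization terms of model (8) for a registered stack. It uses the convex ℓ1 surrogate of the gradient sparsity (the ℓ0 form is combinatorial), plain finite differences in place of the high-pass filter matrices, and our own reshaping convention and names:

```python
import numpy as np

def prior_terms(stack, lam=0.2):
    """Evaluate the interframe (nuclear norm) and intraframe (gradient
    sparsity, l1 surrogate) priors for a registered OCT stack of shape
    (K, H, W). Each frame becomes one column of the n-by-K matrix L."""
    K, H, W = stack.shape
    L = stack.reshape(K, H * W).T                      # columns = frames
    nuclear = np.linalg.svd(L, compute_uv=False).sum() # ||L||_*
    gx = np.diff(stack, axis=2)                        # horizontal grads
    gy = np.diff(stack, axis=1)                        # vertical grads
    sparsity = np.abs(gx).sum() + np.abs(gy).sum()     # l1 of gradients
    return nuclear + lam * sparsity
```

A perfectly registered, noise-free stack of identical smooth frames yields a rank-one L and zero gradients, so both terms are at their minimum; noise raises both.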
2.3 Solving the model
In this subsection, we derive our optimization algorithm based on the ALM method to solve the above model (8). The model is obviously nonconvex, so we first conduct several convexification transformations. As shown in [38, 39], replacing the ℓ0 norm with the ℓ1 norm is one typical convexification transformation. Here we replace ‖∇L‖_0 with ‖∇L‖_1, where ‖·‖_1 denotes the sum of the matrix entries' absolute values. Further, it is hard to directly utilize the ALM method to solve the convexified objective function due to its high nonlinearity. To address this problem, we replace the variables whose nuclear norm or ℓ1 norm needs minimization with two introduced auxiliary variables S and T. Besides, we pack the gradient operators as D = [D_h; D_v] for computational simplicity, so that ∇L = DL. With these substitutions, we can rewrite the model as
(9)  min_{S, T, L, N, W} ‖S‖_* + λ ‖T‖_1   s.t.  M = L + N,  S = L,  T = DL,  N ∘ N + W = 9 Σ ∘ Σ,  W ≥ 0.
Here S − L and T − DL are supposed to be 0 in theory.
As stated before, we utilize the ALM method to solve the above model. This method adopts an iterative optimization strategy, and successively updates every variable within each iteration. In the following, we derive the updating rules for each variable. First, the augmented Lagrangian function of Eq. (9) is
(10)  ℒ = ‖S‖_* + λ‖T‖_1 + ⟨Y_1, M − L − N⟩ + ⟨Y_2, S − L⟩ + ⟨Y_3, T − DL⟩ + ⟨Y_4, N ∘ N + W − 9 Σ ∘ Σ⟩
          + (μ/2) ( ‖M − L − N‖_F² + ‖S − L‖_F² + ‖T − DL‖_F² + ‖N ∘ N + W − 9 Σ ∘ Σ‖_F² ),
where ⟨·,·⟩ denotes the inner product, Y_1, …, Y_4 define the Lagrangian multipliers (in matrix form), and ‖·‖_F refers to the Frobenius norm, i.e., the square root of the sum of all squared entries of a matrix. Here μ is a penalty parameter balancing the four equality constraints in Eq. (9), and follows the standard ALM updating rule μ_{k+1} = ρ μ_k, where μ_0 and ρ are both user-defined parameters, and k indexes the iteration. The updating rules of the other variables, including S, T, L, N, W and Y, are derived as follows.
Optimize S. By removing all the terms irrelevant to S in ℒ, we get the subproblem
min_S ‖S‖_* + (μ/2) ‖S − (L − Y_2/μ)‖_F².
According to the ALM algorithm, the updating rule of S is
(11)  S = U 𝒮_{1/μ}(Λ) Vᵀ,
where U Λ Vᵀ is the SVD of L − Y_2/μ, and 𝒮_τ(x) = sign(x) · max(|x| − τ, 0) denotes the soft-thresholding (shrinkage) operator, here applied to the singular values. Similarly, keeping only the terms related to T in ℒ yields
min_T λ‖T‖_1 + (μ/2) ‖T − (DL − Y_3/μ)‖_F²,
and we can get the updating rule of T as the entrywise soft thresholding
(12)  T = 𝒮_{λ/μ}(DL − Y_3/μ).
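The two updates above are the standard proximal operators of the nuclear norm and the ℓ1 norm, namely singular value thresholding and entrywise soft thresholding; a minimal sketch (operator names are ours):

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft thresholding: the proximal operator of the l1
    norm, as used for the T update."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm, as used for the S update."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt
```

Both operators shrink small values to exactly zero, which is what enforces low rank (on singular values) and gradient sparsity (on entries).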
Optimize L and N. By keeping only the terms related to L, ℒ is simplified as
ℒ_L = ⟨Y_1, M − L − N⟩ + ⟨Y_2, S − L⟩ + ⟨Y_3, T − DL⟩ + (μ/2) ( ‖M − L − N‖_F² + ‖S − L‖_F² + ‖T − DL‖_F² ),
and the partial derivative of ℒ with respect to L is
∂ℒ/∂L = −Y_1 − Y_2 − Dᵀ Y_3 − μ (M − L − N) − μ (S − L) − μ Dᵀ (T − DL).
Similarly, the partial derivative of ℒ with respect to N is
∂ℒ/∂N = −Y_1 − μ (M − L − N) + 2 N ∘ ( Y_4 + μ (N ∘ N + W − 9 Σ ∘ Σ) ).
It is hard to obtain a closed-form solution for either L or N, so we resort to the gradient descent method to approximately update these two variables as
(13)  L ← L − η ∂ℒ/∂L,
(14)  N ← N − η ∂ℒ/∂N.
Here η is the learning rate.
Optimize W. The derivative of ℒ with respect to W is
∂ℒ/∂W = Y_4 + μ (N ∘ N + W − 9 Σ ∘ Σ).
Setting it to zero under the nonnegativity assumption on W, we get the updating rule
(15)  W = max( 9 Σ ∘ Σ − N ∘ N − Y_4/μ, 0 ),
where the maximum is taken entrywise.
All the algorithm parameters are set as follows: the weighting parameter is set to 0.2, while the penalty parameter μ_0, its growth factor ρ, the learning rate η, and the iteration number are set empirically by testing the algorithm on a series of OCT data to obtain the best denoising performance and the least running time. All of the parameters are fixed across all the experiments in this paper. We take the average of the pre-registered frames as the initialization of the denoised frame. For clarity, the entire iterative algorithm based on the above derivations is summarized in Alg. 1.
2.4 Evaluation criteria
To quantitatively evaluate the denoising performance of different approaches, we utilize three widely used image quality criteria as evaluation metrics for the final denoising results: the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM) [40], and the figure of merit (FOM) [41, 42].
PSNR.
In digital image recovery, PSNR has traditionally been widely used to assess the quality of a processed image L with respect to its ground truth I. PSNR is calculated as
(16)  PSNR = 10 log10( I_max² / MSE ),  MSE = (1/n) ‖L − I‖_F²,
where I_max = 2^b − 1 is the maximum intensity of b-bit images. For example, for the widely used 8-bit images, I_max = 255. The equation shows that PSNR intuitively describes the intensity difference between two images, and is smaller for low-quality recovered images. Empirically, typical PSNR for visually promising images is roughly between 25 dB and 40 dB.
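PSNR per Eq. (16) takes only a few lines (a sketch; function and argument names are ours):

```python
import numpy as np

def psnr(recovered, ground_truth, bits=8):
    """Peak signal-to-noise ratio in dB, with I_max = 2**bits - 1."""
    mse = np.mean((np.asarray(recovered, float)
                   - np.asarray(ground_truth, float)) ** 2)
    i_max = 2 ** bits - 1
    return 10.0 * np.log10(i_max ** 2 / mse)
```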
SSIM.
The structural similarity criterion was proposed in [40] to measure the structural similarity between two images. This criterion first selects two corresponding patch sets {l_i} and {x_i} from L and I respectively, with i = 1, …, P and P being the patch number, and then calculates the preliminary SSIM between each patch pair l_i and x_i as
(17)  SSIM(l_i, x_i) = ( (2 μ_{l_i} μ_{x_i} + c_1) (2 σ_{l_i x_i} + c_2) ) / ( (μ_{l_i}² + μ_{x_i}² + c_1) (σ_{l_i}² + σ_{x_i}² + c_2) ),
where μ_{l_i} and μ_{x_i} are respectively the average pixel intensities of patches l_i and x_i, σ_{l_i}² and σ_{x_i}² are the patches' variances, and σ_{l_i x_i} is the covariance between l_i and x_i. Besides, c_1 = (k_1 I_max)² and c_2 = (k_2 I_max)² are two constants, with k_1 and k_2 being two user-defined parameters whose default values are respectively 0.01 and 0.03. The final SSIM score between two images is the average of all the patches' preliminary SSIM scores. The SSIM score ranges from 0 to 1, and is higher when two images share more similar structural information. Compared to traditional metrics such as PSNR, which only reveal intensity differences, SSIM reflects the similarity in structural information of an image pair, and thus is closer to human perception.
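A per-patch SSIM following Eq. (17) can be sketched as below; in practice one would typically use a library implementation (e.g. scikit-image's), which additionally applies Gaussian window weighting:

```python
import numpy as np

def ssim_patch(p, q, i_max=255.0, k1=0.01, k2=0.03):
    """Preliminary SSIM between two corresponding patches, per Eq. (17).
    The final score averages this over all patch pairs."""
    c1, c2 = (k1 * i_max) ** 2, (k2 * i_max) ** 2
    mu_p, mu_q = p.mean(), q.mean()
    var_p, var_q = p.var(), q.var()
    cov = ((p - mu_p) * (q - mu_q)).mean()
    return ((2 * mu_p * mu_q + c1) * (2 * cov + c2)
            / ((mu_p ** 2 + mu_q ** 2 + c1) * (var_p + var_q + c2)))
```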
Edge preservation.
Further processing of denoised OCT images would likely involve segmentation of layers or identification of a particular image feature, so the preservation of edges in denoised OCT images is very important. Here we also adopt the figure of merit (FOM) [41, 42] to evaluate the edge-preservation abilities of various denoising methods. FOM is defined as
(18)  FOM = ( 1 / max(N_d, N_g) ) Σ_{i=1}^{N_d} 1 / (1 + α d_i²),
where N_d and N_g are respectively the numbers of detected edge pixels in the reconstructed image and the ground-truth image, d_i is the Euclidean distance between the i-th detected reconstructed edge pixel and its nearest ground-truth edge pixel, and α is a constant parameter typically set to 1/9. In this paper we use the Canny edge detector under the default parameter settings in Matlab. The FOM score ranges from 0 to 1, and is higher when the reconstructed image has clearer edges and is thus more similar to the ground-truth image.
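Pratt's FOM per Eq. (18) can be sketched with a distance transform over the ground-truth edge map (a sketch assuming boolean edge maps as input, e.g. from a Canny detector; names are ours):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(edge_rec, edge_gt, alpha=1.0 / 9.0):
    """Pratt's figure of merit. `edge_rec` and `edge_gt` are boolean
    edge maps; for each detected edge pixel, d_i is the Euclidean
    distance to the nearest ground-truth edge pixel, obtained here
    via a Euclidean distance transform."""
    n_rec, n_gt = edge_rec.sum(), edge_gt.sum()
    d = distance_transform_edt(~edge_gt)   # distance to nearest GT edge
    return np.sum(1.0 / (1.0 + alpha * d[edge_rec] ** 2)) / max(n_rec, n_gt)
```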
3 Experimental results
In this section, we test our denoising approach on two public OCT image sets, and compare our denoising results with those of several previously reported popular methods. Besides the visual comparison, we also perform quantitative comparison in terms of PSNR, SSIM, FOM and running time.
3.1 Results on the pig-eye data and performance comparison
In this experiment, we use the public pig-eye OCT dataset from [43], which is also the source data in [4]. The dataset was acquired with a Spectralis HRA OCT (Heidelberg Engineering) scanning a pig eye in high-speed mode with 768 A-scans. The dataset contains 455 images in total (each frame has 768×496 pixels), organized into 35 sets of 13 frames each, with the frames in one set sharing the same imaging position. The 35 positions together span a 0.384 mm shift in the transversal direction. For each frame, the pixel spacing is 3.87 μm in the axial direction and 14 μm in the transversal direction. To assess the quality of the recovered images, we need a noise-free benchmark image for reference. However, due to the high imaging speed, the captured images' SNR is very low. Therefore, we adopt the same technique as [4], in which the average of all 455 pre-registered frames is used as the latent noise-free image.
Since the proposed method is multiframe based, we first investigate the effect of the most important parameter, the number of input frames, on our algorithm. Fixing all the other parameters, we run a Matlab implementation of our method on a computer with an Intel E7500 2.93 GHz CPU, 4 GB RAM and 64-bit Windows 7, and compare the performance with the number of input frames ranging from 2 to 13. From the results in Fig. 2, we can see that our algorithm still works with only 2 frames, and the reconstruction quality gradually improves as the frame number increases from 2 to 8; this trend becomes much less obvious beyond 8 frames. Based on this observation, we use 8 input frames in the following experiments to compare our approach with other methods.
Next, we run our algorithm and four other popular denoising methods on the OCT dataset for comparison. The four methods are the complex diffusion method [44], the Bayesian method [17], NSC [18] and the multiframe wavelet OCT denoising method [4]. To validate the effectiveness of our approach, we compare all the algorithms' performance visually and quantitatively. Note that the complex diffusion method, the Bayesian method and NSC are all single-frame methods, so we take the average of the registered input frames as their single-frame input. Besides, the first two methods assume a spatially invariant noise parameter (standard deviation); correspondingly, we use the maximum of the estimated deviation matrix as their input noise standard deviation. The 8 sequential input frames were selected at random from the dataset.
The recovered images are shown in Fig. 3, where only the first frame is presented for each method. We can see that the recovered images of the complex diffusion method, the Bayesian method and NSC still contain undesired noise, which largely degrades the image quality and makes these three methods less competitive than the other two techniques. On the whole, the multiframe wavelet method and our method are both superior to the three single-frame approaches. Comparing the results of the two multiframe methods, we see that in smooth regions the wavelet method leaves more noise than ours, while in textured regions our result maintains higher contrast, which improves the visual quality. The contrast is even clearer in the close-ups: for example, in the region highlighted by the green rectangle, the wavelet method nearly blurs out the details of the white spot on the left side, while our method retains the grey-value variations that would provide important information for diagnosis.
Table 1. Quantitative comparison of denoising performance and running time on the pig-eye data.

Region        Metric            Input   Average  Diffusion  Bayesian  NSC*   Wavelet  Ours
Entire image  PSNR (dB)         17.19   24.56    29.14      28.38     29.82  30.75    31.74
              SSIM              0.12    0.45     0.73       0.70      0.81   0.86     0.91
              Running time (s)  —       —        79         33        2      60       36
Red clip      PSNR (dB)         15.03   22.02    26.60      26.07     27.47  27.85    28.92
              SSIM              0.06    0.29     0.65       0.63      0.71   0.73     0.81
              FOM               0.43    0.46     0.51       0.57      0.60   0.61     0.63
Green clip    PSNR (dB)         15.13   21.91    26.14      25.35     26.83  27.83    28.75
              SSIM              0.06    0.25     0.60       0.57      0.66   0.72     0.80
              FOM               0.48    0.49     0.51       0.53      0.58   0.58     0.58

* The performance of NSC was tested by its proposers on an AMD Athlon II X3 CPU with 8 GB of RAM and 64-bit Windows 7, using a Matlab/C++ implementation for high computational efficiency.
Numerical assessments are shown in Tab. 1. For the entire image, our method raises the noisy images' PSNR from 17.19 dB to 31.74 dB, and SSIM from 0.12 to 0.91. In terms of all three evaluation criteria (PSNR, SSIM and FOM), our method consistently holds advantages over the other methods. Comparing the two multiframe methods, namely the multiframe wavelet method and ours, our approach is superior in PSNR and SSIM by around 1 dB and 0.1, respectively. This superior performance is mainly attributed to two factors: the combined constraints from both the interframe and intraframe priors, and the good convergence of the derived algorithm. Comparing the numerical results on the two selected regions of interest, the advantages of our method over the others are even clearer. Moreover, the running time comparison is also provided in Tab. 1 (the noise estimation time is included for all four algorithms). We ran the Matlab codes of our algorithm and the other methods except NSC on our computer, while the running time of NSC was provided by its proposers, who ran their Matlab/C++ implementation on a different platform. Our approach needs around 36 s to process one frame, and is of similar efficiency to the Bayesian method, which is the fastest among the compared methods apart from NSC.
3.2 Analyzing OCT images of human subjects
To further test the practical denoising effectiveness of our method, we conduct an experiment on human retinal OCT images. We use the same public dataset as in [45], acquired with an SDOCT imaging system from Bioptigen Inc. with 4.5 µm axial resolution, 500 A-scans per B-scan and 5 azimuthally repeated B-scans in each volume. Following the processing procedure described in Sec. 2.1, we first register the OCT frames and then denoise them with the different methods. Since the anisotropic diffusion method, the Bayesian method and NSC leave too much noise in the recovered images and are thus hardly competitive with the two multiframe methods, here we only present the denoising results of the multiframe wavelet method and our method for a clearer comparison.
The results are shown in Fig. 4 and exhibit a performance ranking similar to that on the pig-eye data. Comparing the results of the multiframe wavelet method and our proposed method carefully, the wavelet result contains undesired edge burrs, whereas ours presents clearer layer boundaries (such as the horizontal layer edges in the two selected regions), which would greatly help follow-up analysis of the denoised images, such as OCT layer segmentation and diagnosis.
4 Conclusions and discussions
4.1 Summary and conclusions
In this study, we propose a multiframe OCT denoising method utilizing constraints from both interframe and intraframe priors. Specifically, the interframe prior refers to the low rank of the registered OCT frame stack, and the intraframe prior is the sparsity of the image gradient. Benefiting from proper convexification transformations and the use of ALM, the derived algorithm converges well on different data. Besides, by incorporating a nonparametric and nonuniform noise description, our approach is applicable to different noise models.
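The two priors enter the ALM iterations through their proximal operators: singular value thresholding for the low-rank (nuclear-norm) term, and elementwise soft thresholding for the sparse-gradient term. A minimal numpy sketch of these two building blocks (illustrative only; the function names are ours, and the full algorithm additionally handles the noise description and the multiplier updates):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(x, tau):
    """Elementwise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

In the frame-stack formulation, svt would act on the matrix whose columns are the vectorized registered frames, while soft would act on the image gradients of each frame.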
On the adopted benchmark data, our approach improves OCT image quality, raising the PSNR from 17.19 dB to 31.74 dB and the SSIM from 0.12 to 0.91, in around 36 s per frame. The comparisons with the four popular methods on both pig-eye and human retinal data reveal two main advantages of our method: (1) it attenuates speckle noise effectively while preserving crucial image details; (2) its efficiency is comparable to that of the fastest reported approaches. This performance mainly benefits from the combinational prior modeling and the effective optimization algorithm.
4.2 Limitations and future extensions
The performance of our method depends on the registration accuracy, because the low-rank prior in the objective function does not hold for an unaligned frame stack. This is also a challenge for other multiframe denoising methods, and needs to be addressed by progress in noise-robust matching techniques. Besides, a larger frame number is favourable for exploiting the low-rank prior. Therefore, in practical applications one needs to set the system's frame rate so as to balance the noise level against the number of available frames.
In addition, the widely used anisotropic total variation is utilized as the intraframe prior, which penalizes diagonal gradients more heavily than horizontal and vertical ones. The constraint on image intensity changes is thus nonuniform across directions, which may introduce undesired artifacts in the denoised images. Exploring an isotropic intraframe prior is therefore one of our planned extensions.
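The directional bias can be seen in a small numpy sketch comparing the two penalties (our own illustration, not the paper's implementation): for a unit diagonal step with gradients dx = dy = 1, the anisotropic penalty |dx| + |dy| = 2 exceeds the isotropic sqrt(dx² + dy²) = √2, so diagonal edges cost more under the anisotropic prior.

```python
import numpy as np

def tv(img, isotropic=False):
    """Total variation with forward differences, cropped to a common support."""
    dx = np.diff(img, axis=1)[:-1, :]  # horizontal gradients
    dy = np.diff(img, axis=0)[:, :-1]  # vertical gradients
    if isotropic:
        # rotation-invariant: all edge orientations penalized equally
        return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))
    # anisotropic: separable |dx| + |dy|, heavier on diagonal edges
    return float(np.sum(np.abs(dx) + np.abs(dy)))

# A unit diagonal step: dx = dy = 1 at the single interior pixel.
step = np.array([[0.0, 1.0], [1.0, 2.0]])
```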