A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head

09/27/2018
by Sripad Krishna Devalla, et al.

Purpose: To develop a deep learning approach to denoise optical coherence tomography (OCT) B-scans of the optic nerve head (ONH). Methods: Volume scans consisting of 97 horizontal B-scans were acquired through the center of the ONH using a commercial OCT device (Spectralis) for both eyes of 20 subjects. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. A custom deep learning network was then designed and trained with 2,328 "clean B-scans" (multi-frame B-scans) and their corresponding "noisy B-scans" (clean B-scans + Gaussian noise) to denoise the single-frame B-scans. The performance of the denoising algorithm was assessed qualitatively, and quantitatively on 1,552 B-scans using the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and mean structural similarity index measure (MSSIM). Results: The proposed algorithm successfully denoised unseen single-frame OCT B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean SNR increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The MSSIM increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Conclusions: Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior quality OCT B-scans with reduced scanning times and minimal patient discomfort.


1 Introduction

In recent years, optical coherence tomography (OCT) imaging has become a well-established clinical tool for assessing optic nerve head (ONH) tissues, and for monitoring many ocular [1, 2] and neuro-ocular pathologies [3]. However, despite several advancements in OCT technology [4], the quality of B-scans is still hampered by speckle noise [5, 6, 7, 8, 9, 10, 11], low signal strength [12], blink [12, 13] and motion artefacts [12, 14].

Specifically, the granular pattern of speckle noise deteriorates the image contrast, making it difficult to resolve small and low-intensity structures (e.g., sub-retinal layers) [5, 6, 7], thus affecting the clinical interpretation of OCT data. Poor image contrast can also lead to automated segmentation errors [15, 16, 17] and incorrect tissue thickness estimation [18], potentially affecting clinical decisions. For instance, segmentation errors in retinal nerve fiber layer (RNFL) thickness can lead to over-/under-estimation of glaucoma [19].

Currently, there exist many hardware [20, 21, 22, 23, 24, 25, 26, 27, 28] and software schemes [28, 29, 30] to denoise OCT B-scans. Hardware approaches offer robust noise suppression through frequency compounding [25, 26, 27, 28] and multi-frame averaging (spatial compounding) [20, 21, 22, 23, 24]. While multi-frame averaging techniques have been shown to enhance image quality and presentation [29, 30], they are sensitive to registration errors [30] and require longer scanning times [31]. Moreover, elderly patients often experience discomfort and strain [32] when they must remain fixated for long durations [32, 33]. Software techniques, on the other hand, attempt to denoise through numerical algorithms [5, 6, 7, 8, 9, 10, 11] or filtering techniques [34, 35, 36]. However, registration errors [37], computational complexity [5, 38, 39, 40], and sensitivity to the choice of parameters [41] limit their usage in the clinic.

In this study, we propose a deep learning approach to denoise OCT B-scans. We aimed to obtain multi-frame quality B-scans (i.e. signal-averaged) from single-frame (without signal averaging) B-scans of the ONH. We hope to offer a denoising framework to obtain superior quality B-scans, with reduced scanning duration and minimal patient discomfort.

2 Methods

2.1 Patient Recruitment

A total of 20 healthy subjects were recruited at the Singapore National Eye Centre. All subjects gave written informed consent. This study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board of the hospital. The inclusion criteria for healthy subjects were: an intraocular pressure (IOP) less than 21 mmHg, and healthy optic nerves with a vertical cup-disc ratio (VCDR) less than or equal to 0.5.

2.2 Optical Coherence Tomography Imaging

The subjects were seated and imaged under dark room conditions by a single operator (TAT). A spectral-domain OCT device (Spectralis, Heidelberg Engineering, Heidelberg, Germany) was used to image both eyes of each subject. Each OCT volume consisted of 97 horizontal B-scans (384 A-scans per B-scan), covering a rectangular area of 15° x 10° centered on the ONH. For each eye, single-frame (without signal averaging) and multi-frame (75x signal averaging) volume scans were obtained. Enhanced depth imaging (EDI) [42] and eye tracking [43, 44] modalities were used during acquisition. From all the subjects, we obtained a total of 3,880 B-scans for each type of scan (single-frame or multi-frame).

2.3 Volume Registration

The multi-frame volumes were reoriented to align with the single-frame volumes through rigid translation/rotation transformations using 3D software (Amira, version 5.6; FEI). This registration was performed using a voxel-based algorithm that maximized the mutual information between the two volumes [45]. Registration was essential to quantitatively validate corresponding regions between the denoised and multi-frame B-scans. Note that the Spectralis follow-up mode was not used in this study: although it allows re-scanning of the same region by identifying previous scan locations, in many cases it can distort B-scans and thus produce unrealistic tissue structures in the new scan.
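
For readers wishing to reproduce this step with open-source tools, the sketch below performs a rigid, mutual-information-driven volume registration with SimpleITK. It is an illustrative stand-in for the Amira pipeline used in the study: the file names, optimizer, and parameter values are assumptions.

```python
import SimpleITK as sitk

# Hypothetical inputs: single-frame (fixed) and multi-frame (moving) OCT volumes.
fixed = sitk.ReadImage("single_frame_volume.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("multi_frame_volume.nii", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
# Voxel-based metric: maximize mutual information between the two volumes.
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Rigid transform (translation/rotation only), initialized at the volume centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```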

2.4 Deep Learning Based Denoising

While deep learning has shown promising segmentation [46, 47, 48, 49], classification [50, 51, 52], and denoising [53, 54, 55] applications in medical imaging modalities such as magnetic resonance imaging (MRI), its application to OCT imaging is still in its infancy [56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67]. Although recent deep learning studies have demonstrated successful segmentation [56, 57, 58, 59, 60, 61, 62, 63, 64] and classification [65, 66, 67] applications in OCT imaging, to the best of our knowledge no study has yet assessed the denoising of OCT B-scans. We believe a denoising framework would not only increase the reliability of clinical information in single-frame B-scans, but also improve the robustness of segmentation and classification tools.

In this study, we developed a fully convolutional neural network, inspired by our earlier DRUNET architecture [63], to denoise single-frame OCT B-scans of the ONH. It leverages the inherent advantages of U-Net [68], residual learning [69], dilated convolutions [70], and multi-scale hierarchical feature extraction [71] to obtain multi-frame quality B-scans. Briefly, the U-Net and its skip connections helped the network learn both the local (tissue texture) and contextual information (spatial arrangement of tissues). The contextual information was further exploited using dilated convolution filters. Residual connections improved the flow of gradient information through the network, and multi-scale hierarchical feature extraction helped restore tissue boundaries in the B-scans.


2.5 Network Architecture

The network was composed of a downsampling and an upsampling tower, connected to each other via skip connections (Figure 1). Each tower consisted of one standard block and two residual blocks. Both the standard and the residual blocks comprised two dilated convolution layers (64 filters; size = 3x3). A 3x3 convolution layer was used to implement the identity connection in the residual block.

Figure 1: The architecture comprised two towers: (1) a downsampling tower, to capture the contextual information (i.e., spatial arrangement of the tissues), and (2) an upsampling tower, to capture the local information (i.e., tissue texture). Each tower consisted of two types of blocks: (1) a standard block and (2) a residual block. The latent space was implemented as a standard block. The multi-scale hierarchical feature extraction unit helped better recover tissue edges eroded by speckle noise. The network consisted of 900k trainable parameters.
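
As an illustration only, the two block types just described could be written as follows in PyTorch. The 64-filter 3x3 dilated convolutions, the 3x3 identity-path convolution, and the batch-normalization/ELU placement follow the descriptions in this section and below; anything else (e.g., padding choices) is an assumption.

```python
import torch
import torch.nn as nn

class StandardBlock(nn.Module):
    """Two dilated 3x3 convolution layers (64 filters each), ELU-activated."""
    def __init__(self, in_ch, dilation=1):
        super().__init__()
        pad = dilation  # keeps spatial size constant for 3x3 kernels
        self.conv1 = nn.Conv2d(in_ch, 64, 3, padding=pad, dilation=dilation)
        self.conv2 = nn.Conv2d(64, 64, 3, padding=pad, dilation=dilation)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.conv2(self.act(self.conv1(x))))

class ResidualBlock(nn.Module):
    """Same two dilated convolutions plus a 3x3 identity-path convolution;
    feature maps are batch-normalized and ELU-activated before the addition."""
    def __init__(self, in_ch, dilation=2):
        super().__init__()
        pad = dilation
        self.conv1 = nn.Conv2d(in_ch, 64, 3, padding=pad, dilation=dilation)
        self.conv2 = nn.Conv2d(64, 64, 3, padding=pad, dilation=dilation)
        self.skip = nn.Conv2d(in_ch, 64, 3, padding=1)  # identity connection
        self.bn1, self.bn2 = nn.BatchNorm2d(64), nn.BatchNorm2d(64)
        self.act = nn.ELU()

    def forward(self, x):
        y = self.act(self.bn1(self.conv1(x)))
        y = self.act(self.bn2(self.conv2(y)))
        return y + self.skip(x)
```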

In the downsampling tower, an input B-scan (size: 496x384) was fed to a standard block (dilation rate: 1) followed by two residual blocks (dilation rate: 2 and 4, respectively). A convolution layer (64 filters; size = 3x3; stride = 2) after every block sequentially reduced the dimensionality, enabling the network to understand the contextual information.


The latent space was implemented as a standard block (dilation rate: 1) to transfer the feature maps from the downsampling to the upsampling tower.

The upsampling tower helped the network capture the local information. It consisted of two residual blocks (dilation rate: 4) and a standard block (dilation rate: 1). After every block, a transpose convolution layer (64 filters; size = 3x3; stride = 2) was used to restore the B-scan sequentially to its original dimension.

Multi-scale hierarchical feature extraction [71] helped recover tissue boundaries eroded by speckle noise in the single-frame B-scans. It was implemented by passing the feature maps at each downsampling level through a convolution layer (64 filters; size = 1x1), followed by a transpose convolution layer (64 filters; size = 3x3) to restore the original B-scan resolution. The restored maps were then concatenated with the output feature maps from the upsampling tower.

Finally, the concatenated feature maps were fed to the output convolution layer (1 filter; size = 1x1), followed by a pixel-wise hyperbolic tangent (tanh) activation to produce a denoised OCT B-scan.
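
Continuing the sketch, the blocks above could be assembled as follows. The exact skip-connection wiring, the placement of each upsampling step relative to its block, and the use of chained stride-2 transpose convolutions in the multi-scale pathway are assumptions made for illustration; the parameter count of this sketch will therefore not match the 900k of the actual network exactly.

```python
def _up():
    # Stride-2 transpose convolution that exactly doubles height and width.
    return nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1)

class DenoisingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling tower (dilation rates 1, 2, 4); stride-2 conv after each block.
        self.d_blocks = nn.ModuleList(
            [StandardBlock(1, 1), ResidualBlock(64, 2), ResidualBlock(64, 4)])
        self.pools = nn.ModuleList(
            [nn.Conv2d(64, 64, 3, stride=2, padding=1) for _ in range(3)])
        self.latent = StandardBlock(64, 1)
        # Upsampling tower (dilation rates 4, 4, 1); channel count doubles after
        # concatenation with the same-resolution skip features.
        self.u_blocks = nn.ModuleList(
            [ResidualBlock(128, 4), ResidualBlock(128, 4), StandardBlock(128, 1)])
        self.ups = nn.ModuleList([_up() for _ in range(3)])
        # Multi-scale pathway: 1x1 conv per downsampling level, then transpose
        # convolutions back to the original resolution.
        self.ms_conv = nn.ModuleList([nn.Conv2d(64, 64, 1) for _ in range(3)])
        self.ms_up = nn.ModuleList(
            [nn.Sequential(*[_up() for _ in range(k + 1)]) for k in range(3)])
        self.out = nn.Conv2d(64 + 3 * 64, 1, 1)  # output convolution (1 filter, 1x1)

    def forward(self, x):  # x: (N, 1, 496, 384)
        skips, levels = [], []
        for block, pool in zip(self.d_blocks, self.pools):
            x = block(x)
            skips.append(x)      # full/half/quarter-resolution features
            x = pool(x)
            levels.append(x)     # downsampled maps for the multi-scale pathway
        x = self.latent(x)
        for block, up, skip in zip(self.u_blocks, self.ups, reversed(skips)):
            x = up(x)                            # double height and width
            x = block(torch.cat([x, skip], 1))   # fuse with same-resolution skip
        ms = [u(c(l)) for c, u, l in zip(self.ms_conv, self.ms_up, levels)]
        return torch.tanh(self.out(torch.cat([x] + ms, 1)))  # pixel-wise tanh
```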

In both towers, all layers except the final output layer were activated by an exponential linear unit (ELU) [72]. In addition, in each residual block, the feature maps were batch normalized [73] and ELU activated before the residual addition.

The proposed network comprised 900,000 trainable parameters. It was trained end-to-end using the Adam optimizer [74], with the mean absolute error as the loss function. We trained and tested the proposed network on an NVIDIA GTX 1080 Founders Edition GPU with CUDA v8.0 and cuDNN v5.1 acceleration. With this hardware configuration, each single-frame OCT B-scan was denoised in under 20 ms.
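
A minimal training loop consistent with the stated configuration (Adam optimizer, mean absolute error loss) is sketched below using the DenoisingNet class above. The learning rate, batch size, and epoch count are not stated in the text and are placeholders; the random tensors stand in for real 'noisy'/'clean' B-scan pairs scaled to the output range of the tanh activation.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: (N, 1, 496, 384) 'noisy' inputs and 'clean' targets.
noisy = torch.randn(16, 1, 496, 384)
clean = torch.randn(16, 1, 496, 384)
loader = DataLoader(TensorDataset(noisy, clean), batch_size=2, shuffle=True)

model = DenoisingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate assumed
criterion = torch.nn.L1Loss()  # mean absolute error, as stated above

for epoch in range(2):  # epoch count assumed
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```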


2.6 Training and Testing of the Network

From the dataset of 3,880 B-scans, 2,328 (from both eyes of 12 subjects) were used as part of the training dataset. The training set consisted of ‘clean’ B-scans and their corresponding ‘noisy’ versions. The ‘clean’ B-scans were simply the multi-frame (75x signal averaging) B-scans. The ‘noisy’ B-scans were generated by adding Gaussian noise to the respective ‘clean’ B-scans (Figure 2).
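
Generating such a training pair is straightforward; the sketch below shows one way to do it in NumPy. The noise mean and standard deviation are placeholders, as the exact values used in the study are not reproduced here.

```python
import numpy as np

def make_training_pair(clean_bscan, mu=0.0, sigma=0.1):
    """Return a ('noisy', 'clean') pair; `mu` and `sigma` are placeholder
    noise parameters, not the values used in the study."""
    clean = clean_bscan.astype(np.float32) / 255.0  # assumes an 8-bit input
    noise = np.random.normal(mu, sigma, clean.shape).astype(np.float32)
    return np.clip(clean + noise, 0.0, 1.0), clean
```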

The testing set consisted of 1,552 single-frame B-scans (from both eyes of 8 subjects) to be denoised. We ensured that scans from the same subject were not used in both the training and testing sets.

2.7 Data Augmentation

An exhaustive offline data augmentation was done to circumvent the scarcity of training data. We used elastic deformations [75, 63], rotations (clockwise and anti-clockwise), occluding patches [63], and horizontal flipping for both ‘clean’ and ‘noisy’ B-scans. Briefly, elastic deformations were used to produce the combined effects of shearing and stretching, in an attempt to make the network invariant to atypical morphologies (as seen in glaucoma [76]). Ten occluding patches of size 60 x 20 pixels were added at random locations to non-linearly reduce (pixel intensities multiplied by a random factor between 0.2 and 0.8) the visibility of the ONH tissues. This was done to make the network invariant to the blood vessel shadows that are common in OCT B-scans [77]; a sketch of this step is shown below. Note that a full description of our data augmentation approach can be found in our previous paper [63].
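
As an illustration, the occluding-patch step could be implemented as follows; the patch orientation (60 rows x 20 columns) is an assumption, and the same patches would be applied to a ‘clean’ B-scan and its ‘noisy’ counterpart.

```python
import numpy as np

def add_occluding_patches(bscan, n_patches=10, size=(60, 20), rng=None):
    """Multiply random 60x20-pixel regions by a factor in [0.2, 0.8] to
    mimic blood vessel shadows."""
    if rng is None:
        rng = np.random.default_rng()
    out = bscan.copy()
    h, w = out.shape
    for _ in range(n_patches):
        r = rng.integers(0, h - size[0])
        c = rng.integers(0, w - size[1])
        out[r:r + size[0], c:c + size[1]] *= rng.uniform(0.2, 0.8)
    return out
```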

Using data augmentation, we were able to generate a total of 23,280 ‘clean’ and 23,280 corresponding ‘noisy’ B-scans that were added to the training dataset. An example of data augmentation performed on a single ‘clean’ and corresponding ‘noisy’ B-scan is shown in Figure 2.

Figure 2: An exhaustive offline data augmentation was done to circumvent the scarcity of training data. (A-E) represent the original and the data augmented ‘clean’ B-scans (multi-frame). (F-J) represent the same for the corresponding ‘noisy’ B-scans. The occluding patches (B and G; red boxes) were added to make the network robust in the presence of blood vessel shadows. Elastic deformations (C and H; cyan boxes) were used to make the network invariant to atypical morphologies. A total of 23,280 B-scans of each type (clean/noisy) were generated from 2,328 baseline B-scans.

2.8 Denoising Performance – Qualitative Analysis

All denoised single-frame B-scans were manually reviewed by expert observers (SD & GS) and qualitatively compared against their corresponding multi-frame B-scans.

2.9 Denoising Performance – Quantitative Analysis

The following image quality metrics were used to assess the denoising performance of the proposed algorithm: (1) signal-to-noise ratio (SNR); (2) contrast-to-noise ratio (CNR); and (3) mean structural similarity index measure (MSSIM) [78]. These metrics were computed for the single-frame, multi-frame, and denoised OCT B-scans (all from the testing set; 1,552 B-scans of each type).

The SNR (expressed in dB) was a measure of signal strength relative to noise. It was defined as:

$$\mathrm{SNR} = 10\log_{10}\left(\frac{\sum_{i,j} X(i,j)^2}{\sum_{i,j}\left[X(i,j) - Y(i,j)\right]^2}\right)$$

where $X$ is the ‘clean’ (multi-frame) B-scan, and $Y$ the B-scan to be compared with (either the ‘noisy’ [single-frame] or the denoised B-scan). A high SNR value indicates low noise in the given B-scan with respect to the ‘clean’ B-scan.
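
Under the reconstructed definition above, the SNR can be computed directly with NumPy:

```python
import numpy as np

def snr_db(clean, test):
    """SNR (dB) of `test` (single-frame or denoised B-scan) relative to the
    'clean' multi-frame B-scan."""
    clean = clean.astype(np.float64)
    test = test.astype(np.float64)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - test) ** 2))
```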

The CNR was a measure of contrast difference between different tissue layers. It was defined as:

$$\mathrm{CNR}_i = \frac{\mu_i - \mu_b}{\sqrt{\sigma_i^2 + \sigma_b^2}}$$

where $\mu_i$ and $\sigma_i^2$ denoted the mean and variance of pixel intensity for a chosen ROI within the tissue $i$ in a given B-scan, while $\mu_b$ and $\sigma_b^2$ represented the same for the background ROI. The background ROI was chosen as a 20 x 384 (in pixels) region at the top of the image (within the vitreous). A high CNR value suggested enhanced visibility of the given tissue.
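
A direct implementation of this CNR definition, with ROIs given as NumPy slices, might look like the following:

```python
import numpy as np

def cnr(bscan, tissue_roi, background_roi):
    """CNR between a tissue ROI and the background ROI (tuples of slices)."""
    t = bscan[tissue_roi].astype(np.float64)
    b = bscan[background_roi].astype(np.float64)
    return (t.mean() - b.mean()) / np.sqrt(t.var() + b.var())

# Example: background ROI of 20 x 384 pixels at the top of the image (vitreous).
background = (slice(0, 20), slice(0, 384))
# tissue = (slice(r, r + 8), slice(c, c + 8))  # one of the 8x8 tissue ROIs
```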

The CNR was computed for the following tissues: (1) RNFL; (2) ganglion cell layer + inner plexiform layer (GCL + IPL); (3) all other retinal layers; (4) retinal pigment epithelium (RPE); (5) peripapillary choroid; (6) peripapillary sclera; and (7) lamina cribrosa (LC). Note that the CNR was computed only in the visible portions of the peripapillary sclera and LC. For each tissue, the CNR was computed as the mean of twenty-five ROIs (8x8 pixels each) in a given B-scan.

The structural similarity index measure (SSIM) [78] was computed to assess the changes in tissue structures (i.e., edges) between the single-frame/denoised B-scans and the corresponding multi-frame B-scans (ground truth). The SSIM was defined between -1 and +1, where -1 represented ‘no similarity’ and +1 ‘perfect similarity’. It was defined as:

$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

where x and y represented the denoised and multi-frame B-scans respectively; $\mu_x$ and $\sigma_x$ denoted the mean intensity and standard deviation of the chosen ROI in B-scan x, while $\mu_y$ and $\sigma_y$ represented the same for B-scan y; $\sigma_{xy}$ represented the cross-covariance of the ROIs in B-scans x and y. The constants $C_1$ and $C_2$ (used to stabilize the division) were chosen as 6.50 and 58.52, as recommended in a previous study [78].

The MSSIM was computed as the mean of the SSIM values from ROIs (8x8 pixels each) across a B-scan (stride = 1; scanned horizontally). It was defined as:

$$\mathrm{MSSIM}(X,Y) = \frac{1}{M}\sum_{j=1}^{M}\mathrm{SSIM}(x_j, y_j)$$

where $x_j$ and $y_j$ denote the contents of the j-th ROI in B-scans X and Y, and M is the total number of ROIs.

Note that the SNR and MSSIM were computed for an entire B-scan, as opposed to the CNR, which was computed for individual tissues.
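
A direct, unoptimized NumPy implementation of the SSIM and MSSIM definitions above is sketched below. Note that it slides the 8x8 window in both directions with stride 1, a slight generalization of the horizontal scanning described above, and is slow on full-size B-scans.

```python
import numpy as np

def ssim_patch(x, y, c1=6.50, c2=58.52):
    """SSIM for one pair of ROIs, with the stabilizing constants from [78]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # vx = sigma_x^2, vy = sigma_y^2
    cov = ((x - mx) * (y - my)).mean()   # cross-covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mssim(x, y, win=8):
    """Mean SSIM over all win x win ROIs (stride 1)."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    h, w = x.shape
    vals = [ssim_patch(x[i:i + win, j:j + win], y[i:i + win, j:j + win])
            for i in range(h - win + 1) for j in range(w - win + 1)]
    return float(np.mean(vals))
```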

3 Results

3.1 Qualitative Analysis

When trained with the ‘clean’ (multi-frame) B-scans and the corresponding ‘noisy’ B-scans, our network was able to successfully denoise unseen single-frame B-scans. The single-frame, denoised, and multi-frame B-scans for a healthy subject are shown in Figure 3. In all cases, the denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans (Figure 4). Overall, the visibility of all ONH tissues was prominently enhanced (Figure 3B).

Figure 3: Single-frame (A), denoised (B), and multi-frame (C) B-scans for a healthy subject. The denoised B-scan is qualitatively similar to its corresponding multi-frame B-scan. Specifically, the visibility of the retinal layers, choroid, and lamina cribrosa was prominently improved, and sharp, clear boundaries were obtained for the retinal layers and the choroid-scleral interface.
Figure 4: Single-frame, denoised and multi-frame B-scans for four healthy subjects (1-4) are shown. In all cases, the denoised B-scans (2nd column) were consistently similar (qualitatively) to their corresponding multi-frame B-scans (3rd column).

3.2 Quantitative Analysis

On average, we observed a two-fold increase in SNR upon denoising. Specifically, the mean SNR for the unseen single-frame/denoised B-scans was 4.02 ± 0.68 dB / 8.14 ± 1.03 dB, respectively, when computed against their respective multi-frame B-scans.

In all cases, the multi-frame B-scans offered a higher CNR than their corresponding single-frame B-scans. Further, the denoised B-scans consistently offered a higher CNR than the single-frame B-scans for all tissues: across all ONH tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The tissue-by-tissue mean CNR values for the single-frame/denoised/multi-frame B-scans (RNFL; GCL + IPL; all other retinal layers; RPE; choroid; sclera; LC) are reported in Table 1.

On average, our denoising approach offered a five-fold increase in MSSIM. Specifically, the mean MSSIM for the single-frame/denoised B-scans was 0.13 ± 0.02 / 0.65 ± 0.03, when computed against their respective multi-frame B-scans.

4 Discussion

In this study, we present a custom deep learning approach to denoise single-frame OCT B-scans of the ONH. When trained with the ‘clean’ (multi-frame) and the corresponding ‘noisy’ B-scans, our network denoised unseen single-frame B-scans. The proposed network leveraged the inherent advantages of U-Net, residual learning, and dilated convolutions [63]. Further, the multi-scale hierarchical feature extraction [71] pathway helped the network recover ONH tissue boundaries degraded by speckle noise. Having trained our network and tested it on 1,552 single-frame OCT B-scans of the ONH, we observed consistently higher SNR and CNR values for all ONH tissues, and a consistent five-fold increase in MSSIM, in the denoised B-scans. Thus, we may be able to offer a robust deep learning framework to obtain superior quality OCT B-scans with reduced scanning duration and minimal patient discomfort.

Using the proposed network, we obtained denoised B-scans that were qualitatively similar to their corresponding multi-frame B-scans (Figures 3 and 4), owing to the reduction in noise levels. The mean SNR for the denoised B-scans was 8.14 ± 1.03 dB, a two-fold improvement (reduction in noise level) over the 4.02 ± 0.68 dB obtained for the single-frame B-scans, thus offering enhanced visibility of the ONH tissues. Given the significance of the neural (retinal layers) [79, 80, 81, 82, 83] and connective tissues (sclera and LC) [84, 85, 86, 87, 88] in ocular pathologies such as glaucoma [2] and age-related macular degeneration [89], their enhanced visibility is critical in a clinical setting. Furthermore, reduced noise levels would likely increase the robustness of alignment/registration algorithms used to monitor structural changes over time [18]. This is crucial for the management of multiple ocular pathologies [90, 91].

In denoised B-scans (vs single-frame B-scans), we consistently observed higher contrast across tissues. Our approach enhanced the visibility of small (e.g., RPE and photoreceptors) and low-intensity tissues (e.g., GCL and IPL; Figure 3B). For all tissues, the mean CNR increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). Since existing automated segmentation algorithms rely on high contrast, we believe that our approach could potentially reduce the likelihood of segmentation errors that are relatively common in commercial algorithms [15, 16, 17, 92]. For instance, incorrect segmentation of the RNFL can lead to inaccurate thickness measurements, leading to under-/over-estimation of glaucoma [19]. By using the denoising framework as a precursor to automated segmentation/thickness measurement, we could increase the reliability [93] of such clinical tools.

Upon denoising, we observed a five-fold increase in MSSIM (single-frame/denoised: 0.13 ± 0.02 / 0.65 ± 0.03) when validated against the multi-frame B-scans. The preservation of features and structural information plays an important role in accurately measuring cellular-level disruption to determine retinal pathology. For instance, the measurement of ellipsoid zone (EZ) disruption [94] provides insight into the photoreceptor structure, which is significant in pathologies such as diabetic retinopathy [95], macular hole [96], macular degeneration [97], and ocular trauma [98]. Existing multi-frame averaging techniques [30] significantly enhance and preserve the integrity of the structural information by suppressing speckle noise [31, 40, 41, 42]. However, they are limited by a major clinical challenge: the inability of patients to remain fixated for long scanning times [32, 33] and the resultant discomfort [32].

In this study, we propose a methodology to significantly reduce scanning time while enhancing OCT signal quality. In our healthy subjects, it took on average 3.5 min to capture a ‘clean’ (multi-frame) volume, and 25 s for a ‘noisy’ (single-frame) volume. Since we can denoise a single B-scan in 20 ms (or about 2 s for a volume of 97 B-scans), we can theoretically generate a denoised OCT volume in about 27 s (acquisition of the ‘noisy’ volume [25 s] + denoising [2 s]). Thus, we may be able to reduce the scanning duration more than 7-fold while maintaining superior image quality.

Besides speckle noise, patient-dependent factors such as cataract [99, 100, 101, 102] and/or a lack of tear film in dry eye can significantly diminish OCT scan quality [12, 99, 100, 101, 102, 103]. While lubricating eye drops and frequent blinking can instantly improve image quality for patients with corneal drying [103, 104], the detrimental effects of cataract on OCT image quality may be reduced only if cataract surgery is performed [12, 99, 100]. Moreover, pupillary dilation may be needed, especially in subjects with small pupils, to obtain acceptable-quality B-scans [12, 105], which is crucial in the monitoring of glaucoma [105]. Pupillary dilation is also time-consuming and may cause patient discomfort [106]. It is plausible that the proposed framework, when extended, could address the aforementioned factors that limit image quality, avoiding the need for additional clinical procedures.

In this study, several limitations warrant further discussion. First, the proposed network was trained and tested only on B-scans from one device (Spectralis). Every commercial OCT device has its own proprietary algorithm to pre-process the raw OCT data, potentially presenting a noise distribution different from what our network was trained with. Hence, we are unsure of our network’s performance on other devices. Nevertheless, we offer a proof of concept which could be validated by other groups on multiple commercial OCT devices.

Second, we were unable to train our network with a speckle noise model representative of the Spectralis device. Such a model is currently not provided by the manufacturer and would be extremely hard to reverse-engineer, because information about the pre- and post-processing applied to the OCT signal is also not provided. While several OCT denoising studies assume a Rayleigh [8] or generalized Gamma [40] distribution to describe speckle noise, we observed that these were ill-suited for our network. From our experiments, the best denoising performance was obtained when our network was trained with a simple Gaussian noise model. It is possible that a thorough understanding of the raw noise distribution, prior to the custom pre-processing on the OCT device, could improve the performance of our network. We aim to test this hypothesis with a custom-built OCT system in future work.

Third, while we have discussed the need for reliable clinical information from poor-quality OCT scans, which could be critical for the diagnosis and management of ocular pathology (e.g., glaucoma), we have yet to test the network's performance on pathological B-scans.

Fourth, we observed that the SNR and CNR metrics were higher for the denoised B-scans than for their corresponding multi-frame B-scans. This could be attributed to over-smoothing (or blurring) of tissue textures that was consistently present in the denoised B-scans. We are currently exploring other deep learning techniques to recover the B-scan sharpness that is lost during denoising.

Fifth, we were unable to provide further validation of our algorithm by comparing our outputs to histology data. Such a validation would be extremely difficult, as one would need to first image a human ONH with OCT, process with histology, and register both datasets. Furthermore, while we believe our algorithm is able to restore tissue texture accurately (when comparing denoised B-scans with multi-frame B-scans), an exact validation of our approach is not possible. Long fixation times in obtaining the multi-frame B-scans lead to subtle motion artifacts (eye movements caused by microsaccades or unstable fixation) [43], displaced optic disc center [107], and axial misalignment [12], causing minor registration errors between the single-frame and multi-frame B-scans, thus preventing an exact comparison between the denoised B-scans and the multi-frame B-scans.

Finally, no quantitative measurements were performed on the denoised images to assess differences in tissue morphology between the denoised and multi-frame B-scans. Undertaking this work in the future could increase the clinical relevance of the denoised B-scans.

In conclusion, we have developed a custom deep learning approach to denoise single-frame OCT B-scans. With the proposed network, we were able to denoise a single-frame OCT B-scan in under 20 ms. We hope that the proposed framework could resolve the current trade-off in obtaining reliable and superior quality scans, with reduced scanning times and minimal patient discomfort. Finally, we believe that our approach may be helpful for low-cost OCT devices, whose noisy B-scans may be enhanced by artificial intelligence (as opposed to expensive hardware) to the same quality as in current commercial devices.


Acknowledgments

Singapore Ministry of Education Academic Research Funds Tier 1 (R-155-000-168-112 [AHT]; R-397-000-294-114 [MJAG]); National University of Singapore (NUS) Young Investigator Award Grant (NUSYIAFY16P16; R-155-000-180-133 [AHT]); National University of Singapore Young Investigator Award Grant (NUSYIAFY13P03; R-397-000-174-133 [MJAG]); Singapore Ministry of Education Academic Research Funds Tier 2 (R-397-000-280-112 [MJAG]); National Medical Research Council (Grant NMRC/STAR/0023/2014 [TA]).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

  • [1] Mehreen Adhi and Jay S. Duker. Optical coherence tomography – current and future applications. Current opinion in ophthalmology, 24(3):213–221, 2013.
  • [2] Igor I. Bussel, Gadi Wollstein, and Joel S. Schuman. Oct for glaucoma diagnosis, screening and detection of glaucoma progression. British Journal of Ophthalmology, 98(Suppl 2):ii15, 2014.
  • [3] Ramiro S. Maldonado, Pradeep Mettu, Mays El-Dairi, and M. Tariq Bhatti. The application of optical coherence tomography in neurologic diseases. Neurology: Clinical Practice, 5(5):460–469, 2015.
  • [4] M. E. van Velthoven, D. J. Faber, F. D. Verbraak, T. G. van Leeuwen, and M. D. de Smet. Recent developments in optical coherence tomography for imaging the retina. Prog Retin Eye Res, 26(1):57–77, 2007.
  • [5] Yongzhao Du, Gangjun Liu, Guoying Feng, and Zhongping Chen. Speckle reduction in optical coherence tomography images based on wave atoms. Journal of Biomedical Optics, 19(5), May 2014.
  • [6] M. Bashkansky and J. Reintjes. Statistics and reduction of speckle in optical coherence tomography. Optics Letters, 25(8):545–547, 2000.
  • [7] Joseph M. Schmitt, S. H. Xiang, and Kin Man Yung. Speckle in optical coherence tomography. Journal of Biomedical Optics, 4(1), January 1999.
  • [8] Ahmadreza Baghaie, Zeyun Yu, and Roshan M. D’Souza. State-of-the-art in retinal optical coherence tomography image analysis. Quantitative Imaging in Medicine and Surgery, 5(4):603–617, 2015.
  • [9] M. Szkulmowski, I. Gorczynska, D. Szlag, M. Sylwestrzak, A. Kowalczyk, and M. Wojtkowski. Efficient reduction of speckle noise in optical coherence tomography. Opt Express, 20(2):1337–59, 2012.
  • [10] Mahdad Esmaeili, Alireza Mehri Dehnavi, Hossein Rabbani, and Fedra Hajizadeh. Speckle noise reduction in optical coherence tomography using two-dimensional curvelet-based dictionary learning. Journal of Medical Signals and Sensors, 7(2):86–91, 2017.
  • [11] Zhongping Jian, Zhaoxia Yu, Lingfeng Yu, Bin Rao, Zhongping Chen, and Bruce J. Tromberg. Speckle attenuation in optical coherence tomography by curvelet shrinkage. Optics Letters, 34(10):1516–1518, 2009.
  • [12] Joshua S. Hardin, Giovanni Taibbi, Seth C. Nelson, Diana Chao, and Gianmarco Vizzeri. Factors affecting cirrus-hd oct optic disc scan quality: A review with case examples. Journal of Ophthalmology, 2015, 2015.
  • [13] Kaweh Mansouri, Felipe A. Medeiros, Andrew J. Tatham, Nicholas Marchase, and Robert N. Weinreb. Evaluation of retinal and choroidal thickness by swept-source optical coherence tomography: Repeatability and assessment of artifacts. American journal of ophthalmology, 157(5):1022–1032, 2014.
  • [14] S. Asrani, L. Essaid, B. D. Alder, and C. Santiago-Turla. Artifacts in spectral-domain optical coherence tomography measurements in glaucoma. JAMA Ophthalmol, 132(4):396–402, 2014.
  • [15] Y. Liu, H. Simavli, C. J. Que, J. L. Rizzo, E. Tsikata, R. Maurer, and T. C. Chen. Patient characteristics associated with artifacts in spectralis optical coherence tomography imaging of the retinal nerve fiber layer in glaucoma. Am J Ophthalmol, 159(3):565–76.e2, 2015.
  • [16] S. Asrani, L. Essaid, B. D. Alder, and C. Santiago-Turla. Artifacts in spectral-domain optical coherence tomography measurements in glaucoma. JAMA Ophthalmology, 132(4):396–402, 2014.
  • [17] K. E. Kim, J. W. Jeoung, K. H. Park, D. M. Kim, and S. H. Kim. Diagnostic classification of macular ganglion cell and retinal nerve fiber layer analysis: differentiation of false-positives from glaucoma. Ophthalmology, 122(3):502–10, 2015.
  • [18] Madhusudhanan Balasubramanian, Christopher Bowd, Gianmarco Vizzeri, Robert N. Weinreb, and Linda M. Zangwill. Effect of image quality on tissue thickness measurements obtained with spectral-domain optical coherence tomography. Optics express, 17(5):4019–4036, 2009.
  • [19] S. L. Mansberger, S. A. Menda, B. A. Fortune, S. K. Gardiner, and S. Demirel. Automated segmentation errors when using optical coherence tomography to measure retinal nerve fiber layer thickness in glaucoma. Am J Ophthalmol, 174:1–8, 2017.
  • [20] N. Iftimia, B. E. Bouma, and G. J. Tearney. Speckle reduction in optical coherence tomography by ”path length encoded” angular compounding. J Biomed Opt, 8(2):260–3, 2003.
  • [21] A. E. Desjardins, B. J. Vakoc, W. Y. Oh, S. M. Motaghiannezam, G. J. Tearney, and B. E. Bouma. Angle-resolved optical coherence tomography with sequential angular selectivity for speckle reduction. Opt Express, 15(10):6200–9, 2007.
  • [22] T. Bajraszewski, M. Wojtkowski, M. Szkulmowski, A. Szkulmowska, R. Huber, and A. Kowalczyk. Improved spectral optical coherence tomography using optical frequency comb. Opt Express, 16(6):4163–76, 2008.
  • [23] B. F. Kennedy, T. R. Hillman, A. Curatolo, and D. D. Sampson. Speckle reduction in optical coherence tomography by strain compounding. Opt Lett, 35(14):2445–7, 2010.
  • [24] T. Klein, R. Andre, W. Wieser, T. Pfeiffer, and R. Huber. Joint aperture detection for speckle reduction and increased collection efficiency in ophthalmic mhz oct. Biomed Opt Express, 4(4):619–34, 2013.
  • [25] M. Pircher, E. Gotzinger, R. Leitgeb, A. F. Fercher, and C. K. Hitzenberger. Speckle reduction in optical coherence tomography by frequency compounding. J Biomed Opt, 8(3):565–9, 2003.
  • [26] J. M. Schmitt. Array detection for speckle reduction in optical coherence microscopy. Phys Med Biol, 42(7):1427–39, 1997.
  • [27] J. M. Schmitt. Restoration of optical coherence images of living tissue using the clean algorithm. J Biomed Opt, 3(1):66–75, 1998.
  • [28] J. M. Schmitt, S. H. Xiang, and K. M. Yung. Speckle in optical coherence tomography. J Biomed Opt, 4(1):95–105, 1999.
  • [29] V. Behar, D. Adam, and Z. Friedman. A new method of spatial compounding imaging. Ultrasonics, 41(5):377–84, 2003.
  • [30] Wei Wu, Ou Tan, Rajeev R. Pappuru, Huilong Duan, and David Huang. Assessment of frame-averaging algorithms in oct image analysis. Ophthalmic surgery, lasers & imaging retina, 44(2):168–175, 2013.
  • [31] Chieh-Li Chen, Hiroshi Ishikawa, Gadi Wollstein, Richard A. Bilonick, Larry Kagemann, and Joel S. Schuman. Virtual averaging making nonframe-averaged optical coherence tomography images comparable to frame-averaged images. Translational Vision Science & Technology, 5(1):1, 2016.
  • [32] Shahab Chitchian, Markus A. Mayer, Adam R. Boretsky, Frederik J. van Kuijk, and Massoud Motamedi. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform. Journal of Biomedical Optics, 17(11):116009, 2012.
  • [33] R. D. Ferguson, D. X. Hammer, L. A. Paunescu, S. Beaton, and J. S. Schuman. Tracking optical coherence tomography. Opt Lett, 29(18):2139–41, 2004.
  • [34] A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney. Speckle reduction in optical coherence tomography images using digital filtering. J Opt Soc Am A Opt Image Sci Vis, 24(7):1901–10, 2007.
  • [35] R. Bernardes, C. Maduro, P. Serranho, A. Araujo, S. Barbeiro, and J. Cunha-Vaz. Improved adaptive complex diffusion despeckling filter. Opt Express, 18(23):24048–59, 2010.
  • [36] A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi. General bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery. Opt Express, 18(8):8338–52, 2010.
  • [37] Liheng Bian, Jinli Suo, Feng Chen, and Qionghai Dai. Multi-frame denoising of high speed optical coherence tomography data using inter-frame and intra-frame priors. arXiv:1312.1931 [cs.CV], 2014.
  • [38] N. M. Grzywacz, J. de Juan, C. Ferrone, D. Giannini, D. Huang, G. Koch, V. Russo, O. Tan, and C. Bruni. Statistics of optical coherence tomography data from human retina. IEEE Trans Med Imaging, 29(6):1224–37, 2010.
  • [39] Hossein Rabbani, Milan Sonka, and Michael D. Abramoff. Optical coherence tomography noise reduction using anisotropic local bivariate gaussian mixture prior in 3d complex wavelet domain. International Journal of Biomedical Imaging, 2013, 2013.
  • [40] Muxingzi Li, Ramzi Idoughi, Biswarup Choudhury, and Wolfgang Heidrich. Statistical model for oct image denoising. Biomedical Optics Express, 8(9):3903–3917, 2017.
  • [41] Markus A. Mayer, Anja Borsdorf, Martin Wagner, Joachim Hornegger, Christian Y. Mardin, and Ralf P. Tornow. Wavelet denoising of multiframe optical coherence tomography data. Biomedical Optics Express, 3(3):572–589, 2012.
  • [42] I. Y. Wong, H. Koizumi, and W. W. Lai. Enhanced depth imaging optical coherence tomography. Ophthalmic Surg Lasers Imaging, 42 Suppl:S75–84, 2011.
  • [43] R. Daniel Ferguson, Daniel X. Hammer, Lelia Adelina Paunescu, Siobahn Beaton, and Joel S. Schuman. Tracking optical coherence tomography. Optics Letters, 29(18), 2004.
  • [44] D. Hammer, R. D. Ferguson, N. Iftimia, T. Ustun, G. Wollstein, H. Ishikawa, M. Gabriele, W. Dilworth, L. Kagemann, and J. Schuman. Advanced scanning methods with tracking optical coherence tomography. Opt Express, 13(20):7937–47, 2005.
  • [45] W. M. Wells 3rd, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis. Multi-modal volume registration by maximization of mutual information. Med Image Anal, 1(1):35–51, 1996.
  • [46] Rupal R. Agravat and Mehul S. Raval. Chapter 11 - Deep Learning for Automated Brain Tumor Segmentation in MRI Images, pages 183–201. Academic Press, 2018.
  • [47] Zeynettin Akkus, Alfiia Galimzianova, Assaf Hoogi, Daniel L. Rubin, and Bradley J. Erickson. Deep learning for brain mri segmentation: State of the art and future directions. Journal of Digital Imaging, 30(4):449–459, 2017.
  • [48] Z. Cui, J. Yang, and Y. Qiao. Brain mri segmentation with patch-based cnn approach. In 2016 35th Chinese Control Conference (CCC), pages 7026–7031.
  • [49] J. Liu, Y. Pan, M. Li, Z. Chen, L. Tang, C. Lu, and J. Wang. Applications of deep learning to mri images: A survey. Big Data Mining and Analytics, 1(1):1–18, 2018.
  • [50] Heba Mohsen, El-Sayed A. El-Dahshan, El-Sayed M. El-Horbaty, and Abdel-Badeeh M. Salem. Classification using deep learning neural networks for brain tumors. Future Computing and Informatics Journal, 3(1):68–71, 2018.
  • [51] Viktor Wegmayr, Sai Aitharaju, and Joachim Buhmann. Classification of brain mri with big data and deep 3d convolutional neural networks. Proceedings Volume 10575, Medical Imaging 2018: Computer-Aided Diagnosis; 105751S, 2018.
  • [52] Hadrien Bertrand, Matthieu Perrot, Roberto Ardon, and Isabelle Bloch. Classification of mri data using deep learning and gaussian process-based model selection. arXiv:1701.04355 [cs.LG], 2017.
  • [53] A. Benou, R. Veksler, A. Friedman, and T. Riklin Raviv. Ensemble of expert deep neural networks for spatio-temporal denoising of contrast-enhanced mri sequences. Med Image Anal, 42:145–159, 2017.
  • [54] Dongsheng Jiang, Weiqiang Dou, Luc Vosters, Xiayu Xu, Yue Sun, and Tao Tan. Denoising of 3d magnetic resonance images with multi-channel residual learning of convolutional neural network. arXiv:1712.08726 [cs.CV], 2017.
  • [55] Lovedeep Gondara. Medical image denoising using convolutional denoising autoencoders. arXiv:1608.04667v2 [cs.CV], 2016.
  • [56] Xiaodan Sui, Yuanjie Zheng, Benzheng Wei, Hongsheng Bi, Jianfeng Wu, Xuemei Pan, Yilong Yin, and Shaoting Zhang. Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks. Neurocomputing, 237:332–341, 2017.
  • [57] B. Al-Bander, B. M. Williams, M. A. Al-Taee, W. Al-Nuaimy, and Y. Zheng. A novel choroid segmentation method for retinal diagnosis using deep learning. In 2017 10th International Conference on Developments in eSystems Engineering (DeSE), pages 182–187.
  • [58] Qiao Zhang, Zhipeng Cui, Xiaoguang Niu, Shijie Geng, and Yu Qiao. Image segmentation with pyramid dilated convolution based on resnet and u-net. In Derong Liu, Shengli Xie, Yuanqing Li, Dongbin Zhao, and El-Sayed M. El-Alfy, editors, Neural Information Processing, pages 364–372. Springer International Publishing.
  • [59] Freerk G. Venhuizen, Bram van Ginneken, Bart Liefers, Mark J. J. P. van Grinsven, Sascha Fauser, Carel Hoyng, Thomas Theelen, and Clara I. Sánchez. Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks. Biomedical Optics Express, 8(7):3292–3316, 2017.
  • [60] Leyuan Fang, David Cunefare, Chong Wang, Robyn H. Guymer, Shutao Li, and Sina Farsiu. Automatic segmentation of nine retinal layer boundaries in oct images of non-exudative amd patients using deep learning and graph search. Biomedical Optics Express, 8(5):2732–2744, 2017.
  • [61] Donghuan Lu, Morgan Heisler, Sieun Lee, Gavin Ding, Marinko V. Sarunic, and Mirza Faisal Beg. Retinal fluid segmentation and detection in optical coherence tomography images using fully convolutional neural network. arXiv:1710.04778 [cs.CV], 2017.
  • [62] Abhijit Guha Roy, Sailesh Conjeti, Sri Phani Krishna Karri, Debdoot Sheet, Amin Katouzian, Christian Wachinger, and Nassir Navab. Relaynet: Retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. arXiv:1704.02161 [cs.CV], 2017.
  • [63] Sripad Krishna Devalla, Prajwal K. Renukanand, Bharathwaj K. Sreedhar, Giridhar Subramanian, Liang Zhang, Shamira Perera, Jean-Martial Mari, Khai Sing Chin, Tin A. Tun, Nicholas G. Strouthidis, Tin Aung, Alexandre H. Thiéry, and Michaël J. A. Girard. Drunet: a dilated-residual u-net deep learning network to segment optic nerve head tissues in optical coherence tomography images. Biomedical Optics Express, 9(7):3244–3265, 2018.
  • [64] S. K. Devalla, K. S. Chin, J. M. Mari, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiery, and M. J. A. Girard. A deep learning approach to digitally stain optical coherence tomography images of the optic nerve head. Invest Ophthalmol Vis Sci, 59(1):63–74, 2018.
  • [65] M. Awais, H. Müller, T. B. Tang, and F. Meriaudeau. Classification of sd-oct images using a deep learning approach. In 2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), pages 489–492.
  • [66] Philipp Prahs, Viola Radeck, Christian Mayer, Yordan Cvetkov, Nadezhda Cvetkova, Horst Helbig, and David Märker. Oct-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications. Graefe’s Archive for Clinical and Experimental Ophthalmology, 256(1):91–98, 2018.
  • [67] Cecilia S. Lee, Doug M. Baughman, and Aaron Y. Lee. Deep learning is effective for classifying normal versus age-related macular degeneration oct images. Ophthalmology Retina, 1(4):322–327, 2017.
  • [68] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 9351:234–241, 2015.
  • [69] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
  • [70] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. ICLR, 2016.
  • [71] Y. Liu, M. M. Cheng, X. Hu, K. Wang, and X. Bai. Richer convolutional features for edge detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5872–5881.
  • [72] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). CoRR, abs/1511.07289, 2015.
  • [73] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning - Volume 37, pages 448–456, 2015.
  • [74] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
  • [75] Patrice Y. Simard, Dave Steinkraus, and John C. Platt. Best practices for convolutional neural networks applied to visual document analysis. Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR 2003), 2003.
  • [76] Z. Wu, G. Xu, R. N. Weinreb, M. Yu, and C. K. Leung. Optic nerve head deformation in glaucoma: A prospective analysis of optic nerve head surface and lamina cribrosa surface displacement. Ophthalmology, 122(7):1317–29, 2015.
  • [77] M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari. Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head. Invest Ophthalmol Vis Sci, 52(10):7738–48, 2011.
  • [78] Wang Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  • [79] Abdullah Al-Mujaini, Upender K. Wali, and Sitara Azeem. Optical coherence tomography: Clinical applications in medical practice. Oman Medical Journal, 28(2):86–91, 2013.
  • [80] C. Bowd, R. N. Weinreb, J. M. Williams, and L. M. Zangwill. The retinal nerve fiber layer thickness in ocular hypertensive, normal, and glaucomatous eyes with optical coherence tomography. Arch Ophthalmol, 118(1):22–6, 2000.
  • [81] Gillian J. McLellan and Carol A. Rasmussen. Optical coherence tomography for the evaluation of retinal and optic nerve morphology in animal subjects: Practical considerations. Veterinary ophthalmology, 15(Suppl 2):13–28, 2012.
  • [82] A. Miki, F. A. Medeiros, R. N. Weinreb, S. Jain, F. He, L. Sharpsten, N. Khachatryan, N. Hammel, J. M. Liebmann, C. A. Girkin, P. A. Sample, and L. M. Zangwill. Rates of retinal nerve fiber layer thinning in glaucoma suspect eyes. Ophthalmology, 121(7):1350–8, 2014.
  • [83] T. Ojima, T. Tanabe, M. Hangai, S. Yu, S. Morishita, and N. Yoshimura. Measurement of retinal nerve fiber layer thickness and macular volume for glaucoma detection using optical coherence tomography. Jpn J Ophthalmol, 51(3):197–203, 2007.
  • [84] J. C. Downs, M. E. Ensor, A. J. Bellezza, H. W. Thompson, R. T. Hart, and C. F. Burgoyne. Posterior scleral thickness in perfusion-fixed normal and early-glaucoma monkey eyes. Invest Ophthalmol Vis Sci, 42(13):3202–8, 2001.
  • [85] K. M. Lee, T. W. Kim, R. N. Weinreb, E. J. Lee, M. J. Girard, and J. M. Mari. Anterior lamina cribrosa insertion in primary open-angle glaucoma patients and healthy subjects. PLoS One, 9(12):e114935, 2014.
  • [86] S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch. Lamina cribrosa depth in different stages of glaucoma. Invest Ophthalmol Vis Sci, 56(3):2059–64, 2015.
  • [87] H. A. Quigley and E. M. Addicks. Regional differences in the structure of the lamina cribrosa and their relation to glaucomatous optic nerve damage. Arch Ophthalmol, 99(1):137–43, 1981.
  • [88] H. A. Quigley, E. M. Addicks, W. R. Green, and A. E. Maumenee. Optic nerve damage in human glaucoma. ii. the site of injury and susceptibility to damage. Arch Ophthalmol, 99(4):635–49, 1981.
  • [89] Caio V. Regatieri, Lauren Branchini, and Jay S. Duker. The role of spectral-domain oct in the diagnosis and management of neovascular age-related macular degeneration. Ophthalmic surgery, lasers & imaging : the official journal of the International Society for Imaging in the Eye, 42(0):S56–S66, 2011.
  • [90] Ontario Health Quality. Optical coherence tomography for age-related macular degeneration and diabetic macular edema: An evidence-based analysis. Ontario Health Technology Assessment Series, 9(13):1–22, 2009.
  • [91] Vivek J. Srinivasan, Maciej Wojtkowski, Andre J. Witkin, Jay S. Duker, Tony H. Ko, Mariana Carvalho, Joel S. Schuman, Andrzej Kowalczyk, and James G. Fujimoto. High-definition and 3-dimensional imaging of macular pathologies with high-speed ultrahigh-resolution optical coherence tomography. Ophthalmology, 113(11):2054.e1–2054.14, 2006.
  • [92] Rayan A. Alshareef, Sunila Dumpala, Shruthi Rapole, Manideepak Januwada, Abhilash Goud, Hari Kumar Peguda, and Jay Chhablani. Prevalence and distribution of segmentation errors in macular ganglion cell analysis of healthy eyes using cirrus hd-oct. PLOS ONE, 11(5):e0155319, 2016.
  • [93] A. Stankiewicz, T. Marciniak, A. Dąbrowski, M. Stopa, P. Rakowicz, and E. Marciniak. Denoising methods for improving automatic segmentation in oct images of human eye. Bulletin of the Polish Academy of Sciences Technical Sciences, 65(1), 2017.
  • [94] Drew Scoles, John A. Flatter, Robert F. Cooper, Christopher S. Langlo, Scott Robison, Maureen Neitz, David V. Weinberg, Mark E. Pennesi, Dennis P. Han, Alfredo Dubra, and Joseph Carroll. Assessing photoreceptor structure associated with ellipsoid zone disruptions visualized with optical coherence tomography. Retina (Philadelphia, Pa.), 36(1):91–103, 2016.
  • [95] Timothy S. Kern and Bruce A. Berkowitz. Photoreceptors in diabetic retinopathy. Journal of Diabetes Investigation, 6(4):371–380, 2015.
  • [96] T. Baba, S. Yamamoto, M. Arai, E. Arai, T. Sugawara, Y. Mitamura, and S. Mizunoya. Correlation of visual recovery and presence of photoreceptor inner/outer segment junction in optical coherence images after successful macular hole repair. Retina, 28(3):453–8, 2008.
  • [97] Hisako Hayashi, Kenji Yamashiro, Akitaka Tsujikawa, Masafumi Ota, Atsushi Otani, and Nagahisa Yoshimura. Association between foveal photoreceptor integrity and visual outcome in neovascular age-related macular degeneration. American Journal of Ophthalmology, 148(1):83–89.e1, 2009.
  • [98] J. A. Flatter, R. F. Cooper, M. J. Dubow, A. Pinhas, R. S. Singh, R. Kapur, N. Shah, R. D. Walsh, S. H. Hong, D. V. Weinberg, K. E. Stepien, W. J. Wirostko, S. Robison, A. Dubra, R. B. Rosen, Jr. Connor, T. B., and J. Carroll. Outer retinal structure after closed-globe blunt ocular trauma. Retina, 34(10):2133–46, 2014.
  • [99] Maria P. Bambo, Elena Garcia-Martin, Sofia Otin, Eva Sancho, Isabel Fuertes, Raquel Herrero, Maria Satue, and Luis Pablo. Influence of cataract surgery on repeatability and measurements of spectral domain optical coherence tomography. British Journal of Ophthalmology, 98(1):52, 2014.
  • [100] Pauline H. B. Kok, Thomas J. T. P. van den Berg, Hille W. van Dijk, Marilette Stehouwer, Ivanka J. E. van der Meulen, Maarten P. Mourits, and Frank D. Verbraak. The relationship between the optical density of cataract and its influence on retinal nerve fibre layer thickness measured with spectral domain optical coherence tomography. Acta Ophthalmologica, 91(5):418–424, 2012.
  • [101] J. C. Mwanza, A. M. Bhorade, N. Sekhon, J. J. McSoley, S. H. Yoo, W. J. Feuer, and D. L. Budenz. Effect of cataract and its removal on signal strength and peripapillary retinal nerve fiber layer optical coherence tomography measurements. J Glaucoma, 20(1):37–43, 2011.
  • [102] G. Savini, M. Zanini, and P. Barboni. Influence of pupil size and cataract on retinal nerve fiber layer thickness measurements by stratus oct. J Glaucoma, 15(4):336–40, 2006.
  • [103] Daniel M. Stein, Gadi Wollstein, Hiroshi Ishikawa, Ellen Hertzmark, Robert J. Noecker, and Joel S. Schuman. Effect of corneal drying on optical coherence tomography. Ophthalmology, 113(6):985–991, 2006.
  • [104] Nicola G. Ghazi and Jason W. Much. The effect of lubricating eye drops on optical coherence tomography imaging of the retina. Digital Journal of Ophthalmology, 15(2), 2009.
  • [105] Michael Smith, Andrew Frost, Christopher Mark Graham, and Steven Shaw. Effect of pupillary dilatation on glaucoma assessments using optical coherence tomography. The British Journal of Ophthalmology, 91(12):1686–1690, 2007.
  • [106] E. Moisseiev, D. Loberman, E. Zunz, A. Kesler, A. Loewenstein, and J. Mandelblum. Pupil dilation using drops vs gel: a comparative study. Eye, 29:815, 2015.
  • [107] Joong Won Shin, Yong Un Shin, Ki Bang Uhm, Kyung Rim Sung, Min Ho Kang, Hee Yoon Cho, and Mincheol Seong. The effect of optic disc center displacement on retinal nerve fiber layer measurement determined by spectral domain optical coherence tomography. PLOS ONE, 11(10):e0165538, 2016.