Simultaneous Alignment and Surface Regression Using Hybrid 2D-3D Networks for 3D Coherent Layer Segmentation of Retina OCT Images

03/04/2022 · Hong Liu, et al. · Xiamen University · Tencent

Automated surface segmentation of retinal layers is important yet challenging in the analysis of optical coherence tomography (OCT) images. Recently, many deep-learning-based methods have been developed for this task and yield remarkable performance. However, due to the large spatial gap and potential mismatch between the B-scans of an OCT volume, all of them are based on 2D segmentation of individual B-scans, which may lose the continuity information across B-scans. In addition, 3D surfaces of the retinal layers can provide more diagnostic information and are crucial in quantitative image analysis. In this study, a novel framework based on hybrid 2D-3D convolutional neural networks (CNNs) is proposed to obtain continuous 3D retinal layer surfaces from OCT volumes. The 2D features of individual B-scans are extracted by an encoder consisting of 2D convolutions. These 2D features are then used to produce the alignment displacement field and the layer segmentation by two 3D decoders, which are coupled via a spatial transformer module. The entire framework is trained end-to-end. To the best of our knowledge, this is the first study that attempts 3D retinal layer segmentation in volumetric OCT images based on CNNs. Experiments on a publicly available dataset show that our framework achieves superior results to state-of-the-art 2D methods in terms of both layer segmentation accuracy and cross-B-scan 3D continuity, thus offering more clinical value than previous works.

1 Introduction

Optical coherence tomography (OCT)—a non-invasive imaging technique based on the principle of low-coherence interferometry—can acquire 3D cross-sectional images of human tissue at micron resolution [12]. Due to its micron-level axial resolution, non-invasiveness, and fast speed, OCT is commonly used in eye clinics for the diagnosis and management of retinal diseases [1]. Notably, OCT provides a unique capability to directly visualize the stratified structure of retinal cell layers, whose statuses are biomarkers of presence/severity/prognosis for a variety of retinal and neurodegenerative diseases, including age-related macular degeneration [14], diabetic retinopathy [4], glaucoma [13], Alzheimer’s disease [16], and multiple sclerosis [24]. Usually, layer segmentation is the first step in quantitative analysis of retinal OCT images, yet it can be considerably labor-intensive, time-consuming, and subjective if done manually. Therefore, computerized tools for automated, prompt, objective, and accurate retinal layer segmentation in OCT images are desired by both clinicians and researchers.

Automated layer segmentation in retinal OCT images has been well explored. Earlier explorations included graph-based [2, 9, 17, 25], contour modeling [5, 19, 29], and machine learning [2, 17] methods. Although they greatly advanced the field, most of these classical methods relied on empirical rules and/or hand-crafted features, which may be difficult to generalize. Motivated by the success of deep convolutional neural networks (CNNs) in various medical image analysis tasks [15], researchers have also implemented CNNs for retinal layer segmentation in OCT images and achieved performance superior to classical methods [10]. However, most previous methods (both classical and CNN-based) segmented each OCT slice (called a B-scan) separately given the relatively large inter-B-scan distance, despite the fact that a modern OCT sequence actually consists of many B-scans covering a volumetric area of the eye [8]. Correspondingly, these methods failed to utilize the anatomical prior that the retinal layers are generally smooth surfaces (instead of independent curves in each B-scan) and may be subject to discontinuity in the segmented layers between adjacent B-scans, potentially affecting volumetric analysis following layer segmentation. Although some works [2, 5, 6, 9, 17, 19] attempted 3D OCT segmentation, all of them belong to the classical methods that yielded inferior performance to the CNN-based ones, and they overlooked the misalignment problem among the B-scans of an OCT volume. Besides the misalignment problem, developing a CNN-based method for 3D OCT segmentation faces another obstacle: anisotropy in resolution [26]. For example, the physical resolutions of the dataset employed in this work are 3.24 μm (within A-scan, where an A-scan is a column in a B-scan image), 6.7 μm (cross A-scan), and 67 μm (cross B-scan).

In this work, we propose a novel CNN-based 2D-3D hybrid framework for simultaneous B-scan alignment and 3D surface regression for coherent retinal layer segmentation across B-scans in OCT images. This framework consists of a shared 2D encoder followed by two 3D decoders (the alignment branch and the segmentation branch), and a spatial transformer module (STM) inserted into the shortcuts [22] between the encoder and the segmentation branch. Given a B-scan volume as input, we employ per-B-scan 2D operations in the encoder for two reasons. First, as suggested by previous studies [27, 30], intra-slice feature extraction followed by inter-slice (2.5D or 3D) aggregation is an effective strategy against anisotropic resolution; thus we propose a similar 2D-3D hybrid structure for the anisotropic OCT data. Second, the B-scans in the input volume are subject to misalignment; thus 3D operations across B-scans prior to proper realignment may be invalid. Following the encoder, the alignment branch employs 3D operations to aggregate features across B-scans to align them properly. Then, the resulting displacement field is employed by the STM to align the 2D features at different scales and compose well-aligned 3D features. These 3D features are passed to the segmentation branch for 3D surface regression. Noteworthily, the alignment only ensures the validity of subsequent 3D operations, but not the cross-B-scan coherence of the regressed layer surfaces. Hence, we further employ a gradient-based 3D regulative loss [28] on the regressed surfaces to encourage smooth surfaces, which is an intrinsic property of many biological layers. This loss is straightforward to implement within our surface regression framework and comes for free (no manual annotation is needed), yet proves effective in our experiments. Lastly, the entire framework is trained end-to-end.

In summary, our contributions are as follows. First, we propose a new framework for simultaneous B-scan alignment and 3D retinal layer segmentation of OCT images. This framework features a hybrid 2D-3D structure comprising a shared 2D encoder, a 3D alignment branch, a 3D surface regression branch, and an STM to allow for simultaneous alignment and 3D segmentation of the anisotropic OCT data. Second, we propose two conceptually straightforward and easy-to-implement regulating losses that encourage the regressed layer surfaces to be coherent not only within but also across B-scans, and that also help align the B-scans. Third, we conduct thorough experiments to validate our design and demonstrate its superiority over existing methods.

2 Method

2.0.1 Problem Formulation

Let $\Omega = \{0, \dots, X-1\} \times \{0, \dots, Y-1\} \times \{0, \dots, Z-1\}$; then a 3D OCT volume can be written as a real-valued function $I: \Omega \to \mathbb{R}$, where the $x$ and $y$ axes are the row and column directions of a B-scan image, and the $z$ axis is orthogonal to the B-scan image. Alternatively, $I$ can be considered as an ordered collection of all its B-scans: $I = \{B_z\}_{z=0}^{Z-1}$, where $B_z$ is the $z$th B-scan image and $Z$ is the number of B-scans. Then, a retinal layer surface can be expressed by $S = \{s_{y,z}\}$, where $y \in \{0, \dots, Y-1\}$, $z \in \{0, \dots, Z-1\}$, $Y$ is the number of A-scans in a B-scan, and $s_{y,z}$ is the row index indicating the surface location in the $y$th A-scan of the $z$th B-scan. That is, the surface intersects with each A-scan exactly once, which is a common assumption about macular OCT images (e.g., in [10]). The goal of this work is to locate a set of retinal layer surfaces of interest in $I$, preferably smooth, for accurate segmentation of the layers.
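To make the notation concrete, the following NumPy sketch shows how a volume, a surface, and the layer between two surfaces can be represented; the sizes and the 20-row layer thickness are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Assumed sizes: X rows per A-scan, Y A-scans per B-scan, Z B-scans.
X, Y, Z = 512, 400, 40
volume = np.zeros((X, Y, Z), dtype=np.float32)   # I : Omega -> R

# A surface S stores one row index s_{y,z} per A-scan (y, z),
# i.e., it intersects each A-scan exactly once.
surface = np.random.randint(0, X - 20, size=(Y, Z))

# The layer between two ordered surfaces is then a voxel mask.
rows = np.arange(X)[:, None, None]               # broadcast over (y, z)
top, bottom = surface, surface + 20              # toy 20-row-thick layer
layer_mask = (rows >= top[None]) & (rows < bottom[None])   # (X, Y, Z)
```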

2.0.2 Method Overview

The overview of our framework is shown in Fig. 1. The framework comprises three major components: a contracting path $E$ (the shared encoder) consisting of 2D CNN layers, two expansive paths consisting of 3D CNN layers, $D_a$ (the alignment branch) and $D_s$ (the segmentation branch), and a functional module, the spatial transformer module (STM). During the feature extraction phase, 2D features of separate B-scans in an OCT volume are extracted by $E$. These features are first used by $D_a$ to generate the B-scan alignment displacement, which is used in turn to align the 2D features via the STM. Then, the well-aligned features are fed to $D_s$ to yield the final segmentation. Each of $D_a$ and $D_s$ forms a hybrid 2D-3D residual U-Net [22] with $E$. The entire framework is trained end-to-end. As $E$ is implemented as a simple adaptation of the encoder in [31] (3D to 2D), below we focus on describing our novel $D_a$, $D_s$, and STM.
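The toy PyTorch sketch below illustrates only the hybrid 2D-3D data flow (a shared 2D encoder run per B-scan, its stacked features feeding two 3D heads); it is a stand-in with assumed shapes, not the actual architecture, which adapts the encoder of [31] and uses residual U-Net decoders:

```python
import torch
import torch.nn as nn

class Hybrid2D3DSketch(nn.Module):
    """Shared 2D encoder per B-scan + two 3D decoder stand-ins."""
    def __init__(self, ch=8):
        super().__init__()
        self.encoder2d = nn.Sequential(                 # E: 2D features
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.align3d = nn.Conv3d(ch, 1, 3, padding=1)   # D_a stand-in
        self.seg3d = nn.Conv3d(ch, 1, 3, padding=1)     # D_s stand-in

    def forward(self, vol):                          # vol: (B, 1, X, Y, Z)
        B, _, X, Y, Z = vol.shape
        # Fold the B-scan axis into the batch: Conv2d sees (B*Z, 1, X, Y).
        slices = vol.permute(0, 4, 1, 2, 3).reshape(B * Z, 1, X, Y)
        feats = self.encoder2d(slices)               # (B*Z, ch, X, Y)
        # Stack per-B-scan features back into a 3D feature volume.
        feats3d = feats.reshape(B, Z, -1, X, Y).permute(0, 2, 3, 4, 1)
        return self.align3d(feats3d), self.seg3d(feats3d)

# Usage: a, s = Hybrid2D3DSketch()(torch.randn(1, 1, 64, 64, 8))
```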

Figure 1: Overview of the proposed framework.

2.0.3 B-Scan Alignment Branch

Due to the image acquisition process, wherein each B-scan is acquired separately without a guaranteed global alignment, and the inevitable eye movement, consecutive B-scans in an OCT volume may be subject to misalignment [7]. The mismatch mainly happens along the $x$ axis, and may cause problems for volumetric analysis of the OCT data if left unaddressed. Although it is feasible to add an alignment step during preprocessing, a comprehensive framework that couples B-scan alignment and layer segmentation allows the two tasks to mutually benefit each other (supported by our experimental results), besides being more integrated. To this end, we introduce a B-scan alignment branch $D_a$ consisting of an expansive path into our framework, which takes the 2D features extracted from a set of B-scans by $E$ and stacks them along the cross-B-scan direction to form 3D input. The alignment branch outputs a displacement vector $d = (d_0, \dots, d_{Z-1})$, with each element $d_z$ indicating the displacement of the $z$th B-scan in the $x$ direction. We use the local normalized cross-correlation (NCC) [3] of adjacent B-scans as the optimization objective (denoted by $\mathcal{L}_{\mathrm{ncc}}$) of $D_a$.

As smoothness is one of the intrinsic properties of the retinal layers, if the B-scans are aligned properly, ground truth surface positions of the same layer should be close at nearby locations of adjacent B-scans. To model this prior, we propose a supervised loss function to help with the alignment:

$\mathcal{L}_{\mathrm{sur}} = \frac{1}{Y(Z-1)} \sum_{y=0}^{Y-1} \sum_{z=0}^{Z-2} \big| (g_{y,z+1} + d_{z+1}) - (g_{y,z} + d_z) \big|$,   (1)

where $g_{y,z}$ is the ground truth surface position of the $y$th A-scan in the $z$th B-scan. The final optimization objective of the alignment branch is $\mathcal{L}_{\mathrm{align}} = \mathcal{L}_{\mathrm{ncc}} + \mathcal{L}_{\mathrm{sur}}$.
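A rough PyTorch rendering of the two alignment objectives follows; for brevity it uses a global NCC per adjacent pair rather than the local windowed NCC of [3], and all tensor names and shapes are assumptions:

```python
import torch

def ncc(a, b, eps=1e-8):
    # Global NCC of two B-scan images (the paper uses a local variant [3]).
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

def alignment_losses(aligned_vol, gt_surf, disp):
    """aligned_vol: (X, Y, Z) volume after applying the displacements;
    gt_surf: (Y, Z) ground-truth rows g_{y,z}; disp: (Z,) displacements d_z."""
    Z = aligned_vol.shape[-1]
    # L_ncc: maximize similarity of adjacent B-scans (minimize negative NCC).
    l_ncc = -torch.stack([ncc(aligned_vol[..., z], aligned_vol[..., z + 1])
                          for z in range(Z - 1)]).mean()
    # Eq. (1): aligned ground-truth rows of adjacent B-scans should be close.
    aligned_surf = gt_surf + disp[None, :]           # (Y, Z)
    l_sur = (aligned_surf[:, 1:] - aligned_surf[:, :-1]).abs().mean()
    return l_ncc + l_sur
```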

2.0.4 Layer Segmentation Branch

Our layer segmentation branch substantially extends the fully convolutional boundary regression (FCBR) framework by He et al. [10]. First, we replace the purely 2D FCBR framework with a hybrid 2D-3D framework, to perform 3D surface regression in an OCT volume instead of 2D boundary regression in separate B-scans. Second, we propose a global smoothness loss to encourage coherent surfaces both within and across B-scans, whereas FCBR only enforces within-B-scan smoothness. Third, our segmentation branch is coupled with the B-scan alignment branch, and the two boost each other's performance.

The segmentation branch has two output heads sharing the same decoder: the primary head, which outputs the surface position distribution for each A-scan, and the secondary head, which outputs pixel-wise semantic labels. The secondary head is used only to provide an additional task for training the network, especially considering its pixel-wise dense supervision; its output is ignored during testing. We follow He et al. in using a combined Dice and cross entropy loss [23] for training the secondary head, and refer interested readers to [10] for more details.

Surface Distribution Head

This primary head generates an independent surface position distribution $p_{y,z}(r)$ for each A-scan, parameterized by the network parameters $\theta$, where a higher $p_{y,z}(r)$ indicates a higher possibility that the $r$th row is on the surface. Like in [10], a cross entropy loss is used to train the primary head:

$\mathcal{L}_{\mathrm{ce}} = -\frac{1}{YZ} \sum_{y,z} \sum_{r=0}^{R-1} \mathbb{1}(g_{y,z} = r) \log p_{y,z}(r)$,   (2)

where $R$ is the number of rows in an A-scan and $\mathbb{1}(\cdot)$ is the indicator function, which evaluates to one if its argument is true and zero otherwise. Further, a smooth L1 loss is adopted to directly guide the predicted surface location toward the ground truth: $\mathcal{L}_{\mathrm{L1}} = \frac{1}{YZ} \sum_{y,z} f(\hat{s}_{y,z} - g_{y,z})$, where $f(t) = 0.5t^2$ if $|t| < 1$ and $f(t) = |t| - 0.5$ otherwise, and $\hat{s}_{y,z}$ is obtained via the soft-argmax: $\hat{s}_{y,z} = \sum_{r=0}^{R-1} r \cdot p_{y,z}(r)$.
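In code, the primary-head losses can be sketched as below; shapes and names are assumptions, and F.cross_entropy with integer row targets implements Eq. (2):

```python
import torch
import torch.nn.functional as F

def surface_head_losses(logits, gt_rows):
    """logits: (B, R, Y, Z) per-A-scan scores over the R rows;
    gt_rows: (B, Y, Z) integer ground-truth rows g_{y,z}."""
    p = logits.softmax(dim=1)                        # surface distribution
    l_ce = F.cross_entropy(logits, gt_rows)          # Eq. (2)
    # Soft-argmax: expected row index under p, kept differentiable.
    rows = torch.arange(logits.shape[1], device=logits.device)
    s_hat = (p * rows.view(1, -1, 1, 1)).sum(dim=1)  # (B, Y, Z)
    l_l1 = F.smooth_l1_loss(s_hat, gt_rows.float())  # smooth L1 term
    return l_ce, l_l1, s_hat
```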

Global Coherence Loss

Previous studies have shown the effectiveness of modeling prior knowledge that reflects anatomical properties, such as structural smoothness [28], in medical image segmentation. Following this line, we also employ a global smoothness loss based on the gradients of the detected retinal surface to encourage it to be coherent both within and across the aligned B-scans:

$\mathcal{L}_{\mathrm{gc}} = \frac{1}{YZ} \sum_{y,z} \left( (\hat{s}_{y+1,z} - \hat{s}_{y,z})^2 + (\hat{s}_{y,z+1} - \hat{s}_{y,z})^2 \right)$.   (3)

Finally, the overall optimization objective of the segmentation branch is $\mathcal{L}_{\mathrm{seg}} = \mathcal{L}_{\mathrm{ce}} + \mathcal{L}_{\mathrm{L1}} + \mathcal{L}_{\mathrm{sec}} + \lambda \mathcal{L}_{\mathrm{gc}}$, where $\mathcal{L}_{\mathrm{sec}}$ denotes the combined loss of the secondary head and $\lambda$ is a hyperparameter controlling the influence of the global coherence loss.
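Assuming the squared finite-difference form of Eq. (3) sketched above, the coherence term is a few lines of PyTorch:

```python
def global_coherence_loss(s_hat):
    """s_hat: (B, Y, Z) regressed surface rows. Penalizes gradients both
    within a B-scan (y direction) and across B-scans (z direction)."""
    dy = s_hat[:, 1:, :] - s_hat[:, :-1, :]   # within-B-scan differences
    dz = s_hat[:, :, 1:] - s_hat[:, :, :-1]   # cross-B-scan differences
    return (dy ** 2).mean() + (dz ** 2).mean()
```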

2.0.5 Spatial Transformer Module

The B-scan displacement field output by the alignment branch is used to align the features extracted by $E$, so that the 3D operations of the segmentation branch are valid. To do so, we add a spatial transformer module (STM) [18] to the shortcuts between $E$ and $D_s$. It is worth noting that the STM adaptively resizes the displacement field to suit the size of the features at different scales, and that it allows back propagation during optimization [18]. In this way, we couple B-scan alignment and retinal layer segmentation in our framework for integrative end-to-end training, which not only simplifies the entire pipeline but also boosts the segmentation performance, as validated by our experiments.
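The STM can be sketched with PyTorch's grid_sample, which keeps the per-B-scan shift differentiable; the rescaling of the displacement to each feature scale follows the description above, and all names and shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def stm_shift(feat, disp, full_x):
    """feat: (B, C, Xs, Y, Z) features at one scale; disp: (B, Z) per-B-scan
    displacements in full-resolution pixels; full_x: rows at full resolution."""
    B, C, Xs, Y, Z = feat.shape
    d = disp * (Xs / full_x)                     # adapt disp to this scale
    # Treat every (batch, B-scan) pair as one 2D image of size (Xs, Y).
    imgs = feat.permute(0, 4, 1, 2, 3).reshape(B * Z, C, Xs, Y)
    theta = torch.eye(2, 3, device=feat.device).expand(B * Z, 2, 3)
    grid = F.affine_grid(theta, (B * Z, C, Xs, Y), align_corners=True).clone()
    # Output row h samples input row h - d (normalized coordinates).
    shift = (2.0 * d / max(Xs - 1, 1)).reshape(B * Z)
    grid[..., 1] = grid[..., 1] - shift.view(-1, 1, 1)
    out = F.grid_sample(imgs, grid, align_corners=True, padding_mode='border')
    return out.reshape(B, Z, C, Xs, Y).permute(0, 2, 3, 4, 1)
```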

3 Experiments

3.0.1 Dataset and Preprocessing

The public SD-OCT dataset [21] includes both normal (265) and age-related macular degeneration (AMD) (115) cases. The images were acquired using the Bioptigen Tabletop SD-OCT system (Research Triangle Park, NC). The physical resolutions are 3.24 μm (within A-scan), 6.7 μm (cross A-scan), and 0.067 mm (cross B-scan). Since the manual annotations are only available for a region centered at the fovea, subvolumes of size 400×40×512 voxels (in the $y$, $z$, and $x$ directions) are extracted around the fovea. We train the model on 263 subjects and test on the other 72 subjects (some cases are eliminated from analysis as the competing alignment algorithm [20] fails to handle them); the two sets are randomly divided with the proportion of AMD cases unchanged. The inner aspect of the inner limiting membrane (ILM), the inner aspect of the retinal pigment epithelium drusen complex (IRPE), and the outer aspect of Bruch’s membrane (OBM) were manually traced. For multi-surface segmentation, there are two considerations. First, we employ the topology guarantee module [10] to ensure the correct order of the surfaces. Second, the natural smoothness of these surfaces differs, so we set different $\lambda$ (weight of $\mathcal{L}_{\mathrm{gc}}$) values for different surfaces, according to their extents of smoothness and preliminary experimental results. As for preprocessing, an intensity gradient method [17] is employed to flatten each retinal B-scan image to the estimated Bruch’s membrane (BM), which reduces memory usage. When standalone B-scan alignment is needed, the NoRMCorre algorithm [20] is employed.
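As an illustration of the flattening step only (the BM itself is estimated with the intensity gradient method of [17], not shown), each A-scan can be shifted so that its estimated BM row lands on a common target row; names and shapes are assumptions:

```python
import numpy as np

def flatten_to_bm(volume, bm_rows, target_row=400):
    """volume: (X, Y, Z) OCT volume; bm_rows: (Y, Z) estimated BM row per
    A-scan; shifts every A-scan so the BM lies on target_row."""
    X, Y, Z = volume.shape
    out = np.zeros_like(volume)
    for y in range(Y):
        for z in range(Z):
            shift = target_row - int(bm_rows[y, z])
            # np.roll wraps around; sufficient for a sketch, though padding
            # would be preferable in practice.
            out[:, y, z] = np.roll(volume[:, y, z], shift)
    return out
```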

For B-scan alignment, we adopt the mean absolute distance (MAD) of the same surface on two adjacent B-scans, and the average NCC between aligned B-scans, for quantitative evaluation. For retinal surface segmentation, the MAD between predicted and ground truth surface positions is used. To compare the cross-B-scan continuity of the surfaces segmented by different methods, inspired by [11], we calculate the surface distance between adjacent B-scans as a statistic of flatness and plot its histogram for inspection.
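These statistics are simple to compute; a sketch with assumed (Y, Z) surface arrays:

```python
import numpy as np

def mad(pred, gt):
    """Mean absolute distance between predicted and ground-truth rows."""
    return np.abs(pred - gt).mean()

def adjacent_bscan_distances(surf):
    """Surface distances between adjacent B-scans (the flatness statistic);
    returns one value per (A-scan, adjacent B-scan pair)."""
    return (surf[:, 1:] - surf[:, :-1]).ravel()

# The continuity histogram is then, e.g.:
#   import matplotlib.pyplot as plt
#   plt.hist(adjacent_bscan_distances(pred_surface), bins=50)
```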

3.0.2 Implementation

The PyTorch framework (1.4.0) is used for experiments. The implementation of our proposed network follows the architecture proposed in Models Genesis [31], except that the 3D layers of the feature extractor are changed to 2D. To reduce the number of network parameters, we halve the number of channels in each CNN block. All networks are trained from scratch. Due to the memory limit, OCT volumes are cropped into patches of 320×400×40 voxels for training. We utilize the Adam optimizer and train for 120 epochs. The learning rate is initialized to 0.001 and halved when the loss has not improved for ten consecutive epochs. We train the network on three 2080 Ti GPUs with a mini-batch size of nine patches. Based on preliminary experiments and the natural smoothness of the three target surfaces, $\lambda$ is set to 0, 0.3, and 0.5 for ILM, IRPE, and OBM, respectively. The source code is available at: https://github.com/ccarliu/Retinal-OCT-LayerSeg.git.
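For reference, the stated schedule maps onto standard PyTorch components; the model and loss below are placeholders only, not the actual training loop:

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)   # placeholder for the full hybrid 2D-3D network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate when the monitored loss has not improved for
# ten consecutive epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=10)

for epoch in range(120):
    epoch_loss = 1.0 / (epoch + 1)   # stand-in for the real epoch loss
    scheduler.step(epoch_loss)
```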

Figure 2: B-scan alignment results visualized via cross sections. Each B-scan is repeated eight times for better visualization. Left to right: no alignment (flattened to the BM), NoRMCorre [20], ours. Yellow: ILM, blue: IRPE, and green: OBM.

Methods         ILM (MAD)  IRPE (MAD)  OBM (MAD)  Average (MAD)  NCC
No alignment    3.91       4.17        3.93       4.00           0.0781
NoRMCorre [20]  1.74       2.19        1.87       1.93           0.0818
Ours            1.55       2.11        1.78       1.81           0.0894

Table 1: B-scan alignment results. MAD: mean absolute distance (in pixels). NCC: normalized cross-correlation.

3.0.3 B-Scan Alignment Results

Figure 2 shows the cross-B-scan sections of an OCT volume before and after alignment. As we can see, obvious mismatches between B-scans can be observed before alignment, and both alignment algorithms make the B-scans better aligned. While it is hard to tell the two apart from the visualizations, quantitative results in Table 1 suggest that our framework aligns the B-scans better than NoRMCorre [20], with lower MADs and a higher NCC.

Surface (cohort)  FCBR [10]  Proposed   no_align   pre_align  no_smooth  3D-3D
ILM (AMD)         1.73±2.50  1.76±2.39  2.25±3.77  1.80±2.36  1.68±1.84  1.87±2.19
ILM (Normal)      1.24±0.51  1.26±0.47  1.40±0.42  1.30±0.49  1.27±0.47  1.31±0.46
IRPE (AMD)        3.09±2.09  3.04±1.79  3.14±1.72  3.09±1.79  3.10±1.97  3.12±1.74
IRPE (Normal)     2.06±1.51  2.10±1.36  2.18±1.37  2.05±1.40  2.13±1.45  2.13±1.45
OBM (AMD)         4.94±5.35  4.43±2.68  4.96±3.26  4.75±3.61  4.84±3.43  4.78±2.99
OBM (Normal)      2.28±0.36  2.40±0.39  2.49±0.40  2.34±0.37  2.45±0.41  2.43±0.40
Overall           2.78±3.31  2.71±2.25  3.00±2.82  2.77±2.59  2.81±2.48  2.85±2.34

Table 2: Mean absolute distance (μm) as surface error ± standard deviation.
Figure 3: Visualization of the manual segmentation (left), segmentation by FCBR [10] (middle), and segmentation by our framework (right) of an AMD case. Visualization of a normal control is shown in Fig. S1. Yellow: ILM, blue: IRPE, and green: OBM.

3.0.4 Surface Segmentation Results

The results are presented in Table 2. First, we compare our proposed method to FCBR [10] (empirically tuned for optimal performance), which is a state-of-the-art method based on 2D surface regression. As we can see, our method achieves lower average MADs with lower standard deviations (example segmentations in Figs. 3 and S1). In addition, we visualize surface positions of the BM as depth fields in Fig. S2. For a fair comparison, we visualize the FCBR results aligned by NoRMCorre [20]. It can be observed that our method (Fig. S2(d)) produces a smoother depth field than FCBR does (Fig. S2(c)).

Figure 4: Histogram of the surface distance (in pixels) between adjacent B-Scans.

Next, we conduct ablation experiments to verify the effectiveness of each module in the proposed framework. Specifically, we evaluate several variants of our model: no_smooth (without the global coherence loss $\mathcal{L}_{\mathrm{gc}}$), no_align (without the alignment branch or pre-alignment), pre_align (without the alignment branch but pre-aligned by NoRMCorre [20]), and 3D-3D (replacing the encoder with 3D CNNs). The results are presented in Table 2, from which several conclusions can be drawn. First, the variant without any alignment yields the worst results, suggesting that the mismatch between B-scans does have a negative impact on 3D analysis of OCT data such as our 3D surface segmentation. Second, our full model with the alignment branch improves over pre_align. We speculate this is because the alignment branch can produce better alignment results, and more importantly, it produces a slightly different alignment each time, serving as a kind of data and feature augmentation of enhanced diversity for the segmentation decoder $D_s$. Third, removing $\mathcal{L}_{\mathrm{gc}}$ apparently decreases the performance, demonstrating its effectiveness in exploiting the anatomical prior of smoothness. Lastly, our hybrid 2D-3D framework outperforms its 3D-3D counterpart, indicating that the 2D CNNs can better deal with the mismatched B-scans prior to proper realignment.

3.0.5 B-Scan Connectivity Analysis

As shown in Fig. 4, surfaces segmented by our method have better cross-B-scan connectivity than those by FCBR [10], even with pre-alignment, as indicated by the more conspicuous spikes clustered around 0. This suggests that merely conducting 3D alignment does not guarantee 3D continuity if the aligned B-scans are handled separately. It is worth noting that our method achieves even better cross-B-scan connectivity than the manual annotations after alignment, likely for the same reason (i.e., human annotators work with one B-scan at a time).

4 Conclusion

This work presented a novel hybrid 2D-3D framework for simultaneous B-scan alignment and retinal surface regression of volumetric OCT data. The key idea behind our framework is the global coherence of the retinal layer surfaces both within and across B-scans. Experimental results showed that our framework was superior to the existing state-of-the-art method [10] for retinal layer segmentation, and verified the effectiveness of the newly proposed modules of our framework. In the future, we plan to evaluate our framework on additional datasets with more severe diseases and more annotated layers.

4.0.1 Acknowledgments.

This work was supported by the Fundamental Research Funds for the Central Universities (Grant No. 20720190012), Key-Area Research and Development Program of Guangdong Province, China (No. 2018B010111001), and Scientific and Technical Innovation 2030 - “New Generation Artificial Intelligence” Project (No. 2020AAA0104100).

References

  • [1] M. D. Abràmoff, M. K. Garvin, and M. Sonka (2010) Retinal imaging and image analysis. IEEE Rev. Biomed. Eng. 3, pp. 169–208.
  • [2] B. J. Antony, M. D. Abràmoff, M. M. Harper, et al. (2013) A combined machine-learning and graph-based framework for the segmentation of retinal surfaces in SD-OCT volumes. Biomed. Opt. Express 4 (12), pp. 2712–2728.
  • [3] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca (2019) VoxelMorph: a learning framework for deformable medical image registration. IEEE Trans. Med. Imag. 38 (8), pp. 1788–1800.
  • [4] J. C. Bavinger, G. E. Dunbar, M. S. Stem, et al. (2016) The effects of diabetic retinopathy and pan-retinal photocoagulation on photoreceptor cell function as assessed by dark adaptometry. Invest. Ophthalmol. Vis. Sci. 57 (1), pp. 208–217.
  • [5] A. Carass, A. Lang, M. Hauser, P. A. Calabresi, H. S. Ying, and J. L. Prince (2014) Multiple-object geometric deformable model for segmentation of macular OCT. Biomed. Opt. Express 5 (4), pp. 1062–1074.
  • [6] Z. Chen, H. Wei, H. Shen, et al. (2018) Intraretinal layer segmentation and parameter measurement in optic nerve head region through energy function of spatial-gradient continuity constraint. J. Cent. South Univ. 25 (8), pp. 1938–1947.
  • [7] J. Cheng, J. A. Lee, G. Xu, Y. Quan, E. P. Ong, and D. W. Kee Wong (2016) Motion correction in optical coherence tomography for multi-modality retinal image registration.
  • [8] W. Drexler and J. G. Fujimoto (2008) State-of-the-art retinal optical coherence tomography. Prog. Retin. Eye Res. 27 (1), pp. 45–88.
  • [9] M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka (2009) Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE Trans. Med. Imag. 28 (9), pp. 1436–1447.
  • [10] Y. He, A. Carass, Y. Liu, et al. (2019) Fully convolutional boundary regression for retina OCT segmentation. In Int. Conf. MICCAI, pp. 120–128.
  • [11] Y. He, A. Carass, Y. Liu, et al. (2021) Structured layer surface segmentation for retina OCT using fully convolutional regression networks. Med. Image Anal. 68, pp. 101856.
  • [12] D. Huang, E. A. Swanson, C. P. Lin, et al. (1991) Optical coherence tomography. Science 254 (5035), pp. 1178–1181.
  • [13] V. Kansal, J. J. Armstrong, R. Pintwala, and C. Hutnik (2018) Optical coherence tomography for glaucoma diagnosis: an evidence based meta-analysis. PLoS ONE 13 (1), pp. e0190621.
  • [14] P. A. Keane, S. Liakopoulos, R. V. Jivrajka, et al. (2009) Evaluation of optical coherence tomography retinal thickness parameters for use in clinical trials for neovascular age-related macular degeneration. Invest. Ophthalmol. Vis. Sci. 50 (7), pp. 3378–3385.
  • [15] J. Ker, L. Wang, J. Rao, and T. Lim (2017) Deep learning applications in medical image analysis. IEEE Access 6, pp. 9375–9389.
  • [16] B. Knoll, J. Simonett, N. J. Volpe, et al. (2016) Retinal nerve fiber layer thickness in amnestic mild cognitive impairment: case-control study and meta-analysis. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring 4, pp. 85–93.
  • [17] A. Lang, A. Carass, M. Hauser, et al. (2013) Retinal layer segmentation of macular OCT images using boundary classification. Biomed. Opt. Express 4 (7), pp. 1133–1152.
  • [18] H. Li and Y. Fan (2017) Non-rigid image registration using fully convolutional networks with deep self-supervision. arXiv preprint arXiv:1709.00799.
  • [19] J. Novosel, K. A. Vermeer, J. H. De Jong, Z. Wang, and L. J. Van Vliet (2017) Joint segmentation of retinal layers and focal lesions in 3-D OCT data of topologically disrupted retinas. IEEE Trans. Med. Imag. 36 (6), pp. 1276–1286.
  • [20] E. A. Pnevmatikakis and A. Giovannucci (2017) NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, pp. 83–94.
  • [21] S. Farsiu, S. J. Chiu, R. V. O’Connell, et al. (2014) Quantitative classification of eyes with and without intermediate age-related macular degeneration using optical coherence tomography. Ophthalmology 121 (1), pp. 162–172.
  • [22] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In Int. Conf. MICCAI, pp. 234–241.
  • [23] A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab (2017) ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed. Opt. Express 8 (8), pp. 3627–3642.
  • [24] S. Saidha, S. B. Syc, M. A. Ibrahim, et al. (2011) Primary retinal pathology in multiple sclerosis as detected by optical coherence tomography. Brain 134 (2), pp. 518–533.
  • [25] A. Shah, M. D. Abràmoff, and X. Wu (2019) Optimal surface segmentation with convex priors in irregularly sampled space. Med. Image Anal. 54, pp. 63–75.
  • [26] A. Shah, L. Zhou, M. D. Abràmoff, and X. Wu (2018) Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images. Biomed. Opt. Express 9 (9), pp. 4509–4526.
  • [27] S. Wang, S. Cao, Z. Chai, et al. (2020) Conquering data variations in resolution: a slice-aware multi-branch decoder network. IEEE Trans. Med. Imag. 39 (12), pp. 4174–4185.
  • [28] D. Wei, S. Weinstein, M. Hsieh, L. Pantalone, and D. Kontos (2018) Three-dimensional whole breast segmentation in sagittal and axial breast MRI with dense depth field modeling and localized self-adaptation for chest-wall line detection. IEEE Trans. Biomed. Eng. 66 (6), pp. 1567–1579.
  • [29] A. Yazdanpanah, G. Hamarneh, B. Smith, and M. Sarunic (2009) Intra-retinal layer segmentation in optical coherence tomography using an active contour approach. In Int. Conf. MICCAI, pp. 649–656.
  • [30] J. Zhang, Y. Xie, P. Zhang, H. Chen, Y. Xia, and C. Shen (2019) Light-weight hybrid convolutional network for liver tumor segmentation. In IJCAI, pp. 4271–4277.
  • [31] Z. Zhou, V. Sodha, M. M. R. Siddiquee, et al. (2019) Models Genesis: generic autodidactic models for 3D medical image analysis. In Int. Conf. MICCAI, pp. 384–393.