Deep Slice Interpolation via Marginal Super-Resolution, Fusion and Refinement

08/15/2019 ∙ by Cheng Peng, et al.

We propose a marginal super-resolution (MSR) approach based on 2D convolutional neural networks (CNNs) for interpolating an anisotropic brain magnetic resonance scan along the highly under-sampled direction, which is assumed to be axial without loss of generality. Previous methods for slice interpolation only consider data from pairs of adjacent 2D slices; the possibility of fusing information from the direction orthogonal to the 2D slices remains unexplored. Our approach performs MSR in both sagittal and coronal directions, which provides an initial estimate for slice interpolation. The interpolated slices are then fused and refined in the axial direction for improved consistency. Since MSR consists of only 2D operations, it is more feasible in terms of GPU memory consumption and requires fewer training samples compared to 3D CNNs. Our experiments demonstrate that the proposed method outperforms traditional linear interpolation and baseline 2D/3D CNN-based approaches. We conclude by showcasing the method's practical utility in estimating brain volumes from under-sampled brain MR scans through semantic segmentation.






1 Introduction

Magnetic resonance imaging (MRI) has been one of the prevailing gold standards for diagnostic purposes. It is not only non-invasive, but also better at targeting different human tissues with specific contrasts that reveal the underlying anatomy. The main disadvantage of MRI compared to other medical imaging modalities (e.g. computed tomography, or CT) is its long acquisition time, which is governed by the time needed for the frequency signals emitted by excited atoms to be sampled by the machine. There has been a long history of studies on accelerating the MRI sampling process [19, 15, 16, 21] by undersampling in the 2D k-space during acquisition; however, only a relatively small number of studies [6, 7, 12, 18] have focused on interpolating between the sampled slices.

In practice, most MR volumes are taken anisotropically with a high resolution within slices and a sparse resolution between slices. For example, Fig. 1 shows a brain MR scan whose axial direction is sparsely sampled. As a result, image quality suffers when viewing from coronal and sagittal directions.

Figure 1: The axial, coronal, and sagittal views of an anisotropic MR volume are fitted to isotropic resolution through (Left) linear interpolation and (Right) our proposed slice-interpolation method.

It is desirable to have a consistent resolution across all dimensions, both for visualization and for medical analysis tasks such as brain volume estimation. Traditionally, slice interpolation has been addressed by two families of methods: intensity-based and deformation-based methods. Linear and cubic spline interpolation are classic examples of intensity-based methods that perform the interpolation directly on the intensities of the adjacent slices. Deformation-based methods estimate deformation fields between adjacent slices, and then interpolate in-between pixels based on the estimated fields. However, these methods require that adjacent MR slices contain similar anatomical structures. That is, the structural change must be sufficiently small so that a dense pixel correspondence can be established between adjacent slices. When the anatomical variation between slices is significant, a more sophisticated modeling approach is needed.
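As a concrete baseline, the intensity-based linear interpolation mentioned above can be sketched as follows (a minimal NumPy illustration, not the implementation evaluated in the paper; axes are ordered x, y, z with z as the sparse axial axis):

```python
import numpy as np

def linear_slice_interpolation(volume, factor):
    """Linearly interpolate the missing axial slices of an anisotropic
    volume: each missing slice is a weighted average of the two
    observed slices that bracket it."""
    x, y, z = volume.shape
    out = np.empty((x, y, (z - 1) * factor + 1), dtype=float)
    for k in range(z - 1):
        a, b = volume[:, :, k], volume[:, :, k + 1]
        for j in range(factor):
            t = j / factor  # fractional position between slices k and k+1
            out[:, :, k * factor + j] = (1 - t) * a + t * b
    out[:, :, -1] = volume[:, :, -1]  # last observed slice
    return out
```

By construction this recovers any volume whose intensity varies linearly along z, but it blurs or misplaces structures whenever the anatomy changes between observed slices.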

Recently, deep convolutional neural networks (DCNNs) have been outperforming traditional approaches on medical image analysis due to their ability to model complex variations within data [21, 20, 14]. For slice interpolation, DCNNs can be applied to learn a mapping from an anisotropic MR volume to an isotropic one. However, directly addressing the task in 3D is challenging due to the high memory consumption of 3D networks. In this work, we break down the task of 3D slice interpolation into a sequence of 2D problems to produce anatomically consistent slice interpolations while remaining memory-feasible. Specifically, we propose a novel marginal super-resolution scheme that super-resolves isotropic views in the sagittal and coronal directions with a 2D CNN. The interpolation along the axial direction can then be estimated by fusing the isotropic sagittal and coronal views. Finally, the interpolated slices are processed to recover more details via refinement.

Our main contributions can be summarized as follows:

  • We propose a novel marginal super-resolution approach to break down the 3D slice interpolation problem into several 2D problems, which is more feasible in terms of GPU memory consumption and the amount of data available for training.

  • We propose a two-view fusion approach to incorporate the 3D anatomical structure. The interpolated slices after fusion achieve high structural consistency. The final refinement further recovers fine details.

  • We perform extensive evaluations on a large-scale MR dataset, and show that the proposed method outperforms all the competing CNN models, including 3D CNNs, in terms of quantitative measurement, visual quality, and brain matter segmentation.

2 Related Work

2.0.1 Traditional slice interpolation methods.

Early work on interpolating volumetric medical data dates back to 1992, when Goshtasby et al. [6] proposed to leverage the small and gradual anatomic differences between consecutive slices, and find correspondence between pixels by searching through small neighborhoods. A slew of methods were proposed in the subsequent years, focusing on finding more accurate deformation fields, including shape-based methods [7], morphology-based methods [12], registration-based methods [18], etc. Linear interpolation can be regarded as a special example, which essentially assumes no deformation between slices.

An important assumption made in the above-mentioned methods is that adjacent slices contain similar anatomical structures, i.e., the changes in the structures have to be sufficiently small such that a dense correspondence can be found between two slices. This assumption largely limits the applicability of slice interpolation methods especially when slices are sparsely sampled. Furthermore, these methods did not utilize the information outside the two adjacent slices.

2.0.2 Learning based super-resolution methods.

Slice interpolation can be viewed as a special case of 3D super-resolution. Here we review the literature on 2D Single Image Super-Resolution (SISR), especially approaches based on CNNs. Dong et al. [3] first proposed SRCNN, which learns a mapping that optimally transforms low-resolution (LR) images to high-resolution (HR) images. Many subsequent studies explored strategies to improve SISR, such as deeper architectures and weight sharing [9, 22, 10]. However, these methods require bilinear upsampling as a pre-processing step, which drastically increases computational complexity [4]. To address this issue, Dong et al. [4] proposed to apply deconvolution layers so that LR images can be directly upsampled to finer resolution. Furthermore, many studies have shown that residual learning provides better performance in SISR [13, 11, 23]. Specifically, Zhang et al. [23] incorporated both residual learning and dense blocks [8], and introduced Residual Dense Blocks (RDB) that allow all layers' features to be seen directly by subsequent layers, achieving state-of-the-art performance.
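The dense-connectivity-plus-residual pattern behind an RDB can be illustrated schematically (a toy NumPy sketch of the connectivity only, with the actual 2D convolutions abstracted into arbitrary layer functions; not the real RDN layers):

```python
import numpy as np

def dense_block(x, layer_fns):
    """Dense connectivity: each layer receives the concatenation of the
    input and all previous layers' outputs (channels on the first axis)."""
    feats = [x]
    for fn in layer_fns:
        feats.append(fn(np.concatenate(feats, axis=0)))
    return np.concatenate(feats, axis=0)

def residual_dense_block(x, layer_fns, fuse_fn):
    """RDB pattern: local feature fusion (fuse_fn maps the concatenated
    features back to x's shape), followed by a residual addition."""
    return x + fuse_fn(dense_block(x, layer_fns))
```

The residual path lets the block learn only a correction on top of its input, while the dense path exposes all intermediate features to the fusion step.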

Generative Adversarial Networks (GAN) [5] have also been incorporated into SISR to improve the visual quality of the generated images. Ledig et al. pointed out that training SISR networks solely with pixel-wise ℓ1 or ℓ2 losses intrinsically leads to blurry estimates, and proposed SRGAN [11], which generates much sharper and more realistic images than other approaches, despite achieving a lower peak signal-to-noise ratio.

Though available computation capacity has been increasing, 3D CNNs are still limited by memory capacity due to the considerable increase in the size of network parameters and input data. A common compromise is to extract small patches from the 3D volume to reduce the input size [2]; however, this also limits the effective receptive field of the network. In practice, 3D CNNs are further limited by the amount of training data available to ensure generalization.

3 Problem formulation and baseline CNN approaches

Let V denote an isotropic MR volume of size X × Y × Z. By convention, we refer to the x axis as the “sagittal” axis, the y axis as the “coronal” axis, and the z axis as the “axial” axis. Accordingly, there are three types of slices:

  • the sagittal slice for a given x: V(x, ·, ·);

  • the coronal slice for a given y: V(·, y, ·);

  • the axial slice for a given z: V(·, ·, z).

We also define a slab of 2k+1 slices, say along the x axis, as

V(x ± k, ·, ·) = { V(x−k, ·, ·), …, V(x+k, ·, ·) };

slabs along the y and z axes are defined similarly. Without loss of generality, in this work we consider slice interpolation along the axial axis. From V, the corresponding anisotropic MR volume V_s is defined as

V_s(·, ·, m) = V(·, ·, s·m),  m = 0, 1, …, ⌊(Z−1)/s⌋,

where s is the sparsity factor. The goal of slice interpolation is to find a transformation T that optimally transforms V_s back to V.
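The slice-indexing conventions and the axial subsampling above can be illustrated directly (a NumPy sketch with a hypothetical random volume; the axis names follow the definitions in this section):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((16, 16, 16))  # isotropic volume V, axes (x, y, z)

sagittal = lambda V, x: V[x, :, :]   # sagittal slice at position x
coronal  = lambda V, y: V[:, y, :]   # coronal slice at position y
axial    = lambda V, z: V[:, :, z]   # axial slice at position z

def subsample_axial(V, s):
    """Anisotropic volume V_s: keep only every s-th axial slice."""
    return V[:, :, ::s]

V4 = subsample_axial(V, 4)  # sparsity factor s = 4
```

Note that after subsampling, the sagittal and coronal slices of V_s become anisotropic 2D images, which is exactly what the marginal super-resolution stage exploits.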

There are two possible baseline realizations of using CNNs:

  • 2D CNN. More in line with the traditional methods, a 2D CNN takes two adjacent slices V_s(·, ·, m) and V_s(·, ·, m+1) as inputs, and directly estimates the s−1 in-between missing slices. One major drawback of this approach is that a simple 2D CNN has limited capability to model the variations in highly anisotropic volumes.

  • 3D CNN. A 3D CNN is learned as a mapping from the sparsely sampled volume V_s to the fully sampled volume V. This straightforward approach, however, suffers from high training-memory demands and insufficient training data.

Below, we present our proposed algorithm that retains the advantages of the baseline CNN models discussed above while mitigating their disadvantages.

4 The Proposed Algorithm

We propose to break down the 3D slice interpolation problem into a series of 2D tasks, and interpolate the contextual information from all three anatomical views to achieve structurally consistent reconstruction and improved memory efficiency. The two stages are as follows:

  • Marginal super-resolution (MSR), where we produce high-quality initial estimates of the interpolated slices using context from the sagittal and coronal views.

  • Two-view Fusion and Refinement (TFR), where we fuse the two estimates and further refine them with information from the axial axis.

Figure 2: Marginal Super-Resolution Pipeline.

4.1 Marginal Super-Resolution

Fig. 2 demonstrates the pipeline of MSR. Given V_s, we view it marginally from the sagittal axis as a sequence of 2D sagittal slices V_s(x, ·, ·); the same volume can also be treated as a sequence of coronal slices V_s(·, y, ·). We observe that super-resolving these sagittal and coronal slices is equivalent to applying a sequence of 2D super-resolutions across the x axis and the y axis, respectively. Therefore, we apply a residual dense network (RDN) [23] to upsample the two sequences of slices as follows:

V̂(x, ·, ·) = RDN(V_s(x ± k, ·, ·)),  V̂(·, y, ·) = RDN(V_s(·, y ± k, ·)).

Notice that instead of super-resolving 2D slices independently, we propose to take a slab of 2k+1 slices as input and estimate a single super-resolved output slice; using a larger k allows more context to be modelled. The MSR process is repeated for all x and y. Finally, the super-resolved slices are reformatted as sagittally and coronally super-resolved volumes, V̂^sag and V̂^cor, respectively. We apply the following ℓ1 loss to train the RDN:

L_MSR = ‖V̂^sag(x, ·, ·) − V(x, ·, ·)‖₁ + ‖V̂^cor(·, y, ·) − V(·, y, ·)‖₁,

where V(x, ·, ·) and V(·, y, ·) are the corresponding slices in the isotropic MR volume.

From the axial perspective, the two super-resolved volumes provide line-by-line estimates of the missing axial slices. However, since no constraint is enforced on the estimated axial slices, inconsistent interpolations lead to noticeable artifacts (see Section 5.4). We resolve this problem in the second, TFR stage of the proposed pipeline.
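The marginal super-resolution loop can be sketched as follows (NumPy, with the learned 2D RDN replaced by simple nearest-neighbor repetition as a stand-in, and single slices instead of slabs for brevity):

```python
import numpy as np

def sr_2d(slice_2d, s):
    # Stand-in for the 2D super-resolution network: the real pipeline
    # applies a trained RDN to a slab of slices; here we simply repeat
    # pixels along the sparse (last) axis.
    return np.repeat(slice_2d, s, axis=-1)

def marginal_super_resolution(V_s, s, view):
    """Super-resolve an anisotropic volume (axes x, y, sparse z) slice
    by slice from the sagittal (iterate over x) or coronal (iterate
    over y) view; both views are upsampled along the axial axis."""
    axis = 0 if view == "sagittal" else 1
    slices = [sr_2d(np.take(V_s, i, axis=axis), s)
              for i in range(V_s.shape[axis])]
    return np.stack(slices, axis=axis)

V_s = np.arange(8 * 8 * 2, dtype=float).reshape(8, 8, 2)
V_sag = marginal_super_resolution(V_s, 4, "sagittal")
V_cor = marginal_super_resolution(V_s, 4, "coronal")
```

Both outputs have isotropic shape, but each view hallucinates the missing axial lines independently, which is why the fusion stage is needed.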

4.2 Two-View Fusion and Refinement

Figure 3: Two-view Fusion Pipeline.

The TFR stage complements MSR by learning the structural variations along the axial direction, further improving the quality of slice interpolation.

As shown in Fig. 3, we first resample the sagittally and coronally super-resolved volumes V̂^sag and V̂^cor along the axial direction to obtain the axial slice estimates V̂^sag(·, ·, z) and V̂^cor(·, ·, z), respectively. A fusion network takes the two slices as inputs and combines the information from the two views. The objective function for training the fusion network is:

L_fuse = ‖F(V̂^sag(·, ·, z), V̂^cor(·, ·, z)) − V(·, ·, z)‖₁,

where F(·, ·) denotes the output of the fusion network and V(·, ·, z) is the corresponding axial slice in the isotropic MR volume. After training, the fusion network is applied to all the interpolated slices, yielding a fused MR volume V̂^fused.
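The fusion step can be sketched as follows (NumPy, with the learned fusion network replaced by simple averaging as an illustrative default; the actual network learns a far richer, content-dependent combination):

```python
import numpy as np

def fuse_axial_views(V_sag_sr, V_cor_sr, fusion_fn=None):
    """Fuse the axial slices of the two marginally super-resolved
    volumes into a single volume, slice by slice."""
    if fusion_fn is None:
        fusion_fn = lambda a, b: 0.5 * (a + b)  # naive stand-in
    fused = np.empty_like(V_sag_sr)
    for z in range(V_sag_sr.shape[2]):
        fused[:, :, z] = fusion_fn(V_sag_sr[:, :, z], V_cor_sr[:, :, z])
    return fused
```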

Figure 4: Refinement Pipeline.

After fusion, the interpolated slices already have visually pleasing quality. Finally, to improve between-slice consistency along the axial axis, a refinement network R takes a slab of consecutive fused slices as input and generates a consistent output slab. The slab size is selected so that the refinement network always sees information from one or two observed slices. The pipeline is illustrated in Fig. 4. The loss function is given by:

L_refine = ‖R(V̂^fused(·, ·, z ± k)) − V(·, ·, z ± k)‖₁,

where V(·, ·, z ± k) is the corresponding slab in the isotropic MR volume.
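The slab-wise refinement pass can be sketched as a sliding window over the axial axis (a simplified NumPy scheme in which only the center slice of each refined slab is kept; the learned refinement network is abstracted as `refine_fn`):

```python
import numpy as np

def refine_volume(V_fused, slab_size, refine_fn):
    """Apply a refinement function to overlapping axial slabs; each
    slab is refined jointly and the center slice is retained."""
    half = slab_size // 2
    out = V_fused.copy()
    for z in range(half, V_fused.shape[2] - half):
        slab = V_fused[:, :, z - half : z + half + 1]
        out[:, :, z] = refine_fn(slab)[:, :, half]
    return out
```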
4.3 Comparison with baseline CNN approaches

A 2D CNN estimates missing slices solely from adjacent MR slices. In contrast, the proposed MSR and TFR take into account the full context from the sagittal, coronal, and axial views, thus providing strong estimates of the in-between slices. A 3D CNN directly maps a sparsely sampled MR volume to a fully sampled one. Due to memory limitations, a volume often needs to be divided into small patches during training, which limits the effective receptive field of 3D CNNs. In the proposed method, interpolation in 3D space is treated as a sequence of 2D operations, which ensures that the networks can be trained without relying on patches, allowing the full contextual information to be captured. Furthermore, there are sufficient samples to train 2D CNNs, which mitigates the overfitting issue that plagues 3D CNNs.

5 Experiments

5.1 Settings

5.1.1 Implementation Details

We implement the proposed framework using PyTorch. The RDN [23] architecture with two RDBs is used as the building unit for our networks. For the Fusion, Refinement, and baseline 2D CNN models, where the inputs and outputs have the same image size, we replace the upsampling module in RDN with a single convolutional layer. The input to the MSR network is a slab of slices, as described in Section 4.1. Note that due to memory constraints, the 3D CNN only uses one RDB. We train the models with the Adam optimizer, with a momentum of 0.5 and a learning rate of 0.0001, until they reach convergence.

5.1.2 Dataset

We employ 120 T1 MR brain scans from the publicly available Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. The MR scans are isotropically sampled at 1 mm × 1 mm × 1 mm and zero-padded to 256 × 256 × 256 voxels, resulting in 30,720 slices in each of the sagittal, coronal, and axial directions. We further down-sample the isotropic volumes along the axial axis by factors of 4 and 8, yielding anisotropic volumes of sizes 256 × 256 × 64 and 256 × 256 × 32, respectively. The data is split into training/validation/testing sets with 95/5/20 samples. Note that during test time, we only select slices that contain mostly brain tissue; the number of test samples for each sparsity is presented in Table 1.

5.1.3 Evaluation metrics

We compare different slice interpolation approaches using two types of quantitative metrics. First, we use Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) to measure low-level image quality. Second, we evaluate the quality of the interpolated slices through gray/white-matter segmentation. The segmentation network has a U-Net architecture, which is one of the winning models in the MRBrainS challenge [1], and is trained on the OASIS dataset [17]. The Dice Coefficient (DICE) and Hausdorff Distance (HD) between the segmentation maps of the ground-truth and generated slices are calculated; to reduce the effect of outliers, HD is computed on the 90th-percentile displacement. Due to the memory limitation of the 3D CNN, we can only super-resolve a limited region during evaluation; for fair comparison, the evaluation metrics are calculated over the same region across all methods.
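For reference, PSNR — the first of the reported metrics — is computed from the mean squared error between a ground-truth and an interpolated slice (a standard NumPy sketch, not the paper's evaluation code):

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB; higher means the interpolated
    slice is closer to the ground truth."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```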

5.2 Quantitative Evaluations

In this section, we evaluate the performance of our method and the baseline approaches. Quantitative comparisons are presented in Table 1. All three CNN-based methods achieve higher PSNR and SSIM than the widely used linear interpolation. The 3D CNN slightly outperforms the 2D CNN at 4x sparsity, but performs worse at 8x sparsity. Among the CNN methods, ours consistently outperforms both the 2D CNN and 3D CNN baselines.

The performance gain in accurately segmenting gray and white matter is large when moving from linear interpolation to the baseline CNN-based methods. However, at 8x sparsity, the HD scores of linear interpolation are comparable with those of the 2D and 3D CNNs, while our method outperforms these approaches by at least 10%. This demonstrates the robustness of our method even at very high sparsity.

| Sparsity | Method | PSNR (dB) | SSIM | DICE (GM/WM) | HD, 90th pct. (GM/WM) |
|---|---|---|---|---|---|
| 4x | LI | 26.39 | 0.8317 | 0.7716/0.7296 | 3.607/7.965 |
| 4x | 2D CNN | 31.24 | 0.9313 | 0.8813/0.8334 | 3.176/12.36 |
| 4x | 3D CNN | 31.34 | 0.9292 | 0.8536/0.8265 | 2.898/7.373 |
| 4x | Ours | 32.22 | 0.9441 | 0.9021/0.8593 | 2.494/6.240 |
| 8x | LI | 23.45 | 0.7165 | 0.6611/0.6105 | 4.487/10.59 |
| 8x | 2D CNN | 27.88 | 0.8444 | 0.7783/0.7425 | 4.322/12.84 |
| 8x | 3D CNN | 27.38 | 0.8390 | 0.7684/0.7468 | 4.583/9.017 |
| 8x | Ours | 28.87 | 0.8808 | 0.8189/0.7828 | 3.960/8.127 |

Table 1: Quantitative evaluations of different slice interpolation approaches (LI = linear interpolation). For the DICE and HD metrics, we report gray matter (GM)/white matter (WM) segmentation results.
Figure 5: Visual comparisons of slice interpolation approaches, with PSNR/SSIM values listed per panel. For 4x sparsity, the second of three interpolated MR slices is presented; for 8x sparsity, the third of seven interpolated slices is presented.

5.3 Visual Comparisons

In Fig. 5, we present the observed slices along with the interpolated slices produced by the different methods. Specifically, we show the second of three interpolated MR slices for 4x sparsity, and the third of seven interpolated slices for 8x sparsity. We highlight the regions where the anatomical structures change significantly compared to the observed slices. We observe that although the 2D CNN has comparable performance in terms of PSNR and SSIM, it tends to produce false anatomical structures in the zoomed regions. The 3D CNN resolves more accurate details; however, the improvement is quite limited, which we attribute to the fact that a 3D CNN requires more training MR volumes to generalize and has a smaller receptive field due to patch-based training. Our method benefits from the large receptive field of 2D CNNs and the two-view fusion, which not only produces sharper images, but also correctly estimates brain anatomy. Such sharp and accurate estimation is crucial in clinical applications such as diagnosing Alzheimer’s Disease through brain volume estimation.

Figure 6: Visual comparison of gray matter (green)/white matter (blue) segmentation across the different methods, with the respective GM/WM DICE scores listed under the images.

In Fig. 6, we demonstrate the advantage of the proposed method in brain matter segmentation. It is clear that although the 2D and 3D CNNs generate visually plausible interpolations, as presented in Fig. 5, brain matter is easily misclassified due to incorrect anatomical structures and blurred details.

5.4 Ablation study

In this section, we evaluate the effectiveness of each proposed component at 4x sparsity. The following settings are considered:

  • MSR (sagittal): slice interpolation based on the sagittal-view MSR only, with varying numbers of input slices in the slab.

  • MSR (coronal): slice interpolation based on the coronal-view MSR only, with varying numbers of input slices in the slab.

  • Fused: slice interpolation with the fusion network, whose inputs are the sagittal- and coronal-view MSR outputs.

  • Refined: the proposed full pipeline.

| Stage | PSNR (dB) | SSIM |
|---|---|---|
| baseline 2D CNN | 31.24 | 0.9313 |
| baseline 3D CNN | 31.34 | 0.9292 |
| MSR | 30.28 | 0.9129 |
| MSR | 30.56 | 0.9178 |
| MSR | 31.43 | 0.9314 |
| MSR | 31.61 | 0.9339 |
| Fused | 32.02 | 0.9413 |
| Refined | 32.22 | 0.9441 |

Table 2: Quantitative ablation study; the four MSR rows correspond to the sagittal- and coronal-view variants with different numbers of input slices. Baseline numbers are included for comparison.
Figure 7: Visual comparison of the proposed components: ground truth (full and zoomed), the sagittal- and coronal-view MSR outputs, and the Fused and Refined results.

From Table 2, it is clear that each proposed component improves the quality of slice interpolation. Notice that even without fusion and refinement, the axial slices interpolated by the sagittal- and coronal-view MSR are already better than those of the baseline 2D/3D CNNs.

Visual comparisons are shown in Fig. 7, where we select a challenging slice with abundant anatomical details. From Fig. 7, it is clear that marginally super-resolving axial slices from only the coronal or only the sagittal view leads to noticeable directional (horizontal or vertical) artifacts. Furthermore, some small details are better resolved from one view, while others are better resolved from the other. The fusion network combines the features from the two views, which effectively reduces inconsistency. With the additional axial information, the fused slice is then further improved by the refinement network.

In addition to the pixel-wise loss, we also experimented with a GAN loss at the refinement stage. However, we found that the GAN tends to generate fake anatomical details, which is undesirable in medical applications.

6 Conclusion

In this work, we proposed a multi-stage 2D CNN framework for deep slice interpolation. This framework allows us to recover missing slices with high quality, even when the observed slices are very sparsely sampled. We evaluated our approach on a large-scale ADNI dataset, demonstrating that our method outperforms 2D/3D CNN baselines both visually and quantitatively. Furthermore, we have shown that the MR slices estimated by the proposed method yield superior segmentation accuracy. In the future, we plan to investigate the potential application of the proposed framework to real screening MRIs, which often have very low slice density.


  • [1] Z. Akkus, A. Galimzianova, A. Hoogi, D. L. Rubin, and B. J. Erickson (2017) Deep learning for brain MRI segmentation: state of the art and future directions. J. Digital Imaging 30 (4), pp. 449–459. External Links: Link, Document Cited by: §5.1.3.
  • [2] Y. Chen, F. Shi, A. G. Christodoulou, Z. Zhou, Y. Xie, and D. Li (2018) Efficient and accurate MRI super-resolution using a generative adversarial network and 3d multi-level densely connected network. CoRR abs/1803.01417. External Links: Link, 1803.01417 Cited by: §2.0.2.
  • [3] C. Dong, C. C. Loy, K. He, and X. Tang (2015) Image super-resolution using deep convolutional networks. CoRR abs/1501.00092. External Links: Link, 1501.00092 Cited by: §2.0.2.
  • [4] C. Dong, C. C. Loy, and X. Tang (2016) Accelerating the super-resolution convolutional neural network. CoRR abs/1608.00367. External Links: Link, 1608.00367 Cited by: §2.0.2.
  • [5] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio (2014) Generative adversarial networks. CoRR abs/1406.2661. External Links: Link, 1406.2661 Cited by: §2.0.2.
  • [6] A. A. Goshtasby, D. A. Turner, and L. V. Ackerman (1992) Matching of tomographic slices for interpolation. IEEE Trans. Med. Imaging 11 (4), pp. 507–516. External Links: Link, Document Cited by: §1, §2.0.1.
  • [7] G. J. Grevera and J. K. Udupa (1996) Shape-based interpolation of multidimensional grey-level images. IEEE Trans. Med. Imaging 15 (6), pp. 881–892. External Links: Link, Document Cited by: §1, §2.0.1.
  • [8] G. Huang, Z. Liu, and K. Q. Weinberger (2016) Densely connected convolutional networks. CoRR abs/1608.06993. External Links: Link, 1608.06993 Cited by: §2.0.2.
  • [9] J. Kim, J. K. Lee, and K. M. Lee (2015) Accurate image super-resolution using very deep convolutional networks. CoRR abs/1511.04587. External Links: Link, 1511.04587 Cited by: §2.0.2.
  • [10] J. Kim, J. K. Lee, and K. M. Lee (2016) Deeply-recursive convolutional network for image super-resolution. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 1637–1645. External Links: Link, Document Cited by: §2.0.2.
  • [11] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi (2017) Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 105–114. External Links: Link, Document Cited by: §2.0.2, §2.0.2.
  • [12] T. Lee and W. Wang (2000) Morphology-based three-dimensional interpolation. IEEE Trans. Med. Imaging 19 (7), pp. 711–721. External Links: Link, Document Cited by: §1, §2.0.1.
  • [13] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee (2017) Enhanced deep residual networks for single image super-resolution. CoRR abs/1707.02921. External Links: Link, 1707.02921 Cited by: §2.0.2.
  • [14] S. Liu, D. Xu, S. K. Zhou, T. Mertelmeier, J. Wicklein, A. K. Jerebko, S. Grbic, O. Pauly, W. Cai, and D. Comaniciu (2017) 3D anisotropic hybrid network: transferring convolutional features from 2d images to 3d anisotropic volumes. CoRR abs/1711.08580. External Links: Link, 1711.08580 Cited by: §1.
  • [15] M. Lustig, D. Donoho, and J. M. Pauly (2007) Sparse mri: the application of compressed sensing for rapid mr imaging. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine 58 (6), pp. 1182–1195. Cited by: §1.
  • [16] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty (2008) An efficient algorithm for compressed MR imaging using total variation and wavelets. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), 24-26 June 2008, Anchorage, Alaska, USA, External Links: Link, Document Cited by: §1.
  • [17] D. S. Marcus, T. H. Wang, J. Parker, J. G. Csernansky, J. C. Morris, and R. L. Buckner (2007) Open access series of imaging studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. Cognitive Neuroscience 19 (9), pp. 1498–1507. External Links: Link, Document Cited by: §5.1.3.
  • [18] G. P. Penney, J. A. Schnabel, D. Rueckert, M. A. Viergever, and W. J. Niessen (2004) Registration-based interpolation. IEEE Trans. Med. Imaging 23 (7), pp. 922–926. External Links: Link, Document Cited by: §1, §2.0.1.
  • [19] S. Ravishankar and Y. Bresler (2011) MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans. Med. Imaging 30 (5), pp. 1028–1041. External Links: Link, Document Cited by: §1.
  • [20] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 - 18th International Conference Munich, Germany, October 5 - 9, 2015, Proceedings, Part III, pp. 234–241. External Links: Link, Document Cited by: §1.
  • [21] J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, and D. Rueckert (2018) A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 37 (2), pp. 491–503. External Links: Link, Document Cited by: §1, §1.
  • [22] K. Zhang, W. Zuo, S. Gu, and L. Zhang (2017) Learning deep CNN denoiser prior for image restoration. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 2808–2817. External Links: Link, Document Cited by: §2.0.2.
  • [23] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu (2018) Residual dense network for image super-resolution. CoRR abs/1802.08797. External Links: Link, 1802.08797 Cited by: §2.0.2, §4.1, §5.1.1.