Precise segmentation of medical images delineates different anatomical structures and abnormal tissues throughout the body, which can be utilized for clinical diagnosis, treatment planning, etc. With sufficient well-annotated data, deep convolutional neural networks (DCNNs) have achieved ground-breaking performance in such segmentation tasks [36, 33, 19]. However, obtaining 3D annotations of medical images for fully supervised training of DCNNs is labor-intensive and error-prone. Therefore, DCNN-based segmentation methods that require only one or a few annotated examples for training are highly desirable for the efficient development and deployment of practical solutions.
The lack of large-scale real-world annotations is a long-standing problem in medical image segmentation. Before the era of deep learning, a large body of literature on medical image segmentation focused on atlas-based segmentation [6, 35, 29, 30, 42, 7]. The key idea is that one or more labelled reference volumes (i.e., atlases) are non-rigidly registered [6, 35] to a target volume, or used to learn patch-wise correspondence relationships [42, 7] with the target volume, and the labels of the atlases are then propagated to the target volume as its segmentation. An intriguing characteristic of atlas-based methods is that they need only one or a few annotated volumes, naturally matching the recently rising concept of few-shot learning in deep learning. Classical state-of-the-art (SOTA) atlas-based segmentation methods [42, 7] rely on abundant texture features from local descriptors. Powered by convolutional operations repeatedly conducted in local regions, DCNNs are especially good at extracting multi-scale local semantic features. Therefore, it is intuitive and appealing to apply DCNNs to develop advanced atlas-based methods for medical image segmentation.
Recent studies [46, 28, 13, 44, 10, 45, 40] showed that the principle of classical atlas-based segmentation could be implemented with DCNNs, and decent performance was achieved. Among these works, two are specifically related to ours in regard to one-shot learning [46, 44]. In the first work, Zhao et al. proposed to learn a set of spatial and appearance transformations from the atlas to unlabelled images. By applying randomly combined spatial and appearance transformations to different unlabelled images, the model could synthesize a diverse set of labelled data, thereby providing extra labelled data for training the segmenter. One limitation of this work is that the segmentation accuracy was only indirectly boosted through data augmentation, at the cost of extra overhead for training the networks responsible for learning both kinds of transformations. The second work proposed a framework that jointly trained two networks for image registration and segmentation, assuming that these two highly related tasks would help each other. However, in many clinical scenarios, registration is not required when segmentation is demanded.
Different from these two works, we propose to directly imitate the classical atlas-based segmentation with a deep learning framework, which takes both the atlas and the target image as input, and predicts the correspondence map from the former to the latter. In this way, the segmentation label can be transferred from the atlas to the unlabelled target image with the predicted correspondence. For efficient learning of the correspondence, we enhance the backbone of VoxelMorph  via the addition of a discriminator network and adversarial training .
Learning correspondence plays an important role in many computer vision tasks, e.g., optical flow [31, 43], tracking [34, 23], patch matching, registration [3, 13, 28, 44], and so on. Among those inspiring works, forward-backward consistency is widely used in correspondence problems. Specifically, our framework learns not only the forward correspondences from the atlas to unlabelled images, but also the backward correspondences from the warped atlases back to the original atlas (see Fig. 1). The introduction of the reverse correspondences naturally completes the full cycle of bidirectional warping, enabling extra cycle-consistency-based supervision signals that make the learning process with only one annotation more robust while preserving anatomical consistency. In addition, we impose supervision in the three involved spaces, namely, the image space, the transformation space, and the label space, which is verified to be effective in our experiments.
In summary, we propose a label transfer network (LT-Net) to propagate the segmentation map from the atlas to unlabelled images by learning the reversible voxel-wise correspondences. Our main contributions are as follows:
To deal with the lack of annotations, our method addresses the one-shot segmentation problem by resorting to the idea of classical atlas-based segmentation. Powered by the representation ability of the DCNN, the proposed method boosts the performance of image matching in feature space, providing anatomically meaningful correspondence for the label transfer.
We extend correspondence learning to our one-shot segmentation framework in an end-to-end manner, where forward-backward cycle-consistency plays an important role in providing extra supervision in the image, transformation, and label spaces.
We demonstrate the superiority of our method over both deep learning-based one-shot segmentation methods [46, 3] and a classical multi-atlas segmentation method  in segmenting 28 anatomical structures from a brain magnetic resonance imaging (MRI) dataset. We also demonstrate the benefits of the cycle-consistency supervision in each individual space via ablation studies.
2 Related Work
Few-shot learning: Few-shot learning is based on the assumption that previously learned categories can be leveraged to help predict a new category when very few examples are available. Over the years, this concept has been applied to various branches of machine learning and computer vision, such as imitation learning [12, 16], object segmentation [38, 4, 32, 37], neural architecture search [47, 11], and so on. Most recently, Zhao et al. developed a one-shot medical image segmentation framework based on data augmentation using learned transformations from the reference atlas to unlabelled images. Specifically, both spatial and appearance transformation models were learned and then utilized to synthesize additional labelled samples for data augmentation. Our work also explores the one-shot setting for medical image segmentation to alleviate the burden of manual annotation. The main difference is that we directly target the segmentation in our network design, and incorporate forward-backward consistency in the framework to ensure abundant supervision for learning.
Atlas-based segmentation: Atlas-based segmentation is a classical topic in medical image analysis, evolving from single-atlas [35, 46, 28, 13, 44, 10] to sophisticated multi-atlas methods [26, 18, 42, 10, 45, 40, 22]. Recently, motivated by the success of DCNNs, researchers revitalized this classical concept with deep learning models. Using a single atlas, researchers have explored this methodology in three ways: learning transformations for data augmentation, combining with another registration task [28, 13, 44], and learning a deformation field to resample an initial binary mask. For multiple atlases, recent works attempted to implement key components of multi-atlas segmentation with DCNNs, e.g., atlas selection, label propagation, and label fusion [45, 9].
Our work approaches the one-shot medical image segmentation problem via single-atlas segmentation with a correspondence-learning generative adversarial network (GAN) framework. Falling into the single-atlas category offers two advantages. First, for complex organs such as the brain, annotating even a few extra samples in detail can be a considerable burden. Second, there is no need to consider the intricate processes involved in the multi-atlas approach, such as label fusion or atlas selection. Despite relying on a single atlas, our proposed framework outperforms an advanced multi-atlas method using up to five atlases (cf. Section 6.4).
Correspondence in computer vision: Correspondence plays an important role in computer vision. Actually, many fundamental vision problems, from optical flow [31, 43] and tracking [34, 23] to patch matching  and registration [3, 13, 28, 44], require some notion of visual correspondence . Optical flow and registration can be seen as pixel/voxel-level correspondence problems, whereas tracking and patch matching can be seen as patch-level correspondence problems. By treating atlas-based segmentation as a correspondence problem, we draw lessons from these research areas to guide the design of our framework.
Forward-backward consistency has been widely adopted in many computer vision problems, especially in correspondence learning. For example, forward-backward consistency has served as both an evaluation metric and a measure of uncertainty for tracking. Recent methods for unsupervised optical flow estimation [31, 43] employ forward-backward consistency to define an occluded region, which is excluded from training. Besides, forward-backward consistency is an important building block of CycleGAN, the most popular framework for image-to-image translation. To the best of our knowledge, our work is the first to employ cycle-consistency in one-shot atlas-based segmentation within a deep learning framework.
3 Basic Framework for Correspondence Learning
Preliminaries: We first recap the basic concept of atlas-based image segmentation, where the segmentation of an unseen subject is estimated via a registration process. Let $(I_a, S_a)$ be a labelled image pair, where $I_a \in \mathbb{R}^{D \times H \times W}$ is the atlas image, $S_a$ is its corresponding segmentation map, and $D$, $H$, and $W$ are the numbers of voxels along the coronal, sagittal, and axial directions, respectively. In practice, input images are defined within a 3D spatial domain $\Omega$, which also applies to the unlabelled image pool. In the following, we use $I_u$ to denote an unlabelled image for an uncluttered notation. Let $\phi_F$ (the subscript $F$, standing for forward, is used to differentiate from the backward operations introduced in Section 4) denote the forward correspondence map that warps $I_a$ towards $I_u$ during the registration process. Specifically, $\phi_F$ can be considered a spatially varying function defined over $\Omega$ that maps coordinates of $I_a$ to those of $I_u$ by displacement vectors. We use $\mathcal{W}(I_a, \phi_F)$ to denote the application of $\phi_F$ on $I_a$ (i.e., warping $I_a$ towards $I_u$ according to $\phi_F$):

$$\tilde{I}_u = \mathcal{W}(I_a, \phi_F),$$

where $\mathcal{W}(\cdot, \cdot)$ is the warp operation, and $\tilde{I}_u$ is the deformed atlas. The segmentation map can be warped in the same way as the atlas:

$$\tilde{S}_u = \mathcal{W}(S_a, \phi_F),$$

where $\tilde{S}_u$ is the synthetic segmentation of $I_u$. If $\phi_F$ registers $I_a$ and $I_u$ well, $\tilde{S}_u$ is expected to be an accurate segmentation of $I_u$. In this sense, we treat atlas-based segmentation as a label transfer process.
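The label transfer step above can be sketched in a few lines of NumPy. The function name and the nearest-neighbour sampling below are our own illustrative choices, not the paper's implementation; label maps use nearest-neighbour lookup so that label values are never blended by interpolation:

```python
import numpy as np

def warp_labels(seg, flow):
    """Backward-warp a 3D segmentation map `seg` (D, H, W) with a dense
    displacement field `flow` of shape (3, D, H, W): each output voxel p
    copies the label at p + flow(p), using nearest-neighbour sampling."""
    D, H, W = seg.shape
    grid = np.indices((D, H, W), dtype=np.float64)   # identity coordinates
    coords = np.rint(grid + flow).astype(np.int64)   # sampling positions
    for axis, size in enumerate((D, H, W)):          # clamp to the volume
        coords[axis] = np.clip(coords[axis], 0, size - 1)
    return seg[coords[0], coords[1], coords[2]]

# Zero displacement leaves the segmentation unchanged.
seg = np.zeros((4, 4, 4), dtype=np.int32)
seg[1:3, 1:3, 1:3] = 7
assert np.array_equal(warp_labels(seg, np.zeros((3, 4, 4, 4))), seg)
```

A constant displacement simply translates the labelled region, which is the degenerate case of the deformable warping used throughout the paper.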
Atlas-based segmentation with deep learning: To model the registration function with a DCNN, a generator network $G_F$ is often adopted to match the local spatial information between $I_a$ and $I_u$, and output $\phi_F$. For example, VoxelMorph used a U-Net as $G_F$, supervised by an image similarity loss $\mathcal{L}_{sim}$ and a transformation smoothness loss $\mathcal{L}_{smooth}$. To introduce robustness against global intensity variations in medical images, we use the locally normalized cross-correlation (CC) loss [44, 46] for $\mathcal{L}_{sim}$, which encourages coherence in local regions. $\mathcal{L}_{smooth}$ is formulated with the first-order derivatives of $\phi_F$:

$$\mathcal{L}_{smooth} = \sum_{p \in \Omega} \left\| \nabla \phi_F(p) \right\|_2^2,$$

where $p$ iterates over all spatial locations in $\Omega$, and we approximate $\nabla \phi_F(p)$ with spatial gradient differences between neighboring voxels along the $x$, $y$, and $z$ directions. Minimizing $\mathcal{L}_{sim}$ encourages $\tilde{I}_u$ to approximate $I_u$, whereas minimizing $\mathcal{L}_{smooth}$ regularizes $\phi_F$ to be smooth. In addition, smoothness regularization can be considered a strategy to alleviate overfitting while encoding the anatomical prior.
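The neighbouring-voxel approximation of the smoothness regularizer can be prototyped as follows. This is an illustrative sketch only: `smoothness_loss` is a hypothetical name, and the mean reduction over squared differences is our assumption.

```python
import numpy as np

def smoothness_loss(flow):
    """First-order smoothness penalty on a displacement field of shape
    (3, D, H, W): spatial gradients are approximated with differences
    between neighbouring voxels along each spatial axis, then squared
    and averaged."""
    loss = 0.0
    for axis in (1, 2, 3):              # the three spatial axes
        d = np.diff(flow, axis=axis)    # neighbour differences
        loss += float(np.mean(d ** 2))
    return loss

# A constant (pure-translation) field is perfectly smooth.
assert smoothness_loss(np.ones((3, 4, 4, 4))) == 0.0
```

Any spatially varying field pays a positive penalty, which is what discourages anatomically implausible, crumpled deformations.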
Auxiliary GAN loss: Besides the two basic losses used by VoxelMorph, we introduce a GAN into our basic framework to offer additional supervision. The GAN subnet in our framework comprises $G_F$ and a discriminator network $D$. A vanilla GAN would make the discriminator differentiate $\phi_F$ from the true underlying correspondence map. In practice, however, it is usually infeasible to obtain the true correspondence between a pair of clinical images. Instead, we make $D$ distinguish the synthetic image $\tilde{I}_u$ from $I_u$. In this sense, $\tilde{I}_u$ serves as a delegate of $\phi_F$, and $G_F$ is trained to generate $\phi_F$ that can be used to synthesize $\tilde{I}_u$ authentically enough to confuse $D$; meanwhile, $D$ becomes more skilled at flagging synthesized images. This delegation strategy provides indirect supervision to $G_F$ and $\phi_F$, and allows the networks to be trained end-to-end with a large number of unlabelled images. Consequently, the image adversarial loss is defined as

$$\mathcal{L}_{adv} = \mathbb{E}\left[\log D(I_u)\right] + \mathbb{E}\left[\log\big(1 - D(\tilde{I}_u)\big)\right],$$

where $G_F$ and $D$ are trained alternately to compete in a two-player min-max game with the objective function $\min_{G_F} \max_{D} \mathcal{L}_{adv}$. (Our basic framework for correspondence learning is illustrated and explained in more detail in the Appendix.)
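In practice the delegation strategy reduces to standard vanilla-GAN losses on discriminator scores of real and synthesized images. A minimal sketch (function names, and the use of binary cross-entropy on sigmoid outputs, are our assumptions for illustration):

```python
import numpy as np

def bce(pred, target):
    """Numerically clipped binary cross-entropy."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))

def gan_losses(d_real, d_fake):
    """Vanilla GAN objective for the delegation strategy: the
    discriminator pushes real-image scores `d_real` toward 1 and
    synthetic-image scores `d_fake` toward 0, while the generator is
    rewarded when its syntheses are scored as real."""
    d_loss = (bce(d_real, np.ones_like(d_real))
              + bce(d_fake, np.zeros_like(d_fake)))
    g_loss = bce(d_fake, np.ones_like(d_fake))
    return d_loss, g_loss
```

A confident discriminator (real scores near 1, fake scores near 0) yields a small discriminator loss and a large generator loss, which is the pressure that drives $G_F$ to produce more realistic warps.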
4 Learning Reversible Cycle Correspondence
In the previous section, we introduced our baseline method for atlas-based one-shot segmentation. Our proposed framework is built on this baseline and adds a cycle-consistency constraint to further boost the segmentation performance. We name the proposed framework the label transfer network (LT-Net). Unlike previous works [46, 44] that only learned the forward correspondences from the atlas to unlabelled images, we additionally learn the backward correspondences from the warped atlases back to the original atlas. To the best of our knowledge, this work is the first attempt to utilize cycle correspondence for one-shot (atlas-based) segmentation with deep learning. Specifically, we propose a backward correspondence learning path, where $\phi_B$ is the backward correspondence map predicted by the backward generator $G_B$ (see $G_B$ and $\phi_B$ in Fig. 2). With the newly added $\phi_B$, we can revert the synthetic image to reconstruct the atlas using the warp operation:

$$\bar{I}_a = \mathcal{W}(\tilde{I}_u, \phi_B),$$

and we call $\bar{I}_a$ the reconstructed atlas. Accompanied with the backward learning path, a straightforward addition to the network's supervision is to impose the transformation smoothness loss on $\phi_B$ as well. Hence, our complete transformation smoothness loss becomes

$$\mathcal{L}_{smooth} = \sum_{p \in \Omega} \left\| \nabla \phi_F(p) \right\|_2^2 + \sum_{p \in \Omega} \left\| \nabla \phi_B(p) \right\|_2^2.$$
More importantly, the completion of the correspondence cycle enables a variety of supervision signals to boost the performance upon unidirectional correspondence learning. Concretely, we propose four novel, cycle-consistency-driven supervision losses (cf. the supervision blocks in Fig. 2) in three spaces, namely, the image space, the transformation space, and the label space. These supervision losses are all devised by straightforward intuitions, as described below.
In the image space, the reconstructed and original atlas images ($\bar{I}_a$ and $I_a$) should be the same (the image consistency).
In the transformation space, conceptually the forward and backward warpings should be the inverse function of each other, so that the atlas warped toward the unlabelled image can be warped back to what it originally is (the transformation consistency).
Lastly, in the label space, the true segmentation $S_a$ and the reconstructed segmentation $\bar{S}_a$ should be the same (the anatomy consistency). In addition, they must differ from the synthetic segmentation $\tilde{S}_u$ in the same way (the anatomy difference consistency).
Despite being conceptually simple, the comprehensive inclusion and combination of these supervision signals in our framework prove effective in the experiments—our LT-Net outperforms the current SOTA by significant margins, and the ablation studies demonstrate the benefits of the supervision in the individual spaces. In the following, we describe each loss in detail.
Cycle-consistency supervision in image space: Enabled by our novel forward-backward cycle correspondence learning framework, we can revert the synthetic image to reconstruct the atlas. We employ an L1 loss to enforce the consistency between the true atlas and the reconstructed one, defined as

$$\mathcal{L}_{img} = \left\| \bar{I}_a - I_a \right\|_1.$$
Cycle-consistency supervision in transformation space: In terms of forward-backward consistency, the correspondences should be reversible, meaning that a voxel warped from one position to another in the forward path should be warped back to its original position in the backward path. Therefore, we define a transformation consistency loss to enforce this constraint as

$$\mathcal{L}_{trans} = \sum_{p \in \Omega} \left\| \phi_F(p) + \phi_B\big(p + \phi_F(p)\big) \right\|_2^2.$$
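The reversibility constraint can be checked numerically by sampling the backward field at each voxel's forward-warped position and measuring how far the two displacements are from cancelling. A rough sketch with nearest-neighbour sampling (a real implementation would interpolate; names are illustrative):

```python
import numpy as np

def transformation_consistency(flow_f, flow_b):
    """Cycle penalty on displacement fields of shape (3, D, H, W): the
    backward displacement, sampled at each voxel's forward-warped
    position, should cancel the forward displacement. Returns the mean
    squared residual."""
    _, D, H, W = flow_f.shape
    grid = np.indices((D, H, W), dtype=np.float64)
    coords = np.rint(grid + flow_f).astype(np.int64)   # warped positions
    for axis, size in enumerate((D, H, W)):
        coords[axis] = np.clip(coords[axis], 0, size - 1)
    flow_b_at_warped = flow_b[:, coords[0], coords[1], coords[2]]
    residual = flow_f + flow_b_at_warped               # should be ~0
    return float(np.mean(residual ** 2))
```

A translation by one voxel paired with the opposite translation incurs zero penalty, whereas pairing it with a zero backward field leaves the full forward displacement as residual.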
Cycle-consistency supervision in label space: In many applications, matching the images solely based on intensity is under-constrained and may lead to wrong correspondences. The corresponding anatomical structure may shift or twist away from one position to another, as long as the warped and target images appear similar. Enforcing smoothness constraint on the correspondence map (as in VoxelMorph ) is a common way of alleviating this problem. In this work, we further explore driving forces in the label space to guide the correspondence learning towards an anatomically meaningful direction.
When considering supervision signals in the label space, an anatomy cycle-consistency constraint naturally arises within our framework. Let $\bar{S}_a = \mathcal{W}(\tilde{S}_u, \phi_B)$ denote the reconstructed segmentation map of the atlas. To model the dissimilarity between $\bar{S}_a$ and the original segmentation map $S_a$, a Dice loss is adopted, which is defined as

$$\mathcal{L}_{anat} = 1 - \frac{2 \sum_{p \in \Omega} \bar{S}_a(p)\, S_a(p)}{\sum_{p \in \Omega} \bar{S}_a(p) + \sum_{p \in \Omega} S_a(p)}.$$
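The Dice loss can be sketched for a single binary structure as follows (a soft variant with an epsilon for numerical stability; the paper averages over all anatomical labels):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss between a predicted probability map and a binary
    target: 1 - 2|X∩Y| / (|X| + |Y|). Zero for identical maps, close to
    one for disjoint maps."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```

Because the overlap term is differentiable with respect to soft predictions, the same form can be minimized directly by gradient descent during training.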
Since our target is to learn the correspondence that can be used to transfer the segmentation map of the atlas to each of the unlabelled images, we also propose an anatomy difference consistency loss $\mathcal{L}_{diff}$ to indirectly regularize the quality of the synthetic segmentation map $\tilde{S}_u$. As aforementioned, this loss is based on a simple intuition: the anatomy differences between the atlas and the unlabelled image in the forward and backward paths should be cyclically consistent in the label space. The loss thus penalizes the discrepancy between the forward difference map (between $\tilde{S}_u$ and $S_a$) and the backward difference map (between $\tilde{S}_u$ and $\bar{S}_a$).
5 Optimization Objective and Implementation
Given the definitions of the supervision signals above, our complete objective for optimization is defined as

$$\mathcal{L} = \mathcal{L}_{sim} + \mathcal{L}_{adv} + \lambda\, \mathcal{L}_{smooth} + \gamma \left( \mathcal{L}_{img} + \mathcal{L}_{trans} + \mathcal{L}_{anat} + \mathcal{L}_{diff} \right),$$

where $\lambda$ and $\gamma$ are the weights to balance the importance of the different losses. We use the same weight $\gamma$ for the last four losses, since they are comparable in magnitude and we found the results insensitive to their relative weights in our preliminary experiments. We set $\gamma$ following CycleGAN, and consequently set $\lambda$ to keep the corresponding loss values at the same level. The supervision signals in the image, transformation, and label spaces complement and constrain each other, pushing the learning system towards an anatomically meaningful direction.
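The weighting scheme can be expressed as a simple helper. The default weight values below are placeholders for illustration, not the paper's settings; only the grouping (one shared weight for the four cycle-consistency terms) follows the text above.

```python
def total_loss(sim, adv, smooth, img_cyc, trans_cyc, anat_cyc, anat_diff,
               lam=1.0, gamma=1.0):
    """Weighted sum of all supervision signals: similarity and adversarial
    terms enter unweighted, smoothness is scaled by `lam`, and the four
    cycle-consistency terms share the single weight `gamma`."""
    return (sim + adv + lam * smooth
            + gamma * (img_cyc + trans_cyc + anat_cyc + anat_diff))
```

Sharing one weight for the cycle terms keeps the hyper-parameter search small, which matters when only a single annotated volume is available for validation.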
We implement all models using Keras with a TensorFlow backend. For the generator networks in both the forward and backward paths, we adopt the same 3D U-Net architecture as VoxelMorph for a fair comparison later. For the discriminator network, we use an extended 3D version of PatchGAN to determine whether an image patch is real or synthesized. All networks are optimized from scratch using the Adam solver. The learning rate remains unchanged during the training process. Each mini-batch processes a pair of volumes (one atlas and one unlabelled image) per GPU, running on two Tesla P40 GPUs in parallel. During testing, the forward correspondence map $\phi_F$ from the atlas to a test image $I_u$ is predicted by $G_F$, and the segmentation map for $I_u$ is then produced by warping the atlas segmentation map accordingly.
6 Experiments
We demonstrate the superiority of our LT-Net on the task of brain MRI segmentation from different perspectives. Above all, the effectiveness of the cycle correspondence learning framework is evaluated (Section 6.2). As aforementioned, forward-backward consistency is a classical constraint in correspondence problems. By introducing a backward correspondence path, extra meaningful supervision signals can be exploited to drive the learning process in a more robust and anatomically meaningful direction. Hence, within the cycle correspondence framework, we subsequently examine the effects of the newly proposed cycle-consistency losses with ablation studies: the transformation consistency loss in the transformation space, the anatomy consistency and difference consistency losses in the label space, and the combination of the losses from both spaces (Section 6.3). Next, we compare our method with a classical multi-atlas method (Section 6.4), demonstrating that the traditional idea of atlas-based segmentation can be further boosted using deep learning. Finally, we compare with a SOTA method for one-shot medical image segmentation, demonstrating the superiority of our framework over other DCNN-based methods (Section 6.5). Examples of the synthesized images and warped segmentation maps for unlabelled images are also presented for visual evaluation.
6.1 Dataset and Evaluation Metric
Dataset: We use a publicly available dataset from the Child and Adolescent NeuroDevelopment Initiative (CANDI) at the University of Massachusetts Medical School described in . The dataset comprises 103 T1-weighted MRI scans (57 males and 46 females) with anatomic segmentation labels. The subjects come from four diagnostic groups: healthy controls, schizophrenia spectrum, bipolar disorder with psychosis, and bipolar disorder without psychosis. We use the 28 anatomical structures (Table 1) that were used in VoxelMorph. The volume size ranges from to voxels. For computational efficiency, we crop a region around the center of the brain that is large enough to contain the whole brain. We randomly select 20 volumes as test data and use the rest for training. Among the training data, the volume most similar to the anatomical average is selected as the only annotated atlas (the same strategy as adopted in VoxelMorph).
Evaluation metric: We use the Dice similarity coefficient  to evaluate the segmentation accuracy of each model, which measures the overlap between manual annotations and predicted results.
6.2 Effectiveness of Forward-backward Consistency
| Method | Mean (std) | Min | Max |
| VoxelMorph + $\mathcal{L}_{adv}$ + $\mathcal{L}_{img}$ | 79.2 (2.8) | 72.7 | 82.1 |

Table 2: Mean Dice scores (%) (with standard deviations in parentheses) for VoxelMorph and its extended versions that gradually incorporate the image adversarial loss $\mathcal{L}_{adv}$ and the image cycle-consistency loss $\mathcal{L}_{img}$. Min and Max denote the minimum and maximum Dice scores (%) on the test dataset.
We adapt VoxelMorph —the SOTA DCNN registration model—for our problem setting and use it as the initial performance baseline. Specifically, we train it as a single-atlas model to learn the forward correspondence, and warp the atlas's segmentation map according to the learned correspondence for each unlabelled image. As introduced in Section 3, our basic framework adopts the same backbone as VoxelMorph for forward correspondence learning, but adds a GAN for additional supervision via $\mathcal{L}_{adv}$. Then, built on top of the basic framework, our LT-Net introduces a backward correspondence learning path to form a complete cycle correspondence framework. Enabled by the cyclic structure, we further add an image cycle-consistency loss $\mathcal{L}_{img}$ between the atlas and the reconstructed one. For detailed comparisons, we first add $\mathcal{L}_{adv}$ alone, and then $\mathcal{L}_{adv}$ and $\mathcal{L}_{img}$ together.
The quantitative comparison results are shown in Table 2. The results show that improvements are achieved when gradually adding $\mathcal{L}_{adv}$ and $\mathcal{L}_{img}$. This indicates that $\mathcal{L}_{adv}$ can boost the performance for our correspondence learning problem, which is in accordance with the practical experience that image adversarial losses usually perform well in image-to-image translation tasks. It is worth noting that $\mathcal{L}_{img}$ does not bring substantial further improvement upon the GAN setting. However, as we have mentioned earlier and will experimentally show next, the benefit of the cycle design is that it enables us to incorporate extra supervision signals into the learning framework, which can further improve the performance. Next, we treat the VoxelMorph backbone plus $\mathcal{L}_{adv}$ and $\mathcal{L}_{img}$ as a new baseline, and design experiments to examine the effectiveness of the newly proposed supervision signals.
6.3 Ablation Study on the Supervision Signals
| Method | Mean (std) | Min | Max |
| + ⋯ | 81.4 (2.6) | 74.4 | 83.8 |
| + ⋯ | 82.3 (2.5) | 75.6 | 84.2 |
Cycle-consistency in transformation space: The correspondence $\phi_F$ learned from the atlas to the unlabelled image is used to synthesize $\tilde{I}_u$ by warping the atlas in the forward path, whereas in the backward path another correspondence $\phi_B$ is learned from $\tilde{I}_u$ back to the atlas. The forward and backward correspondences should be cycle-consistent. We conduct an ablation study with respect to the transformation consistency loss $\mathcal{L}_{trans}$ and show the results in Table 3. From the table, we can observe that $\mathcal{L}_{trans}$ brings an improvement compared to the baseline. This may imply that intensity matching at the image level—despite the cycle correspondence setting—is not enough to prevent overfitting by DCNNs, and that introducing supervision in other spaces (e.g., the label space) has the potential to further improve performance.
| No. of Atlases | Mean (std) | Min | Max |
Cycle-consistency in label space: With the forward correspondence, the segmentation map of the atlas can be warped to synthesize the segmentation map for each unlabelled image. Inversely, the synthetic segmentation map can be warped back to restore the segmentation map of the atlas using the backward correspondence. Ideally, the segmentation maps of the atlas before and after the dual warping should be the same, and we enforce this constraint with the anatomy consistency loss $\mathcal{L}_{anat}$. Table 3 quantitatively displays the effect of this supervision. We can observe that $\mathcal{L}_{anat}$ brings an improvement when compared with the baseline, and an extra improvement when further combined with the transformation consistency loss. As expected, $\mathcal{L}_{anat}$ boosts the performance, since it ensures the integrity and internal coherence of the anatomical structures.
The anatomy cycle-consistency loss does not consider the mid-cycle synthesized segmentation map $\tilde{S}_u$ for each unlabelled image, which, however, is the ultimate goal of our LT-Net. To place more emphasis on $\tilde{S}_u$, the anatomy difference consistency loss $\mathcal{L}_{diff}$ is proposed in the label space to regularize the differences between the segmentation map of the atlas and that of the unlabelled image. The results in Table 3 show that by indirectly regularizing the segmentation maps of the unlabelled images, we achieve a further improvement.
| U-Net (upper bound) | 92.0 | 93.1 | 91.8 | 87.9 | 93.1 | 90.6 | 88.1 | 88.7 | 82.5 | 79.0 | 84.9 | 92.2 | 80.3 | 75.3 | 69.8 | 85.9 |
6.4 Comparison with a Classical Multi-atlas Method
Traditional multi-atlas methods once achieved SOTA results for atlas-based segmentation. We compare our LT-Net with MABMIS, which consists of a tree-based groupwise registration method and an iterative groupwise segmentation method. The results are shown in Table 4. We can observe that our method using only one atlas outperforms MABMIS using up to five atlases. In addition, classical multi-atlas segmentation is notoriously time-consuming: while MABMIS requires 14 minutes to segment one case on an Intel® Core i3-4150 CPU (using two atlases), our LT-Net needs only 4 seconds on a single Tesla P40 GPU.
6.5 Comparison with SOTA Methods
Besides VoxelMorph, we also compare our proposed LT-Net with DataAug, a SOTA method for one-shot medical image segmentation relying on registration-based data augmentation. In addition, we train a fully supervised U-Net that uses a labelled training pool of 83 subjects. This setting serves as the upper bound for the one-shot segmentation methods. The results are shown in Table 6. Using only one annotated volume for training, our framework achieves a mean Dice score approaching the upper bound, yet with an apparently lower standard deviation. Besides, we can observe that our LT-Net outperforms both VoxelMorph and DataAug by clear margins. Table 5 shows the segmentation accuracy across various brain structures.
We visualize some example slices of the synthetic volumes from different patients in Fig. 3. We observe that the synthesized images are close to the unlabelled images in terms of the anatomical structures. In addition, Fig. 4 shows some example slices of brain structure annotations and segmentation maps predicted by U-Net, VoxelMorph, DataAug, and our proposed LT-Net. Compared to the other two one-shot methods, LT-Net predicts brain structures in a way that is more anatomically meaningful.
| U-Net (upper bound) | 86.5 (6.3) | 83.7 | 89.2 |
In this study, we traced back to two classical ideas in computer vision—atlas-based segmentation and correspondence—and applied them to one-shot medical image segmentation with DCNNs. Firstly, we bridged the conceptual gap between atlas-based segmentation and the more generic idea of one-shot segmentation, which provided critical insights for the design of our deep network. Secondly, we adopted the forward-backward consistency strategy from other correspondence problems, which subsequently enabled the design of several novel supervision signals in the three involved spaces (namely, the image, transformation, and label spaces) to make the learning well-supervised and effectively guided. We hope this work will inspire the future development of one-shot learning for medical image segmentation in the era of deep learning.
This work was supported by the National Natural Science Foundation of China (Grant No. 61671399), the Fundamental Research Funds for the Central Universities (Grant No. 20720190012), the Key Area Research and Development Program of Guangdong Province, China (Grant No. 2018B010111001) and the Science and Technology Program of Shenzhen, China (No. ZDSYS201802021814180).
This supplementary document provides more details about the basic framework for correspondence learning, in addition to the concise description in Section 3.
The image similarity and transformation smoothness losses: As shown in Fig. 5, to implement atlas-based segmentation with deep convolutional neural networks (DCNNs), a generator network $G_F$ is employed to learn the correspondences $\phi_F$ from the atlas to unlabeled images, and two unsupervised loss functions—the image similarity loss $\mathcal{L}_{sim}$ and the transformation smoothness loss $\mathcal{L}_{smooth}$—are used to supervise the learning process. Minimizing $\mathcal{L}_{sim}$ encourages $\tilde{I}_u$ to approximate $I_u$, whereas minimizing $\mathcal{L}_{smooth}$ regularizes $\phi_F$ to be smooth.
To introduce robustness against global intensity variations in medical images caused by differences in manufacturers, scanning protocols, and reconstruction methods, we adopt a locally normalized cross-correlation loss [44, 46] to formulate $\mathcal{L}_{sim}$, which encourages local coherence and has been proven highly effective in correspondence-related tasks [44, 46]. Let $\hat{I}_u(p)$ and $\hat{\tilde{I}}_u(p)$ denote the local mean intensities of the unlabeled volume $I_u$ and the deformed atlas $\tilde{I}_u$, computed over an $n^3$ cube of positions $p_i$ around position $p$ in the volume, with $n$ in our experiments set the same as in VoxelMorph. Then $\mathcal{L}_{sim}$ is defined as:

$$\mathcal{L}_{sim} = -\sum_{p \in \Omega} \frac{\left(\sum_{p_i}\big(I_u(p_i)-\hat{I}_u(p)\big)\big(\tilde{I}_u(p_i)-\hat{\tilde{I}}_u(p)\big)\right)^2}{\left(\sum_{p_i}\big(I_u(p_i)-\hat{I}_u(p)\big)^2\right)\left(\sum_{p_i}\big(\tilde{I}_u(p_i)-\hat{\tilde{I}}_u(p)\big)^2\right)}.$$
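The locally normalized CC can be prototyped with naive loops over small windows (a sketch for tiny volumes only; practical implementations compute the local sums with convolutions, and the function name and epsilon are our own choices):

```python
import numpy as np

def local_ncc(a, b, n=3):
    """Mean locally normalized cross-correlation between two small 3D
    volumes, computed over an n*n*n cube around every interior voxel.
    Higher is better; the training loss would be its negation."""
    r = n // 2
    D, H, W = a.shape
    scores = []
    for z in range(r, D - r):
        for y in range(r, H - r):
            for x in range(r, W - r):
                pa = a[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1]
                pb = b[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1]
                pa = pa - pa.mean()          # subtract local means
                pb = pb - pb.mean()
                num = (pa * pb).sum() ** 2
                den = (pa ** 2).sum() * (pb ** 2).sum() + 1e-7
                scores.append(num / den)
    return float(np.mean(scores))
```

Because each window is mean-subtracted and normalized, the score is invariant to local affine intensity changes, which is exactly the robustness to global intensity variation motivated above.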
The smoothness constraint plays a key role in atlas-based segmentation methods [44, 46]; it is also widely used in other correspondence learning problems, such as optical flow estimation [31, 43] and stereo matching. In addition, smoothness regularization can be considered a strategy to alleviate overfitting while encoding the anatomical prior. Here, $\mathcal{L}_{smooth}$ is formulated with the first-order derivative of $\phi_F$:

$$\mathcal{L}_{smooth} = \sum_{p \in \Omega} \left\| \nabla \phi_F(p) \right\|_2^2,$$

where $p$ iterates over all spatial locations in $\Omega$, and we approximate $\nabla \phi_F(p)$ with spatial gradient differences between neighboring voxels along the $x$, $y$, and $z$ directions:

$$\frac{\partial \phi_F(p)}{\partial x} \approx \phi_F\big((p_x + 1, p_y, p_z)\big) - \phi_F\big((p_x, p_y, p_z)\big),$$

and similarly for the $y$ and $z$ directions.
The generative adversarial network (GAN) subnet: Besides $\mathcal{L}_{sim}$ and $\mathcal{L}_{smooth}$—which are pretty much the standard configuration in atlas-based segmentation problems (e.g., they were used as the main losses in VoxelMorph)—we introduce a GAN into our basic framework to offer additional supervision. The GAN subnet in our framework comprises $G_F$ and an additional discriminator network $D$ (see Fig. 5). A vanilla GAN would make the discriminator differentiate $\phi_F$ from the true underlying correspondence map. In practice, however, it is usually infeasible to obtain the true correspondence between a pair of clinical images. Instead, we make $D$ distinguish $\tilde{I}_u$ from $I_u$. In this sense, $\tilde{I}_u$ serves as a delegate of $\phi_F$, and $G_F$ is trained to generate $\phi_F$ that can be used to synthesize $\tilde{I}_u$ authentically enough to confuse $D$; meanwhile, $D$ becomes more skilled at flagging synthesized images. This delegation strategy provides indirect supervision to $G_F$ and $\phi_F$, and allows the networks to be trained end-to-end with a large number of unlabelled images.
-  (2016) TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Cited by: §5.
-  (2017) CNN-based patch matching for optical flow with thresholded hinge embedding loss. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 3250–3259. Cited by: §1, §2.
-  (2019) VoxelMorph: A learning framework for deformable medical image registration. IEEE Trans. Medical Imaging. Cited by: 3rd item, §1, §1, §A, §A, §A, §2, §3, §4, §5, Figure 4, §6.1, §6.2, Table 2, Table 5, Table 6.
-  (2017) One-shot video object segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 221–230. Cited by: §2.
-  (2015) Keras. GitHub. Note: https://github.com/fchollet/keras Cited by: §5.
-  (1995) Automatic 3-D model-based neuroanatomical segmentation. Human Brain Mapping 3 (3), pp. 190–208. Cited by: §1.
-  (2011) Patch-based segmentation using expert priors: application to hippocampus and ventricle segmentation. NeuroImage 54 (2), pp. 940–954. Cited by: §1.
-  (1945) Measures of the amount of ecologic association between species. Ecology 26 (3), pp. 297–302. Cited by: §6.1.
-  (2019) VoteNet: A deep learning label fusion method for multi-atlas segmentation. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 202–210. Cited by: §2.
-  (2019) Spatial warping network for 3D segmentation of the hippocampus in MR images. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 284–291. Cited by: §1, §2.
-  (2019) One-shot neural architecture search via self-evaluated template network. arXiv preprint arXiv:1910.05733. Cited by: §2.
-  (2017) One-shot imitation learning. In Advances in Neural Information Processing Systems, pp. 1087–1098. Cited by: §2.
-  (2019) Adversarial optimization for joint registration and segmentation in prostate CT radiotherapy. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 366–374. Cited by: §1, §1, §2, §2.
-  (2006) One-shot learning of object categories. IEEE Trans. Pattern Anal. Machine Intell. 28 (4), pp. 594–611. Cited by: §2.
-  (2003) A Bayesian approach to unsupervised one-shot learning of object categories. In Proc. Int’l Conf. Computer Vision, pp. 1134–1141. Cited by: §2.
-  (2017) One-shot visual imitation learning via meta-learning. arXiv preprint arXiv:1709.04905. Cited by: §2.
-  (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680. Cited by: §1, §A, §3, Figure 5.
-  (2006) Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. NeuroImage 33 (1), pp. 115–126. Cited by: §2.
-  (2019) Deep learning techniques for medical image segmentation: Achievements and challenges. Journal of Digital Imaging, pp. 1–15. Cited by: §1.
-  (2015) Multi-atlas segmentation of biomedical images: A survey. Medical Image Analysis 24 (1), pp. 205–219. Cited by: §A.
-  (2017) Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1125–1134. Cited by: §5.
-  (2012) Iterative multi-atlas-based multi-image segmentation with tree-based registration. NeuroImage 59 (1), pp. 422–430. Cited by: 3rd item, §2, §2, §6.4, Table 4.
-  (2010) Forward-backward error: Automatic detection of tracking failures. In 20th International Conference on Pattern Recognition, pp. 2756–2759. Cited by: §1, §2, §2.
-  (2012) CANDIShare: A resource for pediatric neuroimaging data. Neuroinformatics 10 (3), pp. 319–322. Cited by: §6.1, Table 1.
-  (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.
-  (2005) Mindboggle: Automated brain labeling with multiple atlases. BMC Medical Imaging 5 (1), pp. 7. Cited by: §2.
-  (2019) Bridging stereo matching and optical flow via spatiotemporal correspondence. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1890–1899. Cited by: §A.
-  (2019) A hybrid deep learning framework for integrated segmentation and registration: Evaluation on longitudinal white matter tract changes. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 645–653. Cited by: §1, §1, §2, §2.
-  (2002) Atlas-based segmentation and tracking of 3D cardiac MR images using non-rigid registration. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 642–650. Cited by: §1.
-  (2010) Fast and robust multi-atlas segmentation of brain magnetic resonance images. NeuroImage 49 (3), pp. 2352–2365. Cited by: §1.
-  (2018) UnFlow: unsupervised learning of optical flow with a bidirectional census loss. In Proc. AAAI Conf. Artificial Intelligence, pp. 7251–7259. Cited by: §1, §A, §2, §2, §4.
-  (2018) One-shot instance segmentation. CoRR abs/1811.11507. Cited by: §2.
-  (2016) V-Net: fully convolutional neural networks for volumetric medical image segmentation. In Fourth International Conference on 3D Vision (3DV), pp. 565–571. Cited by: §1, §4.
-  (2009) Recurrent tracking using multifold consistency. In Proceedings of the Eleventh IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, Cited by: §1, §2, §2.
-  (2000) Current methods in medical image segmentation. Annual Review of Biomedical Engineering 2 (1), pp. 315–337. Cited by: §1, §2.
-  (2015) U-Net: convolutional networks for biomedical image segmentation. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 234–241. Cited by: §1, §3, Figure 4, §6.5, Table 5, Table 6.
-  (2019) Fully convolutional one-shot object segmentation for industrial robotics. CoRR abs/1903.00683. Cited by: §2.
-  (2017) One-shot learning for semantic segmentation. arXiv preprint arXiv:1709.03410. Cited by: §2.
-  (2014) A quantitative analysis of current practices in optical flow estimation and the principles behind them. Int. J. Computer Vision 106 (2), pp. 115–137. Cited by: §4.
-  (2018) AtlasNet: multi-atlas non-linear deep networks for medical image segmentation. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 658–666. Cited by: §1, §2.
-  (2019) Learning correspondence from the cycle-consistency of time. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 2566–2576. Cited by: §2.
-  (2019) Patch-wise label propagation for MR brain segmentation based on multi-atlas images. Multimedia Systems 25 (2), pp. 73–81. Cited by: §1, §2.
-  (2018) Occlusion aware unsupervised learning of optical flow. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 4884–4893. Cited by: §1, §A, §2, §2, §4.
-  (2019) DeepAtlas: joint semi-supervised learning of image registration and segmentation. In Proc. Int’l Conf. Medical Image Computing and Computer Assisted Intervention, pp. 420–429. Cited by: §1, §1, §A, §A, §2, §2, §3, §4.
-  (2018) Neural multi-atlas label fusion: application to cardiac MR images. Medical Image Analysis 49, pp. 60–75. Cited by: §1, §2.
-  (2019) Data augmentation using learned transformations for one-shot medical image segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 8543–8553. Cited by: 3rd item, §1, §A, §A, §2, §2, §3, §4, Figure 4, §6.5, Table 5, Table 6, footnote 1.
-  (2019) One-shot neural architecture search through a posteriori distribution guided sampling. CoRR abs/1906.09557. Cited by: §2.
-  (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. Int’l Conf. Computer Vision, pp. 2223–2232. Cited by: §2, §5.