Medical image segmentation is an important prerequisite of computer-aided diagnosis (CAD), which has been applied in a wide range of clinical applications. With the emergence of deep learning, great achievements have been made in this area. However, it remains very difficult to obtain satisfying segmentation for some challenging structures, which can be extremely small with respect to the whole volume, or vary greatly in location, shape, and appearance. Besides, abnormalities, which result in drastic changes in texture, and anisotropy (different voxel spacing along different axes) make the segmentation tasks even harder. Some examples are shown in Fig 1.
Meanwhile, manually designing a high-performance 3D segmentation network requires substantial expertise. Most researchers build upon existing 3D networks, such as 3D U-Net and V-Net, with moderate modifications. In some cases, an individual network is designed and only works well for a certain task. To alleviate this problem, the Neural Architecture Search (NAS) technique was proposed, which aims at automatically discovering neural network architectures that are better than human-designed ones in terms of performance, parameter count, or computation cost. Starting from NASNet, many novel search spaces and search methods have been proposed [2, 9, 14, 15, 23]. However, only a few works apply NAS to medical image segmentation [13, 32, 39], and they achieve only comparable performance to manually designed networks.
Inspired by successful handcrafted architectures such as ResNet and MobileNet, many NAS works focus on searching for network building blocks. However, such works usually search in a shallow network while deploying a deeper one, so an inconsistency in network size exists between the search stage and the deployment stage. Some works avoided this problem by activating only one path at each iteration, and others proposed to progressively reduce the search space and enlarge the network in order to close the performance gap.
Nevertheless, when network topology is involved in the search space, things become more complex because no inconsistency in network size is allowed. Prior work incorporated the network topology into the search space and relieved the memory pressure at the cost of batch size and crop size instead. However, for memory-costly tasks such as 3D medical image segmentation, the memory scarcity cannot be solved by lowering the batch size or crop size, since these are already very small compared to those of 2D tasks. Reducing them further would lead to much worse performance and even failure to converge.
To avoid the inconsistency in network size or input size between the search stage and the deployment stage, we propose a coarse-to-fine neural architecture search scheme for 3D medical image segmentation. In detail, we divide the search procedure into a coarse stage and a fine stage. In the coarse stage, the search is performed in a small search space with limited network topologies, so searching in a train-from-scratch manner is affordable for each network. Moreover, to reduce the search space and make the search procedure more efficient, we constrain the search space with inspirations from successful medical segmentation network designs: (1) a U-shape encoder-decoder structure; (2) skip-connections between the down-sampling paths and the up-sampling paths. The search space is largely reduced with these two priors. Afterwards, we apply a topology-similarity-based evolutionary algorithm that exploits the properties of the search space, keeping the search focused on the promising architecture topologies. In the fine stage, the aim is to find the best operation inside each cell. Motivated by prior work, we let the network itself choose the operation among 2D, 3D, and pseudo-3D (P3D) convolutions, so that it can capture features from different viewpoints. Since the topology is already determined by the coarse stage, we mitigate the memory pressure in a single-path one-shot NAS manner.
For validation, we apply the proposed method to ten segmentation tasks from the MSD challenge and achieve state-of-the-art performance. The network is searched on the pancreas dataset, which is one of the largest datasets among the 10 tasks. Our result on this proxy dataset surpasses the previous state-of-the-art by a large margin of 1% on pancreas and 2% on pancreas tumours. Then, we apply the same model and training/testing hyper-parameters to the other tasks, demonstrating the robustness and transferability of the searched network.
Our contributions can be summarized as threefold: (1) we search for a 3D segmentation network from scratch in a coarse-to-fine manner without sacrificing network size or input size; (2) we design a specific search space and search method for each stage based on medical image segmentation priors; (3) our model achieves state-of-the-art performance on 10 datasets from the MSD challenge, while showing great robustness and transferability.
2 Related Work
2.1 Medical Image Segmentation
Deep learning based methods have achieved great success in natural image recognition, detection, and segmentation, and they have also been dominating medical image segmentation tasks in recent years. Since U-Net was first introduced for biomedical image segmentation, several modifications have been proposed: the 2D U-Net was extended to a 3D version; later, V-Net was proposed to incorporate residual blocks and soft Dice loss; and attention modules were introduced to reinforce the U-Net model. Researchers have also investigated other possible architectures besides U-Net. For example, [26, 37, 38] cut 3D volumes into 2D slices and handle them with 2D segmentation networks. A hybrid network was designed by using ResNet50 as a 2D encoder and appending 3D decoders afterwards. In another line of work, 2D predictions are fused by a 3D network to obtain a better prediction with contextual information.
However, until now, U-Net based architectures have remained the most powerful models in this area. Recently, nnU-Net was introduced and won first place in the Medical Segmentation Decathlon (MSD) Challenge. It ensembles 2D U-Net, 3D U-Net, and cascaded 3D U-Net, and dynamically adapts itself to any given segmentation task by analysing the data attributes and adjusting hyper-parameters accordingly. The optimal results are achieved with different combinations of the aforementioned networks for various tasks.
2.2 Neural Architecture Search
Neural Architecture Search (NAS) aims at automatically discovering better neural network architectures than human-designed ones. In its early stage, most NAS algorithms were based on either reinforcement learning (RL) [1, 41, 42] or evolutionary algorithms (EA) [23, 35]. In RL-based methods, a controller is responsible for generating new architectures to train and evaluate, and the controller itself is trained with the architecture's accuracy on the validation set as reward. In EA-based methods, architectures are mutated to produce better offspring, which are also evaluated by accuracy on the validation set. Since the parameter-sharing scheme was proposed, more search methods have emerged, such as differentiable NAS approaches and one-shot NAS approaches, which reduced the search cost to several GPU days or even several GPU hours.
Besides the successes NAS has achieved in natural image recognition, researchers have also extended it to other areas such as segmentation and detection. Moreover, some works apply NAS to medical image segmentation. One line of work designed a search space consisting of 2D, 3D, and pseudo-3D (P3D) operations, and let the network itself choose among these operations at each layer. [19, 36] used the policy gradient algorithm to automatically tune hyper-parameters and data augmentations. In [13, 32], the cell structure is explored with a pre-defined 3D U-Net topology.
3.1 Inconsistency Problem
Early works of NAS [1, 23, 35, 41, 42] typically use a controller based on EA or RL to select network candidates from the search space; the selected architectures are then trained and evaluated. Such methods need to train numerous models from scratch and thus incur an expensive search cost. Recent works [2, 15] propose differentiable search methods that reduce the search cost significantly, where each network is treated as a sub-network of a super-network. However, a critical problem is that the super-network cannot fit into memory. These methods therefore make a trade-off by sacrificing the network size at the search stage and building a deeper network at deployment, which results in an inconsistency problem. Activating a single path of the super-network at each iteration has been proposed to reduce the memory cost, as has progressively increasing the network size with a reduced approximate search space. However, these methods also face problems when the network topology is included in the search. For instance, the progressive manner cannot deal with network topology. As for single-path methods, since there exist illegal paths in the network topology, some layers are naturally trained more often than others, which results in a serious fairness problem.
A straightforward way to solve the issue is to train each candidate from scratch, yet the search cost is too expensive considering the magnitude of the search space, which may contain millions of candidates or more. Auto-DeepLab introduces network topology into the search space and sacrifices the input size instead of the network size at the training stage, using a much smaller batch size and crop size. However, this introduces a new inconsistency in input size to solve the old one in network size. Besides, for memory-costly tasks such as 3D medical image segmentation, sacrificing input size is infeasible: the already small input size would need to be reduced to an unreasonably small value to fit the model in memory, which usually leads to unstable training in terms of convergence, so that the method finally yields a near-random architecture.
3.2 Coarse-to-fine Neural Architecture Search
In order to resolve the inconsistency in network size and input size, and combine NAS with medical image segmentation, we develop a coarse-to-fine neural architecture search method for automatically designing 3D segmentation networks.
Without loss of generality, the architecture search space consists of a topology search space $\mathcal{T}$, represented by a directed acyclic graph (DAG), and a cell operation space $\mathcal{O}$, represented by the color of each node in the DAG. Each network candidate is a sub-graph $G$ with a coloring scheme $c$ and weights $w$, denoted as $(G, c, w)$.
Therefore, the search space is divided into two parts: a small search space of topology $\mathcal{T}$, and a huge search space of operations $\mathcal{O}$.
The topology search space is usually small, so it is affordable to handle the inconsistency by training each candidate from scratch; for instance, the topology search space contains only a limited number of candidates for a network with 12 cells. The operation search space can contain millions of candidates, but since the topology is given, techniques from NAS for recognition, such as activating only one path at each iteration, can be incorporated naturally to overcome the memory limitation. Therefore, by regarding neural architecture search from scratch as a process of constructing a colored DAG, we divide the search procedure into two stages: (1) coarse stage: search at the macro level for the network topology; and (2) fine stage: search for the best way to color each node, i.e., the most suitable operation configuration.
We start by defining the macro level and the micro level. Each network consists of multiple cells, which are composed of several convolutional layers. At the macro level, the network topology is uniquely determined by defining how the cells are connected to each other. Once the topology is determined, we need to define which operation each node represents. At the micro level, we assign an operation to each node, which represents the operation inside the cell, such as standard convolution or dilated convolution.
With this two-stage procedure, we first construct a DAG representing network topology, then assign operations to each cell by coloring the corresponding node in the graph. Therefore, a network is constructed from scratch in a coarse-to-fine manner. By separating the macro-level and micro-level, we relieve the memory pressure and thus resolve the inconsistency problem between search stage and deployment stage.
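The separation of macro-level topology and micro-level coloring can be sketched as a pair of per-cell sequences. This is a minimal illustration only; the class name, the 5-cell example, and the operation labels are assumptions, not the paper's actual data structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    """A network candidate as a colored DAG: the coarse stage fixes the
    topology (per-cell resolution levels), the fine stage colors each
    cell with an operation."""
    topology: tuple  # macro level: resolution level of each cell
    coloring: tuple  # micro level: operation assigned to each cell

# Coarse stage: topology searched, all cells use the default operation.
coarse = Candidate(topology=(0, 1, 2, 1, 0), coloring=("conv3d",) * 5)

# Fine stage: same topology, operations re-colored per cell.
fine = Candidate(topology=coarse.topology,
                 coloring=("conv2d", "conv3d", "p3d", "conv3d", "conv2d"))
```

The frozen topology is shared between the two stages, which is exactly what lets the fine stage reuse single-path memory-saving tricks.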
3.3 Coarse Stage: Macro-level Search
In this stage, we focus on searching the topology of the network. A default operation, specifically standard 3D convolution in this paper, is assigned to each cell, and the cell is used as the basic unit to construct the network.
Due to the memory constraint and the fairness problem, training a super-network and evaluating candidates with a weight-sharing method is infeasible, which means each network needs to be trained from scratch. The search at the macro level is formulated as a bi-level optimization over weights and topology:

$$t^* = \arg\max_{t \in \mathcal{T}} \mathrm{Acc}_{val}\big(t, c_0, w^*(t)\big), \quad \text{s.t.} \quad w^*(t) = \arg\min_{w} \mathcal{L}_{train}(t, c_0, w),$$

where $t$ represents the current topology, $c_0$ denotes the default coloring scheme (standard 3D convolution everywhere), $\mathcal{L}_{train}$ is the loss function used at the training stage, and $\mathrm{Acc}_{val}$ is the accuracy on the validation set.
This is extremely time-consuming, especially considering that 3D networks have heavier computation requirements than 2D models. Thus, it is necessary to reduce the search space to make the search procedure more focused and efficient.
We revisit successful medical image segmentation networks and find that they share two designs in common: (1) a U-shape encoder-decoder topology and (2) skip-connections between the down-sampling paths and the up-sampling paths. We incorporate these priors into our method and prune the search space accordingly; an illustration of how the priors prune the search space is shown in Fig 3. The search space is largely reduced by these two priors, and the topology optimization is performed within the pruned space.
To further improve search efficiency, we propose an evolutionary algorithm based on topology similarity that exploits macro-level properties. The idea is that, under the assumption of a continuous relaxation of the topology search space, two similar networks should also have similar performance. Specifically, we represent each network topology with a code and define the similarity of two networks as the Euclidean distance between their codes: the smaller the distance, the more similar the two networks. Based on this distance measurement, we group all network candidates into several clusters with the K-means algorithm applied to the codes. The evolution procedure then proceeds at the cluster level. In detail, when producing the next generation, we randomly sample some networks from each cluster and rank the clusters by comparing the performance of these samples. The higher a cluster ranks, the larger the proportion of the next generation drawn from it. As shown in Fig 4, the topologies proposed by our algorithm gradually fall into the most promising cluster, demonstrating its effectiveness. To make better use of computation resources, we further implement this evolutionary algorithm in an asynchronous manner, as shown in Algorithm 1.
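The cluster-ranked sampling step above can be sketched as follows. This is a simplified sketch, not Algorithm 1 itself: it assumes topologies are already clustered (e.g. by K-means on their codes), scores each cluster by the best evaluated member, and biases new samples toward the top-ranked cluster while keeping a fixed exploration probability; all names and the exploration scheme are illustrative:

```python
import random

def next_topologies(clusters, scores, n_children, rng, explore=0.2):
    """clusters: {cluster_id: [topology, ...]};
    scores: {topology: validation score for topologies evaluated so far}.
    Returns n_children topologies, mostly from the best-ranked cluster."""
    ranked = []
    for cid, members in clusters.items():
        evaluated = [scores[t] for t in members if t in scores]
        probe = max(evaluated) if evaluated else float("-inf")
        ranked.append((probe, cid))
    ranked.sort(reverse=True)  # best-performing cluster first
    children = []
    for _ in range(n_children):
        if rng.random() < explore:          # random cluster: avoid local minima
            cid = rng.choice(list(clusters))
        else:                               # exploit the best-ranked cluster
            cid = ranked[0][1]
        children.append(rng.choice(clusters[cid]))
    return children
```

With `explore=0.2` this mirrors the paper's 0.2 random-cluster probability; the "proportion by rank" rule is collapsed here to "mostly the best cluster" for brevity.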
3.4 Fine Stage: Micro-level Search
After the topology of the network is determined, we further search the model at a fine-grained level by choosing the operation inside each cell. Each cell is a small fully convolutional module, which takes 1 or 2 input tensors and outputs 1 tensor. Since the topology is pre-determined in the coarse stage, each cell is simply represented by its operation, drawn from the set of possible operations. Our cell structure is much simpler than in previous works because there is a trade-off between cell complexity and cell number: given the tight memory budget of 3D models, we prefer more cells over a more complex cell structure.
The set of possible operations consists of the following 3 choices: (1) 3D convolution; (2) pseudo-3D (P3D) convolution, i.e., a 2D convolution followed by a 1D convolution along the remaining axis; (3) 2D convolution.
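The trade-off between the three choices can be made concrete by counting convolution weights. This is a sketch under assumed kernel decompositions (3x3x3 for 3D, 1x3x3 for 2D, and 1x3x3 followed by 3x1x1 for P3D, one common P3D variant); the exact kernel shapes used in the paper are not specified in this text:

```python
from math import prod

# Assumed kernel decompositions for the three candidate cell operations.
OPS = {
    "conv3d": [(3, 3, 3)],             # full 3D convolution
    "p3d":    [(1, 3, 3), (3, 1, 1)],  # pseudo-3D: in-plane 2D then 1D
    "conv2d": [(1, 3, 3)],             # slice-wise 2D convolution
}

def weight_count(op, cin, cout):
    """Number of convolution weights (biases ignored) for one operation."""
    total, c = 0, cin
    for k in OPS[op]:
        total += prod(k) * c * cout
        c = cout  # subsequent kernels take `cout` input channels
    return total

# With cin = cout = 32: 3D costs 27*32*32 weights, P3D (9+3)*32*32,
# 2D 9*32*32 -- P3D trades some 3D context for fewer weights.
```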
Considering the magnitude of the fine-stage search space, training each candidate from scratch is infeasible. Therefore, to address the memory limitation while keeping the search efficient, we adopt single-path one-shot NAS with uniform sampling as our search method. In detail, we construct a super-network in which each candidate is a sub-network, and at each iteration of the training procedure, a candidate is uniformly sampled from the super-network, trained, and updated. After the training procedure ends, we perform a random search for the final operation configuration: at the search stage, we randomly sample candidates, each initialized with the weights from the trained super-network. All these candidates are ranked by validation performance, and the one with the highest accuracy is picked.
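The single-path training and subsequent random search can be sketched as below. The `train_step` and `validate` callables, the cell count, and the operation names are illustrative stand-ins, not the paper's API:

```python
import random

CHOICES = ["conv2d", "conv3d", "p3d"]
NUM_CELLS = 12

def sample_path(rng):
    """Uniformly sample one operation per cell (a single path)."""
    return [rng.choice(CHOICES) for _ in range(NUM_CELLS)]

def train_supernet(num_iters, rng, train_step):
    """At each iteration only the sampled path's weights are updated."""
    for _ in range(num_iters):
        train_step(sample_path(rng))

def random_search(num_candidates, rng, validate):
    """Rank randomly sampled candidates (which inherit super-network
    weights) by validation score and return the best one."""
    candidates = [sample_path(rng) for _ in range(num_candidates)]
    return max(candidates, key=validate)
```

In the paper's setting, `validate` would run sliding-window inference on the validation fold; here it is left abstract.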
Therefore, the optimization of the fine stage follows single-path one-shot NAS with uniform sampling, formulated as:

$$c^* = \arg\max_{c \in \mathcal{O}_T} \mathrm{Acc}_{val}\big(t^*, c, W(c)\big),$$

where $\mathcal{O}_T$ is the search space of the fine stage, i.e., all possible combinations of operations, $t^*$ is the topology found in the coarse stage, and $W(c)$ denotes the trained super-network weights restricted to candidate $c$.
After the coarse stage is finished, the topology is obtained, and the operation configuration comes from the fine stage. The final network architecture is thus constructed.
In this section, we first introduce the implementation details of C2FNAS, and then report the found architecture (searched on the MSD Pancreas dataset) with semantic segmentation results on all 10 MSD datasets. The MSD challenge is a public comprehensive benchmark for general-purpose algorithmic validation and testing, covering a large span of challenges such as small data, unbalanced labels, large-ranging object scales, multi-class labels, and multi-modal imaging. It contains 10 segmentation datasets: Brain Tumours, Cardiac, Liver Tumours, Hippocampus, Prostate, Lung Tumours, Pancreas Tumours, Hepatic Vessels, Spleen, and Colon Cancer.
4.1 Implementation Details
Coarse Stage Search.
In the coarse stage search, the network has 12 cells in total, 3 of which are down-sampling cells and 3 up-sampling cells, so that the model size is moderate. With the priors introduced in Section 3, the search space is largely reduced.
For the network architecture, we define one stem module at the beginning of the network and another at the end. The beginning module consists of two 3D convolution layers with strides 1 and 2 respectively. The end module consists of two 3D convolution layers with a trilinear up-sampling layer between them. Each cell takes the output of its previous cell as input, and takes a second input if (1) it has a previous-previous cell at the same feature resolution level, or (2) it is the first cell after an up-sampling. In case (1), the cell takes its previous-previous cell's output as the additional input; in case (2), it takes the output of the last cell before the corresponding down-sampling as the additional input, which serves as the skip-connection from the encoder to the decoder. A convolution serves as pre-processing for this input. The two inputs go through separate convolutions and are summed, and another convolution is then applied to the result. The filter number starts at 32, and is doubled after a down-sampling layer and halved after an up-sampling layer. All down-sampling operations are implemented by a 3D convolution with stride 2, and up-sampling by trilinear interpolation with scale factor 2 followed by a convolution. Besides, in the coarse stage we set the operations in all cells to standard 3D convolutions.
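The filter-number rule above (start at 32, double after down-sampling, halve after up-sampling) can be sketched as follows. The 12-cell pattern is purely illustrative, not the searched topology; 'D', 'U', and 'K' mark down-sampling, up-sampling, and keep-resolution cells:

```python
BASE_FILTERS = 32

def cell_widths(pattern):
    """Per-cell channel width: double after 'D', halve after 'U'."""
    widths, w = [], BASE_FILTERS
    for step in pattern:
        if step == "D":
            w *= 2
        elif step == "U":
            w //= 2
        widths.append(w)
    return widths

# A hypothetical 12-cell U-shape with 3 down- and 3 up-sampling cells:
example = cell_widths("KDKDKDKUKUKU")
# widths peak in the middle of the U and return to 32 at the end
```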
For the evolutionary algorithm, we first represent each network topology with a code: a list of numbers with the same length as the number of cells. The number starts at 0, increases by one after a down-sampling, and decreases by one after an up-sampling. We use the K-means algorithm to group all candidates into 8 clusters based on the Euclidean distance between the corresponding codes. At the beginning, two networks are randomly sampled from each cluster. Afterwards, whenever there is an idle GPU, one trained network is sampled from each cluster, the cluster containing the best of these networks is picked, and a new network is sampled from that cluster for training. Meanwhile, the algorithm also samples a random cluster with probability 0.2 to add randomness and avoid local minima. After 50 networks have been evaluated, the algorithm terminates and returns the best network topology found.
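The encoding and distance described above can be sketched directly. The two example topologies are hypothetical 12-cell codes, not ones from the paper:

```python
from math import dist  # Euclidean distance, Python 3.8+

def encode(steps):
    """Topology code: start at level 0, +1 after a down-sampling cell,
    -1 after an up-sampling cell; steps are drawn from {+1, 0, -1}."""
    code, level = [], 0
    for s in steps:
        level += s
        code.append(level)
    return code

a = encode([0, 1, 0, 1, 0, 1, 0, -1, 0, -1, 0, -1])  # gradual U-shape
b = encode([1, 1, 1, 0, 0, 0, 0, 0, 0, -1, -1, -1])  # steep U-shape
# dist(a, b): the smaller this value, the more similar the topologies,
# which is the metric the K-means clustering operates on.
```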
We conduct the coarse stage search on the MSD Pancreas Tumours dataset, which contains 282 3D volumes for training and 139 for testing. The dataset is labeled with both pancreatic tumours and normal pancreas regions. We divide the training data sequentially into 5 folds, with the first 4 folds for training and the last fold for validation. To address the anisotropy problem, we re-sample all cases to an isotropic resolution with a voxel spacing of 1.0 on each axis as data pre-processing.
At the training stage, we use a batch size of 8 with 8 GPUs and a fixed patch size, with two patches randomly cropped from each volume at each iteration. All patches are randomly rotated and flipped as data augmentation. We use an SGD optimizer with learning rate 0.02, momentum 0.9, and weight decay 0.00004. Besides, a multi-step schedule decays the learning rate by a factor of 0.5 at fixed iterations. We use 1000 iterations for the warm-up stage, where the learning rate increases linearly from 0.0025 to 0.02, and 20000 iterations for training. The loss function is the sum of the Dice loss and the cross-entropy loss, and we adopt Instance Normalization to speed up the multi-GPU training procedure.
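The warm-up plus multi-step schedule can be sketched as a simple function of the iteration count. The decay milestones are not specified in the text, so the ones below are hypothetical placeholders:

```python
WARMUP_ITERS = 1000
WARMUP_START_LR, BASE_LR = 0.0025, 0.02
MILESTONES = (8000, 16000)  # hypothetical decay points
DECAY = 0.5

def learning_rate(it):
    """Linear warm-up from 0.0025 to 0.02, then multi-step 0.5x decay."""
    if it < WARMUP_ITERS:
        frac = it / WARMUP_ITERS
        return WARMUP_START_LR + frac * (BASE_LR - WARMUP_START_LR)
    lr = BASE_LR
    for m in MILESTONES:  # halve the rate at each milestone passed
        if it >= m:
            lr *= DECAY
    return lr
```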
|Task||Lung||Hippocampus||HepaticVessel||Spleen||Colon||Avg (Task)||Avg (Class)|
At the validation stage, we test the network in a sliding-window manner with a stride of 16 on all axes. The Dice-Sørensen coefficient (DSC) is used to measure performance, formulated as $\mathrm{DSC}(\mathcal{P}, \mathcal{G}) = \frac{2|\mathcal{P} \cap \mathcal{G}|}{|\mathcal{P}| + |\mathcal{G}|}$, where $\mathcal{P}$ and $\mathcal{G}$ denote the prediction and ground-truth voxel sets for a foreground class. The DSC has a range of $[0, 1]$, with 1 implying a perfect prediction.
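The DSC metric above is straightforward to compute on voxel index sets; the empty-set convention below is an assumption (the text does not specify it):

```python
def dsc(pred, gt):
    """Dice-Sørensen coefficient: 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = set(pred), set(gt)
    if not pred and not gt:
        return 1.0  # assumed convention: empty prediction of an absent class
    return 2 * len(pred & gt) / (len(pred) + len(gt))

# e.g. pred {1, 2, 3} vs. gt {2, 3, 4}: DSC = 2*2 / (3+3) = 2/3
```

In practice the voxel sets come from thresholded sliding-window predictions; the set form here just mirrors the formula.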
|Model|Params (M)|FLOPs (G)|
|3D U-Net|19.07|825.30|
|Attention U-Net|103.88|1162.75|
Fine Stage Search.
In the fine stage search, we choose the operation among 2D, 3D, and P3D for each cell, which yields a huge search space. Since the search space is so large, we adopt a single-path one-shot NAS method based on a super-network, which is trained with uniform sampling.
The data pre-processing, data split, and training/validation settings are exactly the same as in the coarse stage, except that we double the number of iterations to ensure super-network convergence. At each iteration, a random path is chosen for training. After the super-network training finishes, we randomly sample 2000 candidates from the search space and initialize them with the super-network weights. Since the validation process takes a long time due to the sliding-window method, we increase the stride to 48 on all axes to speed up the search stage.
The coarse search stage takes 5 days with 64 NVIDIA V100 GPUs with 16GB memory. In the fine stage, the super-network training costs 10 hours with 8 GPUs, and the search procedure, where 2000 candidates are evaluated on the validation set, takes 1 day with 8 GPUs. The large search cost is mainly because training and evaluating a 3D model is itself very time-consuming.
The final network architecture, based on the topology searched in the coarse stage and the operations searched in the fine stage, is shown in Fig 5. We keep the training settings the same when deploying this architecture, which means no inconsistency exists in our method.
We use the same training settings mentioned in the coarse stage, except that training runs for 40000 iterations with multi-step decay. The model is trained from scratch with the same settings for each dataset, except that the Prostate dataset has a very small size on the Z (axial) axis, and the Hippocampus dataset has a very small shape of only around 50 voxels per axis; we therefore adjust the patch size and stride for Prostate, and up-sample all Hippocampus data to a larger shape.
4.2 Segmentation Results
In this part, we report our test set results on all 10 tasks of the MSD challenge and compare with other state-of-the-art methods.
Our test set results are summarized in Table 1. We notice that other methods apply multi-model ensembles to reinforce performance: nnU-Net ensembles 5 or 10 models based on 5-fold cross-validation with one or two model types, and NVDLMED and CerebriuDIKU ensemble models trained from different viewpoints. Therefore, besides the single-model result, we also report results with a 5-fold cross-validation model ensemble, where 5 models are trained in a 5-fold cross-validation setting and the final test results are fused from these 5 models by majority voting.
Our model shows superior performance over state-of-the-art methods on most tasks, especially the challenging ones, and also achieves higher average performance per task/class. It is noticeable that the previous state-of-the-art, nnU-Net, uses various kinds of data augmentation and test-time augmentation to boost performance, while we only adopt simple rotation and flip augmentation and no test-time augmentation. Small datasets such as Heart and Hippocampus rely more on augmentation, while a powerful architecture easily over-fits, which explains why our performance on these datasets does not outperform the competitors. Besides, nnU-Net uses different networks and hyper-parameters for each task, while we use the same model and hyper-parameters for all tasks, showing that our model is not only more powerful but also more robust and generalizable. Some visual comparisons are available in Fig 6.
5 Ablation Study
5.1 Coarse Stage versus Fine Stage
To verify the improvement of the two-stage design, we compare the networks from the coarse stage and the fine stage. “C2FNAS-C-Panc” denotes the coarse-stage network searched on the pancreas dataset, where the topology is searched and all operations are standard 3D convolutions, while “C2FNAS-F-Panc” is the fine-stage network, where the operation configuration is also searched. We compare their performance on the pancreas and lung datasets with 5-fold cross-validation. The results are shown in Table 4. It is noticeable that the fine-stage search not only improves performance on the target dataset (pancreas), but also increases the model's generality, yielding better performance on other datasets (lung).
5.2 Search on Different Datasets
Our model is searched on the MSD Pancreas dataset, which contains 282 cases and is one of the largest datasets in the MSD challenge. To verify the effect of dataset size on our method, we also search a model topology on the MSD Lung dataset, which contains 64 cases, as an ablation study. The search method and hyper-parameters are the same as those used on the pancreas dataset. The results are summarized in Table 4. “C2FNAS-C-Lung” is the topology searched on the lung dataset, while “C2FNAS-C-Panc” is the topology searched on the pancreas dataset. The topology searched on the lung dataset performs better on the lung task, and vice versa for pancreas. However, both topologies also perform well on the other dataset, demonstrating that our method works well even on a smaller dataset, and that the resulting models generalize well.
5.3 Incorporate Model Scaling as Third Stage
Inspired by EfficientNet, we add model scaling to the search space as a third search stage. In this ablation study, we only study scaling of the filter numbers for simplicity, although compound scaling including patch size and cell numbers is feasible. Following EfficientNet, we adopt a grid search over a channel-number multiplier ranging from 0.25 to 2.0 with a step of 0.25. We report results on a single validation fold of the pancreas and lung datasets respectively, summarized in Table 5. Model scaling can increase the model capacity and lead to better performance; nevertheless, scaling up the model also results in many more parameters and FLOPs. Considering the large extra computation cost, and to keep the model at a moderate size, we do not include model scaling in our main experiment, but report it in this ablation study as a promising way to further reinforce C2FNAS and achieve even higher performance.
In this paper, we propose coarse-to-fine neural architecture search to automatically design a transferable 3D segmentation network for 3D medical image segmentation, where existing NAS methods cannot work well due to the memory-consuming nature of 3D segmentation. With a consistent model and hyper-parameters across all tasks, our method outperforms the MSD champion nnU-Net, a series of well-tuned and/or ensembled 2D and 3D U-Nets. We do not incorporate any attention or pyramid module, which suggests this is a more powerful 3D backbone than current popular network architectures.
-  (2017) Designing neural network architectures using reinforcement learning. ICLR. Cited by: §2.2, §3.1.
-  (2018) SMASH: one-shot model architecture search through hypernetworks. ICLR. Cited by: §1, §2.2, §3.1.
-  (2019) Proxylessnas: direct neural architecture search on target task and hardware. ICLR. Cited by: §1, §3.1.
-  (2018) VoxResNet: deep voxelwise residual networks for brain segmentation from 3d mr images. NeuroImage 170, pp. 446–455. Cited by: Table 3.
-  (2018) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. PAMI 40 (4), pp. 834–848. Cited by: §2.1.
-  (2019) Progressive differentiable architecture search: bridging the depth gap between search and evaluation. ICCV. Cited by: §1, §3.1.
-  (2019) Fairnas: rethinking evaluation fairness of weight sharing neural architecture search. arXiv preprint arXiv:1907.01845. Cited by: §3.1.
-  (2016) 3D u-net: learning dense volumetric segmentation from sparse annotation. In MICCAI, Cited by: §1, §2.1, Table 3.
-  (2019) Nas-fpn: learning scalable feature pyramid architecture for object detection. In CVPR, pp. 7036–7045. Cited by: §1, §2.2.
-  (2019) Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420. Cited by: §1, §1, §3.4.
-  (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §1, §2.1.
-  (2018) nnU-Net: self-adapting framework for u-net-based medical image segmentation. arXiv preprint arXiv:1809.10486. Cited by: §2.1, Table 1, Table 2.
-  (2019) Scalable neural architecture search for 3d medical image segmentation. arXiv preprint arXiv:1906.05956. Cited by: §1, §2.2.
-  (2019) Auto-deeplab: hierarchical neural architecture search for semantic image segmentation. In CVPR, pp. 82–92. Cited by: §1, §1, §2.2, §3.1, §3.2, §3.4.
-  (2019) Darts: differentiable architecture search. ICLR. Cited by: §1, §2.2, §3.1.
-  (2018) 3d anisotropic hybrid network: transferring convolutional features from 2d images to 3d anisotropic volumes. In MICCAI, pp. 851–858. Cited by: §2.1.
-  (1982) Least squares quantization in pcm. IEEE transactions on information theory 28 (2), pp. 129–137. Cited by: §3.3.
-  (2016) V-net: fully convolutional neural networks for volumetric medical image segmentation. In 3DV, pp. 565–571. Cited by: §1, §2.1, Table 3.
-  (2018) Automatically designing cnn architectures for medical image segmentation. In MLMI, pp. 98–106. Cited by: §2.2.
-  (2018) Attention u-net: learning where to look for the pancreas. MIDL. Cited by: §2.1, Table 3.
-  (2019) One network to segment them all: a general, lightweight system for accurate 3d medical image segmentation. In MICCAI, pp. 30–38. Cited by: Table 1, Table 2.
-  (2018) Efficient neural architecture search via parameter sharing. ICML. Cited by: §2.2.
-  (2019) Regularized evolution for image classifier architecture search. In AAAI, Vol. 33, pp. 4780–4789. Cited by: §1, §2.2, §3.1.
-  (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In NIPS, pp. 91–99. Cited by: §2.1.
-  (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241. Cited by: §2.1.
-  (2015) Deeporgan: multi-level deep convolutional networks for automated pancreas segmentation. In MICCAI, Cited by: §2.1.
-  (2018) Mobilenetv2: inverted residuals and linear bottlenecks. In CVPR, pp. 4510–4520. Cited by: §1.
-  Horovod: fast and easy distributed deep learning in tensorflow. arXiv preprint arXiv:1802.05799. Cited by: §4.1.
-  (2019) A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063. Cited by: §1, §2.1, §4.
-  (2019) EfficientNet: rethinking model scaling for convolutional neural networks. ICML. Cited by: §5.3.
-  (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §4.1.
-  (2019) NAS-unet: neural architecture search for medical image segmentation. IEEE Access 7, pp. 44247–44257. Cited by: §1, §2.2.
-  3D semi-supervised learning with uncertainty-aware multi-view co-training. WACV. Cited by: Table 1, Table 2.
-  (2018) Bridging the gap between 2d and 3d organ segmentation with volumetric fusion net. In MICCAI, pp. 445–453. Cited by: §2.1.
-  (2017) Genetic cnn. In ICCV, pp. 1379–1388. Cited by: §2.2, §3.1.
-  (2019) Searching learning strategy with reinforcement learning for 3d medical image segmentation. In MICCAI, pp. 3–11. Cited by: §2.2.
-  Recurrent saliency transformation network: incorporating multi-stage visual cues for small organ segmentation. In CVPR, pp. 8280–8289. Cited by: §2.1.
-  (2017) A fixed-point model for pancreas segmentation in abdominal ct scans. In MICCAI, pp. 693–701. Cited by: §2.1.
-  (2019) V-nas: neural architecture search for volumetric medical image segmentation. 3DV. Cited by: §1, §1, §2.2.
-  (2018) A 3d coarse-to-fine framework for volumetric medical image segmentation. In 3DV, Cited by: Table 3.
-  (2017) Neural architecture search with reinforcement learning. ICLR. Cited by: §1, §2.2, §3.1.
-  (2018) Learning transferable architectures for scalable image recognition. In CVPR, pp. 8697–8710. Cited by: §1, §2.2, §3.1.