CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation

02/13/2021 ∙ by Shengcong Chen, et al. ∙ South China University of Technology ∙ The University of Sydney

Nucleus segmentation is a challenging task due to the crowded distribution and blurry boundaries of nuclei. Recent approaches represent nuclei by means of polygons to differentiate between touching and overlapping nuclei and have accordingly achieved promising performance. Each polygon is represented by a set of centroid-to-boundary distances, which are in turn predicted from the features of the centroid pixel of a single nucleus. However, the centroid pixel alone does not provide sufficient contextual information for robust prediction. To handle this problem, we propose a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation. First, we sample a point set rather than one single pixel within each cell for distance prediction. This strategy substantially enhances contextual information and thereby improves the robustness of the prediction. Second, we propose a Confidence-based Weighting Module, which adaptively fuses the predictions from the sampled point set. Third, we introduce a novel Shape-Aware Perceptual (SAP) loss that constrains the shape of the predicted polygons. Here, the SAP loss is based on an additional network that is pre-trained by means of mapping the centroid probability map and the pixel-to-boundary distance maps to a different nucleus representation. Extensive experiments justify the effectiveness of each component in the proposed CPP-Net. Finally, CPP-Net is found to achieve state-of-the-art performance on three publicly available databases, namely DSB2018, BBBC006, and PanNuke. The code of this paper will be released.


1 Introduction

Figure 1: (a) Exemplar patches that contain touching nuclei. (b) As nuclei tend to overlap with each other, the bounding box for one instance may also cover other nuclei. (c) The boundaries between touching nuclei tend to be blurry, which increases the difficulty of the nucleus segmentation task.

Nucleus segmentation is a process aimed at detecting and delineating each nucleus in microscopy images. This process provides rich spatial and morphological information about nuclei; therefore, it plays an important role in many cell analysis applications, such as cell counting, cell tracking, phenotype classification, and treatment planning [1]. Manual nucleus segmentation is time-consuming, meaning that automatic nucleus segmentation methods have become increasingly necessary.

However, automatic nucleus segmentation remains a challenging task in terms of robustness due to the crowded distribution of nuclei and their blurry boundaries, as illustrated in Fig. 1. Unlike objects in natural images, nuclei tend to overlap with each other. As a result, the bounding box for one instance often covers other nuclei, which negatively impacts the robustness of traditional bounding box-based detection methods, such as Mask R-CNN [2]. Another major challenge lies in the blurry boundary between touching nuclei, which increases the difficulty of inferring their boundaries.

A large number of approaches have been proposed to handle the above challenges [3, 7, 8, 9, 11, 4, 12, 5, 17, 20, 21]. For example, Chen et al. [3] differentiate instances of nuclei according to their boundaries. Graham et al. [4] represent nucleus instances using pixel-to-centroid distance maps in both the horizontal and vertical directions. Koohbanani et al. [5] infer nucleus instances by clustering bounding boxes predicted on each pixel within nuclei. To finally obtain nucleus instances, the above approaches typically resort to complex post-processing operations, such as morphological operations [3], watershed algorithms [11, 4, 12], and clustering [5]. Several recent works [13, 29, 14] represent each instance using a polygon, which is realized by predicting a set of centroid-to-boundary distances, as sketched below. They require only light-weight post-processing operations, i.e., non-maximum suppression, to remove redundant proposals; therefore, their pipelines are more straightforward and efficient.
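To make the polygon representation concrete, the following minimal sketch decodes one nucleus from its centroid and a set of centroid-to-boundary distances; the equally spaced radial directions are an assumption for illustration, matching the star-convex setting of [13].

```python
import numpy as np

def polygon_vertices(cx, cy, dists):
    """Decode K centroid-to-boundary distances into polygon vertices.

    Assumes the k-th pre-defined direction points at angle 2*pi*k/K,
    as in star-convex polygon representations.
    """
    K = len(dists)
    angles = 2.0 * np.pi * np.arange(K) / K
    xs = cx + dists * np.cos(angles)
    ys = cy + dists * np.sin(angles)
    return np.stack([xs, ys], axis=1)  # (K, 2) vertex coordinates

# Example: a roughly circular nucleus centred at (10, 12) with radius 5.
vertices = polygon_vertices(10.0, 12.0, np.full(32, 5.0))
```

In methods of this family, one such polygon is decoded per candidate centroid, and non-maximum suppression keeps the highest-scoring non-overlapping polygons.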

However, these approaches predict polygons using the features of the centroid pixel of each instance only, whereas the centroid alone lacks contextual information [30, 31]. In particular, the centroid is located far away from boundary pixels for large-sized nuclei, which degrades the distance prediction accuracy. Moreover, supervision is imposed on each distance value independently, so there is no global constraint on the shape of each nucleus.

In this paper, we propose a Context-aware Polygon Proposal Network (CPP-Net) to improve the robustness of polygon-based methods [13] for nucleus segmentation. The contributions of this paper are made from three perspectives. First, CPP-Net explores more contextual information to improve the prediction accuracy for the centroid-to-boundary distances; specifically, it adopts the StarDist [13] model to conduct initial distance prediction along a set of pre-defined directions. It then samples a set of points between the centroid and the initially predicted boundary along each direction. As these points are closer to the boundary than the centroid pixel, their distance to the ground-truth boundary can be predicted much more accurately. Correspondingly, the initially predicted centroid-to-boundary distance value can be refined with reference to the predictions for those sampled points.

Second, the prediction confidence of these sampled points typically varies according to their feature quality. For example, the errors contained in the distances initially predicted by StarDist [13] can be amplified in cases where some sampled points actually fall outside the nucleus. Accordingly, the weights of the sampled points should change depending on their prediction confidence. We therefore propose a Confidence-based Weighting Module (CWM) that adaptively fuses the predicted distances for these points. With the assistance of CWM, CPP-Net can more robustly utilize contextual information from the sampled points.

Third, we introduce a novel Shape-Aware Perceptual (SAP) loss, which constrains CPP-Net’s predictions regarding the nucleus shape. The original perceptual loss [32] penalizes the differences in the hidden feature maps of a pre-trained classification network between two input images. To encode the shape information of the nucleus into the perceptual loss, we train an encoder-decoder model that maps the representation of nucleus shape in CPP-Net, i.e., the pixel-to-boundary distance maps and the centroid probability map, to other shape representations, such as nucleus bounding boxes. By being trained in this way, this model is capable of extracting rich shape information related to nuclei. We then adopt the encoder part to extract feature maps for the predictions and the ground-truth output of CPP-Net, respectively. The SAP loss penalizes the differences between these extracted feature maps. In this way, the shapes of nuclei during training are constrained.

In this paper, we conduct ablation studies on the proposed components of CPP-Net on the DSB2018 [1] and BBBC006 [33] databases. Our experimental results justify the effectiveness of these components. Finally, we compare the performance of CPP-Net with state-of-the-art methods on DSB2018, BBBC006, and PanNuke [34, 35]; CPP-Net consistently achieves state-of-the-art performance.

The remainder of this paper is organized as follows. Related works on nucleus segmentation are reviewed briefly in Section 2. The proposed methods are described in Section 3, while implementation details are presented in Section 4. Experimental results are presented in Section 5, along with their analysis. Finally, we conclude this paper in Section 6.

2 Related Works

A number of effective approaches for nucleus segmentation have been proposed. In this section, we divide recent research into two categories, namely traditional methods and deep-learning based methods.

Many traditional methods are based on the watershed algorithm [22, 23, 24]. For example, Malpica et al. [22] proposed a morphological watershed-based algorithm that is assisted by empirically designed image processing operations. This approach utilizes both intensity and morphology information for nucleus segmentation. However, it is prone to over-segmentation and has limitations in processing overlapping nuclei [23, 24]. Yang et al. [23] proposed a new marker extraction method based on conditional erosion to alleviate the over-segmentation problem. Tareef et al. [24] proposed a Multi-Pass Fast Watershed method that adaptively and efficiently segments overlapping cervical cells. Moreover, the active contour model (ACM) has also been widely adopted for nucleus segmentation [25, 26]. For example, Molnar et al. [26] proposed to improve the performance of the ACM by exploring prior knowledge, specifically the understanding that nuclei usually have ellipse-shaped boundaries. Other traditional methods, such as level sets [27] and template matching [28], have also been adopted for nucleus segmentation. The common downside of traditional methods is that they typically require hand-crafted features, which depend on human expertise and have limited representation power.

In recent years, deep-learning based approaches have achieved notable success on nucleus segmentation tasks [6, 3, 7, 8, 9, 10, 11, 15, 4, 12, 5, 17, 20, 21, 16, 14, 18, 13, 19]. These works can be further categorized into two-stage and one-stage methods.

Two-stage methods consist of a detection stage, which locates nucleus instances, and a segmentation stage, which predicts a foreground mask for each instance. One representative method of this kind is Mask R-CNN [2, 18], which detects nucleus instances using bounding boxes. However, the shape of nuclei tends to be elliptical, and severe occlusion typically exists between instances; as a result, each bounding box may contain pixels belonging to two or more instances, indicating that bounding boxes may be sub-optimal for nucleus segmentation [13, 16]. To handle this problem, SpaNet [5] detects instance centroids and performs semantic segmentation in its first stage. In its second stage, it predicts the bounding box of the associated instance from the features of each foreground pixel. Finally, it separates overlapping nuclei by clustering the above pixel-wise predictions using the centroids as clustering centers. Moreover, BRP-Net [16] is also a two-stage network. It includes a detection stage, which generates region proposals based on instance boundaries, and a refinement stage, which refines the foreground area of each instance. Notably, neither SpaNet [5] nor BRP-Net [16] is designed in an end-to-end manner, which increases the complexity of the entire system.

By contrast, one-stage methods adopt a single network. Based on the network prediction, they utilize post-processing operations to obtain nucleus instances. Depending on the network prediction property being utilized, one-stage methods can be further subdivided into classification-based models and regression-based models.

As the name suggests, classification-based models output classification probability maps. Existing works in this sub-category include boundary-based [6, 9, 10, 3, 7, 8] and connectivity-based [17] methods. Boundary-based methods typically include a boundary detection branch and a semantic segmentation branch [3, 7, 8]; for example, DCAN [3] constructs two separate decoders for boundary detection and semantic segmentation, respectively. Because these two tasks are related, BES-Net [7] and CIA-Net [8] respectively introduce uni- and bi-directional connections between the two branches. These methods process images in the RGB color space. In comparison, Zhao et al. [9] leveraged the optical characteristics of Hematoxylin and Eosin (H&E) staining and proposed a Hematoxylin-aware Triple U-Net, which makes predictions with reference to the Hematoxylin component extracted from the image. By subtracting instance boundaries from the segmentation maps, overlapping nuclei can be separated; the downside is that such a subtraction operation may reduce segmentation accuracy [16]. Moreover, we term PatchPerPix [17] a connectivity-based method, since its prediction indicates whether a pixel is located in the same instance as each of its neighbors. Due to the advantages this offers in describing the local shape of instances in small patches, PatchPerPix is capable of segmenting instances with sophisticated shapes.

In comparison, the regression-based models output regression maps, e.g., distances or coordinate offsets for each pixel of the input image. For example, HoVer-Net [4] predicts the distances from each foreground pixel to its corresponding nucleus centroid in both the horizontal and vertical directions. It then employs the marker-controlled watershed algorithm as post-processing to obtain nucleus instances. The performance of these approaches is affected by the empirically designed post-processing strategies. Recently, Schmidt et al. [13] proposed the StarDist approach, which predicts both the centroid probability maps and distances from each foreground pixel to its associated instance boundary along a set of pre-defined directions. In the post-processing step, StarDist generates polygon proposals based on the set of predicted distances for each centroid pixel. Each polygon represents one nucleus instance. In this method, polygons are predicted using the features of the centroid pixel only; as a result, contextual information for large-sized nucleus instances is lacking, which affects the prediction accuracy.

Our proposed CPP-Net is a one-stage method and relates closely to StarDist [13]. CPP-Net improves the robustness of StarDist by integrating rich contextual information from a sampled point set for each centroid pixel. Moreover, CPP-Net adopts a novel Shape-Aware Perceptual loss that constrains CPP-Net's predictions according to the shape prior of nuclei.

3 Methods

3.1 Overview

Figure 2: The architecture of CPP-Net. This model adopts U-Net as its backbone, which makes three types of predictions for each input image: the pixel-to-boundary distance maps $\mathbf{D}$, the prediction confidence maps $\mathbf{C}$, and the centroid probability map $\mathbf{P}$. In this figure, we take the $k$-th direction as an illustrative example. The Context Enhancement Module (CEM) conducts sampling on $\mathbf{D}$ according to Eq. (1). Coordinates of the sampled points are computed according to Eq. (2) and Eq. (3). The Confidence-based Weighting Module (CWM) performs sampling on $\mathbf{C}$ at the same locations as above. It then produces the weights that are used to fuse the distance predictions of the sampled points. In this way, CPP-Net predicts the refined pixel-to-boundary distance maps, i.e., $\hat{\mathbf{D}}$, more robustly through the use of rich contextual information. Best viewed in color.

Fig. 2 presents the structure of CPP-Net for nucleus segmentation. The backbone of CPP-Net is a simple U-Net. Three parallel convolutional (Conv) layers are attached to the backbone. These layers predict the pixel-to-boundary distance maps $\mathbf{D} \in \mathbb{R}^{H \times W \times K}$, the confidence maps $\mathbf{C} \in \mathbb{R}^{H \times W \times K}$, and the centroid probability map $\mathbf{P} \in \mathbb{R}^{H \times W}$, respectively. $H$ and $W$ represent the height and width of the image, respectively. For clarity, we denote the coordinate space of the input image as $\Omega$ and the total number of elements in $\Omega$ as $|\Omega|$. The same as in [13], each element in the $k$-th channel of $\mathbf{D}$ refers to the distance between a foreground pixel and the boundary of its associated instance along the $k$-th pre-defined direction. $K$ denotes the total number of directions. Elements in $\mathbf{P}$ indicate the probability of each foreground pixel being the instance centroid.
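For concreteness, the three prediction heads can be sketched as follows in PyTorch; the feature channel count and the use of 1×1 convolutions here are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CPPNetHeads(nn.Module):
    """Three parallel Conv heads on top of the U-Net backbone.

    Produces D (K distance channels), C (K confidence channels), and
    P (one centroid-probability channel) from shared backbone features.
    """
    def __init__(self, feat_ch: int = 32, num_dirs: int = 32):
        super().__init__()
        self.dist = nn.Conv2d(feat_ch, num_dirs, kernel_size=1)  # D
        self.conf = nn.Conv2d(feat_ch, num_dirs, kernel_size=1)  # C
        self.prob = nn.Conv2d(feat_ch, 1, kernel_size=1)         # P

    def forward(self, feat: torch.Tensor):
        return self.dist(feat), self.conf(feat), torch.sigmoid(self.prob(feat))

heads = CPPNetHeads()
D, C, P = heads(torch.randn(1, 32, 64, 64))  # (1,32,64,64), (1,32,64,64), (1,1,64,64)
```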

In what follows, we first propose a Context Enhancement Module (CEM), which samples a point set to explore more contextual information for pixel-to-boundary distance prediction. We then design a Confidence-based Weighting Module (CWM) that adaptively combines the predictions from the sampled points. Finally, we introduce the Shape-Aware Perceptual (SAP) loss, which further promotes the segmentation accuracy.

3.2 Context Enhancement Module

The nucleus segmentation task comprises two subtasks: instance detection and instance-wise segmentation. The recently developed StarDist approach [13] performs these two subtasks in parallel. The first detects the centroid of each nucleus, whereas the second segments each instance using a polygon, which is represented using the distances from the centroid pixel to the instance boundary along pre-defined directions. In [13], the distances are predicted using only the features of the centroid. However, the size of nuclei may vary dramatically, meaning that the centroid pixel alone may lack contextual information for precise distance predictions.

To handle the above problem, we propose CEM, which utilizes pixels that are closer to the boundaries to refine the distance prediction. To achieve this goal, CEM first samples $N$ points between each pixel and its predicted boundary position along each direction. It then merges the predicted pixel-to-boundary distances of these points and adaptively updates the pixel-to-boundary distance of the initial pixel. Formally speaking, the refined pixel-to-boundary distance along the $k$-th direction for one pixel $(x, y)$ can be obtained as follows:

$$\hat{d}^{k}_{(x,y)} = \sum_{n=0}^{N} w^{k}_{n} \cdot \Big( d^{k}_{(x^{k}_{n}, y^{k}_{n})} + \big\| (x^{k}_{n}, y^{k}_{n}) - (x, y) \big\|_{2} \Big), \quad (1)$$

where $d^{k}_{(x,y)}$ denotes the initially predicted pixel-to-boundary distance in $\mathbf{D}$ along the $k$-th direction for $(x, y)$; $n \in \{0, 1, \dots, N\}$, where $k$ indexes the sampling directions; and $(x^{k}_{0}, y^{k}_{0})$ is equal to $(x, y)$. In this paper, we uniformly sample the $N$ points between the initial pixel and its predicted boundary along each specified direction. The coordinates of the $n$-th sampled point are accordingly computed as follows:

$$x^{k}_{n} = x + \frac{n}{N} \cdot d^{k}_{(x,y)} \cdot \cos \alpha_{k}, \quad (2)$$
$$y^{k}_{n} = y + \frac{n}{N} \cdot d^{k}_{(x,y)} \cdot \sin \alpha_{k}, \quad (3)$$

where $\alpha_{k}$ denotes the angle of the $k$-th pre-defined direction. Finally, $w^{k}_{n}$ in Eq. (1) denotes the weight of the $n$-th sampled point. One simple weighting strategy is averaging, i.e., setting all $w^{k}_{n}$ to $\frac{1}{N+1}$.
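The refinement in Eqs. (1)-(3) can be summarized with a small per-pixel sketch; this is a simplified NumPy version with equal weights and nearest-neighbour sampling, whereas CPP-Net operates on whole maps and learns the weights with CWM.

```python
import numpy as np

def refine_distance(D, x, y, k, alpha_k, N=6):
    """Refine the pixel-to-boundary distance at pixel (x, y) along direction k.

    D: (H, W, K) initially predicted distance maps; alpha_k: direction angle.
    Equal-weight variant of Eq. (1); assumes N >= 1.
    """
    H, W, _ = D.shape
    d0 = D[y, x, k]
    w = 1.0 / (N + 1)
    refined = 0.0
    for n in range(N + 1):
        # Eqs. (2)-(3): uniformly sample towards the predicted boundary.
        xn = int(np.clip(np.round(x + n / N * d0 * np.cos(alpha_k)), 0, W - 1))
        yn = int(np.clip(np.round(y + n / N * d0 * np.sin(alpha_k)), 0, H - 1))
        # Eq. (1): distance predicted at the point plus its offset from (x, y).
        refined += w * (D[yn, xn, k] + np.hypot(xn - x, yn - y))
    return refined

D = np.random.rand(64, 64, 32) * 10.0
print(refine_distance(D, x=20, y=30, k=0, alpha_k=0.0))
```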

3.3 Confidence-based Weighting Module

Although the averaging strategy is effective in Eq. (1), it is also sub-optimal, as it neglects the impact of prediction quality on the sampled points. Prediction quality is affected by both image quality and the position of the sampled points. In particular, sampled points near the boundary may actually lie outside of the nucleus, as $d^{k}_{(x,y)}$ in Eq. (1) contains errors. Therefore, the prediction accuracy on the sampled points is variable. Accordingly, we propose a Confidence-based Weighting Module (CWM) that adaptively fuses the predictions on these sampled points.

As Fig. 2 illustrates, we attach an extra Conv layer to the backbone model in order to produce the confidence maps $\mathbf{C}$, the size of which is the same as that of $\mathbf{D}$. Each element in $\mathbf{C}$ measures the prediction confidence of the corresponding element in $\mathbf{D}$. We then perform sampling on both $\mathbf{D}$ and $\mathbf{C}$ using coordinates computed according to Eq. (2) and Eq. (3) along each sampling direction, respectively. The sizes of the resulting tensors are therefore $H \times W \times (N+1)$ for each direction. The tensor sampled from $\mathbf{C}$ is fed into a Conv layer and a Softmax layer. The output dimension of the Conv layer is also $N+1$. The Softmax layer outputs the normalized weights; these normalized weights are used as $w^{k}_{n}$ in Eq. (1). It is worth noting that the sampling directions share the parameters of the Conv layer.
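A minimal PyTorch sketch of this weighting step is given below; the 1×1 kernel size of the shared Conv layer is an assumption for illustration.

```python
import torch
import torch.nn as nn

class ConfidenceWeighting(nn.Module):
    """Turn sampled confidences into normalized fusion weights (one direction).

    conf: (B, N+1, H, W) confidences gathered at the N+1 sampled points.
    The same Conv layer is shared across all K sampling directions.
    """
    def __init__(self, num_points: int):
        super().__init__()
        self.conv = nn.Conv2d(num_points, num_points, kernel_size=1)

    def forward(self, conf: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.conv(conf), dim=1)  # weights w_n of Eq. (1)

cwm = ConfidenceWeighting(num_points=7)   # N = 6 sampled points + the pixel itself
conf = torch.rand(2, 7, 64, 64)           # sampled confidence tensor
dist = torch.rand(2, 7, 64, 64)           # sampled distance estimates (Eq. (1) terms)
refined = (cwm(conf) * dist).sum(dim=1)   # (B, H, W) refined distances
```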

3.4 Loss Functions

Figure 3: Illustration of the SAP loss. The transformation model in the left sub-figure converts the instance representations utilized in CPP-Net to other forms of instance representation. After the training of the transformation model is completed, the parameters of its encoder are fixed. The encoder can extract high-level shape features of the nuclei and is therefore used as a shape-aware feature extractor in the SAP loss, as shown in the right sub-figure.

The StarDist model [13] utilizes two loss terms: the binary cross-entropy loss for centroid probability prediction, and the weighted L1 loss for pixel-to-boundary distance regression. These two loss terms are formulated as follows:

$$\mathcal{L}_{prob} = -\frac{1}{|\Omega|} \sum_{u \in \Omega} \big[ p^{*}_{u} \log p_{u} + (1 - p^{*}_{u}) \log (1 - p_{u}) \big], \quad (4)$$
$$\mathcal{L}_{dist} = \frac{1}{|\Omega|} \sum_{u \in \Omega} p^{*}_{u} \cdot \frac{1}{K} \sum_{k=1}^{K} \big| d^{k}_{u} - d^{k*}_{u} \big|, \quad (5)$$
$$\mathcal{L}_{StarDist} = \mathcal{L}_{prob} + \mathcal{L}_{dist}, \quad (6)$$

where $p^{*}_{u}$ and $p_{u}$ represent elements in the ground-truth and predicted centroid probability maps, respectively. We follow the same process as that outlined in [13] to obtain the ground-truth centroid probability map, i.e., we utilize the normalized pixel-to-boundary distance map as the centroid probability map. $d^{k*}_{u}$ and $d^{k}_{u}$ denote elements of the ground-truth and predicted pixel-to-boundary distance maps along the $k$-th direction, respectively.

For CPP-Net, there are two predicted distance maps, namely $\mathbf{D}$ and $\hat{\mathbf{D}}$. $\mathbf{D}$ is predicted by the backbone model, while $\hat{\mathbf{D}}$ represents the final pixel-to-boundary distance prediction of CPP-Net. Accordingly, we modify Eq. (5) for CPP-Net as follows:

$$\mathcal{L}'_{dist} = \frac{1}{|\Omega|} \sum_{u \in \Omega} p^{*}_{u} \cdot \frac{1}{K} \sum_{k=1}^{K} \Big( \big| d^{k}_{u} - d^{k*}_{u} \big| + \big| \hat{d}^{k}_{u} - d^{k*}_{u} \big| \Big), \quad (7)$$

where $\hat{d}^{k}_{u}$ denotes the refined pixel-to-boundary distance in $\hat{\mathbf{D}}$ along the $k$-th direction for pixel $u$.

Eq. (5) and Eq. (7) penalize the prediction error in each respective pixel-to-boundary distance value, while the overall shapes of nucleus instances are ignored. In fact, nucleus instances typically have similar shapes; this can be utilized as prior knowledge to facilitate accurate nucleus segmentation. However, it is challenging to explicitly represent the overall shape of a single nucleus instance. To deal with this problem, we adopt an implicit approach inspired by the perceptual loss [32], which was proposed for style transformation and super-resolution tasks. In [32], a network pre-trained for image classification on ImageNet [37] is used as a feature extractor, with the differences between the extracted features of one image pair being penalized. This approach encourages the high-level information of the two images to be similar. Inspired by the original perceptual loss, we propose a Shape-Aware Perceptual (SAP) loss for nucleus segmentation.

The aim of the SAP loss is to penalize the differences in shape features between the predicted and ground-truth nucleus representations. To encode the shape information in a deep model, we propose transforming the nucleus representations in CPP-Net, i.e., the pixel-to-boundary distance maps $\mathbf{D}$ and the centroid probability map $\mathbf{P}$, to other representation forms [13, 8, 4, 5]. This transformation is accomplished using an encoder-decoder structure, as illustrated in Fig. 3.

This paper mainly considers two nucleus representation strategies: first, the semantic segmentation and boundary detection maps in boundary-based approaches [8]; second, the location and size of the associated bounding box for each nucleus. During training of the transformation model, we concatenate the ground-truth $\mathbf{D}^{*}$ and $\mathbf{P}^{*}$ of each image to create the inputs. The binary cross-entropy loss and the L1 loss are adopted for the two target representation strategies, respectively.

After training is completed, we adopt the encoder of the transformation model for the SAP loss to train CPP-Net. The SAP loss can be formulated as follows:

$$\mathbf{F} = \phi([\hat{\mathbf{D}}, \mathbf{P}]), \quad (8)$$
$$\mathbf{F}^{*} = \phi([\mathbf{D}^{*}, \mathbf{P}^{*}]), \quad (9)$$
$$\mathcal{L}_{SAP} = \frac{1}{|\Omega_{F}|} \sum_{u \in \Omega_{F}} \big\| \mathbf{F}_{u} - \mathbf{F}^{*}_{u} \big\|_{1}, \quad (10)$$

where $\Omega_{F}$ denotes the 2D coordinate space of the extracted shape-aware feature maps, while $\mathbf{F}_{u}$ and $\mathbf{F}^{*}_{u}$ are the vectors in $\mathbf{F}$ and $\mathbf{F}^{*}$ at the location $u$, respectively. Moreover, $\phi$ denotes the encoder of the pre-trained transformation model. The parameters of $\phi$ are fixed during the training of CPP-Net. Finally, the entire loss of CPP-Net is summarized as follows:

$$\mathcal{L} = \mathcal{L}_{prob} + \mathcal{L}'_{dist} + \mathcal{L}_{SAP}. \quad (11)$$

In the interests of simplicity, we adopt equal weights for the three terms in $\mathcal{L}$.
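Under these definitions, the SAP loss reduces to a few lines; the sketch below assumes the frozen encoder is an `nn.Module` and that maps are stored channel-first.

```python
import torch

def sap_loss(encoder, D_hat, P, D_star, P_star):
    """SAP loss of Eqs. (8)-(10): L1 distance between shape-aware features.

    encoder: frozen encoder phi of the pre-trained transformation model.
    D_hat, P: predicted distance / centroid-probability maps, (B, C, H, W).
    D_star, P_star: the corresponding ground-truth maps.
    """
    with torch.no_grad():                                   # F* needs no gradient, Eq. (9)
        f_star = encoder(torch.cat([D_star, P_star], dim=1))
    f_pred = encoder(torch.cat([D_hat, P], dim=1))          # F, Eq. (8); grads flow to CPP-Net
    return (f_pred - f_star).abs().mean()                   # mean L1 over Omega_F, Eq. (10)

# Usage with a stand-in encoder (32 distance channels + 1 probability channel).
encoder = torch.nn.Conv2d(33, 8, kernel_size=3, padding=1).requires_grad_(False)
loss = sap_loss(encoder,
                torch.rand(1, 32, 64, 64), torch.rand(1, 1, 64, 64),
                torch.rand(1, 32, 64, 64), torch.rand(1, 1, 64, 64))
```

Note that only the inputs to $\phi$ receive gradients; the encoder's own parameters stay fixed, matching the description above.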

4 Experimental Setup

To justify the effectiveness of CPP-Net, we conduct extensive experiments on three publicly available datasets, i.e., DSB2018 [1], BBBC006 [33], and PanNuke [34].

4.1 Datasets

4.1.1 Dsb2018

Data Science Bowl 2018 (DSB2018) [1] is a nucleus detection and segmentation competition for which a dataset of 670 images with manual annotations is available. To facilitate fair comparisons with existing approaches, we follow the evaluation protocol outlined in [13]. In this protocol, the training, validation, and testing sets include 380, 67, and 50 images, respectively.

4.1.2 Bbbc006

Images in BBBC006 [33] were captured from one 384-well microplate containing stained U2OS cells. Two fields of view were selected for each well to obtain images. There are two images for each field of view: one Hoechst image and one phalloidin image. Accordingly, BBBC006 contains 1,536 images from 768 fields of view. In our experiments, we randomly divide the dataset into training, validation, and testing sets, which contain 924, 306, and 306 images, respectively.

4.1.3 PanNuke

PanNuke [34, 35] is an H&E stained image set containing 7,904 patches of size 256×256 from a total of 19 different tissue types. The nuclei are classified into neoplastic, inflammatory, connective/soft tissue, dead, and epithelial cells. We follow the evaluation protocol outlined in [35], which divides the patches into three folds containing 2,657, 2,524, and 2,723 images, respectively. Three different dataset splits are then made based on these three folds. In each split, one fold of data is used for training, with the remaining two folds used as validation and testing sets, respectively.

4.2 Implementation Details

On DSB2018 and BBBC006, we adopt a U-Net backbone for CPP-Net that is very similar to the one used in [13], to facilitate fair comparison. This backbone includes three down-sampling blocks in its encoder and three up-sampling blocks in its decoder. The only change is that we replace all Batch Normalization (BN) layers [38] with Group Normalization (GN) layers [39], since we use a small batch size of 1 for training. On PanNuke, we make two changes to this backbone. First, to ensure fair comparison with existing approaches [4], we replace the encoder of this backbone with ResNet-50 [40] and initialize its weights with those pre-trained on ImageNet [37]. Second, we attach another decoder to classify the nucleus type of each input image pixel. The loss function for this decoder is the summation of the cross-entropy loss and the Dice loss [41].

We adopt a deeper structure for the encoder-decoder model in the SAP loss. This model includes four down-sampling and four up-sampling blocks, which are used to extract more high-level information. The other architectural details are the same as those of the U-Net backbone in CPP-Net, except that the encoder-decoder model does not utilize shortcut connections.

The Adam algorithm [42] is employed for optimization. The learning rate is reduced by a factor of 0.5 whenever the validation loss stops decreasing, and the training process halts once the learning rate falls below a pre-defined threshold. We adopt online data augmentation of random rotation and horizontal flipping during training. For the encoder-decoder model, we use the same training settings as above, except that data augmentation is not employed.
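The schedule described above corresponds to a standard plateau-based decay; a sketch follows, in which the initial learning rate and the stopping threshold are placeholder assumptions, as the exact values are not recoverable from this copy of the paper.

```python
import torch

model = torch.nn.Conv2d(3, 1, kernel_size=1)  # stand-in for CPP-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)       # assumed initial LR
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5)                          # halve on plateau

for epoch in range(10000):
    val_loss = 1.0                      # replace with the real validation loss
    scheduler.step(val_loss)
    if optimizer.param_groups[0]["lr"] < 1e-7:                  # assumed threshold
        break                           # halt once the LR becomes negligible
```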

4.3 Evaluation Metrics

For DSB2018 and BBBC006, we adopt the same evaluation metric as in [1] and [13]. According to this metric, the average precision (AP) is computed with IoU thresholds ranging from 0.5 to 0.9 with a step size of 0.05. For the PanNuke database, we adopt the Panoptic Quality (PQ) presented in [34] as the evaluation metric. PQ has been widely adopted in panoptic segmentation tasks and was introduced into nucleus segmentation in [4]. We report the PQs of all 19 tissues. In addition, both the multi-class PQ (mPQ) and the binary PQ (bPQ) are computed for evaluation. The mPQ averages the PQ performance over each of the five nucleus categories, while the bPQ directly computes the overall performance on images of all five nucleus categories.
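As a reference for this metric, the sketch below computes the AP at a single IoU threshold from a precomputed IoU matrix, using the TP/(TP+FP+FN) definition of the DSB2018 challenge; the greedy matching is a simplification of the official scoring script.

```python
import numpy as np

def average_precision(ious, thresh):
    """AP at one IoU threshold from a (n_pred, n_gt) instance IoU matrix."""
    n_pred, n_gt = ious.shape
    matched_gt, tp = set(), 0
    for p in np.argsort(-ious.max(axis=1)):   # strongest overlaps first
        g = int(np.argmax(ious[p]))
        if ious[p, g] >= thresh and g not in matched_gt:
            matched_gt.add(g)                 # one-to-one matching
            tp += 1
    fp, fn = n_pred - tp, n_gt - tp
    return tp / (tp + fp + fn)

ious = np.array([[0.8, 0.1], [0.2, 0.6], [0.0, 0.1]])  # 3 predictions, 2 GT nuclei
mean_ap = np.mean([average_precision(ious, t) for t in np.arange(0.5, 0.91, 0.05)])
```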

5 Experimental Results

In what follows, we first conduct experiments on two publicly available databases, DSB2018 [1] and BBBC006 [33], to determine the optimal number of sampling points and demonstrate the effectiveness of the CEM module. We then justify the effectiveness of the CWM module and the SAP loss. Finally, we compare the performance of CPP-Net with other methods on all three databases.

5.1 Evaluation of CEM

In this experiment, we evaluate the optimal number of sampling points in CEM. To facilitate clean comparison, we remove the SAP loss from CPP-Net and consistently adopt CWM as the weighting strategy in Eq. (1). We then vary the number of sampling points, i.e., $N$, from 0 to 7, and report the experimental results in Table 1. When $N$ is equal to 0, CPP-Net reduces to the StarDist model [13]. As Table 1 shows, the performance of CPP-Net continues to improve as $N$ increases from 0 to 6; however, its performance saturates when $N$ exceeds 6. Therefore, we consistently set $N$ to 6 in the following experiments.

It is clear that even a single sampling point is able to significantly boost the APs on both databases, especially the APs under high IoU thresholds. Moreover, when $N$ is equal to 6, CEM improves the mean AP by 3.44% on the DSB2018 database and by 1.00% on the BBBC006 database. The above experiments justify the effectiveness of CEM.

Dataset N AP@0.50 AP@0.55 AP@0.60 AP@0.65 AP@0.70 AP@0.75 AP@0.80 AP@0.85 AP@0.90 Mean
DSB2018 0 0.8731 0.8481 0.8220 0.7849 0.7368 0.6591 0.5709 0.4401 0.2566 0.6657
1 0.8762 0.8568 0.8332 0.8042 0.7608 0.6968 0.6057 0.4805 0.3264 0.6934
2 0.8758 0.8538 0.8310 0.8037 0.7585 0.6947 0.6128 0.4918 0.3407 0.6959
3 0.8784 0.8555 0.8357 0.8027 0.7681 0.6955 0.6076 0.4872 0.3309 0.6957
4 0.8753 0.8508 0.8317 0.7995 0.7606 0.6950 0.6128 0.4887 0.3530 0.6964
5 0.8742 0.8566 0.8359 0.8024 0.7618 0.6983 0.6198 0.4921 0.3461 0.6986
6 0.8801 0.8576 0.8352 0.8021 0.7631 0.7024 0.6185 0.4974 0.3445 0.7001
7 0.8770 0.8550 0.8293 0.8031 0.7649 0.7033 0.6187 0.4997 0.3488 0.7000
BBBC006 0 0.8405 0.8167 0.7895 0.7517 0.7025 0.6396 0.5637 0.4834 0.4038 0.6657
1 0.8417 0.8179 0.7894 0.7522 0.7040 0.6396 0.5670 0.4919 0.4306 0.6705
2 0.8404 0.8167 0.7900 0.7535 0.7059 0.6436 0.5740 0.4989 0.4368 0.6733
3 0.8414 0.8174 0.7893 0.7538 0.7067 0.6456 0.5729 0.4980 0.4353 0.6734
4 0.8425 0.8193 0.7915 0.7553 0.7095 0.6480 0.5749 0.5003 0.4376 0.6754
5 0.8431 0.8199 0.7919 0.7548 0.7075 0.6462 0.5737 0.5000 0.4388 0.6751
6 0.8411 0.8173 0.7899 0.7558 0.7094 0.6491 0.5772 0.5022 0.4389 0.6757
7 0.8421 0.8178 0.7898 0.7518 0.7016 0.6395 0.5664 0.4937 0.4351 0.6709
Table 1: Ablation study on the number of sampling points $N$ in CEM.

5.2 Evaluation of CWM

The results of the ablation study on the CWM module are summarized in Table 2. In this table, 'baseline' refers to the StarDist model [13], i.e., setting $N$ in CPP-Net to 0. In addition to CWM, another two weighting strategies are evaluated. 'Equal weights' denotes the averaging strategy for Eq. (1), while 'Naïve attention' represents learning fixed weights for the points in Eq. (1) using a trainable vector with $N+1$ elements.

It is shown that CEM consistently outperforms the baseline model by large margins, regardless of the specific weighting strategy in Eq. (1). Moreover, compared with the other two weighting strategies, CWM achieves the best mean AP performance. CWM’s advantage lies mainly in its APs under high IoU thresholds, which indicates that the instance segmentation accuracy is increased. This performance improvement can be ascribed to the superior flexibility of CWM. In short, unlike the two weighting strategies that adopt fixed weights, CWM can adaptively weigh each sampled point according to the quality of its features. The above experimental results justify the effectiveness of CWM.

Dataset Method AP@0.50 AP@0.55 AP@0.60 AP@0.65 AP@0.70 AP@0.75 AP@0.80 AP@0.85 AP@0.90 Mean
DSB2018 baseline 0.8731 0.8481 0.8220 0.7849 0.7368 0.6591 0.5709 0.4401 0.2566 0.6657
equal weights 0.8758 0.8589 0.8305 0.8023 0.7597 0.6934 0.6102 0.4848 0.3255 0.6935
naïve attention 0.8758 0.8585 0.8364 0.8042 0.7612 0.6999 0.6170 0.4923 0.3330 0.6976
CWM 0.8801 0.8576 0.8352 0.8021 0.7631 0.7024 0.6185 0.4974 0.3445 0.7001
BBBC006 baseline 0.8405 0.8167 0.7895 0.7517 0.7025 0.6396 0.5637 0.4834 0.4038 0.6657
equal weights 0.8462 0.8217 0.7928 0.7556 0.7070 0.6437 0.5678 0.4912 0.4268 0.6725
naïve attention 0.8443 0.8205 0.7925 0.7567 0.7076 0.6467 0.5720 0.4957 0.4298 0.6740
CWM 0.8411 0.8173 0.7899 0.7558 0.7094 0.6491 0.5772 0.5022 0.4389 0.6757
Table 2: Ablation study investigating different weighting strategies in CEM.

5.3 Evaluation of the SAP Loss

In this experiment, we justify the effectiveness of the SAP loss. Utilizing the SAP loss requires pre-training an encoder-decoder model that transforms the instance representations in CPP-Net into other types of representations (as described in Section 3.4). Accordingly, we evaluate the following three types of representation strategies for the SAP loss. The first strategy is boundary-based, in that it predicts both semantic segmentation masks and instance boundaries [3, 7, 8]; the second strategy is bounding box-based, in that it regresses both the coordinates of nucleus centroids and the bounding box positions for each pixel inside one instance [5]. The third strategy predicts both of the above-mentioned representations. For simplicity, these three strategies are denoted as 'seg & bnd', 'bbox', and 'both' in Table 3.

In Table 3, we first show the performance of CPP-Net without the SAP loss. On both datasets, the SAP loss promotes performance in terms of mean AP. Specifically, the SAP loss ('both') improves the mean AP by 0.85% on DSB2018 and by 0.47% on BBBC006. Furthermore, it is also clear that the improvement mainly comes from the APs under high IoU thresholds: for example, 1.39%, 1.23%, 1.41%, 1.68%, and 1.05% improvements on AP@0.70 through AP@0.90 on DSB2018. For APs at lower IoU thresholds, the SAP loss does not introduce significant performance promotion. From this phenomenon, we can conclude that the SAP loss primarily penalizes prediction errors in nucleus shape, rather than localization or detection errors.

We also train CPP-Net with another variant of the SAP loss, in which the encoder-decoder model is trained to reconstruct its input representations, i.e., the ground-truth centroid probability and pixel-to-boundary distance maps. The results of CPP-Net trained with this variant are denoted as 'recons.' in Table 3. The results show that the proposed SAP loss achieves better performance than this variant. The advantage achieved by our proposed SAP loss can be attributed to the transformation between different representation strategies. Through this transformation task, the encoder-decoder model is forced to extract essential information related to nucleus shape. By contrast, the 'recons.' variant is likely to merely memorize the input information. Accordingly, the 'both' variant of our proposed SAP loss achieves better overall performance than all three other variants. In the following, we adopt this version of the SAP loss to train CPP-Net.

Dataset SAP loss AP@0.50 AP@0.55 AP@0.60 AP@0.65 AP@0.70 AP@0.75 AP@0.80 AP@0.85 AP@0.90 Mean
DSB2018 - 0.8801 0.8576 0.8352 0.8021 0.7631 0.7024 0.6185 0.4974 0.3445 0.7001
seg & bnd 0.8770 0.8598 0.8382 0.8103 0.7691 0.7067 0.6239 0.5040 0.3494 0.7043
bbox 0.8791 0.8587 0.8356 0.8087 0.7686 0.7066 0.6188 0.4994 0.3440 0.7022
both 0.8760 0.8554 0.8385 0.8141 0.7770 0.7147 0.6326 0.5142 0.3550 0.7086
recons. 0.8734 0.8525 0.8312 0.8045 0.7686 0.6996 0.6259 0.5074 0.3603 0.7026
BBBC006 - 0.8411 0.8173 0.7899 0.7558 0.7094 0.6491 0.5772 0.5022 0.4389 0.6757
seg & bnd 0.8472 0.8215 0.7933 0.7571 0.7125 0.6495 0.5782 0.5051 0.4436 0.6787
bbox 0.8459 0.8205 0.7934 0.7592 0.7100 0.6487 0.5770 0.5035 0.4383 0.6774
both 0.8448 0.8207 0.7962 0.7619 0.7150 0.6560 0.5831 0.5060 0.4398 0.6804
recons. 0.8447 0.8199 0.7926 0.7542 0.7078 0.6459 0.5749 0.5008 0.4384 0.6755
Table 3: Ablation study investigating the Shape-Aware Perceptual (SAP) loss.
Figure 4: Qualitative comparisons between StarDist, CPP-Net (w/o SAP loss), and CPP-Net trained with SAP loss. The five columns from left to right are the original images in DSB2018 (a), the ground truth segmentation results (b), and predictions by each of the three methods (c-e). Best viewed with zoom-in.

5.4 Qualitative Comparisons

In this experiment, we conduct qualitative comparisons between StarDist [13], CPP-Net (w/o SAP loss), and CPP-Net trained with SAP Loss, the results of which are presented in Fig. 4. As is shown in the first and second rows, StarDist may mistakenly segment a single nucleus instance into multiple nuclei; for its part, CPP-Net achieves more robust segmentation. Results in the third and fourth rows further indicate that the predictions of CPP-Net are more accurate regarding instance boundaries (e.g., the concave areas along nucleus boundaries). This can be attributed to CEM’s ability to explore more contextual information for centroid-to-boundary distance prediction. Finally, the SAP loss further corrects nucleus shape prediction errors, e.g., the highlighted instances in the lower-left and upper-right corners of the first example image. The above qualitative comparisons justify the effectiveness of the CEM module and SAP loss, respectively.

5.5 Comparisons with State-of-the-Art Methods

Dataset Methods AP@0.50 AP@0.55 AP@0.60 AP@0.65 AP@0.70 AP@0.75 AP@0.80 AP@0.85 AP@0.90 Mean
DSB2018 Mask R-CNN [2] 0.8323 0.8051 0.7728 0.7299 0.6838 0.5974 0.4893 0.3525 0.1891 0.6058
StarDist [13] 0.8641 0.8361 0.8043 0.7545 0.6850 0.5862 0.4495 0.2865 0.1191 0.5983
KeypointGraph* [19] 0.8244 0.8142 0.7916 0.7557 0.7083 0.6600 0.5799 0.4721 0.2989 0.6561
HoVer-Net* [4] 0.7838 0.7676 0.7547 0.7391 0.7165 0.6668 0.6135 0.5102 0.3978 0.6611
PatchPerPix [17] 0.8680 0.8480 0.8270 0.7950 0.7550 0.7160 0.6350 0.5180 0.3790 0.7046
StarDist* [13] 0.8731 0.8481 0.8220 0.7849 0.7368 0.6591 0.5709 0.4401 0.2566 0.6657
CPP-Net* 0.8760 0.8554 0.8385 0.8141 0.7770 0.7147 0.6326 0.5142 0.3550 0.7086
BBBC006 InstanceEmbedding* [20] 0.6277 0.5929 0.5572 0.5133 0.4670 0.4242 0.3815 0.2264 0.0130 0.4226
KeypointGraph* [19] 0.6115 0.5787 0.5425 0.5080 0.4737 0.4335 0.3611 0.1778 0.0173 0.4116
HoVer-Net* [4] 0.8146 0.7896 0.7627 0.7321 0.6870 0.6274 0.5561 0.4827 0.4284 0.6534
StarDist* [13] 0.8405 0.8167 0.7895 0.7517 0.7025 0.6396 0.5637 0.4834 0.4038 0.6657
CPP-Net* 0.8448 0.8207 0.7962 0.7619 0.7150 0.6560 0.5831 0.5060 0.4398 0.6804
Table 4: Comparisons with SOTA methods on DSB2018 and BBBC006. * denotes methods evaluated by ourselves.
Tissue Mask R-CNN Micro-Net HoVer-Net StarDist* CPP-Net* StarDist* with ResNet50 CPP-Net* with ResNet50
mPQ bPQ mPQ bPQ mPQ bPQ mPQ bPQ mPQ bPQ mPQ bPQ mPQ bPQ
Adrenal Gland 0.3470 0.5546 0.4153 0.6440 0.4812 0.6962 0.4855 0.6764 0.4799 0.6913 0.4868 0.6972 0.4922 0.7031
Bile Duct 0.3536 0.5567 0.4124 0.6232 0.4714 0.6696 0.4492 0.6417 0.4518 0.6569 0.4651 0.6690 0.4650 0.6739
Bladder 0.5065 0.6049 0.5357 0.6488 0.5792 0.7031 0.5718 0.6798 0.5887 0.6847 0.5793 0.6986 0.5932 0.7057
Breast 0.3882 0.5574 0.4407 0.6029 0.4902 0.6470 0.4946 0.6507 0.5031 0.6610 0.5064 0.6666 0.5066 0.6718
Cervix 0.3402 0.5483 0.3795 0.6101 0.4438 0.6652 0.4544 0.6659 0.4580 0.6718 0.4628 0.6690 0.4779 0.6880
Colon 0.3122 0.4603 0.3414 0.4972 0.4095 0.5575 0.4009 0.5534 0.4102 0.5646 0.4205 0.5779 0.4296 0.5888
Esophagus 0.4311 0.5691 0.4668 0.6011 0.5085 0.6427 0.5206 0.6465 0.5266 0.6554 0.5331 0.6655 0.5410 0.6755
Head & Neck 0.3946 0.5457 0.3668 0.5242 0.4530 0.6331 0.4613 0.6331 0.4596 0.6244 0.4768 0.6433 0.4667 0.6468
Kidney 0.3553 0.5092 0.4165 0.6321 0.4424 0.6836 0.4902 0.6802 0.4736 0.6889 0.4880 0.6998 0.5092 0.7001
Liver 0.4103 0.6085 0.4365 0.6666 0.4974 0.7248 0.4891 0.7007 0.4941 0.7144 0.5145 0.7231 0.5099 0.7271
Lung 0.3182 0.5134 0.3370 0.5588 0.4004 0.6302 0.4032 0.6165 0.4061 0.6247 0.4128 0.6362 0.4234 0.6364
Ovarian 0.4337 0.5784 0.4387 0.6013 0.4863 0.6309 0.5170 0.6499 0.5197 0.6709 0.5205 0.6668 0.5276 0.6792
Pancreatic 0.3624 0.5460 0.4041 0.6074 0.4600 0.6491 0.4410 0.6331 0.4789 0.6540 0.4585 0.6601 0.4680 0.6742
Prostate 0.3959 0.5789 0.4341 0.6049 0.5101 0.6615 0.4998 0.6473 0.5098 0.6674 0.5067 0.6748 0.5261 0.6903
Skin 0.2665 0.5021 0.3223 0.5817 0.3429 0.6234 0.3537 0.6063 0.3399 0.6042 0.3610 0.6289 0.3547 0.6192
Stomach 0.3684 0.5976 0.3872 0.6293 0.4726 0.6886 0.4191 0.6636 0.4365 0.6939 0.4477 0.6944 0.4553 0.7043
Testis 0.3512 0.5420 0.4088 0.6300 0.4754 0.6890 0.4767 0.6661 0.4903 0.6787 0.4942 0.6869 0.4917 0.7006
Thyroid 0.3037 0.5712 0.3712 0.6555 0.4315 0.6983 0.4166 0.6807 0.4431 0.7054 0.4300 0.6962 0.4344 0.7094
Uterus 0.3683 0.5589 0.3965 0.5821 0.4393 0.6393 0.4428 0.6305 0.4610 0.6443 0.4480 0.6599 0.4790 0.6622
Average across tissues 0.3688 0.5528 0.4059 0.6053 0.4629 0.6596 0.4625 0.6485 0.4700 0.6609 0.4744 0.6692 0.4817 0.6767
STD across splits 0.0047 0.0076 0.0082 0.0050 0.00760 0.0036 0.0078 0.0054 0.0082 0.0062 0.0037 0.0014 0.0057 0.0018
Table 5: Comparisons with SOTA methods on the PanNuke database. * denotes methods evaluated by ourselves.

5.5.1 Comparisons on the DSB2018 database

We compare the performance of CPP-Net with Mask R-CNN [2, 13], KeypointGraph [19], HoVer-Net [4], PatchPerPix [17], and StarDist [13]. The results of this comparison are tabulated in Table 4. It is notable that some of the above-mentioned methods were evaluated using different training and testing data split protocols in their respective papers. In the interests of fair comparison, we evaluate the performance of HoVer-Net [4] and KeypointGraph [19] ourselves, using the code released by the authors, under the same evaluation protocol as [13, 17]. We also reimplement the StarDist approach on DSB2018 and replace its BN layers with GN layers; accordingly, we achieve better performance than the results reported in [13].

As shown in Table 4, StarDist and PatchPerPix are two powerful approaches with their own respective advantages. Specifically, StarDist achieves a higher AP@0.50 than PatchPerPix, but much lower APs under high IoU thresholds. We conjecture that StarDist is affected by the prediction accuracy regarding the shape of nucleus boundaries, because it adopts the features of centroid pixels only for shape prediction, and the centroid pixel alone lacks contextual information. In comparison, CPP-Net consistently achieves better performance than StarDist; in particular, it significantly improves the performance at high IoU thresholds. Finally, CPP-Net achieves the best mean AP among all methods. The above comparison experiments justify the effectiveness of CPP-Net.

We further summarize the inference time of different models in Table 6. Here, inference time includes the network prediction time and the associated post-processing time. We compare the inference time under the same hardware conditions: one NVIDIA TITAN Xp GPU, an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz, and 128GB of RAM. As shown in Table 6, StarDist [13] is the fastest among all compared approaches, while CPP-Net increases the cost by only around 8% relative to StarDist. Compared with the other approaches presented in Table 6, CPP-Net and StarDist are more efficient owing to their light-weight backbone and simple post-processing operations.

Methods Average Inference Time (second per image)
KeypointGraph [19] 0.8556
HoVer-Net [4] 1.5556
PatchPerPix [17] 5.8767
StarDist [13] 0.2327
CPP-Net 0.2519
Table 6: Average inference time on the DSB2018 database.

5.5.2 Comparisons on the BBBC006 database

To facilitate fair comparison, we train StarDist [13], HoVer-Net [4], KeypointGraph [19], and InstanceEmbedding [20] using the same data split protocol as ours. The experimental results are summarized in Table 4. As the table shows, similar to the results on DSB2018, the StarDist model achieves a promising AP@0.50 score but an unsatisfactory AP@0.90 score. By contrast, the proposed CPP-Net promotes the nucleus segmentation performance while maintaining its advantages in terms of nucleus detection. It also continues to outperform all other state-of-the-art methods. The experimental results on this database justify the effectiveness of CPP-Net. Moreover, it is worth noting that BBBC006 consists of two types of images, namely Hoechst images and phalloidin images. The latter image type contains a significant amount of noise, which affects the performance of KeypointGraph [19] and InstanceEmbedding [20]. In comparison, StarDist, CPP-Net, and HoVer-Net continue to achieve promising results, which shows their robustness when processing noisy images.

5.5.3 Comparisons on the PanNuke database

We provide the performance of StarDist and CPP-Net with two different backbones. The first backbone adopts the same encoder as that used on the DSB2018 database, while the second employs ResNet-50 as the encoder. Their performance is compared with that of Mask R-CNN [2], Micro-Net [15], and HoVer-Net [4] in Table 5. We further adopt the same evaluation metrics as those in [34]. In Table 5, both bPQ and mPQ are computed for each of the 19 tissues.

As the experimental results in Table 5 demonstrate, CPP-Net consistently outperforms StarDist with each of the two backbones. Moreover, when CPP-Net is equipped with the same ResNet-50 backbone as HoVer-Net, it achieves better average performance than all other methods: for example, it outperforms StarDist by 0.73% and 0.75% in mPQ and bPQ, respectively. The results of the above comparisons are consistent with those on the first two databases, which further justifies the effectiveness of CPP-Net.

6 Conclusion

In this paper, we improve the performance of StarDist from two aspects. First, we propose a Context Enhancement Module that enables us to explore more contextual information and accordingly predict the centroid-to-boundary distances more robustly, especially for large-sized nuclei. We further propose a Confidence-based Weighting Module that adaptively fuses the predictions of the sampled points in the CEM module. Second, we propose a Shape-Aware Perceptual loss, which constrains the high-level shape information contained in the centroid probability and pixel-to-boundary distance maps. We conduct extensive ablation studies to justify the effectiveness of each proposed component. Finally, our proposed CPP-Net model is found to significantly outperform the StarDist model and achieve state-of-the-art performance on three popular datasets for nucleus segmentation.

References

  • [1] J.C. Caicedo et al., “Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl,” Nat. Methods, vol. 16, no. 12, pp. 1247-1253, Oct. 2019.
  • [2] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 386-397, 2020.
  • [3] H. Chen, X. Qi, L. Yu, and P.A. Heng, “DCAN: Deep contour-aware networks for accurate gland segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 2487-2496.
  • [4] S. Graham et al., “HoVer-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology,” Med. Image Anal., vol. 58, p. 101563, Dec. 2019.
  • [5] N.A. Koohbanani, M. Jahanifar, A. Gooya, and N. Rajpoot,“Nuclear instance segmentation using a proposal-free spatially aware deep learning framework,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Oct. 2019, pp. 622-630.
  • [6] N. Kumar, R. Verma, S. Sharma, S. Bhargava, A. Vahadane and A. Sethi, “A dataset and a technique for generalized nuclear segmentation for computational pathology,” IEEE Trans. Med. Imag., vol. 36, no. 7, pp. 1550-1560, Jul. 2017.
  • [7] H. Oda et al., “BESNet: Boundary-enhanced segmentation of cells in histopathological images,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Sep. 2018, pp. 228-236.
  • [8] Y. Zhou, O.F. Onder, Q. Dou, E. Tsougenis, H. Chen, and P.A. Heng, “CIA-net: Robust nuclei instance segmentation with contour-aware information aggregation,” in Proc. IPMI, 2019, pp. 682-693.
  • [9] B. Zhao et al., “Triple U-net: Hematoxylin-aware nuclei segmentation with progressive dense feature aggregation,” Med. Image Anal., vol. 65, p. 101786, Oct. 2020.
  • [10] M. W. Lafarge, E. J. Bekkers, J. P.W. Pluim, R. Duits, and M. Veta, “Roto-translation equivariant convolutional networks: Application to histopathology image analysis,” Med. Image Anal., vol. 68, p. 101849, Feb. 2021.
  • [11] P. Naylor, M. Laé, F. Reyal and T. Walter “Segmentation of nuclei in histopathology images by deep regression of the distance map,” in IEEE Trans. Med. Imag., vol. 38, no. 2, pp. 448-459, Feb. 2019.
  • [12] S. Wolf et al., “The mutex watershed algorithm for efficient segmentation without seeds,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Sep. 2018, pp. 546-562.
  • [13] U. Schmidt, M. Weigert, C. Broaddus, and G. Myers, “Cell detection with star-convex polygons,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Sep, 2018, pp. 265-273.
  • [14] F. C. Walter, S. Damrich, and F. A. Hamprecht, “MultiStar: Instance segmentation of overlapping objects with star-convex polygons,” arXiv:2011.13228, 2020.
  • [15] S.E.A. Raza et al., “Micro-Net: A unified model for segmentation of various objects in microscopy images,” Med. Image Anal., vol. 52, pp. 160-173, Feb. 2019.
  • [16] S. Chen, C. Ding, and D. Tao, “Boundary-assisted region proposal networks for nucleus segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Sep. 2020, pp. 279-288.
  • [17] P. Hirsch, L. Mais, and D. Kainmueller, “PatchPerPix for instance segmentation,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Aug. 2020.
  • [18] A. O. Vuola, S. U. Akram, and J. Kannala “Mask-RCNN and U-Net ensembled for nuclei segmentation,” in Proc. IEEE Int. Symp. Biomed. Imag. (ISBI), Apr. 2019, pp. 208-212.
  • [19] J. Yi et al., “Multi-scale cell instance segmentation with keypoint graph based bounding boxes,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Oct. 2019, pp. 369-377.
  • [20] C. Long, M. Strauch, and D. Merhof, “Instance segmentation of biomedical images with an object-aware embedding learned with local constraints,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Oct. 2019, pp.451-459.
  • [21] N. Dietler et al., “A convolutional neural network segments yeast microscopy images with high accuracy,” Nat. Commun., vol. 11, no. 1, p. 5723, 2020.
  • [22] N. Malpica et al., “Applying watershed algorithms to the segmentation of clustered nuclei,” Cytometry, vol. 28, pp. 289-297, 1997.
  • [23] X. Yang, H. Li, and X. Zhou, “Nuclei segmentation using marker-controlled watershed, tracking using mean-shift, and Kalman filter in time-lapse microscopy,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 53, no. 11, pp. 2405-2414, Nov. 2006.
  • [24] A. Tareef et al., “Multi-pass fast watershed for accurate segmentation of overlapping cervical cells,” IEEE Trans. Med. Imag., vol. 37, no. 9, pp. 2044-2059, Sep. 2018.
  • [25] P. Bamford and B. Lovell, “Unsupervised cell nucleus segmentation with active contours,” Signal Process., vol. 71, no. 2, pp. 203-213, 1998.
  • [26] C. Molnar et al., “Accurate morphology preserving segmentation of overlapping cells based on active contours,” Sci. Rep., vol. 6, p. 32412, 2016.
  • [27] Z. Lu, G. Carneiro and A. P. Bradley, “An improved joint optimization of multiple level set functions for the segmentation of overlapping cervical cells,” IEEE Trans. Image Process, vol. 24, no. 4, pp. 1261-1272, Apr. 2015.
  • [28] C. Chen, W. Wang, J. A. Ozolek, and G. K. Rohde, “A flexible and robust approach for segmenting cell nuclei from 2D microscopy images using supervised learning and template matching,” Cytometry A, vol. 83A, no. 5, pp. 495-507, 2013.
  • [29] E. Xie et al., “PolarMask: Single shot instance segmentation with polar representation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 12193-12202.
  • [30] F. Wei, X. Sun, H. Li, J. Wang, and S. Lin, “Point-set anchors for object detection, instance segmentation and pose estimation,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Aug. 2020, pp. 527-544.
  • [31] Y. Meng et al., “CNN-GCN Aggregation Enabled Boundary Regression for Biomedical Image Segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. (MICCAI), Sep. 2020, pp. 352-362.
  • [32] J. Johnson, A. Alahi, L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2016, pp. 694-711.
  • [33] V. Ljosa, K.L. Sokolnicki, and A.E. Carpenter, “Annotated high-throughput microscopy image sets for validation,” Nat. Methods, vol. 9, no. 7, p. 637, Jun. 2012.
  • [34] J. Gamper, N.A. Koohbanani, K. Benet, A. Khuram, and N. Rajpoot, “PanNuke: An open pan-cancer histology dataset for nuclei instance segmentation and classification,” in Proc. Eur. Congr. Digit. Pathol. (ECDP), 2019, pp. 11-19.
  • [35] J. Gamper et al., “PanNuke dataset extension, insights and baselines,” arXiv:2003.10778, 2020.
  • [36] A. Y. Ng et al., “On spectral clustering: Analysis and an algorithm,” in Proc. Advances in Neural Information Processing Systems (NeurIPS), 2002, pp. 849-856.
  • [37] J. Deng et al.,“ImageNet: A large-scale hierarchical image database,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2009, pp. 248-255.
  • [38] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proc. Int. Conf. Mach. Learn. (ICML), Feb. 2015, pp. 448-456.
  • [39] Y. Wu and K. He, “Group normalization,” in Proc. Eur. Conf. Comput. Vis. (ECCV), Sep. 2018, pp. 3-19.
  • [40] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2016, pp. 770-778.
  • [41] F. Milletari, N. Navab, and S.-A. Ahmadi, “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in Proc. Int. Conf. 3D Vis., Oct. 2016, pp. 565-571.
  • [42] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. Int. Conf. Learn. Representations (ICLR), 2015, pp. 1-15.