Multiclass Weighted Loss for Instance Segmentation of Cluttered Cells

02/21/2018 ∙ by Fidel A. Guerrero Peña, et al. ∙ UFPE ∙ California Institute of Technology

We propose a new multiclass weighted loss function for instance segmentation of cluttered cells. We are primarily motivated by the need of developmental biologists to quantify and model the behavior of blood T-cells, which might help us understand their regulation mechanisms and ultimately help researchers in their quest to develop an effective immuno-therapy cancer treatment. Segmenting individual touching cells in cluttered regions is challenging as the feature distributions on shared borders and cell foreground are similar, making it difficult to discriminate pixels into their proper classes. We present two novel weight maps applied to the weighted cross entropy loss function which take into account both class imbalance and cell geometry. Binary ground truth training data is augmented so the learning model can handle not only foreground and background but also a third touching class. This framework allows training using U-Net. Experiments with our formulations have shown superior results when compared to other similar schemes, outperforming binary class models with significant improvement of boundary adequacy and instance detection. We validate our results on manually annotated microscope images of T-cells.







1 Introduction

It is not fully understood how blood stem cells differentiate over time to generate all blood cell types in the body, nor what mechanisms drive their specialization. T–cells are descendants of blood stem cells with an important role in emerging immunotherapy cancer treatments [1]. We are particularly interested in determining how decisions are made by individual progenitor T–cells under controlled environmental conditions [2]. To carry out experiments, individual T–cells are isolated in microwells where they grow and proliferate for five or six days. Multiple cell divisions occur in each microwell, leading to a dense cell population originating from a single cell. Multichannel images are acquired at intervals to follow cell development, which can then be quantified by analyzing fluorescent signals expressing specific markers of differentiation. Segmenting individual cells is necessary to measure signal activation per cell and to count how many cells are active over time (see Fig.1).

The difficulties are in segmenting adjoining cells. These can take any shape, whether cluttered or isolated, and their touching borders have nonuniform brightness and patterns that defeat classical segmentation approaches. Weak boundaries are also troubling (see Fig.1 and also Fig.5). Furthermore, the total pixel count on adjoining borders is considerably smaller than the pixel count for the entire image, which contributes to numerical optimization difficulties when training a neural network with imbalanced data [3] and without a properly calibrated loss function. The situation is exacerbated in large clusters where cells might overlap, making it difficult, even for the trained eye, to locate cell contours. We approach these difficulties by adopting a loss function with pixel-wise weights, following [4], that takes into account not only the location and length of touching borders but also the geometry of cell contours.

Figure 1: We show in (A) cells marked by the mTomato fluorophore. Their corresponding signal of interest, CD25, which changes over time, is expressed in some cells (B). Our goal is to segment individual cells, as shown in (C), and colocalize CD25 to measure its concentration within each cell (D) and consequently count how many cells are active at any given time. In this illustration, the top two cells are fully active as reported by their high CD25 content. Colored masks in (C) are for illustration purposes only. A typical cluttering of T–cells is presented in panel (E), which shows the maximum intensity projection of a few slices of a widefield stack.


The problem of segmenting cells with difficult boundaries has been addressed by others. Long et al. [5] proposed a Fully Convolutional Network (FCN) which improved the image-level classification of a Convolutional Neural Network (CNN) to a pixel-level classification. This allowed segmentation maps to be generated for images of any size and was much faster compared to the then prevalent patch classification approach. In the same year, Ronneberger et al. [4] introduced U-Net to segment biomedical images, an FCN encoder-decoder type of architecture, together with a weighted cross entropy loss function. This network was a breakthrough, achieving remarkable results in segmenting biomedical images, from cells to organs. We have opted to use U-Net due to its proven success but we employ different per pixel weights in the loss function.

Browet et al. [6], working with mouse embryo cells, estimated pixel probabilities for cell interior, borders, and background – in line with our multiclass approach – and then minimized an energy cost function to match the class probabilities via graph–cuts. We chose to avoid the pitfalls of graph-cuts and the thresholding adopted in their formulation to define seeds within cells. Chen et al. [7] proposed DCAN, a contour aware FCN to segment glands from histology images towards improving the automatic diagnosis of adenocarcinomas. They also modeled a loss with contours, which led them to win the 2015 MICCAI Gland Segmentation Challenge [8], confirming the advantages of explicitly learning contours. Recently, Xu et al. [9] proposed a three branch network to segment individual glands in colon histology images.

Mask R-CNN [10] is considered to be the state of the art in instance segmentation for natural images. It classifies object bounding boxes using Faster R-CNN [11] and then applies an FCN inside each box to segment a single object therein. Natural images, contrary to single channel, low entropy cell microscope images, are much richer in information. Natural objects in general differ to a great extent, making discrimination comparatively easier. Nevertheless, we plan to experiment and adapt Mask R-CNN to our cell images after sufficient training data has been collected and annotated.

Notation and definitions. We are given a training set S = {(I₁, G₁), …, (Iₙ, Gₙ)}, with cardinality |S| = n, where Iₖ : Ω → ℝ is a gray-level image and Gₖ its binary ground truth segmentation. Let (I, G) be a generic tuple from S. We call Ω₀ and Ω₁, respectively, the background and foreground subsets of Ω, and more generally Ω_c = {x ∈ Ω : class(x) = c}, where class(x) returns the class assigned to pixel x, c ∈ {0, 1, 2}. We write the pixel indicator function simply as 1_c(x), i.e. 1_c(x) = 1 if class(x) = c, otherwise 1_c(x) = 0. The connected components of Ω₁, {C₁, …, Cₘ}, are the non-empty masks for all trainable cells in G. For a mask C, ∂C represents its contour and H(C) gives its convex hull. We say ∂Ω₁ = ∪ᵢ ∂Cᵢ is the set of all contour pixels in G. A mask C admits a skeleton S(C), which is its medial axis. The distance transform D assigns to every pixel of Ω the Euclidean distance to the closest non-background pixel. Touching cells in an image share a common boundary (see e.g. Fig.1), which, by construction, is a one pixel wide background gap separating their respective connected components in G (see figure in Algorithm 1).
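To make the notation concrete, connected components, contours, and the distance transform can be computed with standard tools. The following is a scipy sketch; the function and variable names are ours, not the paper's:

```python
import numpy as np
from scipy import ndimage

def components_contours_distance(gt):
    """Illustrate the notation: connected components of the foreground,
    their contours (mask minus its erosion), and the distance transform
    assigning to each pixel the distance to the closest foreground pixel."""
    gt = gt.astype(bool)
    labels, n = ndimage.label(gt)                 # masks C_1 .. C_n
    contours = gt & ~ndimage.binary_erosion(gt)   # contour pixels of each mask
    dist = ndimage.distance_transform_edt(~gt)    # 0 on cells, grows outward
    return labels, n, contours, dist
```

The one pixel wide gap between touching cells shows up here as background pixels with small but nonzero distance, flanked by two different component labels.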

2 Multiclass and focus weights

We propose higher weights to alleviate the imbalance of classes in the training data and to emphasize cell contours, especially at touching borders, while maintaining lower weights for the abundant, more homogeneous, easily separable background pixels. However, it is also critical that background pixels around cell contours carry proportionally higher weights as they help capture cell borders more accurately, especially in acute concave regions.

Some authors, e.g. [4, 9], consider the one pixel wide gaps separating connected components to be part of the background but with larger weights. By doing so, one might diminish the discriminative power of the network, as the foreground and background intensity distributions overlap to some extent, making the separation of pixels more difficult, as suggested by the histograms shown in Fig.2. There one can notice the difference between the signatures of touching borders, cell interiors, and background. If touching pixels are considered background pixels for the purpose of training the network with only two classes, the distance between the classes, foreground and background, would not be as pronounced as when we have three separate classes. This way, background is far off the other two classes, leaving interior and touching regions to be resolved, which is helped with proper shape-aware weights. We believe, and show experimentally, that by considering a multiclass learning approach we enhance the discriminative resolution of the network and hence obtain a more accurate segmentation of individual cells.

The goal of training our FCN network is to obtain a segmentation map O as close as possible to the ground truth G, given image I. When I is evaluated by the FCN, a probability map P is obtained such that P(x) reports the probabilities of pixel x belonging to each class. The binary O can be obtained from P by applying a decision rule, like the maximum a posteriori or, in our case, Algorithm 2.

Figure 2: The distinct intensity and structural signatures of the three predominant regions – background (A), cell interior (B), in-between cells (C) – found in our images are shown above. Shown in panel (D) are the combined histogram curves for comparison. This distinction led us to adopt a multiclass learning approach which helped resolve the narrow bright boundaries separating touching cells, as seen in (C).


2.1 Class augmentation

We perform label augmentation on the binary ground truth to create a third class corresponding to touching borders. This is done using morphological operations (Algorithm 1). By design, this new class occupies a slightly thicker region than the original gap between cells. Training can now be done using the augmented ground truth, and the resulting probability map will have an extra class representing the distribution for touching pixels.

Algorithm 1: Augment ground truth (labelAugment). Mapping the binary ground truth to a three-class ground truth is done using morphological operations (dilation and erosion). We use a square structuring element. Images are inverted for illustration purposes.
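One way to realize this label augmentation is to mark as touching every pixel covered by the dilation of two or more distinct cell masks. This is a sketch of our reading of Algorithm 1, not necessarily its exact morphological steps:

```python
import numpy as np
from scipy import ndimage

def label_augment(gt, size=3):
    """Augment a binary ground truth into three classes:
    0 = background, 1 = cell interior, 2 = touching borders.
    A pixel is 'touching' when the dilations of two or more distinct
    connected components cover it (hypothetical reading of Algorithm 1)."""
    labels, n = ndimage.label(gt)
    selem = np.ones((size, size), bool)  # square structuring element
    coverage = np.zeros(gt.shape, dtype=int)
    for i in range(1, n + 1):
        # count how many dilated masks cover each pixel
        coverage += ndimage.binary_dilation(labels == i, selem).astype(int)
    out = gt.astype(np.uint8).copy()  # keep 0/1 from the binary ground truth
    out[coverage >= 2] = 2            # overlap of dilations -> touching class
    return out
```

With a larger structuring element the touching class grows slightly thicker than the original one pixel gap, as the text describes.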


2.2 Focus weights

The weighted cross entropy loss function [4] is used to focus learning on important but underrepresented parts of an image:

E = − Σ_{x∈Ω} Σ_{c∈{0,1,2}} w(x) 1_c(x) log( softmax_c(v(x)) )

where softmax_c is the softmax function applied to the vector v(x) of class scores, log is the logarithm function, w(x) is a known weight at x parameterized by σ, 1_c is the class indicator function, and v(x) is the unknown feature vector for pixel x. We propose a distance transform based weight map (DWM),

W_DWM(x) = w_c(x) · max( 1 − D(x)/σ, 0 ),

where σ is a control parameter that decays the weight away from the contour, and w_c is the class imbalance weight [4], inversely proportional to the number of pixels in the class; the weights hold regardless of the particular choice of σ. Note that W_DWM vanishes for background pixels with D(x) ≥ σ and decays linearly for those with 0 < D(x) < σ. Non-background pixels (D(x) = 0) have class constant weights. Fig.4C shows W_DWM. It turns out that segmenting valid minutiae (e.g. cell tip in Fig.3A,B), usually in the form of high curvature and narrow regions, requires stronger weights. This led us to formulate a shape aware weight map to take into account small but important nuances around contours.
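As a concrete sketch, the weighted cross entropy and a DWM-style weight map can be computed with numpy/scipy. The inverse-frequency class weights and the linear decay below follow our reading of the text, not necessarily the paper's exact constants:

```python
import numpy as np
from scipy import ndimage

def dwm_weights(classes, sigma=5.0):
    """Distance transform based weight map (DWM), as we read it:
    class imbalance weights everywhere, linearly decayed to zero for
    background pixels farther than `sigma` from any cell."""
    n_classes = classes.max() + 1
    counts = np.bincount(classes.ravel(), minlength=n_classes)
    w_class = counts.sum() / (n_classes * np.maximum(counts, 1))  # inverse frequency
    w = w_class[classes].astype(float)
    # distance of each background pixel to the closest non-background pixel
    dist = ndimage.distance_transform_edt(classes == 0)
    decay = np.clip(1.0 - dist / sigma, 0.0, 1.0)  # 1 on cells, 0 beyond sigma
    return w * decay

def weighted_cross_entropy(probs, classes, weights):
    """Pixel-wise weighted cross entropy: -sum_x w(x) log p_{class(x)}(x).
    probs: (C, H, W) probability map, classes: (H, W) labels."""
    p_true = np.take_along_axis(probs, classes[None], axis=0)[0]
    return -(weights * np.log(np.clip(p_true, 1e-12, 1.0))).sum()
```

The decay leaves non-background pixels at their class-constant weight (distance zero) and suppresses background far from any cell, matching the behavior described above.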

The concave complement of a mask C is C̃ = H(C) \ C. Let K be a binary image with the skeletons S(C) and S(C̃), for all masks and their concave complements, as foreground pixels, and let D_K be the distance transform over K. We call φ(x) = min( D_K(x), τ ) / τ. Our shape aware weight map (SAW) is

W_SAW(x) = w_c(x) + ( G_σ ∗ φ̃ )(x),

where convolution with filter G_σ, which combines copy padding and Gaussian smoothing, propagates the contour values

φ̃(x) = 1 − φ(x) if x ∈ ∂Ω₁, and φ̃(x) = 0 otherwise,

to neighboring pixels. τ is a distance normalization factor. φ̃ measures complexity for each contour pixel by computing distances to the skeletons of the mask and of its complement to assess how narrow are the regions around the contours. The smallest distances give rise to larger weights. The value of τ governs the distance tolerance and is application dependent. Note that SAW assigns large weights to small objects without any further processing or loss function change, contrary to what has been proposed by Zhou et al. [12]. Examples of SAW for single and touching cells are shown in Fig.3 and comparatively for a cluster in Fig.4D.

Figure 3: SAW for single (B) and touching (D) cells. Contours are shown in red and concavities in cyan in panels (A) and (C). Color code is normalized to maximum weight value. Note the large weights (in red) on narrow and concave regions, which help the network learn to segment accurate contours.


2.3 Assigning touching pixels

The touching pixels in the network generated probability map P need to be distributed to adjacent cells. We do this in Algorithm 2 by assigning each pixel x, for which it has been determined that class(x) = 2, to its closest adjacent cell. The method uses the map P and two given thresholds, τ₂ and τ₁, as decision rules to build the final binary segmentation O: O contains the segmented cell masks over a zero background. The threshold τ₂ is used to determine touching pixels and τ₁ to determine cell masks: class(x) = 2 when P₂(x) ≥ τ₂, and class(x) = 1 when P₁(x) ≥ τ₁ and P₂(x) < τ₂. All other pixels are background.

Algorithm 2: Pixel class assignment (instanceAssign). Pixels are classified by thresholding the probability map, and each touching pixel is then reassigned to its closest adjacent cell mask to produce the final instance segmentation.
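A plausible implementation of this assignment uses the distance transform's nearest-feature indices to hand each touching pixel to its closest cell. The threshold names and this particular nearest-cell step are our choices, not necessarily the paper's:

```python
import numpy as np
from scipy import ndimage

def instance_assign(prob, t_touch=0.5, t_cell=0.5):
    """prob: (3, H, W) probabilities for background, cell, touching.
    Returns an integer label image: 0 = background, k > 0 = cell instance."""
    touching = prob[2] >= t_touch
    cells = (prob[1] >= t_cell) & ~touching
    labels, n = ndimage.label(cells)
    # for every pixel, index of the nearest labeled cell pixel
    dist, idx = ndimage.distance_transform_edt(labels == 0, return_indices=True)
    nearest = labels[tuple(idx)]
    out = labels.copy()
    out[touching] = nearest[touching]  # split touching pixels among cells
    return out
```

A touching pixel equidistant from two cells is handed to whichever the distance transform reports first; a more careful tie-break could use sub-pixel distances or region growing.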
Figure 4: An example of cluttered cells is shown in panel (A). The weight maps, from left to right, are the plain class balancing weight map (B), our proposed distance transform based weight map (C), and our shape aware weight map (D). Color code is normalized to the maximum weight value, with reds representing higher weights and blues lower weights.


3 Results

Figure 5: Results of instance segmentation obtained with UNET2, UNET3, FL, DWM, and SAW and ground truth delineations for eight regions of two images. Results are for the best combination of the two thresholds. Note that the two cells in the top right set cannot be resolved by any method. This might be due to weak signal on a very small edge or lack of sufficient training data. Our methods render close to optimal solutions for almost all training and testing data. We expect enhancements, for all methods, when larger training data is available. Contour colors are only to illustrate separation of cells.


We demonstrate our method on a manually curated T–cell segmentation dataset containing thirteen images of size 1024x1024. We augmented this data with warping and geometrical transformations (rotations, random crops, mirroring, and padding) in every training iteration. Ten images were used for the U-Net training [4]. We call UNET2 the use of U-Net with two classes and weights from [4]. The same model with label augmentation is referred to as UNET3. DWM and SAW refer to training the U-Net network with the proposed distance transform based and shape aware weights, respectively. We refer to FL as the focal loss work in [12], which was applied to the segmentation of small objects using an adaptive weight map; we use its loss combined with our label augmented ground truth. All networks were equally initialized with the same normally distributed weights using the Xavier method [13]. After training, binary segmentations are created using the pixel assignment algorithm described in Section 2.3.

Radius 2 3 4 5 6 7
Training set
UNET2 0.7995 0.8762 0.8936 0.9053 0.9109 0.9137
UNET3 0.7997 0.8896 0.9087 0.9244 0.9320 0.9356
FL 0.7559 0.8557 0.8821 0.9007 0.9087 0.9125
DWM 0.8285 0.9139 0.9333 0.9484 0.9546 0.9578
SAW 0.8392 0.9183 0.9353 0.9485 0.9544 0.9573
Testing set
UNET2 0.6158 0.7116 0.7368 0.7627 0.7721 0.7828
UNET3 0.6529 0.7505 0.7770 0.8021 0.8158 0.8238
FL 0.5434 0.6566 0.6958 0.7263 0.7414 0.7505
DWM 0.6749 0.7847 0.8156 0.8398 0.8531 0.8604
SAW 0.7332 0.8298 0.8499 0.8699 0.8800 0.8860


Table 1: Results of the F1 score for different contour uncertainty radii. Our SAW method performed better than others, with DWM the second best on training data.

We adopted the F1 score to compare computed contours to ground truth. To allow small differences in the location of contours, an uncertainty radius, measured in pixels, is used for the F1 calculation, following [14]. Table 1 compares the results from different methods for several radii. For all radii our proposed methods outperform the other approaches. Better contour adequacy is obtained mainly with SAW in the training set. In the testing phase, higher generalization can be observed with SAW for all radii, with DWM ranked second best. We will perform further tests with UNET3 to increase separability, but the accuracy we have achieved so far (see Fig.6) suggests improvements will not surpass DWM or SAW.
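Our simplified reading of the radius-tolerant F1 of [14] can be sketched as matching contour pixels that lie within r pixels of the other contour:

```python
import numpy as np
from scipy import ndimage

def boundary_f1(pred, gt, radius=2):
    """F1 on contour pixels, counting a contour pixel as matched when the
    other contour passes within `radius` pixels of it (our simplification)."""
    def contour(mask):
        m = mask.astype(bool)
        return m & ~ndimage.binary_erosion(m)
    cp, cg = contour(pred), contour(gt)
    if not cp.any() or not cg.any():
        return 0.0
    dg = ndimage.distance_transform_edt(~cg)  # distance to ground truth contour
    dp = ndimage.distance_transform_edt(~cp)  # distance to predicted contour
    precision = (dg[cp] <= radius).mean()
    recall = (dp[cg] <= radius).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Larger radii forgive larger localization errors, which is why all methods' scores in Table 1 grow with the radius.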

Plots of the F1 score for different radii and fields of view are shown in Fig.6 for all methods. We have experimented with image sizes 1024x1024, 900x900, 500x500, and 250x250, corresponding to 1X, 1.1X, 2X, and 4X fields of view. Objects look smaller to the network when the image size is reduced, compromising instance segmentation. FL performed poorly when the field of view is increased. In all cases the best performances were obtained using SAW and DWM.

Figure 6: Top row: F1 scores for varying radii in 1X and 2X field of view sizes for each model. F1 values were consistently better for SAW and DWM. Bottom row: on the left panel, we show for all training epochs of the SAW network the class weighted accuracy (blue) and the weighted cross entropy (red). In the right panel we show the accuracy during training for all tested models, with outperforming rates for our DWM and SAW.


To measure instance detection, every recognized cell with Jaccard Index [15] greater than 0.5 is counted as a True Positive. Contrary to the Intersection over Union (IoU) metric for detection [16], which uses bounding boxes, the Jaccard Index calculates the instance adequacy from the object segmentation. Precision, Recall, and F1 are calculated as described by Özdemir et al. [17]. Table 2 shows the instance recognition metrics for all the approaches. The proposed methods outperform all the other methods by a high margin when the number of recognized instances is taken into account, with SAW improving over DWM on both the training and testing sets. UNET2 behaved poorly on cluttered cells, unable to separate them. We speculate the combination of background and touching regions by UNET2 into a single class prevented the proper classification of pixels.
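Instance-level precision, recall, and F1 under a Jaccard threshold can be sketched with a greedy matching (a simplification of the protocol in [17]):

```python
import numpy as np

def instance_prf(pred_labels, gt_labels, iou_thresh=0.5):
    """Count a predicted instance as a true positive when its best Jaccard
    index (intersection over union of pixel masks) against an unmatched
    ground truth instance exceeds `iou_thresh`."""
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    matched, tp = set(), 0
    for p in pred_ids:
        pm = pred_labels == p
        best, best_g = 0.0, None
        for g in gt_ids:
            if g in matched:
                continue
            gm = gt_labels == g
            inter = np.logical_and(pm, gm).sum()
            union = np.logical_or(pm, gm).sum()
            iou = inter / union if union else 0.0
            if iou > best:
                best, best_g = iou, g
        if best > iou_thresh and best_g is not None:
            matched.add(best_g)  # each ground truth cell matches at most once
            tp += 1
    precision = tp / len(pred_ids) if pred_ids else 0.0
    recall = tp / len(gt_ids) if gt_ids else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

A merged pair of touching cells yields one prediction overlapping two ground truth cells, so at most one can match, which is how under-segmentation depresses recall in Table 2.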

These encouraging results suggest that combining multiclass learning with pixel–wise shape aware weights might be advantageous to achieve improved segmentation results. We will perform further tests with UNET3 to increase separability but the accuracy we have achieved so far suggests minor improvements.

              UNET2   UNET3   FL      DWM     SAW
Training set
Precision     0.6506  0.7553  0.7276  0.8514  0.8218
Recall        0.4187  0.6457  0.4076  0.7191  0.8567
F1 metric     0.5096  0.6962  0.5225  0.7797  0.8389
Testing set
Precision     0.5546  0.7013  0.6076  0.7046  0.8113
Recall        0.2311  0.3717  0.2071  0.5195  0.6713
F1 metric     0.3262  0.4858  0.3089  0.5980  0.7347


Table 2: Instance detection for Jaccard Index above 0.5 is most pronounced for SAW, meaning it can detect more cell instances than the other methods.

4 Conclusions

We proposed two new shape based weight maps which improved the effectiveness of the weighted cross entropy loss function in segmenting cluttered cells. We showed how learning with augmented labels for touching cells can benefit instance segmentation. Experiments demonstrated the superiority of the proposed approach when compared to other similar methods. In future work we will explore learning procedures that adapt weights in the critical contour regions and possibly improve results by training with more data.


  • [1] Steven A Rosenberg and Nicholas P Restifo, “Adoptive cell transfer as personalized immunotherapy for human cancer,” Science, vol. 348, no. 6230, pp. 62–68, 2015.
  • [2] Ellen V Rothenberg, Jonathan E Moore, and Mary A Yui, “Launching the T-cell-lineage developmental programme,” Nature Reviews Immunology, vol. 8, no. 1, pp. 9, 2008.
  • [3] Haibo He and Edwardo A Garcia, “Learning from imbalanced data,” IEEE Transactions on knowledge and data engineering, vol. 21, no. 9, pp. 1263–1284, 2009.
  • [4] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
  • [5] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
  • [6] Arnaud Browet, Christophe De Vleeschouwer, Laurent Jacques, Navrita Mathiah, Bechara Saykali, and Isabelle Migeotte, “Cell segmentation with random ferns and graph-cuts,” in Image Processing (ICIP), 2016 IEEE International Conference on. IEEE, 2016, pp. 4145–4149.
  • [7] Hao Chen, Xiaojuan Qi, Lequan Yu, and Pheng-Ann Heng, “DCAN: Deep contour-aware networks for accurate gland segmentation,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2016, pp. 2487–2496.
  • [8] Korsuk Sirinukunwattana, Josien PW Pluim, Hao Chen, and others, “Gland segmentation in colon histology images: The glas challenge contest,” Medical image analysis, vol. 35, pp. 489–502, 2017.
  • [9] Yan Xu, Yang Li, Yipei Wang, Mingyuan Liu, Yubo Fan, Maode Lai, I Eric, and Chao Chang, “Gland instance segmentation using deep multichannel neural networks,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 12, pp. 2901–2912, 2017.
  • [10] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick, “Mask R-CNN,” in Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017, pp. 2980–2988.
  • [11] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
  • [12] Xiao-Yun Zhou, Mali Shen, Celia Riga, Guang-Zhong Yang, and Su-Lin Lee, “Focal FCN: Towards biomedical small object segmentation with limited training data,” arXiv preprint arXiv:1711.01506, 2017.
  • [13] Xavier Glorot and Yoshua Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.
  • [14] Francisco J Estrada and Allan D Jepson, “Benchmarking image segmentation algorithms,” International Journal of Computer Vision, vol. 85, no. 2, pp. 167–181, 2009.
  • [15] Gabriela Csurka, Diane Larlus, Florent Perronnin, and F Meylan, “What is a good evaluation measure for semantic segmentation?,” IEEE PAMI, vol. 26, pp. 1, 2004.
  • [16] Jan Hosang, Rodrigo Benenson, Piotr Dollár, and Bernt Schiele, “What makes for effective detection proposals?,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 4, pp. 814–830, 2016.
  • [17] Bahadır Özdemir, Selim Aksoy, Sandra Eckert, Martino Pesaresi, and Daniele Ehrlich, “Performance measures for object detection evaluation,” Pattern Recognition Letters, vol. 31, no. 10, pp. 1128–1137, 2010.
  • [18] Fidel A. Guerrero-Peña, Pedro D. Marrero Fernandez, Tsang Ing Ren, Mary Yui, Ellen Rothenberg, and Alexandre Cunha, “Multiclass weighted loss for instance segmentation of cluttered cells,” arXiv preprint, 2018.