It is not fully understood how blood stem cells differentiate over time to generate all blood cell types in the body, nor what mechanisms drive their specialization. T-cells are descendants of blood stem cells with an important role in emerging immunotherapy cancer treatments. We are particularly interested in determining how decisions are made by individual progenitor T-cells under controlled environmental conditions. To carry out experiments, individual T-cells are isolated in microwells where they grow and proliferate for five or six days. Multiple cell divisions occur in each microwell, leading to a dense cell population originating from a single cell. Multichannel images are acquired at intervals to follow cell development, which can then be quantified by analyzing fluorescent signals expressing specific markers of differentiation. Segmenting individual cells is necessary to measure signal activation per cell and to count how many cells are active over time (see Fig. 1).
The difficulties are in segmenting adjoining cells. These can take any shape, whether cluttered or isolated, and their touching borders have nonuniform brightness and patterns that defeat classical segmentation approaches. Weak boundaries are also troubling (see Fig. 1 and also Fig. 5). Furthermore, the total pixel count on adjoining borders is considerably smaller than the pixel count for the entire image, which contributes to numerical optimization difficulties when training a neural network with imbalanced data and without a properly calibrated loss function. The situation is exacerbated in large clusters where cells might overlap, making it difficult, even for the trained eye, to locate cell contours. We approach these difficulties by adopting a loss function with pixel-wise weights that take into account not only the location and length of touching borders but also the geometry of cell contours.
The problem of segmenting cells with difficult boundaries has been addressed by others. Long et al. proposed a Fully Convolutional Network (FCN) which improved the image-level classification of a Convolutional Neural Network (CNN) to a pixel-level classification. This allowed segmentation maps to be generated for images of any size and was much faster than the then prevalent patch classification approach. In the same year, Ronneberger et al. introduced U-Net, an FCN encoder-decoder architecture, to segment biomedical images, together with a weighted cross entropy loss function. This network was a breakthrough, achieving remarkable results in segmenting biomedical images, from cells to organs. We have opted to use U-Net due to its proven success, but we employ different per-pixel weights in the loss function.
Browet et al., working with mouse embryo cells, estimated pixel probabilities for cell interior, borders, and background, in line with our multiclass approach, and then minimized an energy cost function to match the class probabilities via graph cuts. We chose to avoid the pitfalls of graph cuts and of the thresholding adopted in their formulation to define seeds within cells. Chen et al. proposed DCAN, a contour-aware FCN to segment glands in histology images towards improving the automatic diagnosis of adenocarcinomas. They also modeled a loss with contours, which led them to win the 2015 MICCAI Gland Segmentation Challenge, confirming the advantages of explicitly learning contours. Recently, Xu et al. proposed a three-branch network to segment individual glands in colon histology images.
Mask R-CNN is considered to be the state of the art in instance segmentation for natural images. It classifies object bounding boxes using Faster R-CNN and then applies an FCN inside each box to segment a single object therein. Natural images, contrary to single-channel, low-entropy cell microscope images, are much richer in information, and natural objects generally differ to a great extent, making discrimination comparatively easier. Nevertheless, we plan to experiment with and adapt Mask R-CNN to our cell images after sufficient training data has been collected and annotated.
Notation and definitions. We are given a training set $\mathcal{T} = \{(I_k, G_k)\}_{k=1}^{n}$, with cardinality $|\mathcal{T}| = n$, where $I_k \colon \Omega \to \mathbb{R}$ is a gray-level image and $G_k$ its binary ground truth segmentation. Let $(I, G)$ be a generic tuple from $\mathcal{T}$. We call $\Omega_0$ and $\Omega_1$, respectively, the background and foreground subsets of $\Omega$, and more generally $\Omega_c = \{x \in \Omega : G(x) = c\}$, where $G(x)$ returns the class assigned to pixel $x$, $c \in \{0, \dots, C-1\}$. We write the pixel indicator function simply as $\mathbb{1}_c(x)$, i.e. $\mathbb{1}_c(x) = 1$ if $G(x) = c$, otherwise $\mathbb{1}_c(x) = 0$. The connected components of $\Omega_1$, $\{P_j\}$, are the non-empty masks for all trainable cells in $I$. For a mask $P$, $\partial P$ represents its contour and $H(P)$ gives its convex hull, also written as $P^H$. We say $\Gamma = \bigcup_j \partial P_j$ is the set of all contour pixels in $I$. A mask $P$ admits a skeleton $S(P)$, also written as $P^S$, which is its medial axis. The distance transform $D(x)$ assigns to every pixel $x$ the Euclidean distance to the closest non-background pixel. Touching cells in an image share a common boundary (see e.g. Fig. 1), which, by construction, is a one pixel wide background gap separating their respective connected components in $G$ (see figure in Algorithm 1).
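For concreteness, these quantities map directly onto standard image processing primitives. The following is a minimal Python sketch, not part of the original method, showing how the masks $P_j$, contours $\partial P$, convex hulls $H(P)$, skeletons $S(P)$, and the distance transform $D$ can be computed with scipy and scikit-image; all variable names are ours.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import convex_hull_image, skeletonize

# G: binary ground truth on the pixel grid Omega (1 = cell, 0 = background)
G = np.zeros((64, 64), dtype=np.uint8)
G[10:30, 10:30] = 1                      # a toy "cell" for illustration

# Connected components {P_j} of the foreground: one mask per cell.
labels, n_cells = ndimage.label(G)
for j in range(1, n_cells + 1):
    P = labels == j                      # mask P_j
    H_P = convex_hull_image(P)           # convex hull H(P)
    S_P = skeletonize(P)                 # skeleton S(P), the medial axis
    dP = P & ~ndimage.binary_erosion(P)  # contour: mask pixels that have
                                         # a background neighbor

# Distance transform D: for every pixel, the Euclidean distance to the
# closest non-background pixel (scipy measures distance to zeros, hence
# the complement of G).
D = ndimage.distance_transform_edt(1 - G)
```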
2 Multiclass and focus weights
We propose higher weights to alleviate the imbalance of classes in the training data and to emphasize cell contours, especially at touching borders, while maintaining lower weights for the abundant, more homogeneous, easily separable background pixels. However, it is also critical that background pixels around cell contours carry proportionally higher weights, as they help capture cell borders more accurately, especially in acute concave regions.
Some authors, e.g. [4, 9], consider the one pixel wide gaps separating connected components to be part of the background but with larger weights. By doing so, one might diminish the discriminative power of the network: the foreground and background intensity distributions overlap to some extent, making the separation of pixels more difficult, as suggested by the histograms shown in Fig. 2. There one can notice the difference between the signatures of touching borders, cell interiors, and background. If touching pixels are considered background pixels for the purpose of training the network with only two classes, the distance between the two classes, foreground and background, would not be as pronounced as with three separate classes. This way, background is far from the other two classes, leaving interior and touching regions to be resolved, which is helped by proper shape-aware weights. We believe, and show experimentally, that by adopting a multiclass learning approach we enhance the discriminative resolution of the network and hence obtain a more accurate segmentation of individual cells.
The goal of training our FCN is to obtain a segmentation map $\hat{G}$ as close as possible to $G$ given image $I$. When $I$ is evaluated by the FCN, a probability map $p$ is obtained such that $p(x) = (p_0(x), \dots, p_{C-1}(x))$ reports the probabilities of pixel $x$ belonging to each class. The binary $\hat{G}$ can be obtained from $p$ by applying a decision rule, like the maximum a posteriori or, in our case, Algorithm 2.
2.1 Class augmentation
We perform label augmentation on the binary $G$ to create a third class corresponding to touching borders. This is done using morphological operations (Algorithm 1). By design, this new class occupies a slightly thicker region than the original one pixel gap between cells. Training can now be done using the augmented $G$, and the resulting probability map will have an extra class representing the distribution of touching pixels.
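Algorithm 1 itself is not reproduced here; the sketch below shows one plausible morphological realization of the label augmentation, under the assumption that a pixel belongs to the touching class when the dilations of two distinct cells both reach it. The `thickness` parameter is our stand-in for the unspecified amount of thickening.

```python
import numpy as np
from scipy import ndimage

def augment_labels(G, thickness=2):
    """3-class ground truth from binary G:
    0 = background, 1 = cell, 2 = touching border.
    A pixel is 'touching' when the dilations of two distinct cells both
    reach it, so the class covers the one pixel gap plus a thin band of
    border pixels on each side."""
    labels, n = ndimage.label(G)
    struct = ndimage.generate_binary_structure(2, 2)   # 8-connectivity
    grown = [ndimage.binary_dilation(labels == j, struct,
                                     iterations=thickness)
             for j in range(1, n + 1)]
    touching = np.zeros(G.shape, dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            touching |= grown[a] & grown[b]
    return np.where(touching, 2, G.astype(np.int64))
```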
2.2 Focus weights
The weighted cross entropy loss function is used to focus learning on important but underrepresented parts of an image:

$$\mathcal{L}(\theta) = -\sum_{x \in \Omega} w(x;\phi) \sum_{c=0}^{C-1} \mathbb{1}_c(x) \log p_c(f(x;\theta)),$$

where $p_c$ is the softmax function applied to the feature vector, $\log$ is the logarithm function, $w(x;\phi)$ is a known weight at $x$ parameterized by $\phi$, $\mathbb{1}_c$ is the class indicator function, $f(x;\theta)$ is the unknown feature vector for pixel $x$, and $c \in \{0, \dots, C-1\}$.
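As a reference point, the loss can be written in a few lines of numpy; this is a direct transcription of the equation above, with `f`, `T`, and `w` standing for the network outputs $f(x)$, the labels $G(x)$, and the weights $w(x;\phi)$.

```python
import numpy as np

def weighted_cross_entropy(f, T, w):
    """Weighted cross entropy over one image.
    f: (C, H, W) raw network outputs, the feature vectors f(x);
    T: (H, W) integer labels G(x) in {0, ..., C-1};
    w: (H, W) per-pixel weights w(x; phi)."""
    f = f - f.max(axis=0, keepdims=True)      # numerically stable softmax
    p = np.exp(f)
    p /= p.sum(axis=0, keepdims=True)
    rows = np.arange(T.shape[0])[:, None]
    cols = np.arange(T.shape[1])[None, :]
    log_p = np.log(p[T, rows, cols])          # log p_{G(x)}(x) per pixel
    return -(w * log_p).sum()
```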
We propose a distance transform based weight map (DWM)

$$w_{\mathrm{DWM}}(x) = w_c(x)\,\max\!\left(0,\; 1 - \frac{\mathbb{1}_0(x)\,D(x)}{\sigma}\right),$$

where $\sigma$ is a control parameter that decays the weight away from the contour, and $w_c$ is the class imbalance weight, inversely proportional to the number of pixels in the class. Typically the scarce touching class receives the largest weight, but the weights hold regardless. Note that $w_{\mathrm{DWM}}$ vanishes for background pixels with $D(x) \ge \sigma$, and for $D(x) < \sigma$ we have a linear decay for background pixels $x \in \Omega_0$. Non-background pixels ($x \notin \Omega_0$) have class constant weights $w_c$. Fig. 4C shows an example of $w_{\mathrm{DWM}}$.
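A sketch of the DWM computation, assuming the reconstruction above: the distance transform is taken on the background, the decay is linear with cutoff $\sigma$, and `w_class` holds the per-class imbalance weights $w_c$, whose exact normalization (e.g. inverse class pixel counts from `np.bincount`) is a design choice.

```python
import numpy as np
from scipy import ndimage

def dwm_weights(T, sigma, w_class):
    """Distance transform based weight map (DWM).
    T: (H, W) class labels, 0 = background;
    sigma: decay length away from the contours;
    w_class: sequence of per-class imbalance weights w_c, indexed by class."""
    # D(x): Euclidean distance to the closest non-background pixel
    # (zero on the non-background pixels themselves).
    D = ndimage.distance_transform_edt(T == 0)
    w_c = np.asarray(w_class, dtype=float)[T]     # class constant weights
    decay = np.maximum(0.0, 1.0 - D / sigma)      # linear decay, cutoff sigma
    # Background pixels decay away from the cells; other classes keep w_c.
    return np.where(T == 0, w_c * decay, w_c)
```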
It turns out that segmenting valid minutiae (e.g. the cell tip in Fig. 3A,B), usually in the form of high curvature and narrow regions, requires stronger weights. This led us to formulate a shape aware weight map to take into account small but important nuances around contours. The concave complement of $P$ is $P^{C} = H(P) \setminus P$. Let $B$ be a binary image with the skeletons $S(P_j)$ and $S(P_j^{C})$ of all masks and their concave complements as foreground pixels, and $D_B$ the distance transform over $B$, giving for each pixel the distance to the closest skeleton pixel. We call $\hat{D}(x) = D_B(x)/\nu$ the normalized skeleton distance, where $\nu$ is a distance normalization factor. Our shape aware weight map (SAW) is

$$w_{\mathrm{SAW}}(x) = w_c(x) + (K * \omega)(x), \qquad \omega(x) = \mathbb{1}_{\Gamma}(x)\, e^{-\hat{D}(x)},$$

where convolution with the filter $K$, which combines copy padding and Gaussian smoothing, propagates the values of $\omega$ from the contour set $\Gamma$ to neighboring pixels. $\omega$ measures complexity for each contour pixel by computing distances to the skeletons of the mask and of its concave complement to assess how narrow the regions around the contours are. The smallest distances give rise to larger weights. The value of $\nu$ governs the distance tolerance and is application dependent. Note that SAW assigns large weights to small objects without any further processing or loss function change, contrary to what has been proposed by Zhou et al. Examples of SAW for single and touching cells are shown in Fig. 3 and, comparatively for a cluster, in Fig. 4D.
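The SAW construction can be sketched as follows. This follows our reconstruction of the formula above, so the exponential scoring, the Gaussian width, and the use of `mode='nearest'` for copy padding are assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import convex_hull_image, skeletonize

def saw_weights(T, nu, blur_sigma, w_class):
    """Shape aware weight map (SAW), following the reconstruction above.
    Contour pixels are scored by their distance to the skeletons of each
    mask and of its concave complement (small distance = narrow region =
    large weight); a Gaussian with replicate ('copy') padding then
    propagates these values to neighboring pixels."""
    fg = T == 1
    labels, n = ndimage.label(fg)
    skel = np.zeros_like(fg)
    contour = np.zeros_like(fg)
    for j in range(1, n + 1):
        P = labels == j
        skel |= skeletonize(P)                     # S(P)
        Q = convex_hull_image(P) & ~P              # concave complement P^C
        if Q.any():
            skel |= skeletonize(Q)                 # S(P^C)
        contour |= P & ~ndimage.binary_erosion(P)  # contour of P
    D_B = ndimage.distance_transform_edt(~skel)    # distance to skeletons
    omega = np.where(contour, np.exp(-D_B / nu), 0.0)
    prop = ndimage.gaussian_filter(omega, blur_sigma, mode='nearest')
    return np.asarray(w_class, dtype=float)[T] + prop
```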
2.3 Assigning touching pixels
The touching pixels in the network-generated probability map need to be distributed to adjacent cells. We do this in Algorithm 2 by assigning each pixel $x$ determined to be a touching pixel to its closest adjacent cell. The method uses the map $p$ and two given thresholds, $h_t$ and $h_c$, as decision rules to build the final binary segmentation $\hat{G}$: $\hat{G}$ contains the segmented cell masks over the background. The threshold $h_t$ is used to determine touching pixels and $h_c$ to determine cell masks: $x$ is touching if $p_2(x) \ge h_t$, and belongs to a cell mask if $p_1(x) \ge h_c$. All other pixels are background.
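A compact version of this post-processing, with hypothetical threshold names `h_t` and `h_c`: threshold the touching and cell probabilities, label the cell masks, and hand each touching pixel to the nearest labeled cell via scipy's feature (index) transform.

```python
import numpy as np
from scipy import ndimage

def assign_touching(p, h_t, h_c):
    """Post-processing sketch: build the final segmentation from the
    probability map p of shape (3, H, W), with classes
    0 = background, 1 = cell, 2 = touching.
    h_t, h_c: thresholds for touching and cell pixels."""
    touching = p[2] >= h_t
    cells = (p[1] >= h_c) & ~touching
    labels, _ = ndimage.label(cells)          # one id per cell mask
    # Feature transform: for every pixel, the coordinates of the nearest
    # cell pixel; touching pixels then inherit that cell's label.
    ri, ci = ndimage.distance_transform_edt(
        labels == 0, return_distances=False, return_indices=True)
    out = labels.copy()
    out[touching] = labels[ri[touching], ci[touching]]
    return out
```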
We demonstrate our method on a manually curated T-cell segmentation dataset containing thirteen images of size 1024×1024. We augmented this data with warping and geometric transformations (rotations, random crops, mirroring, and padding) in every training iteration. Ten images were used for training U-Net. We call UNET2 the use of U-Net with two classes and the weights from the original U-Net formulation. The same model with label augmentation is referred to as UNET3. DWM and SAW refer to training the U-Net network with the proposed $w_{\mathrm{DWM}}$ and $w_{\mathrm{SAW}}$ weights, respectively. FL refers to the focal loss work of Zhou et al., which was applied to the segmentation of small objects using an adaptive weight map; we use its loss combined with the label augmented ground truth. All networks were equally initialized with the same normally distributed weights using the Xavier method. After training, binary segmentations are created using the pixel assignment algorithm described in Section 2.3.
We adopted the F1 score to compare computed contours to ground truth. To allow small differences in the location of contours, an uncertainty radius $r$, measured in pixels, is used in the F1 calculation (a sketch of this tolerance-based score is given below). Table 1 compares the results from the different methods for several radii. For all radii, our proposed methods outperform the other approaches. Better contour adequacy on the training set is obtained mainly with SAW. In the testing phase, moreover, higher generalization can be observed with SAW for all radii, with DWM ranked second best.
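One common way to implement this tolerance-based contour F1 is via distance transforms of the two contour sets; the benchmark of Estrada and Jepson may differ in bookkeeping details, so this is an illustrative sketch only.

```python
import numpy as np
from scipy import ndimage

def boundary_f1(pred_contour, gt_contour, radius):
    """Contour F1 with an uncertainty radius (in pixels): a contour
    pixel counts as matched when the other contour passes within
    `radius` of it. Both inputs are non-empty binary contour images."""
    d_to_gt = ndimage.distance_transform_edt(~gt_contour)
    d_to_pred = ndimage.distance_transform_edt(~pred_contour)
    precision = (d_to_gt[pred_contour] <= radius).mean()
    recall = (d_to_pred[gt_contour] <= radius).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```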
Plots of the F1 score for different radii and fields of view are shown in Fig. 6 for all methods. We have experimented with image sizes 1024×1024, 900×900, 500×500, and 250×250, corresponding to 1X, 1.1X, 2X, and 4X fields of view. Objects look smaller to the network when the image size is reduced, compromising instance segmentation. FL performed poorly when the field of view was increased. In all cases the best performances were obtained using SAW and DWM.
On the left panel, we show, for all training epochs of the SAW network, the class weighted accuracy (blue) and the weighted cross entropy (red). On the right panel we show the accuracy during training for all tested models, with our DWM and SAW outperforming the rest.
To measure instance detection, every recognized cell with a Jaccard index greater than a fixed threshold is counted as a true positive. Contrary to the Intersection over Union (IoU) metric for detection, which uses bounding boxes, the Jaccard index here calculates the instance adequacy from the object segmentation. Precision, recall, and F1 are calculated as described by Özdemir et al. Table 2 shows the instance recognition metrics for all approaches. The proposed methods outperform all the other methods by a high margin when the number of recognized instances is taken into account, with SAW improving over DWM on both the training and the testing sets. UNET2 behaved poorly on cluttered cells, unable to separate them. We speculate that combining background and touching regions into a single class prevented UNET2 from properly classifying pixels.
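A simplified sketch of this instance-level evaluation follows; the greedy matching and the 0.5 threshold are our assumptions, and Özdemir et al. define the exact matching procedure.

```python
import numpy as np

def instance_scores(pred, gt, jaccard_thresh=0.5):
    """Instance-level precision/recall/F1: a predicted cell is a true
    positive when its Jaccard index with an unmatched ground truth cell
    exceeds the threshold. Jaccard is computed on segmented pixels, not
    bounding boxes. pred, gt: labeled images (0 = background)."""
    pred_ids = np.unique(pred[pred > 0])
    gt_ids = set(np.unique(gt[gt > 0]))
    n_gt = len(gt_ids)
    tp = 0
    for i in pred_ids:
        pi = pred == i
        best_j, best_id = 0.0, None
        for j in gt_ids:
            gj = gt == j
            jac = np.sum(pi & gj) / np.sum(pi | gj)
            if jac > best_j:
                best_j, best_id = jac, j
        if best_id is not None and best_j > jaccard_thresh:
            gt_ids.remove(best_id)   # each gt cell matches at most once
            tp += 1
    fp = len(pred_ids) - tp
    fn = n_gt - tp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```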
These encouraging results suggest that combining multiclass learning with pixel-wise shape aware weights is advantageous for achieving improved segmentation results. We will perform further tests with UNET3 to increase separability, but the accuracy we have achieved so far suggests only minor improvements.
We proposed two new shape based weight maps which improve the effectiveness of the weighted cross entropy loss function in segmenting cluttered cells. We showed how learning with augmented labels for touching cells can benefit instance segmentation. Experiments demonstrated the superiority of the proposed approach when compared to other similar methods. In future work we will explore learning procedures that adapt weights in the critical contour regions and possibly improve results by training with more data.
-  Steven A Rosenberg and Nicholas P Restifo, “Adoptive cell transfer as personalized immunotherapy for human cancer,” Science, vol. 348, no. 6230, pp. 62–68, 2015.
-  Ellen V Rothenberg, Jonathan E Moore, and Mary A Yui, “Launching the T-cell-lineage developmental programme,” Nature Reviews Immunology, vol. 8, no. 1, pp. 9, 2008.
-  Haibo He and Edwardo A Garcia, “Learning from imbalanced data,” IEEE Transactions on knowledge and data engineering, vol. 21, no. 9, pp. 1263–1284, 2009.
-  Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
-  Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
-  Arnaud Browet, Christophe De Vleeschouwer, Laurent Jacques, Navrita Mathiah, Bechara Saykali, and Isabelle Migeotte, “Cell segmentation with random ferns and graph-cuts,” in Image Processing (ICIP), 2016 IEEE International Conference on. IEEE, 2016, pp. 4145–4149.
-  Hao Chen, Xiaojuan Qi, Lequan Yu, and Pheng-Ann Heng, “DCAN: Deep contour-aware networks for accurate gland segmentation,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2016, pp. 2487–2496.
-  Korsuk Sirinukunwattana, Josien PW Pluim, Hao Chen, and others, “Gland segmentation in colon histology images: The glas challenge contest,” Medical image analysis, vol. 35, pp. 489–502, 2017.
-  Yan Xu, Yang Li, Yipei Wang, Mingyuan Liu, Yubo Fan, Maode Lai, I Eric, and Chao Chang, “Gland instance segmentation using deep multichannel neural networks,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 12, pp. 2901–2912, 2017.
-  Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick, “Mask R-CNN,” in Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017, pp. 2980–2988.
-  Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99.
-  Xiao-Yun Zhou, Mali Shen, Celia Riga, Guang-Zhong Yang, and Su-Lin Lee, “Focal FCN: Towards biomedical small object segmentation with limited training data,” arXiv preprint arXiv:1711.01506, 2017.
-  Xavier Glorot and Yoshua Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.
-  Francisco J Estrada and Allan D Jepson, “Benchmarking image segmentation algorithms,” International Journal of Computer Vision, vol. 85, no. 2, pp. 167–181, 2009.
-  Gabriela Csurka, Diane Larlus, Florent Perronnin, and F Meylan, “What is a good evaluation measure for semantic segmentation?,” in Proceedings of the British Machine Vision Conference, 2013.
-  Jan Hosang, Rodrigo Benenson, Piotr Dollár, and Bernt Schiele, “What makes for effective detection proposals?,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 4, pp. 814–830, 2016.
-  Bahadır Özdemir, Selim Aksoy, Sandra Eckert, Martino Pesaresi, and Daniele Ehrlich, “Performance measures for object detection evaluation,” Pattern Recognition Letters, vol. 31, no. 10, pp. 1128–1137, 2010.
-  Fidel A. Guerrero-Peña, Pedro D. Marrero Fernandez, Tsang Ing Ren, Mary Yui, Ellen Rothenberg, and Alexandre Cunha, “Multiclass weighted loss for instance segmentation of cluttered cells,” arXiv preprint, 2018.