1 Introduction / Overview
Embryo morphogenesis relies on coordinated cell movements and tissue reorganization to achieve correct shaping. Progress in embryo culture and live imaging techniques has allowed direct observation of cellular rearrangements in embryos from various species, including those with internal development [1]. Important insight has been obtained through qualitative analysis of live imaging data, but quantitative automated analysis remains a bottleneck. The specific question addressed here concerns the cellular mechanisms of mesoderm migration during mouse embryo gastrulation [2]. To study cell shape changes of the nascent mesoderm after ingression, we examine Brachyury-Cre; mTomato/mGFP embryos between E6.75 and E7.5 by confocal microscopy. Cells expressing Brachyury that have gone through the streak and are populating the embryo as migrating mesoderm have green membranes, while the rest of the embryo has red membranes. To ensure optimal embryo survival, it is best to avoid multi-color imaging [3], and we have thus favored a single membrane marker. The goal is to track cell movements to build a map of cell trajectories as a function of the time and place of ingression.
As a preliminary step towards cell movement analysis, our work focuses on cell detection and segmentation. The images collected using fluorescence microscopy exhibit many characteristics that make segmentation challenging, including limited spatial resolution and contrast, resulting in poor membrane details. Specifically, as can be observed in Fig. 1.a, the fluorophores do not strictly concentrate along the cell membranes in our dataset [25]. In contrast to the images studied in [6, 7, 8], this makes the border between two adjacent cells difficult to isolate, even visually. Moreover, the inner textures of distinct cells present quite similar statistics, making region merging strategies inappropriate as long as they do not use edge information. This is in contrast with natural images, in which the objects to segment are characterized by distinct inner textures and colors, and can therefore be effectively segmented using superpixel merging techniques [9].
Fig. 1 illustrates two approaches that are widely employed for image segmentation. The graph-based method of Felzenszwalb and Huttenlocher merges the regions in a greedy manner, using a minimum spanning tree to measure the pixel uniformity within a region and compare it to border transitions [5, 10]. The Mean Shift (MS) algorithm offers an alternative, popular clustering framework. MS represents pixels in the joint spatial-range domain by concatenating their spatial coordinates and intensity values into a single vector. This method then assigns each pixel to a local maximum of the statistical distribution of the pixels in this domain, using a gradient-ascent process [4]. We observe in Fig. 1 that neither of these approaches succeeds in segmenting adjacent cells. Hence, they are not able to capture the semantic knowledge required to distinguish individual cells within cell aggregates.
Among the approaches proposed in the literature to address semantic segmentation problems, [11] and [12] have respectively considered an interactive framework and a prior discriminative description of the object to segment. In the context of microscopy, [13]
have defined such prior models based on templates, learned in a supervised manner. In super-resolution localization microscopy and in MRI, [14] and [15] respectively rely on density estimation and on SVM classification of texture features to differentiate structures of interest. Those approaches are however only relevant when strong appearance priors exist about how the object to segment differs from its environment. This is not the case in our dataset, where the shape of the cells is subject to significant variability, and where the environment of each cell is composed of quite similar other cell patterns.
In cases where the object appearance is not discriminant, training appropriate edge detectors appears to be a natural approach [16, 17]. The work in [17] is of particular interest. It has been proposed in the context of neuron reconstruction using electron microscopy. It combines a pixel-level membrane probability estimator with a conventional watershed algorithm to segment regions that are likely to be closed by a membrane. In practice however, the membrane probability map presents too many local minima, which leads to an over-segmented partition. To address this problem, a so-called boundary classifier is trained to control the merging of adjacent regions, based on the statistics of boundary and region pixels. The main drawback of this approach is that the boundary classifier is trained directly on the output of the watershed stage, thereby requiring training adjustment whenever the watershed thresholds are tuned. Moreover, the contours defined in the first step, strictly based on the membrane detector, can only be removed in the second step, without being corrected based on the observed region pixel statistics.
To circumvent those limitations, we propose an approach that does not consider edge and inside pixels sequentially, but instead considers them jointly. In an initial stage, our approach learns how interior pixels differ from background or border pixels. It then adopts a global energy minimization framework to assign cell-representative labels to pixels, based on their posterior interior/border/exterior class probabilities. Explicitly considering a class of pixels lying on borders between adjacent cells is critical, since the main problem encountered by previous works on our dataset consists in splitting cellular aggregates into individual cells (see Fig. 1). Formally, we use a semi-naive Bayesian approach to estimate, for each pixel, the probabilities that the pixel lies inside a cell, on a boundary between adjacent cells, or in the background. We have chosen semi-naive Bayesian estimation because it has been shown to be accurate and to offer good robustness and generalization properties in many vision classification tasks [18, 19]. This last point is important since the manual definition of cell contour ground truth is generally considered a tedious task, which practically limits the number of available training samples. Regarding the subsequent energy minimization framework, we rely on the fast approximate minimization with label costs introduced by Delong et al. [20], based on the seminal work of Boykov et al. [21]. In summary, our work appears to be an elegant and effective solution to exploit posterior interior/border/exterior probability maps in a segmentation context.
The rest of the paper is organized as follows. Section 2 introduces our semi-naive Bayesian probability vector estimator. Section 3 describes the energy minimization labelling framework. Section 4 validates our approach, and Section 5 provides some concluding comments.
2 Pixel class probability estimation
This section explains how to assign interior/border/exterior class probabilities to a pixel, based on the observation of its neighborhood. Following many successful recent works [18, 22, 23], we use randomized sets of binary tests to characterize the different classes of point neighborhoods.
In practice, the point neighborhood is defined by a small square window of radius R, i.e., of size (2R+1) × (2R+1), centered around the pixel of interest. Each binary test compares the intensity of two pixels, and is set to 1 when the first is larger than the second, and to 0 otherwise. The pixel positions of each test are drawn uniformly at random within the square window. The approach considers M randomly selected sets of S binary tests each, to define flat structures named ferns.
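As an illustration, the random binary tests of a fern ensemble can be sketched as follows. This is a minimal sketch; the values of the window radius R, of the number of tests S per fern and of the number of ferns M are illustrative, and the function name `fern_codes` is ours:

```python
import numpy as np

rng = np.random.default_rng(0)

R = 6    # window radius (illustrative value)
S = 10   # binary tests per fern
M = 20   # number of ferns (reduced here for illustration)

# Each fern is a set of S binary tests; each test compares the intensities of
# two pixel positions drawn uniformly at random in the (2R+1)x(2R+1) window.
ferns = rng.integers(-R, R + 1, size=(M, S, 2, 2))  # (fern, test, pixel, (dy, dx))

def fern_codes(image, y, x, ferns):
    """Evaluate every fern at pixel (y, x): each fern yields an S-bit code.
    The pixel must lie at least R pixels away from the image border."""
    M, S = ferns.shape[:2]
    codes = np.zeros(M, dtype=np.int64)
    for k in range(M):
        for t in range(S):
            (dy1, dx1), (dy2, dx2) = ferns[k, t]
            bit = image[y + dy1, x + dx1] > image[y + dy2, x + dx2]
            codes[k] = (codes[k] << 1) | int(bit)
    return codes  # each code indexes one of 2^S bins of the fern's histogram
```

Each fern thus maps a point neighborhood to one of 2^S discrete codes, which is the quantity whose class-conditional distribution is learned during training.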
As in [18], let C denote the random variable that represents the class of an image sample, and H = {interior, border, exterior} be the set of classes. Given the ensemble of ferns {F_1, …, F_M}, where F_k denotes the k-th fern, we are interested in estimating the posterior probabilities P(C = c_i | F_1, …, F_M). If we admit a uniform prior with P(C = c_i) = 1/|H| for all c_i in H, Bayes' formula yields

P(C = c_i | F_1, …, F_M) = P(F_1, …, F_M | C = c_i) P(C = c_i) / P(F_1, …, F_M) ∝ P(F_1, …, F_M | C = c_i).   (1)
Learning and handling the class conditional joint probability in (1) is not feasible for large numbers of tests, since it would require to compute and store 2^{S·M} entries for each class, where S denotes the number of binary tests per fern and M the number of ferns. To keep the conditional probabilities tractable while accounting for some dependencies between binary tests, the semi-naive Bayesian approach proposed in [18] assumes independence between the ferns, but accounts for dependencies between the binary tests belonging to the same fern. The joint conditional probability is approximated by

P(F_1, …, F_M | C = c_i) ≈ ∏_{k=1}^{M} P(F_k | C = c_i),   (2)
where the class conditional distribution of each fern is simply learned by accumulating the training sample observations, as detailed in [18].
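Concretely, training amounts to accumulating, for every fern and every class, a histogram of observed fern codes; following [18], a unit (Dirichlet-style) regularizer avoids zero probabilities for codes unseen during training. A minimal sketch with illustrative sizes (the function names are ours):

```python
import numpy as np

S = 10             # binary tests per fern
M = 50             # number of ferns (illustrative)
n_classes = 3      # interior / border / exterior
n_bins = 2 ** S    # possible codes per fern

# counts[c, k, b]: number of training samples of class c whose k-th fern code is b
counts = np.zeros((n_classes, M, n_bins))

def train(counts, codes_per_sample, labels):
    """Accumulate fern-code histograms from labelled training samples."""
    for codes, c in zip(codes_per_sample, labels):
        counts[c, np.arange(M), codes] += 1

def fern_distributions(counts):
    """Class-conditional fern distributions, with the +1 regularizer of [18]
    so that unseen codes keep a small non-zero probability."""
    return (counts + 1.0) / (counts.sum(axis=2, keepdims=True) + n_bins)
```

The resulting array gives P(F_k = b | C = c_i) for every class, fern and code, which is all that equation (2) requires at test time.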
When the number of ferns is large, the product in (2) may cause computational underflow. Hence, in general, one defines the score of class c_i as the log-likelihood

s_i = ∑_{k=1}^{M} log P(F_k | C = c_i).   (3)
The scores extracted with the random ferns provide an interesting insight about the class distribution of pixels within the image. In a conventional classification framework, a pixel class MAP estimate is defined by:
ĉ = argmax_{c_i ∈ H} s_i.   (4)
In our segmentation problem, however, the MAP estimate does not define the cell boundaries accurately, see Fig. 2.b. Therefore, we turn to a global energy minimization, built upon the fern scores, to derive an appropriate segmentation. In what follows, when we refer to the fern scores, we consider them normalized across classes, i.e., ∑_{c_i ∈ H} s_i = 1 at every pixel, although we will abuse the notation for clarity.
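The log-score of equation (3) and the MAP decision of equation (4) can be sketched as follows, assuming the per-fern log-likelihoods have already been looked up from the trained distributions; the exp-normalization at the end yields scores that sum to one over the classes, as discussed above:

```python
import numpy as np

def class_scores(log_probs):
    """log_probs[i, k] = log P(F_k = f_k | C = c_i) for the observed fern codes.
    Summing the logs (eq. (3)) avoids the underflow of the product in (2);
    the final exp-normalization yields scores that sum to one over classes."""
    s = log_probs.sum(axis=1)
    s = np.exp(s - s.max())  # subtract the max for numerical stability
    return s / s.sum()

def map_class(log_probs):
    """MAP decision of eq. (4): return the index of the best-scoring class."""
    return int(np.argmax(class_scores(log_probs)))
```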
3 Class-compliant energy minimization
The global energy minimization framework introduced in [21, 20] is used to assign cell-representative labels to pixels, based on their posterior interior/border/exterior class probabilities. Given a set of labels L, we are looking for a pixel-to-label assignment f that minimizes the energy

E(f) = ∑_{p ∈ P} D_p(f_p) + ∑_{(p,q) ∈ N} w_{pq} δ(f_p ≠ f_q) + ∑_{l ∈ L} h_l δ_l(f),   (5)

where δ is the Kronecker delta, P stands for the set of pixels and N for a set of pairs of interacting pixels. As detailed below, the first term, D_p(f_p), is called data fidelity and measures the cost to associate each pixel p to its label f_p. The second term, weighted by w_{pq}, regularizes the label assignment by penalizing the assignment of distinct labels to interacting pixels p and q in the graph structure N. Finally, as detailed in [20], the last term introduces a cost h_l when f assigns the label l to at least one pixel (δ_l(f) equals 1 in that case, 0 otherwise). The graph structure penalizes local inconsistencies of the labels, while the label cost penalizes having too many different labels globally.
We initialize the label set so that each cell is represented by at least one label. To do so, we extract a number of cell-representative seeds. In practice, each seed corresponds to the center of a connected set of pixels whose interior score lies above a threshold. To circumvent the threshold selection issue, and to adapt the seed definition to the local image contrast, we consider a decreasing sequence of thresholds. Large thresholds result in small segments, which progressively grow and merge as the threshold decreases. Among those segments, we only keep the largest ones whose size remains (significantly) smaller than the expected cell size. This might result in multiple seeds per cell, as depicted by the red dots in Fig. 2.b and e. A unique label is then attached to each seed, and one virtual label is added for the background. The fact that a single cell induces multiple seeds, and thus multiple labels, is not an issue since the subsequent energy minimization tends to filter out redundant labels.
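The multi-threshold seed extraction described above can be sketched as follows. This is a simplified sketch: the connected components are computed with a pure-Python flood fill, the threshold sequence and the maximal seed size are illustrative parameters, and a component is kept as soon as it is small enough and does not overlap an already-seeded region:

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected components of a boolean mask, via BFS flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        n += 1
        labels[y, x] = n
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not labels[ny, nx]:
                    labels[ny, nx] = n
                    queue.append((ny, nx))
    return labels, n

def extract_seeds(interior, thresholds, max_size):
    """Collect seed centroids over a decreasing threshold sequence; components
    larger than max_size, or overlapping an earlier seed region, are skipped."""
    seeds, claimed = [], np.zeros(interior.shape, dtype=bool)
    for t in sorted(thresholds, reverse=True):
        labels, n = connected_components(interior > t)
        for i in range(1, n + 1):
            comp = labels == i
            if comp.sum() > max_size or (comp & claimed).any():
                continue  # too large for a cell, or already seeded earlier
            claimed |= comp
            ys, xs = np.nonzero(comp)
            seeds.append((int(ys.mean()), int(xs.mean())))
    return seeds
```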
To obtain a label assignment that is compliant with the class probabilities obtained in Section 2, we define the cost functions in (5) based on the fern scores:
The data fidelity of assigning a pixel p to a seed label l builds on two complementary signals, because we want the cost to increase sharply when the path from the pixel to the seed crosses a cell border, whether this border lies between the cell and the background or between two cells. Hence,

D_p(l) = ∑_{q ∈ L_{p,l}} max( s_B(q), s_E(q) ),   (6)

where s_I, s_B and s_E correspond to the fern scores for the interior, boundary and exterior classes respectively, and L_{p,l} is the set of pixels along the line connecting pixel p to the seed associated to l [24]. The max operator is used to penalize the allocation of l to p only when L_{p,l} crosses a border, i.e., when s_B or s_E becomes large. As depicted in Fig. 2.c and d, the signal ∑_{q ∈ L_{p,l}} s_E(q) indeed peaks for borders between the cells and the background, while the signal ∑_{q ∈ L_{p,l}} s_B(q) peaks for borders between two cells.
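Assuming dense score maps indexed by pixel coordinates, the line-integral data fidelity can be sketched with Bresenham's line algorithm [24] (the function names are ours, and the accumulation of max(border, exterior) evidence along the line is our reading of the cost):

```python
def bresenham(p, q):
    """Integer pixels on the segment from p to q (Bresenham's algorithm [24])."""
    (y0, x0), (y1, x1) = p, q
    dy, dx = abs(y1 - y0), abs(x1 - x0)
    sy, sx = (1 if y1 > y0 else -1), (1 if x1 > x0 else -1)
    err, pts = dx - dy, []
    while True:
        pts.append((y0, x0))
        if (y0, x0) == (y1, x1):
            return pts
        e2 = 2 * err
        if e2 > -dy: err -= dy; x0 += sx
        if e2 < dx:  err += dx; y0 += sy

def seed_fidelity(p, seed, s_B, s_E):
    """Accumulate max(border, exterior) score along the line from p to the seed,
    so the cost grows whenever the line crosses a border of either kind."""
    return sum(max(s_B[q], s_E[q]) for q in bresenham(p, seed))
```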
The data fidelity of assigning a pixel p to the background label is computed from the exterior scores along a set of lines Λ_p, each line originating in p and having a length λ, keeping the line that accumulates the least non-exterior evidence. Hence,

D_p(l_bg) = min_{L ∈ Λ_p} ∑_{q ∈ L} (1 − s_E(q)).   (7)
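One plausible implementation of this background fidelity, assuming each line accumulates (1 − exterior score) per pixel and the minimum is taken over a discrete set of directions (the number of directions is an illustrative choice), is:

```python
import numpy as np

def background_fidelity(p, s_E, length=20, n_dirs=16):
    """Minimum, over n_dirs straight lines of the given length starting at p,
    of the accumulated (1 - exterior score); lines leaving the image are
    simply truncated in this sketch."""
    h, w = s_E.shape
    best = float("inf")
    for a in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        cost, y, x = 0.0, float(p[0]), float(p[1])
        for _ in range(length):
            y += np.sin(a)
            x += np.cos(a)
            iy, ix = int(round(y)), int(round(x))
            if not (0 <= iy < h and 0 <= ix < w):
                break
            cost += 1.0 - s_E[iy, ix]
        best = min(best, cost)
    return best
```

A pixel deep in the background sees at least one cheap line (exterior score close to one everywhere), while a pixel inside a cell pays along every direction.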
The graph edge weight w_{pq} between interacting pixels p and q, which are adjacent pixels in an 8-neighborhood connectivity, is computed using a sigmoid function as

w_{pq} = 1 / (1 + exp( γ (x_{pq} − τ) )),

where x_{pq} is defined by x_{pq} = max( s_B(p), s_B(q) ). Doing so, the edge weight is low (high), allowing (discouraging) neighboring pixels to have different labels, when the probability of having a boundary at pixel p or q is high (low). The values of γ and τ are not critical and are chosen empirically.
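A sketch of such a sigmoid edge weight, assuming the boundary evidence on an edge is the larger of the two endpoint boundary scores, and with illustrative values for γ and τ:

```python
import math

def edge_weight(s_B_p, s_B_q, gamma=10.0, tau=0.5):
    """Sigmoid edge weight: close to 1 when no boundary is likely at p or q
    (strong smoothing), close to 0 when a boundary is likely (cheap label cut).
    gamma controls the sharpness of the transition, tau its location."""
    x = max(s_B_p, s_B_q)  # assumed boundary evidence on the edge (p, q)
    return 1.0 / (1.0 + math.exp(gamma * (x - tau)))
```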
Minimizing (5) is NP-hard. We compute an approximate solution efficiently using graph cuts, with α-expansions, as described in [20]. This energy minimization framework is particularly well suited to our problem because it can account for multiple seeds spanning the same cell, as opposed to classical watershed approaches [17].
4 Experimental results
We validate our segmentation framework on a sequence of images with manually annotated ground truth, publicly released [25]. To favor reproducible research, our code will also be made publicly available at camera-ready submission.
To define our training set based on the manually annotated cell contours (white dashed contours in Fig. 2.a), we rely on morphological operations. Specifically, the interior class is set with binary erosion, while the exterior class is set with binary dilation. The boundary class is composed of pixels lying on the exterior region of at least two different cells.
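These morphological label definitions can be sketched as follows. This is a numpy-only sketch using a 3×3 cross-shaped structuring element; the number of iterations and the use of overlapping dilated cell regions to define the boundary class are our assumptions:

```python
import numpy as np

def dilate(mask, it=1):
    """Binary dilation with a 3x3 cross structuring element, via array shifts."""
    for _ in range(it):
        out = mask.copy()
        out[1:, :] |= mask[:-1, :]
        out[:-1, :] |= mask[1:, :]
        out[:, 1:] |= mask[:, :-1]
        out[:, :-1] |= mask[:, 1:]
        mask = out
    return mask

def erode(mask, it=1):
    """Binary erosion as the complement of the dilated complement."""
    return ~dilate(~mask, it)

def make_classes(cell_masks, it=1):
    """interior = union of eroded cells; exterior = away from every dilated
    cell; boundary = pixels covered by the dilations of >= 2 different cells."""
    dil = [dilate(m, it) for m in cell_masks]
    interior = np.zeros(cell_masks[0].shape, dtype=bool)
    for m in cell_masks:
        interior |= erode(m, it)
    overlap = sum(d.astype(int) for d in dil)
    boundary = overlap >= 2
    exterior = overlap == 0
    return interior, boundary, exterior
```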
Since the number of annotated cells is limited, and because we enforce balanced classes during training, the training set is restricted to a limited number of pixels per class. To increase the training set diversity and gain invariance to rotation, we train the ferns on square windows that sample the image at several different orientations. Each fern involves 10 tests, and we use 200 ferns. To measure the overall performance, we have run a cross-validation and measured the resulting classification accuracy and its standard deviation.
We have then tested our energy minimization framework on all the images available in the dataset (see http://perso.uclouvain.be/arnaud.browet/bioseg/results.html for additional results; to avoid overfitting, each image has been segmented based on ferns trained exclusively on the annotations of the other images). The parameters have been selected empirically. Fig. 2 presents some representative examples of segmentation, together with some insightful intermediate metrics.
Fig. 2.b depicts the segmentation resulting from the ferns only, using the argmax decision defined in (4).
Fig. 2.c and d present the line integrals considered in equation (6). Note that the integral values are only provided for pixels that lie within a 50-pixel distance from a seed, which explains the particular landscape of Fig. 2.c and d. We observe that both metrics provide complementary information, delineating the cells either from the background or from adjacent cells.
The last column in Fig. 2 presents the segmentation resulting from our proposed ferns-based energy minimization. We observe that the extracted regions are in very good agreement with the ground truth. As depicted in Fig. 2, our segmentation is able to accurately localize boundaries between touching cells. Moreover, our method is also able to merge multiple seeds within a unique region, or to reject seeds located in the background, as also displayed in Fig. 2.
5 Conclusion
Our work has adopted an energyminimization framework to segment cell images according to the cues provided by random ferns about the probability that each pixel is located within a cell or not.
Our framework is highly versatile, since the class definitions and the energy terms can account for any prior knowledge related to the problem at hand. It is also interactive-friendly, in the sense that the seed definition can easily be adjusted manually, if needed.
References
 [1] S. Nowotschin and A.K. Hadjantonakis, “Live imaging mouse embryonic development: Seeing is believing and revealing,” Mouse Molecular Embryology, vol. 1092, October 2013.
 [2] S. J. Arnold and E. J. Robertson, “Making a commitment: cell lineage allocation and axis patterning in the early mouse embryo,” Nature Reviews Molecular Cell Biology, vol. 10, no. 2, February 2009.
 [3] X. Lou, M. Kang, and P. Xenopoulos, et al., “A rapid and efficient 2d/3d nuclear segmentation method for analysis of early mouse embryo and stem cell image data,” Stem Cell Reports, vol. 2, no. 3, January 2014.
 [4] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, May 2002.

 [5] P.F. Felzenszwalb and D.P. Huttenlocher, “Efficient graph-based image segmentation,” Int. Journal of Computer Vision, vol. 59, no. 2, May 2004.
 [6] R. Fernandez, P. Das, V. Mirabet, E. Moscardi, et al., “Imaging plant growth in 4d: robust tissue reconstruction and lineaging at cell resolution,” Nature Methods, vol. 7, no. 7, July 2010.
 [7] Z. Khan, Y.-C. Wang, E.F. Wieschaus, and M. Kaschube, “Quantitative 4d analyses of epithelial folding during Drosophila gastrulation,” Development, vol. 141, no. 14, 2014.
 [8] K. R. Mosaliganti, R. R. Noche, F. Xiong, I. A. Swinburne, and S. G. Megason, “Acme: Automated cell morphology extractor for comprehensive reconstruction of cell membranes,” PLoS Computational Biology, vol. 8, no. 12, December 2012.
 [9] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, May 2011.
 [10] C. Couprie, C. Farabet, Y. LeCun, and L. Najman, “Causal graphbased video segmentation,” in IEEE Int. Conf. on Image Processing, Sept 2013.
 [11] D. Liu, Y. Xiong, L. Shapiro, and K. Pulli, “Robust interactive image segmentation with automatic boundary refinement,” in IEEE Int. Conf. on Image Processing, Oct. 2010.
 [12] H. Zhang and S.A. Goldman, “Image segmentation using salient pointsbased object templates,” in IEEE Int. Conf. on Image Processing, Oct. 2006.

 [13] C. Chen, W. Wang, J.A. Ozolek, and G.K. Rohde, “A flexible and robust approach for segmenting cell nuclei from 2d microscopy images using supervised learning and template matching,” Cytometry A, vol. 85, no. 5, 2013.
 [14] K.C.J. Chen, G. Yang, and J. Kovacevic, “Spatial density estimation based segmentation of super-resolution localization microscopy images,” in IEEE Int. Conf. on Image Processing, Oct. 2014.

 [15] P.K. Roy, A. Bhuiyan, and K. Ramamohanarao, “Automated segmentation of multiple sclerosis lesion in intensity enhanced flair mri using texture features and support vector machine,” in IEEE Int. Conf. on Image Processing, Oct. 2013.
 [16] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce, “Discriminative sparse image models for class-specific edge detection and image interpretation,” in Eur. Conf. on Computer Vision, Oct. 2008.
 [17] T. Liu, M. Seyedhosseini, M. Ellisman, and T. Tasdizen, “Watershed merge forest classification for electron microscopy image stack segmentation,” in IEEE Int. Conf. on Image Processing, Oct. 2013.
 [18] M. Ozuysal, M. Calonder, V. Lepetit, and P. Fua, “Fast keypoint recognition using random ferns,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 3, March 2010.
 [19] P. Parisot, B. Sevilmis, and C. De Vleeschouwer, “Training with corrupted labels to reinforce a probably correct teamsport player detector,” in Int. Conf. on Advanced Concepts for Intelligent Vision Systems, 2013.
 [20] A. Delong, A. Osokin, H. N. Isack, and Y. Boykov, “Fast approximate energy minimization with label costs,” Int. Journal of Computer Vision, vol. 96, no. 1, 2011.
 [21] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 11, Nov 2001.

 [22] A. Bosch, A. Zisserman, and X. Munoz, “Image classification using random forests and ferns,” in IEEE Int. Conf. on Computer Vision, Oct. 2007.
 [23] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” Machine Learning, vol. 63, no. 1, 2006.
 [24] J.E. Bresenham, “Algorithm for computer control of a digital plotter,” IBM Systems Journal, vol. 4, no. 1, 1965.
 [25] Dataset available under the “Data/Software” tab at http://sites.uclouvain.be/ispgroup.