Efficient 2D neuron boundary segmentation with local topological constraints

02/03/2020
by Thanuja D. Ambegoda, et al.
ETH Zurich

We present a method for segmenting neuron membranes in 2D electron microscopy imagery. This segmentation task has been a bottleneck to reconstruction efforts of the brain's synaptic circuits. One common problem is the misclassification of blurry membrane fragments as cell interior, which leads to merging of two adjacent neuron sections into one via the blurry membrane region. Human annotators can easily avoid such errors by implicitly performing gap completion, taking into account the continuity of membranes. Drawing inspiration from these human strategies, we formulate the segmentation task as an edge labeling problem on a graph with local topological constraints. We derive an integer linear program (ILP) that enforces membrane continuity, i.e. the absence of gaps. The cost function of the ILP is the pixel-wise deviation of the segmentation from a priori membrane probabilities derived from the data. Based on membrane probability maps obtained using random forest classifiers and convolutional neural networks, our method improves the neuron boundary segmentation accuracy compared to a variety of standard segmentation approaches. Our method successfully performs gap completion and leads to fewer topological errors. The method could potentially also be incorporated into other image segmentation pipelines with known topological constraints.


1 Introduction

Figure 1: Overview of our approach: (a) Raw EM image to be segmented. (b) Neuron membrane probability map generated using a random forest classifier (white corresponds to low probability and black to high probability). (c) Maximum response from oriented edge filtering; the hue of each pixel encodes the orientation of the edge filter with the largest response. (d) Graph of possible boundaries extracted using a watershed transform. (e) Membrane segmentation from our method; note that the gaps in the membranes visible in the probability map in (b) are closed in the segmentation. (f) 2D neuron slice labels obtained from our method.

Electron microscopy (EM) is a widely used technique for acquiring high-resolution imagery of neuronal tissue, visualizing structures such as chemical synapses and vesicles (Fig. 1(a)). To carry out research in neuroanatomy, large quantities of EM images are acquired and then annotated manually [briggman2006, cardona2010]. Since manual annotation is a tedious process, there is a need for automatic and semi-automatic methods [fua2015, jain2010, kasthuri2015].

The most common errors in automatic neuron segmentation are caused by local ambiguities at neuron membranes, where algorithms erroneously merge two adjacent neurons into one. Such ambiguities result from a low signal-to-noise ratio (SNR) in some parts of the image, caused either by imperfections in the image acquisition process or by a lack of sharpness of the neuron membranes. Human annotators are much better at resolving such problems because they consider a larger context when dealing with local ambiguities.

Therefore, one way for automatic segmentation methods to approach human-level accuracy is to take a larger context into account, considering shape cues and the continuation of structures across gaps. In this paper, we present a method that automatically annotates 2D neuron slices on individual EM sections by accurately segmenting the neuron boundary that surrounds each slice (Fig. 1(f)).

Generic classifiers can be used to classify pixels of EM images into classes such as neuron membrane, synapse and mitochondrion. The resulting pixelwise probability maps (Fig. 1(b)) are commonly used as the main input in many neuron segmentation approaches [andres2012, fua2015, funke2012, kasthuri2015, kaynig2015, gala2013, amelio2011]. Pixel-level image classification has improved significantly with the recent progress in deep neural networks, in particular convolutional neural networks (CNNs) [ciresan2012, tschopp2016]. The main drawback of CNNs is that they require a large training dataset, which takes a significant amount of effort to produce. Moreover, slight variations in imaging parameters between datasets require CNNs to be retrained for each dataset. Compared to CNNs, random forest classifiers (RFCs) require much less training data; however, the quality of the probability maps generated by RFCs is usually lower than that of CNNs. Our method produces significantly more accurate neuron segmentations from noisy RFC probability maps than standard techniques such as graph cut [boykov2001] applied to the same maps, thereby providing a way to generate reasonably accurate segmentations from a small amount of training labels.

The automatic segmentation method we propose (Fig. 1) uses topological constraints that reflect expected properties of an accurate segmentation of 2D neuron slices on an EM section. These constraints are defined on the edges, nodes and faces of a planar graph derived from a membrane probability map. The segmentation problem is formulated as an integer linear program (ILP) that assigns an active or inactive state to each binary state variable (edges, nodes and faces of the graph) under these constraints. In addition to the constraints, we define an objective function that uses prior information about the state of these variables, obtained by means of state-of-the-art classifiers. Sec. 3 describes how we derive a graph of over-segmented object boundaries from a membrane probability map. Secs. 4 to 6 present the formulation of our segmentation method as an ILP. Sec. 7 provides an evaluation of our approach with comparisons to other methods.

2 Related work

With the increase of automated and efficient EM image acquisition methods, there have been many advances in automated 3D neural circuit reconstruction in recent years [funke2012, amelio2011, kaynig2015]. Most of these methods use pixelwise neuron membrane probabilities estimated with state-of-the-art classifiers.

The method presented in this paper focuses on segmenting neurons in 2D images, which is potentially useful in 3D reconstruction pipelines where the accuracy of the 3D reconstruction depends on the quality of the 2D candidate segmentations [funke2012, funke2014]. Other methods that can be used to generate 2D segments include graph cut and region merging approaches [gala2013, jain2011, andres2012]; among the region merging approaches, [gala2013, andres2012] work in both 2D and 3D.

Our approach differs from the above methods in that we use local constraints aimed at lowering topological errors, which are usually caused by low SNR, thereby addressing problems that are typically difficult for automatic methods.

3 Graphical representation of the segmentation problem

Figure 2: Illustration of the graph on which our optimization problem is defined: (a) Zoomed-in version of Fig. 1(d), showing the extracted planar graph of possible object boundaries (red) that delineate cell interior and neuron membranes, overlaid on an EM image. (b) The binary state variable types used in Sec. 4 to define the integer linear program, i.e. edges (e), nodes (n) and regions (r), annotated on a zoomed-in version of (a). Note that each object boundary shown in red in (a) between any two nodes is denoted by a pair of edges with opposite directions (green). (c) Expected outcome of our segmentation task: each region that is part of the cell interior is assigned to foreground and each region that is part of a membrane is assigned to background. Each active (directed) edge separating cell interior and membrane is highlighted in yellow; all inactive edges remain red.

Our method essentially represents the neuron segmentation task as an edge labeling problem on a planar graph (Fig. 1(d)), where the edges of the graph correspond to potential object boundaries that delineate cell interior and membrane segments. The edge labeling procedure assigns a state, either active or inactive, to each of these edges. Each active edge is considered a true object boundary, i.e. the two faces of the graph on either side of that edge are assigned to opposite classes, one being foreground (cell interior) and the other background (membrane).

The following sub-sections describe how we represent a membrane probability map of an EM image as a planar graph (Fig. 2(a)), which is then used to formulate an ILP. The optimal solution of the ILP determines which edges (and regions) are active, as shown in Fig. 2(c).

3.1 Probability maps

As a first step, we obtain a membrane probability map corresponding to the input EM image (Fig. 1(b)). Each pixel of the probability map gives the probability of that pixel belonging to the neuron membrane class. These probability maps can be generated using any generic classifier of choice (e.g. a CNN or an RFC).
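As a concrete illustration of this step, the sketch below trains a pixelwise random forest on a handful of simple filter responses and predicts a membrane probability map; the feature set, image sizes and labels are placeholders and do not reproduce the Ilastik setup used in our experiments (Sec. 7).

    import numpy as np
    from skimage import filters
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(img):
        # Simple per-pixel features: raw intensity, two Gaussian blurs, Sobel edges.
        feats = [img,
                 filters.gaussian(img, sigma=1.0),
                 filters.gaussian(img, sigma=4.0),
                 filters.sobel(img)]
        return np.stack([f.ravel() for f in feats], axis=1)

    # Hypothetical training data: one EM image and its binary membrane mask.
    train_img = np.random.rand(128, 128)
    train_labels = (np.random.rand(128, 128) > 0.8).astype(int)  # 1 = membrane

    rfc = RandomForestClassifier(n_estimators=100)
    rfc.fit(pixel_features(train_img), train_labels.ravel())

    # Membrane probability map for a new image (probability of class "membrane").
    test_img = np.random.rand(128, 128)
    prob_map = rfc.predict_proba(pixel_features(test_img))[:, 1].reshape(test_img.shape)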

3.2 Graph of over-segmented object boundaries

In the second step, potential object boundaries are extracted from the probability map. We use a bank of oriented edge detection filters (Fig. 4(a)) to detect possible object boundaries along with their orientation. The output of the filter bank applied to a membrane probability map quantifies how likely an edge is to be present at each pixel, for each of the orientations in the filter bank. The filter bank covers the full range of orientations at regular angular increments. The maximal response over all filters at each pixel is illustrated in Fig. 1(c).

A watershed transform is then applied to the maximal filter response, treated as a height map. The ridges of the watershed transform form a graph containing all possible edges detected by the oriented filter bank (Fig. 1(d)); this graph can therefore be considered an over-segmentation of the object boundaries that we proceed to segment. To avoid producing too many small watershed super-pixels (fragments), the filter response is smoothed with a Gaussian filter before applying the watershed transform.
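A rough sketch of this step is given below, approximating the oriented edge filters with a small Gabor filter bank from scikit-image; the frequency, angular step and smoothing sigma are placeholder values rather than the settings used in our pipeline.

    import numpy as np
    from skimage import filters
    from skimage.segmentation import watershed

    # prob_map: membrane probability map in [0, 1] (hypothetical input).
    prob_map = np.random.rand(256, 256)

    # Oriented edge responses from a Gabor filter bank; keep the real part.
    thetas = np.deg2rad(np.arange(0, 180, 20))
    responses = np.stack([filters.gabor(prob_map, frequency=0.1, theta=t)[0]
                          for t in thetas])
    max_response = responses.max(axis=0)                  # strongest edge evidence per pixel
    best_orientation = thetas[responses.argmax(axis=0)]   # orientation of that filter

    # Smooth the response and treat it as a height map: the watershed basins are
    # the fragments, and the ridges between adjacent basins form the
    # over-segmented boundary graph.
    height = filters.gaussian(max_response, sigma=2.0)
    fragments = watershed(height)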

4 Problem representation using binary state variables

Using the graphical representation in Fig. 2(a), we formulate an optimization task that yields the desired labeling of edges and regions, where regions that are part of the cell interior are assigned to foreground and regions that are part of a membrane are assigned to background, as illustrated in Fig. 2(c). This section describes how we formulate an ILP on the graph of over-segmented object boundaries such that the optimal solution corresponds to an accurate segmentation of neuron boundaries.

We define three types of binary state variables, corresponding to the edges, nodes and faces of the graph, as shown in Fig. 2(b). A binary state variable is active when it takes the value one.

(i) Edge state binary variables

Each edge of the graph (Fig. 2(a)) between two nodes $i$ and $j$, colored in red, has three corresponding state variables: (a) $e_{ij}$: a directed edge exists from node $i$ to node $j$, (b) $e_{ji}$: a directed edge exists in the opposite direction, and (c) $\bar{e}_{ij}$: there is no active edge between nodes $i$ and $j$. If the edge between the two nodes is part of the object boundary, either $e_{ij}$ or $e_{ji}$ is set active by the ILP (Fig. 2(c)).

(ii) Region state binary variables Each face (region) $r$ of the graph has two corresponding state variables, $x_r^{\text{on}}$ and $x_r^{\text{off}}$. $x_r^{\text{on}}$ is active when region $r$ is part of the foreground in the segmentation output; $x_r^{\text{off}}$ is active when region $r$ is part of the background.

(iii) Node state binary variables In an accurate segmentation output, the boundary tends to be smooth, i.e. the angle between two consecutive boundary edges should not be too sharp. Therefore, sharp angles between two active edges are penalized in the ILP objective. Assigning such a penalty directly to the joint activation of a pair of edges would give rise to a quadratic program. To keep the problem formulation linear, we define node state variables that correspond to pairs of directed edges. At any node of the graph (Fig. 2(a)) there are at least 3 edges (6 directed edges, as in Fig. 2(b)), of which exactly two or zero have to be active, as described later in Sec. 5.

A set of node state variables $n_i^k$, $k = 0, \dots, K_i$, is defined for each node $i$. When a particular node state is active, it specifies which pair of directed edges is active among all the edges connected to this node.

$n_i^0$ is the state in which the node is not active, i.e. not part of the segmentation, and therefore none of the edges connected to it are active. Fig. 3 illustrates the set of node state variables for a typical node of the graph with 3 neighbors.
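The bookkeeping for these node states can be made explicit with a small helper (a hypothetical sketch, assuming each active state fixes one incoming and one outgoing neighbor):

    from itertools import permutations

    def node_states(neighbours):
        # State 0: node inactive, none of its incident edges are active.
        states = [None]
        # One active state per ordered (incoming-from, outgoing-to) pair of
        # distinct neighbours: d * (d - 1) states for a node of degree d.
        states += list(permutations(neighbours, 2))
        return states

    # A node with 3 neighbours has 1 + 3 * 2 = 7 states, as in Fig. 3.
    print(node_states(["a", "b", "c"]))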

5 ILP constraints

The constraints of an ILP are a set of bounds on linear combinations of the state variables; together they define the feasible solution space of the entire set of state variables.

We define a set of hard constraints that reflect general properties expected of an accurate segmentation output. The following is a description of the linear constraints used in our ILP.

5.1 Low-level constraints

We define an activation constraint for each variable type as follows.

(i) Directed edge activation Between two adjacent nodes $i$ and $j$, we allow either the outgoing edge $e_{ij}$ (w.r.t. $i$) to be active, or the incoming edge $e_{ji}$ to be active, or neither of them (edge off, $\bar{e}_{ij} = 1$). Therefore, each edge of the graph in Fig. 2(a) has three corresponding binary edge states, of which exactly one has to be set to 1. The mathematical formulation of this constraint is as follows:

$e_{ij} + e_{ji} + \bar{e}_{ij} \;=\; 1 \quad \forall\, \{i,j\} \in E$   (1)

where $E$ is the set of all edges in the graph.
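For concreteness, the sketch below poses constraint (1) for a single toy edge with the PuLP modeling library; the variable names are illustrative and this is not our actual implementation.

    import pulp

    prob = pulp.LpProblem("edge_activation", pulp.LpMinimize)

    e_ij = pulp.LpVariable("e_ij", cat="Binary")        # directed edge i -> j
    e_ji = pulp.LpVariable("e_ji", cat="Binary")        # directed edge j -> i
    e_off = pulp.LpVariable("e_off_ij", cat="Binary")   # edge inactive

    # Eq. (1): exactly one of the three edge states is active.
    prob += e_ij + e_ji + e_off == 1, "edge_activation_ij"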

(ii) Node activation

Figure 3: Node states: (a) depicts a typical node (red) in a graph similar to Fig. 2(b), connected to each of its three neighboring nodes (blue) via a pair of directed edges. (b) to (h) show all possible node states of the node in red; each is represented by a binary variable in the ILP formulation. (b) is the inactive node state, in which the node is not connected to any of its neighbors. (c) to (h) show all possible active states this node is allowed to have under the given constraints; in each of them exactly two of the attached edges are active, one incoming and one outgoing. A node with $d$ directly connected neighbors therefore has $d(d-1)$ active node states.

Similarly, for each node $i$ we enforce that exactly one of all its possible node states $n_i^0, \dots, n_i^{K_i}$ is active:

$\sum_{k=0}^{K_i} n_i^k \;=\; 1 \quad \forall\, i \in N$   (2)

where $N$ is the set of all nodes and $K_i + 1$ is the number of binary node states of node $i$, as illustrated in Fig. 3.

(iii) Region activation Each face of the graph (region) $r$ is assigned two binary indicator variables, $x_r^{\text{on}}$ and $x_r^{\text{off}}$. When region $r$ is assigned to foreground, the corresponding region state variables take the values $x_r^{\text{on}} = 1$ and $x_r^{\text{off}} = 0$. Region state variables are therefore activated according to the following constraint:

$x_r^{\text{on}} + x_r^{\text{off}} \;=\; 1 \quad \forall\, r \in R$   (3)

where $R$ is the set of all regions.
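Continuing the same kind of sketch, the node and region activation constraints (Eqs. 2 and 3) could be posed as follows for a single toy node with three neighbors and a single toy region (again with illustrative variable names, not our implementation):

    import pulp

    prob = pulp.LpProblem("activation_constraints", pulp.LpMinimize)

    # Eq. (2): a node with 3 neighbours has 1 inactive state plus 6 active
    # states (one per ordered pair of distinct neighbours); exactly one holds.
    n_states = [pulp.LpVariable(f"n_k{k}", cat="Binary") for k in range(7)]
    prob += pulp.lpSum(n_states) == 1, "node_activation"

    # Eq. (3): each face (region) is either foreground or background.
    x_on = pulp.LpVariable("x_r_on", cat="Binary")
    x_off = pulp.LpVariable("x_r_off", cat="Binary")
    prob += x_on + x_off == 1, "region_activation"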

5.2 Topological constraints

Co-activation of different combinations of state variables results in different topologies in the output segmentation.

If not properly constrained, the resulting topology could deviate from what is expected in an accurate segmentation. Therefore, we define the following constraints to ensure that the resulting segmentation has an acceptable topology.

(i) Closed loop of edges

Figure 4: (a) Illustration of the oriented edge detection filter element used in Sec. 3.2; cell interior (foreground) is always on the right-hand side of a directed edge. (b) A dangling edge would mean that the regions A and B on either side of the edge are of the same type. This is a topological error, because an edge must have two opposite types of regions on its sides (foreground and background); we prevent this with the closed loop of edges constraint (Eq. 4). (c) Because of our definition of a directed edge, any foreground segment has to be bounded by a closed clockwise loop of active edges; we enforce this topological property with the clockwise edge loop around foreground constraint (Eq. 6).

In the segmentation output, each foreground segment must be enclosed by a closed loop of edges. This also means that there must be no dangling active edges, i.e. active edges that do not have exactly two other active edges connected to their endpoints (Fig. 4(b)). We enforce this topological property as a constraint as follows:

  • When a node is active, exactly two of the edges connected to it have to be active as well.

  • When the node is inactive ($n_i^0 = 1$), none of the edges attached to it can be active.

This results in the following mathematical formulation:

$\sum_{j \in \mathcal{N}(i)} \big( e_{ij} + e_{ji} \big) \;=\; 2\,\big(1 - n_i^0\big) \quad \forall\, i \in N$   (4)

where $\mathcal{N}(i)$ is the set of neighbors of node $i$, i.e. the nodes connected to $i$ via the directed edge pairs $e_{ij}$ and $e_{ji}$.
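As a sketch of how constraint (4) could look in an ILP modeler for one toy node with three neighbors (illustrative names only):

    import pulp

    prob = pulp.LpProblem("closed_loop", pulp.LpMinimize)

    # Node i with 3 neighbours; one pair of directed edge variables per neighbour.
    neighbours = ["a", "b", "c"]
    e_out = {j: pulp.LpVariable(f"e_i_{j}", cat="Binary") for j in neighbours}
    e_in = {j: pulp.LpVariable(f"e_{j}_i", cat="Binary") for j in neighbours}
    n_off = pulp.LpVariable("n_i_0", cat="Binary")  # inactive node state

    # Eq. (4): an active node has exactly two active incident edges,
    # an inactive node has none.
    prob += (pulp.lpSum(e_out.values()) + pulp.lpSum(e_in.values())
             == 2 * (1 - n_off)), "closed_loop_node_i"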

(ii) Activate corresponding edges of a node state For each active state of node $i$, there is a unique pair of directed edges that have to be activated. While constraint (4) ensures that there are exactly two active edges at an active node, this constraint (Eq. 5) ensures the matching between the predefined node states and their corresponding pairs of active edges. This matching is important because the objective function (Sec. 6) uses a precalculated cost for each node state, based on the angle between its pair of active edges.

Consider a typical node $i$ with (at least) a pair of neighboring nodes $j$ and $j'$, and let node state $n_i^k$ have the edges $e_{ij'}$ (outgoing) and $e_{ji}$ (incoming) activated.

We formalize this constraint as follows:

$2\,n_i^k \;\le\; e_{ji} + e_{ij'} \quad \forall\, i \in N,\; \forall\, k \in \{1, \dots, K_i\}$   (5)

where $N$ is the set of all nodes and $\{1, \dots, K_i\}$ indexes the active node states of node $i$.

(iii) Clockwise edge loop around foreground As mentioned previously, topological constraint (i), given by Eq. (4), enforces a closed cycle of edges around every foreground segment. However, according to the definition of a directed edge (Fig. 4(a)), foreground must occur to the right of an active edge. Therefore, these closed cycles of edges must have a clockwise sense (Fig. 4(c)).

This topological constraint is formulated in the ILP as follows:

$e_{ij} \;\le\; x^{\text{on}}_{r(i,j)}, \qquad e_{ij} \;\le\; x^{\text{off}}_{l(i,j)} \quad \text{for every directed edge } e_{ij}$   (6)

where $r(i,j)$ and $l(i,j)$ denote the regions to the right and to the left of the directed edge $e_{ij}$, respectively. An active directed edge thus requires foreground on its right and background on its left, which forces the closed loops of active edges around foreground segments to run clockwise.

(iv) Membranes as closed loops We do not expect to see gaps in the segmented neuron membranes. In other words, as shown in Fig. 2(c), the segments assigned to membrane (background) are contiguous: any region assigned to background must have at least two adjacent regions that are also assigned to background. This property is formalized as the following constraint:

$\sum_{e \in B(r)} \bar{e} \;\ge\; 2\,x^{\text{off}}_r \quad \forall\, r \in R$   (7)

Here, $B(r)$ is the set of all edges bounding region $r$, and $\bar{e} = 1$ when edge $e$ is not active. By requiring in Eq. (7) that at least two of the bounding edges are inactive when the region is turned off (i.e. assigned to background), we enforce that this region is part of a contiguous membrane segment.

We observe that this constraint helps the system fill gaps in the membrane probability maps (Fig. 1(b)). This is useful because neuron membranes sometimes appear blurred, which can lead to the merging of two adjacent neuron sections via the blurry region when it is misclassified as cell interior.
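A toy version of this membrane-continuity constraint (Eq. 7) for a single region bounded by four edges could look as follows (illustrative sketch only):

    import pulp

    prob = pulp.LpProblem("membrane_continuity", pulp.LpMinimize)

    # e_off[k] = 1 means the k-th bounding edge of region r is inactive.
    e_off = [pulp.LpVariable(f"e_off_{k}", cat="Binary") for k in range(4)]
    x_r_off = pulp.LpVariable("x_r_off", cat="Binary")  # region assigned to membrane

    # Eq. (7): a background (membrane) region needs at least two inactive
    # bounding edges, so it stays connected to the surrounding membrane.
    prob += pulp.lpSum(e_off) >= 2 * x_r_off, "membrane_continuity_r"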

6 ILP objective function

The purpose of the objective function of our ILP is to minimize the deviation of the optimal solution from prior information derived from the data. We precompute coefficients for each binary state variable from the input data, using classifiers trained on available ground truth. The objective function of the ILP is:

$\min \; f_E + f_N + f_R$   (8)

Here, the terms $f_E$, $f_N$ and $f_R$ represent the weighted cost terms for the activation of the state variables corresponding to the edges, nodes and regions of the model, respectively. The objective term for the edge state variables is defined as:

$f_E \;=\; \sum_{\{i,j\} \in E} \Big[\, w_E^{\text{on}}\,\big(1 - p_{ij}\big)\,\big(e_{ij} + e_{ji}\big) \;+\; w_E^{\text{off}}\,p_{ij}\,\bar{e}_{ij} \,\Big]$   (9)

where $w_E^{\text{on}}$ and $w_E^{\text{off}}$ are the linear weights controlling the relative reward (or penalization) of turning an edge on (or off), and $p_{ij}$ is the a priori probability of edge $\{i,j\}$ being active, i.e. being part of an object boundary. This probability can be learned using any general classifier of choice. Fig. 6(a) illustrates the a priori edge activation probabilities obtained using a random forest classifier.
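Under the deviation-from-prior form of Eq. (9) above, the objective coefficients for a handful of edges could be assembled as in the following sketch (the probabilities and weights are placeholder values):

    import numpy as np

    # Hypothetical a priori boundary probabilities for three edges.
    p_edge = np.array([0.9, 0.15, 0.6])
    w_on, w_off = 1.0, 1.0  # learned weights (placeholders)

    # Cost of activating each edge: large when the prior says "no boundary here".
    cost_on = w_on * (1.0 - p_edge)    # -> 0.1, 0.85, 0.4
    # Cost of keeping each edge inactive: large when the prior says "boundary here".
    cost_off = w_off * p_edge          # -> 0.9, 0.15, 0.6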

The cost term corresponding to all node state variables is:

$f_N \;=\; \sum_{i \in N} \Big[\, w_N^{\text{off}}\,c_i^0\,n_i^0 \;+\; w_N^{\text{on}} \sum_{k=1}^{K_i} c_i^k\,n_i^k \,\Big]$   (10)

where $c_i^k$ is the score assigned to node state $n_i^k$ and the inner sum runs over all active node states of node $i$.

Figure 5: Illustration of the node angle $\theta$ used in Eq. (11). The angle values are obtained from the oriented edge filters described in Sec. 3.2; the horizontal dotted line is the reference with respect to which the orientations of the edge filters are defined.

In Eq. (10) we set the score of the inactive node configuration to $c_i^0 = 1$, so that its contribution to the ILP objective is determined solely by the weight $w_N^{\text{off}}$. The score assigned to each active node configuration $k$ of node $i$ is calculated as

$c_i^k \;=\; \exp\!\left( -\,\frac{\big(\theta_i^k - \pi\big)^2}{2\,\sigma_\theta^2} \right)$   (11)

where $\theta_i^k$ is the angle between the two active edges corresponding to the active node configuration $k$ of node $i$ (Fig. 5). This score is a smoothness term: it is maximal for a node configuration in which the incoming and outgoing edges continue straight (an angle of $\pi$) and drops towards zero as the angle deviates from $\pi$. The variation of the smoothness is modeled by a Gaussian function with mean $\pi$ and a predefined spread $\sigma_\theta$, as given in Eq. (11).
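A worked example of this smoothness score, assuming the Gaussian form of Eq. (11) with a placeholder spread of 45 degrees:

    import numpy as np

    def node_smoothness_score(theta, sigma=np.deg2rad(45.0)):
        # Gaussian smoothness score of Eq. (11); theta is the angle (in radians)
        # between the incoming and outgoing edges of an active node state.
        return np.exp(-((theta - np.pi) ** 2) / (2.0 * sigma ** 2))

    print(node_smoothness_score(np.pi))             # straight continuation -> 1.0
    print(node_smoothness_score(np.deg2rad(60.0)))  # sharp 60-degree turn -> ~0.03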

The objective term for all regions is:

$f_R \;=\; \sum_{r \in R} \Big[\, w_R^{\text{on}}\,\big(1 - p_r\big)\,x_r^{\text{on}} \;+\; w_R^{\text{off}}\,p_r\,x_r^{\text{off}} \,\Big]$   (12)

where $p_r$ is the a priori probability of region $r$ being part of the cell interior. This value is obtained by averaging the membrane probability map over all pixels of the region (Fig. 6(b)).
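This region prior can be computed directly from the watershed fragments, for example as in the sketch below (hypothetical inputs; the complement is taken under the assumption that the map encodes membrane probability):

    import numpy as np
    from scipy import ndimage as ndi

    # Hypothetical inputs of equal shape: a membrane probability map in [0, 1]
    # and the integer labels of the watershed regions.
    membrane_prob = np.random.rand(128, 128)
    ws_labels = np.random.randint(1, 50, size=(128, 128))

    region_ids = np.unique(ws_labels)
    # Mean membrane probability per region; its complement serves as the
    # a priori probability of the region being cell interior.
    mean_membrane = ndi.mean(membrane_prob, labels=ws_labels, index=region_ids)
    p_interior = 1.0 - mean_membrane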

Figure 6: Visualization of the a priori probabilities associated with the edge and region state variables of an EM image, which are used in the ILP objective (Eq. 8). (a) Each pixel belonging to an edge indicates the a priori probability of that edge lying on the boundary between neuron membrane and cell interior; these probabilities are generated using a random forest classifier (white indicates high probability). (b) Each region is assigned an a priori probability of belonging to the cell interior, obtained from the membrane probability map by averaging the pixelwise probabilities over each region.

We use the structured learning framework suggested in [smola2007, teo2010] to learn the optimal weights of the linear objective function, i.e. the edge, node and region weights introduced above. A single image with ground-truth labeling was sufficient for parameter learning.

7 Results

Figure 7: Total topological error, quantified using TED, plotted against the allowed boundary tolerance in pixels. The total error is the sum of false positives, false negatives, false splits and false merges, normalized by the number of segments in the ground truth. Segmentation using our approach with CNN probability maps as input (CNN:STC) shows the best accuracy.

Figure 8: Comparison of segmentation errors using Variation of Information (VoI) and Rand Error. Methods compared: Graphcut (GC) [boykov2001], Graph-based active learning of agglomeration (GALA) [gala2013] and our method (STC). (a) Using RFC probability maps as inputs. (b) Using CNN probability maps as inputs.

Evaluation criteria

We use the Rand Index (RI) [rand1971], Variation of Information (VoI) [meila2005] and Tolerant Edit Distance (TED) [funke2016] to evaluate our approach against other image segmentation methods that are also based on clustering pixels using probability maps. Both RI and VoI are known to be affected by boundary shifts in the segmentations, even though such deviations do not affect the topological form of the segmentation that we are interested in [jain2010, funke2016]. TED avoids this problem by counting the minimum number of topological errors within a specified boundary tolerance. The topological errors evaluated are false splits, false merges, false positives (a foreground segment identified where there should be none) and false negatives (a missed segment).
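For reference, recent versions of scikit-image provide ready-made implementations of two of these criteria; a minimal sketch (with random stand-in label images) could look like this:

    import numpy as np
    from skimage.metrics import adapted_rand_error, variation_of_information

    # Hypothetical integer label images: ground truth and a candidate segmentation.
    gt = np.random.randint(0, 5, size=(64, 64))
    seg = np.random.randint(0, 5, size=(64, 64))

    are, precision, recall = adapted_rand_error(gt, seg)
    under, over = variation_of_information(gt, seg)  # conditional entropies
    voi = under + over

    print(f"adapted Rand error: {are:.3f}, VoI: {voi:.3f}")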

Experiments

Figure 9: Breakdown of topological segmentation errors quantified using the Tolerant Edit Distance (TED) [funke2016], comparing Graphcut (GC) [boykov2001], Graph-based active learning of agglomeration (GALA) [gala2013] and our method (STC). (a) False merges and false splits using RFC inputs. (b) False merges and false splits using CNN inputs.

We evaluate our method using a publicly available dataset [cardona2010] of neural tissue of Drosophila larva, along with manually annotated neuron membrane labels. This dataset contains 30 serial-section transmission electron microscopy (ssTEM) images of size 512x512 pixels, with a lateral resolution of 4x4 nm per pixel and a section thickness of approximately 50 nm. We set aside a subset of the images for evaluation.

Probability maps We used the random forest classifier provided by the Ilastik segmentation toolkit [ilastik2011]. The classifier was trained interactively by placing a few brush strokes marking neuron membrane and cell interior separately on the first few images of the dataset. Another set of probability maps was generated using the CNN implementation provided by [tschopp2016], trained on an initial subset of images from the same dataset.

Comparisons We compare the performance of our method with graph cut (GC) [boykov2001] and Graph-based active learning of agglomeration (GALA) [gala2013], using probability maps generated as described above. Our method achieves a lower VoI than both GC and GALA (Fig. 8). Furthermore, our approach results in a lower cumulative topological error than GC and GALA (Fig. 7).

8 Discussion

We have proposed a method for segmenting neuron membranes in EM images that results in fewer topological errors. Our method takes pixelwise membrane probability maps as input, which are used to represent the segmentation task as an edge labeling problem on a graph. The edge labeling is solved by formulating an ILP with topological constraints that improve segmentation accuracy.

We have shown that our method can produce 2D neuron segmentations from probability maps generated by both CNNs and RFCs, with fewer topological errors than other segmentation methods.

State-of-the-art classifiers based on deep learning, such as CNNs, produce high-quality probability maps for EM datasets but need a large number of training labels for each dataset, which is a major bottleneck for automated segmentation methods. Random forest classifiers, on the other hand, require much less training data, at the cost of lower-quality probability maps.

Due to the generic nature of the topological constraints used in our approach, this method can potentially be incorporated into any image segmentation pipeline with similar properties.

Acknowledgment

This research was funded by the National Competence Center for Biomedical Imaging (NCCBI) of Switzerland. The authors would like to thank Julien Martel, Jan Funke and Richard Hahnloser for their ideas and feedback.

References