1 Introduction
Recent advances in large-scale electron microscopy (EM) allow the generation of terabytes to petabytes of serial images of brain tissue at nanometer resolution [7, 18]. Machine learning methods have made automated 3D reconstruction possible for individual neurons [6] and intracellular organelles such as mitochondria [2]. Intriguingly, the 3D shapes of these objects resolved at the EM level are far more complicated than the classic depictions in textbooks. Thus, novel morphology analysis tools are needed to further our understanding of the basic properties of neuronal compartments (Fig. 1).

Nevertheless, there are three challenges. First, the branches and loops of a non-convex object can tangle together in the 3D mesh, which makes an intuitive perception of the underlying topology difficult. Second, the neuroscience community lacks an intuitive way to convey the shape information of neurons and organelles. Third, traditional descriptors of 3D meshes are designed to compare objects of similar scale, which makes them unsuitable for neurons and organelles that span a wide range of spatial sizes.
To tackle these challenges, we propose a topological nomenclature system to abstract, categorize, and manipulate the 3D meshes of neurons and organelles. We first skeletonize the meshes into vertices and edges to untangle the objects, and further prune the skeletons into concise reduced graphs while preserving topological properties. To name these objects systematically, we propose a nomenclature system that borrows ideas from the nomenclature of organic compounds. The primary aim of nomenclature in chemistry is to ensure that every name refers to a specific compound without ambiguity. Naming systems such as InChI [5] and SMILES display more structural detail but are more cumbersome for scientific communication. Therefore, in this work we follow the IUPAC rules [14] to generate graph names that are more human-readable. To apply the nomenclature system to shape analysis, we use a deep learning model for self-supervised learning on graphs [9]. In comparison with traditional shape descriptors such as the heat kernel signature (HKS) [12], one key difference of our approach is that our graph representations are more intuitive to understand and allow for a simple shape decomposition into primitives. The idea of skeleton-based 3D shape matching has been explored by Sundar et al. [17], who introduce the concept of a topological signature vector, a low-dimensional representation of a graph that can be used by similarity measures. One difference from our approach is that their scheme generates an acyclic skeletal graph, which does not capture cycles or multiple paths between two vertices.
To summarize, we present three main contributions in this paper. First, we propose a shape abstraction method that converts 3D meshes into 2D graphs to improve morphological perception. Second, the nomenclature system not only makes these abstractions more interpretable for neuroscientists but also further compresses the information needed to reconstruct the graph. Third, we implement an unsupervised model that embeds these graphs into a vector space for 3D shape retrieval and decomposition.
2 Method
Given an input 3D mesh, we first transform it into a reduced graph that preserves its topological information, then determine its nomenclature name based on its object type (e.g., mitochondrion), and lastly compute its features in the nomenclature embedding space for later manipulation (Fig. 2).
2.1 Topology-Aware Reduced Graph Generation
Starting from the voxel representation of a 3D mesh, we convert it into a reduced graph that preserves its topological structure, analogous to a molecular graph [10].
Graph Initialization: We use an off-the-shelf skeletonization algorithm [13] to extract a 3D skeleton from the voxel representation. We can view the extracted 3D skeleton as a weighted undirected graph G = (V, E, W), where V is the set of coordinates of skeleton nodes in the 3D voxel grid, E is the set of edges, and W is the set of edge weights.
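As a minimal illustration of this representation, the following sketch builds such a weighted graph from hypothetical skeletonization output; the node IDs, coordinate format, and edge pairs are illustrative assumptions, not the output format of [13]:

```python
import math

def skeleton_graph(nodes, edges):
    """Build a weighted undirected graph from skeleton output.
    nodes: {id: (x, y, z)} voxel coordinates; edges: iterable of
    (id, id) pairs. Edge weight = Euclidean length in voxels."""
    adj = {n: {} for n in nodes}
    for u, v in edges:
        w = math.dist(nodes[u], nodes[v])
        adj[u][v] = w
        adj[v][u] = w
    return adj
```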
Graph Reduction: Based on the degree of incident edges, we divide the skeleton vertices into junctions (degree at least three) and endpoints (degree one). We aim to reduce the skeleton graph to a graph whose vertex set consists of the junctions and endpoints (referred to as the key nodes) and which preserves topological features of the skeleton, such as paths and distances (along the 3D skeleton) between any pair of key nodes. Further, we require the reduced graph to preserve any cycles present in the skeleton graph and any multiple paths between two key nodes.
For graph reduction, we modify the Breadth-First Search traversal algorithm, as outlined in Algorithm 1 in the supplementary material. At each step of the traversal, we only enqueue key nodes to our traversal queue. We initialize the queue with an arbitrary key node, and while visiting a key node, we only enqueue (1) any key nodes adjacent to it and (2) any other key nodes connected to it by a path in the skeleton comprising only non-key nodes (referred to as a simple path). We define the "thickness" of an edge as the average of the distance transforms of its two vertices, and the "thickness" of a path as the mean thickness of its edges weighted by edge lengths. During traversal, we keep track of two metrics for every pair of key nodes connected by a simple path: (1) the sum of the lengths of all edges on that simple path and (2) the mean thickness of that simple path.
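A minimal sketch of this reduction, assuming the adjacency-dictionary graph layout above and a per-node distance-transform value (`radius`); it illustrates the idea rather than Algorithm 1 itself, and omits the handling of cycles that contain no key node:

```python
def reduce_skeleton(adj, radius):
    """adj: {node: {neighbor: edge_length}}, radius: {node: float}.
    Returns reduced edges (u, v, total_length, mean_thickness) between
    key nodes (junctions and endpoints, i.e. degree != 2)."""
    key_nodes = {n for n, nbrs in adj.items() if len(nbrs) != 2}
    reduced = []
    for start in key_nodes:
        for first in adj[start]:
            # Walk along the simple path of non-key nodes.
            prev, cur = start, first
            length, weighted_thick = 0.0, 0.0
            while True:
                w = adj[prev][cur]
                # Thickness of an edge: mean distance transform of its ends.
                thick = 0.5 * (radius[prev] + radius[cur])
                length += w
                weighted_thick += w * thick
                if cur in key_nodes:
                    break
                # Degree-2 node: continue to the single other neighbor.
                nxt = next(n for n in adj[cur] if n != prev)
                prev, cur = cur, nxt
            # Record each simple path once (by endpoints, length, thickness).
            edge = (min(start, cur), max(start, cur), round(length, 6),
                    round(weighted_thick / length, 6))
            if edge not in reduced:
                reduced.append(edge)
    return reduced
```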
Graph Post-processing: Small biological structures often do not contribute much to the overall function. To further simplify the graph, we can remove edges and cycles whose path length is small relative to the total length of edges in the reduced graph. Thus, we collapse all edges with a length below a threshold ρ. With a bigger ρ, we obtain a coarser-level representation of the graph.
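The edge-collapse step can be sketched with a small union-find; the `(u, v, length)` edge tuples are a hypothetical layout for the reduced graph, and letting a merged cluster inherit one representative node is a simplification:

```python
def collapse_short_edges(edges, rho):
    """Collapse reduced-graph edges shorter than rho by merging their
    endpoints (union-find); edges: (u, v, length) tuples."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v, length in edges:
        if length < rho:
            parent[find(u)] = find(v)
    kept = set()
    for u, v, length in edges:
        ru, rv = find(u), find(v)
        if ru != rv:  # edges inside a merged cluster vanish
            kept.add((min(ru, rv), max(ru, rv), length))
    return sorted(kept)
```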
2.2 Topological Nomenclature Rules
Our nomenclature system is adapted from the IUPAC nomenclature of organic chemistry, which is not only invariant to deformation and the graph indexing order but also easily convertible back to the graph representation. We add the suffixes -ito and -idal to the names of mitochondria and pyramidal neurons, respectively.
Acyclic Graph: An acyclic graph is a graph having no cycles. A reduced graph can be entirely a tree or contain tree branches. The nomenclature rule for a tree is to first find the longest chain of vertices and assign a prefix based on its number of vertices (e.g., a longest chain with five vertices has the prefix penta). Finding the longest path in a general graph is NP-complete [3], but finding the longest path in an undirected tree can be solved efficiently by running the breadth-first search (BFS) algorithm twice (see the supplementary material for details). The rule thus ensures that, for acyclic graphs, not only computer programs but also human users can efficiently derive the corresponding topological nomenclature. Every vertex on the longest path is then assigned a location number from 1 to the length of the chain. For a branch, we use the location index as a prefix and name the branch recursively based on the same rules. For simplicity, we combine the prefixes of branches with the same structure and omit the topology description of a branch that contains only one node.
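The double-BFS step can be sketched as follows; the adjacency-list input format is an assumption, and the prefix table is a partial illustration (the full list follows IUPAC conventions):

```python
from collections import deque

# Partial numeral-prefix table for illustration.
GREEK = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
         6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def farthest(adj, src):
    """BFS from src; returns the path to a farthest node (BFS dequeues
    nodes in nondecreasing distance, so the last one is farthest)."""
    prev = {src: None}
    queue = deque([src])
    last = src
    while queue:
        last = queue.popleft()
        for nbr in adj[last]:
            if nbr not in prev:
                prev[nbr] = last
                queue.append(nbr)
    path = []
    while last is not None:
        path.append(last)
        last = prev[last]
    return path[::-1]

def longest_chain(adj):
    """Double BFS: the longest path in a tree runs between a node
    farthest from an arbitrary start and the node farthest from it."""
    a = farthest(adj, next(iter(adj)))[-1]
    return farthest(adj, a)
```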
Cyclic Graph: If the reduced graph contains cycles, we assign a higher priority to the cycles and name the graph accordingly. For a graph with one cycle, the prefix is cyclo. We then name branches using parentheses containing their relative locations on the ring together with the branch descriptions defined above. For a bicyclic graph, where two cycles share at least one vertex, the root numeral prefix of the graph name depends on the total number of vertices in all rings together [4]. The prefix bicyclo denotes the sharing of at least two vertices, while spiro denotes the sharing of exactly one vertex. Between the prefix and the suffix, a pair of brackets with numerals gives the number of vertices between each pair of bridgehead vertices. These numbers are arranged in descending order and separated by periods. For example, a graph in which a 3-vertex cycle and a 5-vertex cycle share two vertices (one edge) carries the descriptor bicyclo[3.1.0] (Fig. 2). Such rules can easily be extended to graphs with three or more cycles, and we refer the reader to the supplementary material for more details.
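For the fused two-ring case above, the bracket numerals can be computed directly from the ring sizes; this helper is an illustrative assumption that covers only two rings sharing exactly one edge (two vertices):

```python
def fused_bicyclo_descriptor(ring_a, ring_b, shared):
    """Bracket numerals for two rings sharing `shared` vertices
    (shared == 2 means one common edge): count the vertices strictly
    between the two bridgehead vertices on each of the three bridges,
    sorted in descending order."""
    assert shared == 2, "sketch covers only the fused (one-edge) case"
    bridges = sorted([ring_a - 2, ring_b - 2, 0], reverse=True)
    return "bicyclo[" + ".".join(str(b) for b in bridges) + "]"
```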
2.3 Topological Nomenclature Embedding
We extract features from the graphs to calculate the similarity between them. For the graph representation, we construct adjacency matrices whose elements indicate the connectivity between nodes. We employ a variational graph auto-encoder (VGAE) [9] to extract features from each adjacency matrix. VGAE is a neural network for unsupervised learning on graphs based on the variational autoencoder [8]. In VGAE, we first normalize the adjacency matrix using the symmetric normalization scheme. Then, we perform graph convolutions on the normalized adjacency matrix. Finally, VGAE reconstructs the adjacency matrix using a fully connected layer. The network is trained to minimize the difference between the input adjacency matrix and the reconstructed one. We use the output of the graph convolutions as the nomenclature embedding.

3 Dataset
Data Acquisition: We imaged a tissue block from Layer II/III in the primary visual cortex of an adult rat at nanometer resolution using a multi-beam scanning electron microscope. After stitching and aligning the 2D images on multi-CPU clusters, we obtained a final 3D image stack of a roughly 100 μm cube.
Object Segmentation: We adopted the 3D U-Net model [15] for the initial automatic neuron and mitochondria segmentation. Then we used a manual annotation tool [1] to proofread the segmentation results.
JWR-Mito300: We reconstructed all the mitochondria found in the somata of 11 cells: one pyramidal neuron, six interneurons, and four glial cells. From all the fully segmented mitochondria, we selected the 316 that have non-trivial topological structures and a volumetric size larger than 0.2 μm³ (Fig. 3a).
JWR-Pyr30: We randomly selected 30 pyramidal cells whose cell bodies are located in the central volume and that show a significant portion (if not all) of their basal dendrites. Each pyramidal neuron has one apical dendrite pointing toward the pial surface and an axon often extending in the opposite direction. Nevertheless, they all show distinct distributions of oblique and basal dendrites (Fig. 3b).
4 Experiments
In this section, we first evaluate our nomenclature extraction results both quantitatively and qualitatively. Then we show two applications of the extracted nomenclature to 3D shape analysis for mitochondria and pyramidal neurons. For the quantitative evaluation of our nomenclature method, we asked neuroscientists to draw the reduced graph they imagined when shown the original 3D object meshes. We then counted the ratio of 'correct' graphs based on the experts' perception and found that over 70% of the graphs are identical to it. We refer readers to the supplementary material for more details.
4.1 3D Shape Retrieval
For a given query 3D shape, users may want to find similar shapes in the entire dataset. To this end, we perform 3D shape retrieval using the proposed topological nomenclature: the goal is to find the topological shapes most similar to the query. For the JWR-Mito300 dataset, we set every 3D shape as a query and discover its two nearest neighbors.
To compare two 3D shapes, we first compute pairwise distances between the nomenclature embeddings of the two shapes. Then, we determine the similarity between the two shapes as the mean cost of the optimal one-to-one matching between their embeddings, computed with the Hungarian algorithm.
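A brute-force sketch of this similarity; `cost` would hold the pairwise embedding distances between the nodes of two shapes (assumed square here for simplicity), and in practice an efficient solver such as `scipy.optimize.linear_sum_assignment` replaces the permutation search:

```python
from itertools import permutations

def mean_matching_cost(cost):
    """Similarity between two shapes as the mean cost of the optimal
    one-to-one assignment between their embedding nodes. Brute force
    over all permutations, for illustration only."""
    n = len(cost)
    best = min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
    return best / n
```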
Figure 4 shows the 3D shape retrieval results of the proposed algorithm compared to both HKS [16] and spectral embedding [11]. The results indicate that the proposed algorithm discovers topologically similar 3D shapes. In contrast, HKS finds 3D shapes that have visually similar meshes but different underlying neuronal or mitochondrial structures. Since the spectral embedding encodes the entire graph, it fails to find relevant shapes.
4.2 3D Shape Decomposition
To understand the structures of 3D shapes, we decompose topological nomenclatures into subgraphs. To achieve this, we construct a dictionary of the proposed nomenclature embedding features. We apply the k-means clustering algorithm to the embedding features of junctions to generate the words in the dictionary, setting k to 50 and 100 for the pyramidal neurons and mitochondria, respectively. Note that we use only the junctions, since end nodes have no local structure. In the inference phase of the decomposition, we match junctions in a query nomenclature against the words in the dictionary: we first find the junction with the minimum distance to a word, then remove it and its neighboring nodes from the query nomenclature. We iterate this process until no junctions remain.
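The inference loop can be sketched greedily as follows, with scalar embeddings, a hypothetical `neighbors` map, and a user-supplied distance function; the real system operates on VGAE feature vectors:

```python
def decompose(junction_embeddings, neighbors, dictionary, dist):
    """Greedy decomposition sketch: repeatedly match the junction
    closest to any dictionary word, record the (junction, word) pair,
    then drop the junction and its neighbors from the query."""
    remaining = set(junction_embeddings)
    parts = []
    while remaining:
        j, w = min(((j, w) for j in remaining
                    for w in range(len(dictionary))),
                   key=lambda p: dist(junction_embeddings[p[0]],
                                      dictionary[p[1]]))
        parts.append((j, w))
        remaining -= {j} | neighbors.get(j, set())
    return parts
```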
5 Conclusion
In this paper, we proposed the topological nomenclature protocol for connectomics. We demonstrated the effectiveness of the proposed nomenclature system through shape retrieval and decomposition. We will make the two datasets containing 316 mitochondria and 30 pyramidal neurons publicly available. For future work, we will apply the proposed nomenclature scheme to a largescale dataset for understanding the diversity and similarity of biological structures.
References
[1] VAST: efficient manual and semi-automatic labeling of large 3D image stacks. Frontiers in Neural Circuits 12, 2018.
[2] Volume segmentation using convolutional neural networks with limited training data. In ICIP, 2017.
[3] Introduction to Algorithms. MIT Press, 2009.
[4] Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013. Royal Society of Chemistry, 2013.
[5] InChI, the worldwide chemical structure identifier standard. Journal of Cheminformatics 5(1), p. 7, 2013.
[6] High-precision automated reconstruction of neurons with flood-filling networks. Nature Methods 15(8), p. 605, 2018.
[7] Saturated reconstruction of a volume of neocortex. Cell 162(3), pp. 648–661, 2015.
[8] Auto-encoding variational Bayes. In ICLR, 2013.
[9] Variational graph auto-encoders. In NIPS Workshop on Bayesian Deep Learning, 2016.
[10] Compendium of Chemical Terminology. Vol. 1669, Blackwell Science, Oxford, 1997.
[11] On spectral clustering: analysis and an algorithm. In NIPS, pp. 849–856, 2002.
[12] A computer vision approach to isometry invariant shape retrieval. In IEEE ICCV Workshops, 2009.
[13] A sequential 3D curve-thinning algorithm based on isthmuses. In Advances in Visual Computing, 2014.
[14] A Guide to IUPAC Nomenclature of Organic Compounds. Blackwell Scientific Publications, Oxford, 1993.
[15] U-Net: convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[16] A concise and provably informative multi-scale signature based on heat diffusion. Computer Graphics Forum 28, 2009.
[17] Skeleton based shape matching and retrieval. In Proceedings of Shape Modeling International, 2003.
[18] A complete electron microscopy volume of the brain of adult Drosophila melanogaster. Cell 174(3), 2018.