A convolutional neural network for teeth margin detection on 3-dimensional dental meshes

07/07/2021 · by Hu Chen, et al.

We propose a convolutional neural network for vertex classification on 3-dimensional dental meshes, and use it to detect teeth margins. An expanding layer is constructed to collect statistics of neighbor vertex features and compute new features for each vertex with convolutional neural networks. The end-to-end network takes vertex features, including coordinates, curvatures and distance, as input and outputs a classification label for each vertex. Several network structures with different expanding-layer parameters, plus a baseline network without expanding layers, were designed and trained on 1156 dental meshes. Accuracy, recall and precision were validated on 145 dental meshes to rate the best network structure, which was finally tested on another 144 dental meshes. All networks with our expanding layers performed better than the baseline, and the best one achieved an accuracy of 0.877 on both the validation and test datasets.


1 Introduction

With the development of digital dentistry, optical scanning of dental models has become essential for computer aided design (CAD) and computer aided manufacture (CAM) of dental prosthetics [1, 2, 3, 4] and for digital orthodontics [5], for example, the digital indirect bonding technique [6] and the production of vacuum-formed invisible aligners [7]. Optical scanners, based on the principle of triangulation, acquire model surface point cloud data and convert it to a mesh (usually a triangular mesh). The mesh data contains vertex coordinates and their connections (edges and facets). Teeth in the mesh must be segmented before designing dental prosthetics or making a virtual orthodontic setup, which can require enormous and strenuous manual labor.

There are several kinds of classical algorithms (without machine learning) for mesh segmentation [8, 9, 10], such as region growing [11, 12, 13, 14], watershed segmentation [16, 17, 18, 19], contour-based mesh segmentation [15], hierarchical clustering [20, 21, 22, 23, 24], iterative clustering [25, 26, 27, 28, 29], spectral analysis [30, 31] and implicit methods (defining the boundaries between sub-meshes) [32, 33, 34, 35, 36], which are not reliable in complex cases and not easy to extend. With the development of artificial intelligence, machine learning methods [37, 38, 39, 40, 41, 42, 43] have been applied to mesh segmentation. Some studies collected hundreds of vertex or facet features to perform classification with support vector machines or neural networks [37, 43], but such features are complex to define and calculate. Several recent studies aimed to take the mesh as raw input and perform segmentation by converting it to voxels for 3-dimensional (3D) convolutional neural networks (CNNs) [44, 45, 46], by projecting it to 2-dimensional (2D) images for 2D CNNs [39, 47], or by direct deep learning on point clouds [38, 41]. However, the innate vertex connections in the mesh were ignored in those methods, although they are very important for recognizing mesh shape and structure. Here we propose an artificial neural network named mesh CNN, in which only a few features per vertex need to be precalculated and the mesh vertex feature matrix serves as raw input without voxelization or projection. In this model, the topological connections between mesh vertexes are used to produce an adjacency matrix, with which neighborhood features for each vertex are aggregated, expanding the receptive field iteratively; finally, the vertex features after convolution are sent to a softmax layer. We applied the network to segmentation of clinical dentition plaster casts and achieved good performance.

Figure 1: Annotations of teeth margins on dentition meshes, occlusal view.

In recent years, machine learning has been widely applied in the field of mesh segmentation. Kalogerakis et al. [48] presented a data-driven approach to simultaneous segmentation and labeling of parts in 3D meshes based on a Conditional Random Field model, which achieved improvement on the Princeton Segmentation Benchmark. Guo et al. [43] trained CNNs on a large pool of classical mesh facet geometric features to initialize a label vector for each triangle indicating its probabilities of belonging to various object parts, but connections between mesh facets could not be captured in these CNNs. Xu et al. [49] constructed a 2-level hierarchical CNN structure for tooth segmentation, consuming a set of geometry features (600-dimensional) as face feature representations to train the networks. George et al. [37] used multi-scale features, calculated as the mean over a set of neighboring facets, to train multi-branch 1D CNNs, thus classifying a facet by its multi-scale local area features. However, hundreds of geometric and contextual features need to be defined and calculated beforehand for these networks.

Wu et al. [44] proposed 3D ShapeNets, which represents a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid based on a Convolutional Deep Belief Network. Besides conversion to voxels, multi-view images have also been used to represent 3D mesh geometric information. Le et al. [47] introduced a multi-view recurrent neural network approach for 3D mesh segmentation, where convolutional neural networks and long short-term memory were combined to generate the edge probability feature map and its correlations across different views. Qi et al. [46] combined volumetric representations and multi-view representations, introducing new architectures of volumetric CNNs as well as multi-resolution 3D filtering in multi-view CNNs, to provide better methods for object classification on 3D mesh data. However, converting a mesh to voxel grids or to images of different views may introduce unnecessary intermediate steps and raise issues. PointNet [41] was designed by Qi et al. for 3D object classification, part segmentation and scene semantic parsing; it directly consumes point clouds and respects the permutation invariance of points in the input. PointNet was further developed into PointNet++ [38], which applies PointNet recursively on a nested partitioning of the input point set to learn deep point set features efficiently and robustly. Different from point clouds, teeth mesh data have vertex connections represented by the edges of facets. If treated as point clouds, the geometric connections between mesh vertexes, which are very important for retrieving the mesh topological structure, will be lost. Thus, in this research, we propose a neural network architecture named mesh CNN that exploits the edge connection information to compute local area features on mesh data and finally implements the vertex classification.

2 Proposed Approach

We constructed a neural network for vertex classification, in which an expanding layer was proposed that utilizes the connection information of the triangle meshes to generate links between vertexes, concatenating the features of one vertex with those of its adjacent vertexes. A CNN was applied to the concatenated features, and after several repetitions of the expanding layer plus CNN layer architecture, the probability of the label for each vertex was finally output. The details of the expanding layer are described below. In a mesh with $N$ vertexes and $M$ edges, the set of all vertexes ($V$) and edges ($E$) can be defined as:

V = \{ v_i \mid i = 1, 2, \ldots, N \} (1)
E = \{ (v_i, v_j) \mid v_i, v_j \in V \text{ share an edge} \} (2)

For any vertex $v \in V$, the vertexes directly connected to $v$ are those on the same edge with $v$, which can be defined as:

D(v) = \{ u \mid (v, u) \in E \} (3)

Further, for a vertex set $S \subseteq V$,

D(S) = \bigcup_{v \in S} D(v) (4)

In this research, we define the ring-0 neighbor vertexes of $v$ as $v$ itself:

R_0(v) = \{ v \} (5)

The ring-1 neighbor vertexes of $v$ are the vertexes directly connected to $v$:

R_1(v) = D(v) (6)

The ring-2 neighbor vertexes of $v$ are the vertexes directly connected to the ring-1 neighbor vertexes of $v$, excluding the ring-1 neighbor vertexes and $v$ itself:

R_2(v) = D(R_1(v)) \setminus (R_1(v) \cup R_0(v)) (7)

For $n \geq 2$, the ring-$n$ neighbor vertexes of $v$ are the vertexes directly connected to the ring-$(n-1)$ neighbor vertexes of $v$, excluding the ring-$(n-1)$ neighbor vertexes and all inner vertexes:

R_n(v) = D(R_{n-1}(v)) \setminus \bigcup_{k=0}^{n-1} R_k(v) (8)

It can be easily inferred that ring-$n$ neighbor vertexes will not directly connect to ring-$(n-2)$ or inner vertexes, that is:

D(R_n(v)) \cap R_k(v) = \emptyset, \quad 0 \leq k \leq n-2 (9)

So equation (8) can be simplified as:

R_n(v) = D(R_{n-1}(v)) \setminus (R_{n-1}(v) \cup R_{n-2}(v)) (10)
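The ring definitions above amount to a breadth-first traversal over the mesh edges. Below is a minimal sketch in plain Python, assuming an edge list of vertex index pairs; the function name `ring_neighbors` and its signature are illustrative, not from the paper.

```python
def ring_neighbors(edges, num_vertexes, v, max_ring):
    """Compute R_0(v) .. R_max_ring(v) per equations (5)-(10).

    edges: iterable of (i, j) vertex index pairs; v: center vertex index.
    Returns a list `rings` where rings[n] is the set R_n(v).
    """
    # Direct-connection map D(.) built from the edge list (equation 3).
    adj = [set() for _ in range(num_vertexes)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)

    rings = [{v}]            # R_0(v) = {v}, equation (5)
    inner = {v}              # union of rings 0..n-1 seen so far
    for _ in range(max_ring):
        # D(R_{n-1}(v)) minus all inner rings, equations (8)/(10)
        frontier = set().union(*(adj[u] for u in rings[-1])) - inner
        rings.append(frontier)
        inner |= frontier
    return rings
```

Subtracting `inner` reflects the simplification in equation (10): by equation (9), only the two most recent rings can ever intersect the new frontier.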

For a vertex $v$, there might be several features, such as coordinates $(x, y, z)$, curvatures and distances, which can be presented as a feature vector:

f(v) = (f_1(v), f_2(v), \ldots, f_C(v)) (11)

For $n \geq 0$, we define the mean features of the vertexes in $R_n(v)$ as:

\bar{f}_n(v) = \frac{1}{|R_n(v)|} \sum_{u \in R_n(v)} f(u) (12)

where $|R_n(v)|$ is the number of vertexes in $R_n(v)$. Consider the input of a mesh feature matrix with all $N$ vertexes, each vertex with $C$ features:

F = [f(v_1); f(v_2); \ldots; f(v_N)] \in \mathbb{R}^{N \times C} (13)

We can calculate its mean ring-$n$ neighbor features:

\bar{F}_n = [\bar{f}_n(v_1); \bar{f}_n(v_2); \ldots; \bar{f}_n(v_N)] (14)

where $\bar{F}_n$ contains some context information of the mesh connections, and it can be easily inferred that $\bar{F}_0 = F$. The expanding layer takes the mesh feature matrix $F$ as input and outputs an augmented matrix concatenating the means of several ring neighbor features:

\mathrm{Expand}_n(F) = [F, \bar{F}_1, \ldots, \bar{F}_n] \in \mathbb{R}^{N \times (n+1)C} (15)

Here is an instance of a matrix computation algorithm to implement this function. Firstly, we construct an adjacency matrix $A$, which represents the connection relationships between mesh vertexes:

A_{ij} = 1 \text{ if } (v_i, v_j) \in E, \text{ otherwise } 0 (16)

Secondly, construct a new matrix $M_n$ collecting the ring-$n$ neighbor memberships, starting from $M_0 = I$, so that $(M_n)_{ij} = 1$ exactly when $v_j \in R_n(v_i)$:

M_n = \mathbb{1}[\, M_{n-1} A > 0 \,] \odot \mathbb{1}[\, M_0 + M_1 + \cdots + M_{n-1} = 0 \,] (17)

Finally, construct another new matrix computing the means of the ring-$n$ neighbor features by row-normalizing $M_n$:

\bar{F}_n = \mathrm{diag}(M_n \mathbf{1})^{-1} M_n F (18)

Then the constructed matrices equal the output of the expanding layer, as defined in equation (15):

[\, F, \ \mathrm{diag}(M_1 \mathbf{1})^{-1} M_1 F, \ \ldots, \ \mathrm{diag}(M_n \mathbf{1})^{-1} M_n F \,] = \mathrm{Expand}_n(F) (19)

The input features can also be expanded to a non-continuous set of ring numbers, of which the result is a slice of $\mathrm{Expand}_n(F)$, for example:

\mathrm{Expand}_{(0,2,4)}(F) = [F, \bar{F}_2, \bar{F}_4] (20)
\mathrm{Expand}_{(0,4,8)}(F) = [F, \bar{F}_4, \bar{F}_8] (21)
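A dense NumPy sketch of this matrix algorithm (equations (16)-(19)), covering the non-continuous ring selection of equations (20)-(21) as well, follows. It assumes small meshes, since a practical implementation would use sparse matrices; the names `expand` and `rings` are illustrative.

```python
import numpy as np

def expand(F, edges, rings=(1, 2)):
    """Expanding layer via matrix computation, per equations (16)-(19).

    F: (N, C) vertex feature matrix; edges: list of (i, j) index pairs.
    Returns [F, mean ring-r neighbor features for r in rings], concatenated.
    """
    N = F.shape[0]
    A = np.zeros((N, N), dtype=np.int64)       # adjacency matrix, equation (16)
    for i, j in edges:
        A[i, j] = A[j, i] = 1

    M = np.eye(N, dtype=np.int64)              # M_0 = I: ring-0 membership
    seen = M.astype(bool)                      # union of rings 0..n-1
    ring_mats = {0: M}
    for n in range(1, max(rings) + 1):
        M = (((M @ A) > 0) & ~seen).astype(np.int64)   # equation (17)
        seen |= M.astype(bool)
        ring_mats[n] = M

    outputs = [F]
    for r in rings:
        Mr = ring_mats[r].astype(F.dtype)
        counts = np.maximum(Mr.sum(axis=1, keepdims=True), 1)
        outputs.append((Mr @ F) / counts)      # row-normalized means, equation (18)
    return np.concatenate(outputs, axis=1)     # Expand(F), equations (15)/(19)
```

For example, `expand(F, edges, rings=(2, 4))` reproduces the slice in equation (20).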

The operations of the expanding layer and convolution layer are illustrated in Figure 2. All other vertexes besides the red one are likewise set as the center vertex and subjected to the same operation pipeline to update their feature matrices. The updated feature matrix contains information learned (by CNNs) from ring neighbor vertexes, and is thus able to extract local area features automatically. Once all vertexes in a mesh have updated their feature matrices, the next iteration of expanding and convolution operations can be scheduled to extract deeper features.

Figure 2: Illustration of operations in expanding layer and convolution layer

Based on expanding layers and convolution layers, 6 kinds of networks were constructed, among which a network without expanding layers was set as the baseline, as in Table 1.

             Baseline   A   B   C   D   E
Input: mesh feature matrix $F \in \mathbb{R}^{N \times C}$
(expanding + convolution blocks; ring configurations differ between networks A–E)
FC-128
FC-512
FC-128
FC-2
Softmax
Table 1: Proposed architectures of mesh CNNs
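To make Table 1 concrete, here is a minimal TensorFlow/Keras sketch of one possible assembly: repeated expanding + convolution blocks followed by the shared FC-128/FC-512/FC-128/FC-2/softmax head. The block structure, the `ring_ops` argument (precomputed row-normalized ring matrices $\mathrm{diag}(M_r\mathbf{1})^{-1}M_r$ as tensors), and the widths and block count are our assumptions; the paper does not fully specify the per-network configurations.

```python
import tensorflow as tf

class ExpandingConv(tf.keras.layers.Layer):
    """One expanding + convolution block: concatenate mean ring-neighbor
    features (equation 15), then mix them per vertex with a dense layer,
    which acts like a 1x1 convolution over the vertex axis."""

    def __init__(self, out_channels):
        super().__init__()
        self.mix = tf.keras.layers.Dense(out_channels, activation="relu")

    def call(self, F, ring_ops):
        # ring_ops: list of (N, N) row-normalized ring matrices, ring 0 first.
        expanded = tf.concat([tf.matmul(W, F) for W in ring_ops], axis=-1)
        return self.mix(expanded)              # back to shape (N, out_channels)

def mesh_cnn(F, ring_ops, num_blocks=3, width=64):
    """Stack expanding+conv blocks, then the shared FC head of Table 1."""
    h = F
    for _ in range(num_blocks):
        h = ExpandingConv(width)(h, ring_ops)
    for units in (128, 512, 128):              # FC-128, FC-512, FC-128
        h = tf.keras.layers.Dense(units, activation="relu")(h)
    return tf.keras.layers.Dense(2, activation="softmax")(h)  # FC-2 + softmax
```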

3 Experiment

We collected 1445 dental meshes; the vertex count of each mesh was controlled to be fewer than 6000, due to the limitation of our computation resources on an 8 GB graphics card. Meshes were saved in .obj format, recording the vertex coordinates and their RGB color values, as well as the vertex connections that compose the triangle facets. Meshes were annotated such that the teeth margin vertexes and inter-teeth vertexes were brushed red (Figure 1), that is, labeled as positive; all other vertexes were labeled as negative.

Classifier   Network    Training features                        Validation
                                                                 Accuracy   Precision   Recall
1            Baseline   Curvatures, distance (d)                 0.766      0.670       0.455
2            A          Curvatures, distance (d)                 0.863      0.786       0.756
3            B          Curvatures, distance (d)                 0.870      0.784       0.790
4            C          Curvatures, distance (d)                 0.857      0.784       0.733
5            E          Curvatures, distance (d)                 0.874      0.794       0.787
6            D          Curvatures, distance (d)                 0.877                  0.789
7            D          Coordinates (x, y, z)                    0.807      0.695       0.649
8            D          Coordinates, curvatures, distance (d)    0.866      0.783       0.773
Table 2: Experiment results of trained mesh CNNs on the validation dataset

For each vertex, 8 features were collected, including the coordinates $(x, y, z)$, curvatures and the mean neighbor distance ($d$). Curvatures are essential for describing the 3-dimensional morphology of meshes, and both curvatures and distance have shift-invariance and rotation-invariance properties. The mean neighbor distance $d(v)$ of a vertex $v$ is defined here as the mean distance between $v$ and its ring-1 neighbor vertexes:

d(v) = \frac{1}{|R_1(v)|} \sum_{u \in R_1(v)} \sqrt{(x_v - x_u)^2 + (y_v - y_u)^2 + (z_v - z_u)^2} (22)

where $(x_v, y_v, z_v)$ are the coordinates of vertex $v$, and $(x_u, y_u, z_u)$ are the coordinates of vertex $u$. The dataset was divided into a training dataset of 1156 meshes, a validation dataset of 145 meshes and a test dataset of 144 meshes.
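Equation (22) can be computed in one pass over the edge list, since every edge contributes one ring-1 distance to each of its endpoints. A short NumPy sketch (the function name and signature are illustrative):

```python
import numpy as np

def mean_neighbor_distance(coords, edges):
    """Mean ring-1 neighbor distance d(v) for every vertex, equation (22).

    coords: (N, 3) array of vertex coordinates; edges: list of (i, j) pairs.
    """
    n = coords.shape[0]
    total = np.zeros(n)
    count = np.zeros(n)
    for i, j in edges:
        d = np.linalg.norm(coords[i] - coords[j])   # edge length
        total[i] += d; count[i] += 1
        total[j] += d; count[j] += 1
    return total / np.maximum(count, 1)             # guard isolated vertexes
```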

In this research, the networks must be trained on labeled data before they can perform classification; a trained network is referred to here as a classifier. The networks shown in Table 1 were trained with different input features, as shown in Table 2, where classifiers 1 to 6 were designed to test the performance of different network architectures, while classifiers 6 to 8 were designed to test the influence of different combinations of input features on the outcome. The loss function was defined upon weighted cross entropy with logits:

L = -\frac{1}{N} \sum_{i=1}^{N} \left[ w \, y_i \log \sigma(z_i) + (1 - y_i) \log (1 - \sigma(z_i)) \right] (23)

where $y_i$ is the label of vertex $v_i$, $z_i$ is the network logit, $\sigma$ is the sigmoid function, and the positive-class weight $w$ was set to 3.0.
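In TensorFlow this weighting is available directly as `tf.nn.weighted_cross_entropy_with_logits`. The sketch below assumes a single logit per vertex; with the FC-2/softmax head of Table 1, an equivalent class-weighted softmax cross entropy would be used instead.

```python
import tensorflow as tf

def margin_loss(labels, logits, pos_weight=3.0):
    """Weighted cross entropy with logits, equation (23): margin (positive)
    vertexes are up-weighted by pos_weight to counter class imbalance."""
    per_vertex = tf.nn.weighted_cross_entropy_with_logits(
        labels=labels, logits=logits, pos_weight=pos_weight)
    return tf.reduce_mean(per_vertex)
```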

The loss was minimized with an initial learning rate of 0.01, decayed to 0.003 after 5000 training iterations and to 0.001 after 10000. This work was implemented in TensorFlow, and a total of 11500 training steps were performed. During training, the network was evaluated on the validation dataset after every 500 iterations. Accuracy, precision and recall were calculated:

\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} (24)
\mathrm{Precision} = \frac{TP}{TP + FP} (25)
\mathrm{Recall} = \frac{TP}{TP + FN} (26)

where positive vertexes are those predicted to be teeth margin vertexes and negative vertexes are background vertexes, and $TP$, $TN$, $FP$ and $FN$ represent true positive, true negative, false positive and false negative vertexes, respectively. Validation results are illustrated in Figure 3. Accuracy, precision and recall all increased rapidly and reached a high level after about 4000 iterations, except that the recall of classifier 1, with the baseline architecture, dropped during the early iterations.
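These three metrics follow directly from the confusion counts; a minimal NumPy check (illustrative, not the paper's evaluation code):

```python
import numpy as np

def vertex_metrics(y_true, y_pred):
    """Accuracy, precision and recall per equations (24)-(26)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return accuracy, precision, recall
```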

The final results are also shown in Table 2, and the network with the best validation results was chosen to be tested on the test dataset.

Predicted margin vertexes are shown in Figure 4. Classifier 1, the baseline network without expanding layers, worked like threshold segmentation: areas with high curvature values were predicted as margin vertexes, producing false positive annotations in pit and fissure areas on occlusal surfaces and in palatal wrinkle areas on upper jaws. Classifier 7, which received raw vertex coordinates as input, roughly outlined the margin areas but with obvious patch areas (false positive predictions) and gaps (false negative predictions). Gingival margin areas were well predicted by classifiers 2, 3, 4, 5, 6 and 8, although there tended to be a few false negative predictions in flat margin areas and false positive predictions in rugged non-margin areas.

Figure 3: Accuracy, precision and recall on the validation dataset over training iterations
Figure 4: Ground truth and margin vertexes (red) predicted by classifiers 1-8 on the test dataset

The accuracy, precision and recall of classifiers 1-8 on the validation dataset are shown in Table 2. Without expanding layers, the baseline reached the lowest accuracy of 0.766, as well as the lowest precision of 0.670 and recall of 0.455. The order of accuracy between networks was D > E > B > A > C > Baseline, and between feature sets was curvatures + distance > coordinates + curvatures + distance > coordinates. Classifier 6, the network trained with curvatures and mean neighbor distances (d), achieved the highest accuracy and precision among all 8 classifiers. The recall of classifier 6 also reached 0.789, which is only 0.001 lower than the highest recall, achieved by classifier 3. Classifier 7, taking raw coordinates as training features, got the lowest accuracy among classifiers 2-8, but still higher than the baseline.

Since classifier 6 performed best among classifiers 1-8, it was selected to predict vertex labels on the test dataset. Results showed that the accuracy, precision and recall were 0.877, 0.782 and 0.804, respectively, which is close to the values obtained by classifier 6 on the validation dataset.

4 Discussion

The expanding layer aligns vertex features with their neighbor vertex features, which are then subjected to a convolution layer and multiplied by convolution kernels to expand the receptive field. The linkage between vertexes defined by the mesh edges and facets, which is very important to reveal the topological structure of a mesh, was rarely considered in previous studies. However, the number of adjacent vertexes varies from vertex to vertex, so neighborhoods cannot be represented by a regular matrix as in 2-dimensional images, and ordinary convolution is not directly applicable. In this work, the adjacency relationship was represented as ring neighbor vertex linkages, and the features of each ring of neighbor vertexes were reduced to their mean, equivalent in size to the features of a single vertex. After expanding the input features to a specified number of ring neighbors, a new feature matrix with adjacent feature information was constructed. The expanded feature matrix then passed through a convolution layer to learn the reduced ring neighbor feature patterns and output a feature matrix with the same shape as the initial input, which could pass through expanding layer plus convolution layer blocks iteratively to enlarge the receptive field.

Expanding to farther ring neighbors enlarges the receptive field efficiently, but may miss some nearby information. Classifier 3, expanding to rings 0, 2 and 4, performed better than classifier 2 (expanding to rings 0, 1 and 2) and classifier 4 (expanding to rings 0, 4 and 8), which implies that moderate expanding is preferred. What's more, the combination of different expanding distances in networks D and E gained further improvement. After the last 3 expanding plus convolution blocks were replaced by fully connected layers, classifier 5 performed slightly worse than classifier 6. This might be explained by the fact that fully connected layers do not consider local mesh vertex topological structures as expanding plus convolution layers do. With regard to the input mesh vertex features, it seems that the curvature and distance features are better than the coordinate features for training our mesh CNN. This may be because curvatures and distances have shift-invariance and rotation-invariance properties, which are not affected by inconsistencies in the pose or position of the scanned teeth.

In this work, negative samples were much more numerous than positive samples (teeth margin vertexes); although we brushed a thick rough line at the teeth margin area to mark more vertexes as positive samples, the positive:negative ratio was still about 1:3. The loss function was defined upon weighted cross entropy with logits to balance the samples. If not weighted, this unbalanced distribution might affect the performance of our networks, which would have the potential to assign negative labels to all vertexes to reach a local optimum.

There are ways to shrink the thick rough teeth margin strip areas to lines, such as the grassfire algorithm, which requires a continuous gingival strip area without gaps. With classifier 6 in this study, there are still a few gaps in flat areas. However, this kind of gap can easily be filled interactively with little manual labor.

Although convolution of mesh vertex features was implemented in this work, subsampling and upsampling, which are very important for semantic segmentation of meshes, were not covered. Subsampling and upsampling come down to remeshing, which will be investigated in our future work.

5 Conclusion

In this research, we constructed an end-to-end neural network capable of directly taking mesh feature data and vertex connection information as input, and it achieved good performance on teeth mesh vertex classification. The expanding layers designed to extract local vertex features can significantly improve the classification results, and the strategy of assembling expanding layers with different reach sizes can be further investigated to achieve better performance. Mesh vertex features with shift-invariance and rotation-invariance properties are preferred for training our mesh CNN.

Acknowledgements

This study was funded by the National Natural Science Foundation of China (No. 51705006), and open fund of Shanxi Province Key Laboratory of Oral Diseases Prevention and New Materials (KF2020-04).

Conflict of interest

The authors declare that they have no conflict of interest.

References

  • [1] R. Pradeep, M. Narasimman, and J. M. Philip, “Digital dentistry–cad cam–a review,” J. Clin. Dent. Upd. Res., vol. 7, no. 1, pp. 18–22, 2016.
  • [2] A. Zandinejad, W. S. Lin, M. Atarodi, T. Abdel-Azim, M. J. Metz, and D. Morton, “Digital workflow for virtually designing and milling ceramic lithium disilicate veneers: A clinical report,” Oper. Dent., vol. 40, no. 3, pp. 241–246, 2015.
  • [3] C. Runkel, J.-F. Güth, K. Erdelt, and C. Keul, “Digital impressions in dentistry—accuracy of impression digitalisation by desktop scanners,” Clin. Oral Invest., vol. 24, pp. 1249–1257, 2020.
  • [4] T. Joda, M. Ferrari, G. O. Gallucci, J.-G. Wittneben, and U. Brägger, “Digital technology in fixed implant prosthodontics,” Periodontol. 2000, vol. 73, no. 1, pp. 178–192, 2017.
  • [5] A. Igna, C. Todea, E. Ogodescu, A. Porumb, B. Dragomir, C. Savin, and S. Roşu, “Digital technology in paediatric dentistry and orthodontics,” Int. J. Med. Dent., vol. 8, no. 2, pp. 61–68, 2018.
  • [6] M. E. A. Duarte, B. F. Gribel, A. Spitz, F. Artese, and J. A. M. Miguel, “Reproducibility of digital indirect bonding technique using three-dimensional (3d) models and 3d-printed transfer trays,” Angle Orthod., vol. 90, no. 1, pp. 92–99, 2019.
  • [7] A.-I. Andronic, “Vacuum formed active appliances as an innovative technique of orthodontic treatment,” Acta Medica Transilvanica, vol. 22, no. 4, pp. 116–118, 2017.
  • [8] A. Shamir, “A survey on mesh segmentation techniques,” Comput. Graph. Forum, vol. 27, no. 6, pp. 1539–1556, 2008.
  • [9] R. S. Rodrigues, J. F. Morgado, and A. J. Gomes, “Part‐based mesh segmentation: A survey,” Comput. Graph. forum, vol. 37, no. 6, pp. 235–274, 2018.
  • [10] P. Theologou, I. Pratikakis, and T. Theoharis, “A comprehensive overview of methodologies and performance evaluation frameworks in 3d mesh segmentation,” Comput. Vis. Image Und., vol. 135, pp. 49–82, 2015.
  • [11] R. S. V. Rodrigues, J. F. M. Morgado, and A. J. P. Gomes, “A contour-based segmentation algorithm for triangle meshes in 3d space,” Comput. Graph., vol. 49, pp. 24–35, 2015.
  • [12] D. Holz and S. Behnke, “Approximate triangulation and region growing for efficient segmentation and smoothing of range images,” Robot. Auton. Syst., vol. 62, no. 9, pp. 1282–1293, 2014.
  • [13] F. Bergamasco, A. Albarelli, and A. Torsello, “A graph-based technique for semi-supervised segmentation of 3d surfaces,” Pattern Recogn. Lett., vol. 33, no. 15, pp. 2057–2064, 2012.
  • [14] G. Lavoué, F. Dupont, and A. Baskurt, “A new cad mesh segmentation method, based on curvature tensor analysis,” Comput. Aided Design, vol. 37, no. 10, pp. 975–987, 2005.
  • [15] R. S. V. Rodrigues, J. F. M. Morgado, and A. J. P. Gomes, “A contour-based segmentation algorithm for triangle meshes in 3D space,” Comput. Graph., vol. 49, pp. 24–35, 2015.
  • [16] W. Benjamin, A. W. Polk, S. V. N. Vishwanathan, and K. Ramani, “Heat walk: Robust salient segmentation of non-rigid shapes,” Comput. Graph. Forum, vol. 30, no. 7, pp. 2097–2106, 2011.
  • [17] L. Chen and N. D. Georganas, “An efficient and robust algorithm for 3d mesh segmentation,” Multimed. Tools Appl., vol. 29, no. 2, pp. 109–125, 2006.
  • [18] E. Zuckerberger, A. Tal, and S. Shlafman, “Polyhedral surface decomposition with applications,” Comput. Graph., vol. 26, no. 5, pp. 733–743, 2002.
  • [19] A. P. Mangan and R. T. Whitaker, “Partitioning 3d surface meshes using watershed segmentation,” IEEE Vis. Comput. Gr., vol. 5, no. 4, pp. 308–321, 1999.
  • [20] H. Zhang, C. Li, L. Gao, S. Li, and G. Wang, “Shape segmentation by hierarchical splat clustering,” Comput. Graph., vol. 51, pp. 136–145, 2015.
  • [21] D.-M. Yan, W. Wang, Y. Liu, and Z. Yang, “Variational mesh segmentation via quadric surface fitting,” Comput. Aided Design, vol. 44, no. 11, pp. 1072–1082, 2012.
  • [22] M. Reuter, “Hierarchical shape segmentation and registration via topological features of laplace-beltrami eigenfunctions,” Int. J. Comput. Vision, vol. 89, no. 2, pp. 287–308, 2010.
  • [23] F. De Goes, S. Goldenstein, and L. Velho, “A hierarchical segmentation of articulated bodies,” Comput. Graph. Forum, vol. 27, no. 5, pp. 1349–1356, 2008.
  • [24] K. Inoue, T. Itoh, A. Yamada, T. Furuhata, and K. Shimada, “Face clustering of a large-scale cad model for surface mesh generation,” Comput. Aided Design, vol. 33, no. 3, pp. 251–261, 2001.
  • [25] J. Zhang, J. Zheng, C. Wu, and J. Cai, “Variational mesh decomposition,” ACM T. Graphic., vol. 31, no. 3, p. 21, 2012.
  • [26] J. Wu and L. Kobbelt, “Structure recovery via hybrid variational surface approximation,” Comput. Graph. Forum, vol. 24, no. 3, pp. 277–284, 2005.
  • [27] D. Julius, V. Kraevoy, and A. Sheffer, “D-charts: Quasi-developable mesh segmentation,” Comput. Graph. Forum, vol. 24, no. 3, pp. 581–590, 2005.
  • [28] D. Cohen-Steiner, P. Alliez, and M. Desbrun, “Variational shape approximation,” ACM T. Graphic., vol. 23, no. 3, pp. 905–914, 2004.
  • [29] S. Shlafman, A. Tal, and S. Katz, “Metamorphosis of polyhedral surfaces using decomposition,” Comput. Graph. Forum, vol. 21, no. 3, pp. 219–228, 2002.
  • [30] R. Liu and H. Zhang, “Mesh segmentation via spectral embedding and contour analysis,” Comput. Graph. Forum, vol. 26, no. 3, pp. 385–394, 2007.
  • [31] Z. Karni and C. Gotsman. (2000). Spectral compression of mesh geometry. Presented at 27th annual conference on Computer graphics and interactive techniques. [Online]. Available: https://dl.acm.org/doi/abs/10.1145/344779.344924
  • [32] H. Wang, T. Lu, O. K.-C. Au, and C.-L. Tai, “Spectral 3d mesh segmentation with a novel single segmentation field,” Graph. Models, vol. 76, no. 5, pp. 440–456, 2014.
  • [33] H. Benhabiles, G. Lavoué, J.-P. Vandeborre, and M. Daoudi, “Learning boundary edges for 3d-mesh segmentation,” Comput. Graph. Forum, vol. 30, no. 8, pp. 2170–2182, 2011.
  • [34] Y. Zheng and C.-L. Tai, “Mesh decomposition with cross-boundary brushes,” Comput. Graph. Forum, vol. 29, no. 2, pp. 527–535, 2010.
  • [35] A. Golovinskiy and T. Funkhouser. (2008). Randomized cuts for 3d mesh analysis. Presented at ACM SIGGRAPH Asia 2008 papers. [Online]. Available: https://dl.acm.org/doi/abs/10.1145/1457515.1409098
  • [36] J. Schreiner, A. Asirvatham, E. Praun, and H. Hoppe, “Inter-surface mapping,” ACM T. Graphic., vol. 23, no. 3, pp. 870–877, 2004.
  • [37] D. George, X. Xie, and G. K. Tam, “3d mesh segmentation via multi-branch 1d convolutional neural networks,” Graph. Models, vol. 96, pp. 1–10, 2018.
  • [38] C. R. Qi, Y. Li, S. Hao, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” Adv. Neur. In., pp. 5099–5108, 2017.
  • [39] E. Kalogerakis, M. Averkiou, S. Maji, and S. Chaudhuri, “3d shape segmentation with projective convolutional networks,” Proc. CVPR IEEE, pp. 3779–3788, 2017.
  • [40] L. Yi, H. Su, X. Guo, and L. J. Guibas, “Syncspeccnn: Synchronized spectral cnn for 3d shape segmentation,” Proc. CVPR IEEE, pp. 2282–2290, 2017.
  • [41] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” Proc. CVPR IEEE, pp. 652–660, 2017.
  • [42] P. Theologou, I. Pratikakis, and T. Theoharis, “Unsupervised spectral mesh segmentation driven by heterogeneous graphs,” IEEE T. Pattern Anal., vol. 39, no. 2, pp. 397–410, 2016.
  • [43] K. Guo, D. Zou, and X. Chen, “3d mesh labeling via deep convolutional neural networks,” ACM T. Graphic., vol. 35, no. 1, pp. 1–12, 2015.
  • [44] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, “3d shapenets: A deep representation for volumetric shapes,” Proc. CVPR IEEE, pp. 1912–1920, 2015.
  • [45] D. Maturana and S. Scherer. (2015). Voxnet: A 3d convolutional neural network for real-time object recognition. Presented at 2015 IEEE/RSJ International Conference on IROS. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/7353481/
  • [46] C. R. Qi, H. Su, M. Niessner, A. Dai, M. Yan, and L. J. Guibas, “Volumetric and multi-view cnns for object classification on 3d data,” Proc. CVPR IEEE, pp. 5648–5656, 2016.
  • [47] T. Le, G. Bui, and Y. Duan, “A multi-view recurrent neural network for 3d mesh segmentation,” Comput. Graph., vol. 66, pp. 103–112, 2017.
  • [48] E. Kalogerakis, A. Hertzmann, and K. Singh, “Learning 3d mesh segmentation and labeling,” ACM T. Graphic., vol. 29, no. 4, pp. 1–12, 2010.
  • [49] X. Xu, C. Liu, and Y. Zheng, “3d tooth segmentation and labeling using deep convolutional neural networks,” IEEE T. Vis. Comput. Gr., vol. 25, no. 7, pp. 2336–2348, 2019.