Automatic Registration between Cone-Beam CT and Scanned Surface via Deep-Pose Regression Neural Networks and Clustered Similarities

07/29/2019 ∙ by Minyoung Chung, et al. ∙ Soongsil University, Harvard University, Seoul National University

Computerized registration between maxillofacial cone-beam computed tomography (CT) images and a scanned dental model is an essential prerequisite in surgical planning for dental implants or orthognathic surgery. We propose a novel method that performs fully automatic registration between a cone-beam CT image and an optically scanned model. To build a robust and automatic initial registration method, our method applies deep-pose regression neural networks in a reduced domain (i.e., a 2-dimensional image). Subsequently, fine registration is performed via optimal clusters. A majority voting system achieves globally optimal transformations while each cluster attempts to optimize its local transformation parameters. The coherency of the clusters determines their candidacy for the optimal cluster set, and outlying regions on the iso-surface are effectively removed based on the consensus among the optimal clusters. Registration accuracy was evaluated by the Euclidean distance of 10 landmarks on the scanned model, annotated by experts in the field. The experiments show that the proposed method's registration accuracy, measured in landmark distance, outperforms other existing methods by 30.77%. In addition to achieving high accuracy, our proposed method requires neither human interaction nor priors (e.g., iso-surface extraction). The significance of our study is twofold: 1) the employment of lightweight neural networks, which indicates the applicability of neural networks in extracting pose cues that can be easily obtained, and 2) the introduction of an optimal cluster-based registration method that can avoid metal artifacts during the matching procedure.






I Introduction

Computerized registration of medical images is a challenging problem in many clinical applications (e.g., follow-up studies, surgical planning, and augmented reality [1, 2, 3, 4]). In dental applications, registration between maxillofacial cone-beam computed tomography (CT) images and a scanned model is an essential prerequisite in surgical planning for dental implants or orthognathic surgery [5, 6, 7, 8, 9, 10, 11, 12, 6, 3, 13]. Rigid transformation, which has a relatively small number of parameters, is a viable model for this application. However, registration between two different imaging protocols (i.e., an optically scanned surface model and a CT image) is a challenging task [6]. Moreover, metal artifacts in cone-beam CT images hinder the accuracy of the registration [14]. Manual or semi-manual initial registration between the CT image and the scanned dental model is time-consuming. Most clinical applications require a “three-points” initialization, i.e., registration initialized by manually depicting three corresponding points. Thus, even state-of-the-art clinical applications relying on the three-points step cannot be fully automated.

Many methods have been proposed to address the rigid registration problem in medical imaging. Intensity-based methods seek optimal registration parameters (i.e., a transformation matrix) via intensity similarity criteria [15], whereas feature-based methods extract useful features (e.g., spatial points or descriptors) to match correspondences [15]. The iterative closest point (ICP) method [16, 17, 18] is widely used to perform point-to-point registration. In the dental field, ICP is an effective approach for many applications, such as real-time registration during optical scanning [19]. However, owing to the different protocols of the target and moving images, ICP is not directly applicable to the CT-model registration task. Several studies extract corresponding surface points from CT to apply the ICP method, with or without features [20, 21, 22, 23, 24, 6, 25]. Markers (i.e., manual landmarks) have also been used to register CT images with optically scanned dental models [26, 27, 28, 29, 30, 31, 14]. It is also well known that registration performance can be improved by confident region priors [32]. The major limitations of previously proposed methods are twofold: 1) the initial registration is either not automated or lacks robustness, and 2) matching performance is seriously degraded by metal artifacts, so robustness cannot be guaranteed in the majority of cases.


(a) Initial registration framework. The input mesh model and CT image are projected to 2D images. We apply principal component analysis to the mesh model to obtain the main axis of projection. Subsequently, a synthetic depth image is generated from the model by defining a tight bounding plane. In the case of the CT image, we project maximum intensities with respect to the x-axis. The two 2D images are then fed to the regression neural networks to acquire alignment cues, i.e., a point (red dot) and a line (green arrow). Finally, the mesh model and CT image are initially matched with the point and line pairs.

(b) Fine registration using optimal clusters. From the initially matched state, local clusters are generated on a surface model for individual registration based on clusters. After local optimization, optimal clusters are selected based on the “voting” results among cluster candidates. The final clusters are then used for final registration.
Fig. 1: The overall architecture of the proposed algorithm. (a) The initial registration procedure using deep-pose regression neural networks. (b) Subsequent fine registration via clustered similarities. Best viewed in color.

In recent years, many studies have used deep neural networks to resolve automatic registration problems in the field [33, 34, 35, 36, 37, 38, 39]. Some proposed unsupervised learning frameworks with certain similarity criteria (e.g., intensity and landmarks), while others proposed supervised [35, 36] or weakly supervised [39] neural networks to perform registration. However, most previous works rely on intensity-based similarity or suffer from a deficiency of annotated data; the lack of ground-truth data makes it hard to train a neural network in a fully supervised manner. As aforementioned, although deep neural networks have been actively applied to various dental applications [40], the manual depiction of three points is still required for registration in the clinics.

In this paper, we propose a novel and robust method to perform fully automatic registration between cone-beam CT images and the scanned dental model without any fragile priors (e.g., iso-surface extractions and ICP). We first extract the alignment cues (i.e., pose) from the given CT and model via deep convolutional neural networks (CNNs) followed by the rough model alignment. Finally, fine matching is performed by optimal clusters that are obtained according to the similarity measurement. The key achievements of our method are: fully automated initial alignment and robust fine registration results independent of the presence of metal artifacts in the CT images.

The remainder of this paper is structured as follows. In Section II, we describe our proposed method in detail. Section III presents the experimental results, and Section IV concludes with a discussion.

II Methodology

Our method consists of two steps: 1) deep-pose regression and 2) optimal cluster-based matching (Fig. 1). The first step roughly aligns the model to the CT image (i.e., initial registration); the second performs fine registration via optimal clusters. The details of each step are described in the following sections.

Fig. 2: Coherency evaluation among cluster transformations. (a) Schematic view of cluster transformations and coherency. Each dotted oval represents a cluster transformed by its locally optimized transformation. The locally optimal transformation of one cluster is applied identically to the other clusters to calculate the coherency error in (6); the first and second minimum coherency-error clusters with respect to the given cluster are highlighted. (b) and (c) demonstrate the coherency evaluation with respect to a 2D rotational transformation, where a single base vector represents every cluster for simplicity. Each red line represents the given transformed vector: (b) shows the maximum-coherency case (i.e., the minimum coherency-error vector), and (c) shows the minimum-coherency case, identified as outlying by (8). The green lines are the first and second transformed vectors most similar to each red vector, and the coherency error is the sum of their deviations, as in (7). Best viewed in color.

II-A Deep-Pose Regression

In this step, initial registration between the CT image and the model is performed. Directly using 3-dimensional (3D) information is overly complex and noise-dependent; therefore, we reduce the problem to 2D for a robust initial alignment. A maximum intensity projection (MIP) image with respect to the x-axis direction is generated from the CT image (indicated by yellow arrows in Fig. 1(a)). As for the scanned model, a synthetic depth image is generated along the main axis (i.e., the full-arch visible axis). The main axis can be easily obtained by principal component analysis (PCA) [41, 42]. Given a triangular mesh model $M = (V, E)$, where $V$ and $E$ are the sets of vertices and edges, let the 3D vector $v$ be a positional vector in $V$. Defining the mean vector $\bar{v} = \frac{1}{|V|} \sum_{v \in V} v$, the covariance matrix of the input data can be defined by

$$C = \frac{1}{|V|} \sum_{v \in V} (v - \bar{v})(v - \bar{v})^{T}.$$

PCA can subsequently be performed via eigen-decomposition or singular value decomposition [41, 43, 44]:

$$C = U \Lambda U^{T}, \quad \Lambda = \mathrm{diag}(\lambda_{1}, \lambda_{2}, \lambda_{3}),$$

where $\lambda_{1} \geq \lambda_{2} \geq \lambda_{3}$ are the eigenvalues of $C$. The depth image is then generated by projecting all vertices onto a tight bounding plane that has the main axis as its normal vector. Images are normalized to the range [0, 1]. The trained CNN models then infer the corresponding point and line pair for each image (Fig. 1(a)). The point and line pairs are subsequently reconstructed (i.e., positioned) in the 3D domain. Since the projected bounding plane of the scanned model is originally defined in 3D, the points and lines in the depth image are automatically positioned in 3-dimensional space. In the case of the MIP image, the x-coordinate is set to $w/2$ (i.e., the center point along the x-axis), where $w$ is the width of the input CT image. Finally, the model is spatially matched to the 3D CT image via the 3D point and line pairs (i.e., by overlapping the bounding plane of the model and the corresponding plane in the CT, which is sliced by the projection vector (Fig. 1(a))). Whether the scanned model is the upper jaw (i.e., maxilla) or the lower jaw (i.e., mandible) is given as a prior.
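The PCA step above can be sketched with a plain eigen-decomposition. This is an illustrative sketch, not the authors' implementation; it assumes the full-arch visible axis corresponds to the direction of least variance of the vertex cloud (the normal of the tight bounding plane), and the function name is hypothetical.

```python
import numpy as np

def main_projection_axis(vertices):
    """Estimate the projection (main) axis of a mesh via PCA.

    vertices: (N, 3) array of vertex positions.
    Returns a unit vector: the eigenvector of the covariance matrix with the
    smallest eigenvalue, i.e., the direction of least spread, assumed here to
    serve as the normal of the tight bounding plane for depth projection.
    """
    v_mean = vertices.mean(axis=0)                 # mean vector
    centered = vertices - v_mean
    cov = centered.T @ centered / len(vertices)    # 3x3 covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    return eigvecs[:, 0]                           # smallest-variance direction
```

For a dental cast, whose arch extends much farther in-plane than in height, the least-variance eigenvector is (up to sign) the axis from which the full arch is visible.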

For training, we used the scanned models and CT images, manually annotating the point and the angle of the line in each projection image (MIP and depth). The overall loss is formulated as follows:

$$L(I; W) = \alpha \, \| \hat{p} - p \|_{2}^{2} + \beta \, (\hat{\theta} - \theta)^{2},$$

where $I$, $p$, and $\theta$ are the input 2D image, the ground-truth 2D point, and the angle of the line, respectively; $W$ represents the weights of the network, and $\hat{p}$ and $\hat{\theta}$ are the network outputs. The network is trained according to the weighting parameters $\alpha$ and $\beta$. For training and inference, we used the traditional VGG-16 network [45] with a slight modification in the final layer to output a 3D tensor (i.e., a 2D point and an angle). Two identical neural networks were used, one for each projection image. The only difference is the final output tensor, which is 6D for the MIP image (i.e., a pair of 2D points and angles for the upper and lower jaws). ‘Xavier’ initialization [46] was used for all the weights of the network, and the loss parameters $\alpha$ and $\beta$ were fixed during training. We used the Adam optimizer [47] with a batch size of 64 and a learning rate of 0.001, decayed by a factor of 0.1 every 20 epochs. We trained the network for 100 epochs on an Intel i7-7700K desktop system (4.2 GHz, 32 GB of memory) with an Nvidia Titan Xp GPU. Completing all the training procedures took approximately 1 h.
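The point-and-angle regression loss described above reduces to a small function. The sketch below is an assumption-laden illustration: the names `alpha`/`beta` for the weighting parameters and the squared-error form of both terms are inferred from the text, and the angle residual is wrapped so that nearly-identical angles on either side of the 2π boundary count as a small error.

```python
import numpy as np

def pose_regression_loss(pred_point, pred_angle, gt_point, gt_angle,
                         alpha=1.0, beta=1.0):
    """Weighted sum of a 2D point error and a line-angle error.

    pred_point, gt_point: (2,) arrays in normalized image coordinates.
    pred_angle, gt_angle: scalars in radians. The angle residual is wrapped
    to (-pi, pi] so that 359 deg vs 1 deg counts as a 2 deg error.
    """
    point_term = np.sum((np.asarray(pred_point) - np.asarray(gt_point)) ** 2)
    diff = (pred_angle - gt_angle + np.pi) % (2 * np.pi) - np.pi
    return alpha * point_term + beta * diff ** 2
```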

II-B Optimal Cluster-based Matching

A cluster $c(v, r)$ is defined by a center vertex $v$ and a radius $r$, i.e., the connected local vertices of $M$ within the sphere of radius $r$ centered at $v$; we write $c$ in the remaining text for simplicity. Multiple clusters are automatically generated after the initial registration (Fig. 1(b)). We limited the minimum distance between cluster centers to $r$ (i.e., allowing overlaps). Since the projection vector of the mesh model is already known, we prioritized vertices located in the crown region (i.e., lower values in the depth image (Fig. 1(b))). Finally, we added clusters in a stochastic manner: for each cluster, three additional clusters were generated by randomly displacing the center within a bounded distance (in mm) and applying random rotations about each axis. This stochastic procedure enhances the stability of the results by improving the probability of accurate local matching.
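The stochastic augmentation step (three jittered copies per cluster) might look like the sketch below. The bound `max_offset` and the rotation range `max_rot_deg` are placeholders, since the exact values are not given in the text, and the function name is illustrative.

```python
import numpy as np

def augment_clusters(centers, n_copies=3, max_offset=2.0, max_rot_deg=10.0,
                     rng=None):
    """For each cluster center, add `n_copies` stochastic variants.

    centers: (N, 3) array of cluster-center positions.
    Returns (centers', rotations'): the original centers followed by the
    jittered copies, and a per-cluster random Euler rotation (degrees)
    used to perturb each copy's initial pose (zeros for the originals).
    """
    if rng is None:
        rng = np.random.default_rng()
    extra_centers, extra_rots = [], []
    for c in centers:
        for _ in range(n_copies):
            # random offset of norm < max_offset: random direction * random radius
            direction = rng.normal(size=3)
            direction /= np.linalg.norm(direction)
            offset = direction * rng.uniform(0.0, max_offset)
            extra_centers.append(c + offset)
            extra_rots.append(rng.uniform(-max_rot_deg, max_rot_deg, size=3))
    all_centers = np.vstack([centers, np.array(extra_centers)])
    all_rots = np.vstack([np.zeros((len(centers), 3)), np.array(extra_rots)])
    return all_centers, all_rots
```

With 17 initial clusters this yields 17 × 4 = 68 candidates, matching the count reported in Section III-B.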

Local optimization (i.e., registration) is performed for each cluster $c_i$ ($i = 1, \ldots, N$, where $N$ is the number of clusters) according to the vector alignment-based similarity criterion

$$T_i = \operatorname*{arg\,max}_{T} \sum_{v \in c_i} \big\langle n_v,\; g(T(v)) \big\rangle, \quad (4)$$

where $n_v$ is the normal vector at vertex $v$, $g(x)$ is the CT image gradient at position $x$, and $T$ is the rigid transformation with respect to the parameters $\theta$. We used the downhill simplex method [48] for local cluster optimization until convergence over the six-dimensional parameter space (i.e., three parameters each for translation and rotation, $\theta = (t_x, t_y, t_z, r_x, r_y, r_z)$):
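As a toy illustration of per-cluster downhill-simplex optimization, the sketch below recovers a 6-DoF rigid transform between point sets with `scipy.optimize.minimize(method="Nelder-Mead")`. A point-distance objective stands in for the normal-gradient similarity criterion, which would require an actual CT volume; all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rigid_transform(params, points):
    """Apply a 6-DoF rigid transform: 3 translations + 3 Euler angles (rad)."""
    tx, ty, tz, rx, ry, rz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

def register_cluster(cluster_pts, target_pts):
    """Downhill-simplex (Nelder-Mead) search over the 6 rigid parameters,
    minimizing a surrogate sum-of-squared-distances objective."""
    def objective(params):
        return np.sum((rigid_transform(params, cluster_pts) - target_pts) ** 2)
    res = minimize(objective, np.zeros(6), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-12,
                            "maxiter": 20000, "maxfev": 20000})
    return res.x
```

The downhill simplex method is derivative-free, which suits the paper's setting: the gradient-image similarity is cheap to evaluate per vertex but awkward to differentiate with respect to the pose parameters.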

Input: Locally optimized transformations $T_i$ and the similarity criterion (4).
Data: Set of clusters, $C = \{c_1, \ldots, c_N\}$.
initialize the coherency errors;
for each cluster $c_i \in C$, $i = 1, \ldots, N$ do
       compute the coherency error of $c_i$ by (6) and (7);
end for
while $|C| > 3$ do
       $c_{out} \leftarrow$ Find_Outlying_Cluster($C$) by (8); $C \leftarrow C \setminus \{c_{out}\}$;
end while
Algorithm 1 Selecting Three Optimal Clusters.

After performing local optimizations of the clusters, we obtain a set of scattered transformation matrices, one per cluster. Let $T_i$ represent the locally optimized global transformation matrix of cluster $c_i$. Defining the distance function of a cluster $c$ with respect to two different transformations as

$$d(c; T_i, T_j) = \frac{1}{|c|} \sum_{v \in c} \| T_i(v) - T_j(v) \|_2, \quad (5)$$

the mutual coherency error for two clusters’ transformations is approximated by the following equation:

$$e(c_i, c_j) = \frac{1}{2} \big( d(c_i; T_i, T_j) + d(c_j; T_j, T_i) \big). \quad (6)$$

The final coherency error for a given cluster is defined by

$$E(c_i) = e(c_i, c_{i_1}) + e(c_i, c_{i_2}), \quad (7)$$

where $c_{i_1}$ and $c_{i_2}$ represent the first and second minimum coherency-error clusters with respect to $c_i$ (Fig. 2(a)). The outlying cluster is defined as

$$c_{out} = \operatorname*{arg\,max}_{c_i} E(c_i). \quad (8)$$

The schematic view of the coherency evaluation is presented in Fig. 2. At each iteration, we remove the least coherently transformed cluster, and the iteration continues until the optimal cluster set contains three clusters. The detailed procedure is illustrated in Algorithm 1. The final three optimal clusters, $C^{*}$, again iterate over the fine optimization process: in the fine registration stage, we integrate the three optimal clusters and optimize a criterion similar to (4),

$$T^{*} = \operatorname*{arg\,max}_{T} \sum_{c \in C^{*}} \sum_{v \in c} \big\langle n_v,\; g(T(v)) \big\rangle. \quad (9)$$
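Algorithm 1 can be sketched end-to-end: mutual coherency errors (6) are computed between live clusters, each cluster's final error (7) sums its two smallest mutual errors, and the least coherent cluster (8) is discarded until three remain. The sketch assumes transforms are given as callables; all names are illustrative, not the authors' code.

```python
import numpy as np

def cluster_distance(points, Ta, Tb):
    """Mean displacement of a cluster's points under two transforms."""
    return np.linalg.norm(Ta(points) - Tb(points), axis=1).mean()

def select_optimal_clusters(clusters, transforms, keep=3):
    """Iteratively drop the least coherent cluster until `keep` remain.

    clusters:   list of (n_i, 3) point arrays.
    transforms: list of callables mapping (n, 3) -> (n, 3).
    Returns the indices of the surviving (most coherent) clusters.
    """
    alive = list(range(len(clusters)))
    while len(alive) > keep:
        worst, worst_err = None, -1.0
        for i in alive:
            # mutual coherency errors against every other live cluster, as in (6)
            errs = sorted(
                0.5 * (cluster_distance(clusters[i], transforms[i], transforms[j])
                       + cluster_distance(clusters[j], transforms[j], transforms[i]))
                for j in alive if j != i)
            # final coherency error: sum of the two smallest, as in (7)
            err = errs[0] + errs[1]
            if err > worst_err:
                worst, worst_err = i, err
        alive.remove(worst)  # drop the outlying cluster, as in (8)
    return alive
```

A cluster sitting on a metal artifact converges to a transform inconsistent with its neighbors, so its mutual errors are large even against its two closest peers, and it is pruned early.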

(a) Regression of point (dot) and line (arrow) pairs for the x-axis-projected MIP image. There are two pairs, for the lower (red) and the upper (blue) jaw.
(b) Regression of the point and line for the depth-projected image of the lower jaw.
(c) Regression of the point and line for the depth-projected image of the upper jaw.
(d) Initial registration result. The two models (b, c) are matched to the CT image via the point and line pairs.
Fig. 3: Point and line regression results in (a) MIP and (b, c) depth-projected images. The corresponding initial registration results are shown in (d).

III Experiments

In this section, we present a clear visualization of the deep-pose regressions and initial registration results together with the optimal clusters. Accuracy and time complexity were the primary criteria for performance evaluation; accuracy is evaluated against ground-truth registration results obtained by clinical experts in the field.

We acquired data from 145 subjects, each sample including both a CT image and the scanned models of the subject’s upper and lower jaws (i.e., 290 possible registration pairs). The images were sourced from four different centers. Each model was acquired by optically scanning the surface of the patient’s cast, yielding a triangular mesh with thousands of triangles and thousands of vertices. In the CT dataset, the slice thickness ranged from 0.2 to 2.0 mm and the pixel size from 0.2 to 1.0 mm. We used 100 subjects for training and 45 subjects for testing.

TABLE I: Mean point [mm] and line [°] regression errors for each input image.

TABLE II: Mean landmark distance error [mm] and duration [s] for each method: Three-points (duration N/A); + ICP [16]; + Sparse ICP [18]; + Go-ICP [49]; Ours (only initial); Ours (w/o stochastic); Ours.

III-A Deep-Pose Regressions

In this section, we visualize the point and line regressions and the corresponding initial registration results. Fig. 3 shows each result for the MIP and depth-projected images. The initial registration results using the points and lines are clearly shown in Fig. 3(d). For quantitative analysis, we used 8-fold cross-validation on the 100 training subjects. Table I shows the Euclidean distance errors for each point and line; the results show no significant variation at inference (i.e., testing). Even if the initial deep-pose regression results in a slight misalignment, the subsequent optimal cluster-based fine registration step can compensate for the transformation bias.

(a) Initial cluster candidates.
(b) Locally optimized clusters.
(c) The final three optimal clusters.
Fig. 4: Visualization of clusters. (a) shows the initial clusters automatically generated with the given radius. (b) shows the locally optimized clusters, and (c) shows the final three clusters (i.e., the output of Algorithm 1), which were selected so as to avoid the metal artifacts. Best viewed in color.
(a) Three-points.
(b) + ICP.
(c) + Sparse ICP.
(d) + Go-ICP.
(e) Our method.
Fig. 10: Visualization of the five methods. Each row represents a different case. (a) Three-points registration results. (b), (c), and (d) are the results of the ICP [16], sparse ICP [18], and Go-ICP [49] methods, respectively, starting from the initial state (a). (e) Our proposed method, with neither initial conditions nor manual points. The surface colors represent the distance between the corresponding result and the ground truth. Explicit iso-surfaces are visualized together with the models for visual purposes; the three methods (b), (c), and (d) used the visualized iso-surfaces in their algorithms. Best viewed in color.

III-B Optimal Clusters

The cluster-based fine registration algorithm is visualized in Fig. 4. As shown in Algorithm 1, crown-prioritized clusters are generated for local optimization, and the three optimal clusters that have the most coherent local transformations are selected (Fig. 4(c)). The final registration procedure harnesses the three remaining optimal clusters, as demonstrated in (9). Metal-artifact regions, whose crown boundaries are difficult to delineate, are automatically neglected owing to their non-coherent transformations (Fig. 4). Consequently, the clusters that have relatively clear boundaries in the corresponding CT image were used for the final registration. We used a radius of 10 mm for all the experiments (see Section III-D). The average number of initial clusters was 17, resulting in 68 clusters in total, including the stochastic clusters.

III-C Performance Evaluation

The evaluation results were compared with the ground-truth registration results provided by the experts in the field. To assess accuracy, we used the distance between landmarks on the surface models, which were also marked by the experts. Table II shows the mean Euclidean distance errors of the landmarks and the elapsed time for performing registration on the 45 test subjects (i.e., 90 registration results in total). Distance errors were calculated over the ten landmark points:

$$e = \frac{1}{10} \sum_{k=1}^{10} \left\| T^{*}(p_k) - T_{gt}(p_k) \right\|_2,$$

where $p_k$ is a landmark point on the surface model, $T^{*}$ is the optimal transformation obtained by the evaluated method, and $T_{gt}$ is the ground-truth transformation. We compared our proposed algorithm with three-points manual registration followed by ICP-based methods [16, 17, 49, 50, 18], which are state-of-the-art approaches frequently used in industry and research. To obtain the target points from a CT image, we first extracted the iso-surface of the crown with a fixed threshold value; it is well known that bone structures can be segmented by their Hounsfield units in CT images [51]. The three-points method is used as a baseline, and the basic ICP [16], sparse ICP [18], and Go-ICP [49] methods are used for the subsequent fine registration. We also experimented with eliminating the stochastic clusters from our method.
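The landmark metric above reduces to a few lines. Representing the estimated and expert (ground-truth) rigid transformations as callables is an interface assumption made only for this sketch.

```python
import numpy as np

def landmark_error(landmarks, T_est, T_gt):
    """Mean Euclidean distance between landmarks mapped by the estimated
    and the ground-truth (expert) rigid transformations.

    landmarks: (K, 3) array of landmark positions on the surface model.
    T_est, T_gt: callables mapping (K, 3) -> (K, 3).
    """
    return np.linalg.norm(T_est(landmarks) - T_gt(landmarks), axis=1).mean()
```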

The results indicate that our proposed method achieved the highest accuracy among all the state-of-the-art methods. The ICP-based methods showed poor accuracy compared to the three-points manual registration that was carefully performed by the experts; this degradation is reasonable because the iso-surfaces extracted from the CT images were severely affected by noise (Fig. 10). The variant of our method that skips the stochastic addition of clusters also outperformed the ICP-based methods. Since randomly added clusters have a higher chance of avoiding the metal-artifact regions, the stochastic procedure further improved the registration accuracy, and the stochastic rotations helped the clusters match the model better in the initial state. The analysis of local regions (i.e., local optimization and coherency analysis) makes our method slower than the basic ICP method; however, unlike the other methods, the complete automation of the procedure and the superior accuracy may compensate for the time complexity in dental applications.

Some samples are visualized in Fig. 10. Fig. 10(a) shows manual registration with three points. The first row of Fig. 10 clearly demonstrates that all methods accurately perform matching when there is no metal artifact in the CT image. For crown regions with metal artifacts, the ICP-based methods fail to precisely match the regions even when provided with the three points as a prior (the second and third rows of Fig. 10(b)-(d)). The critical drawback of the ICP-based methods is that the noisy region (i.e., metal artifact) strongly affects the target (i.e., the iso-surface from the CT image), and there is no explicit method to remove the metal noise from the iso-surface. Our proposed method (Fig. 10(e)), on the other hand, robustly matched the scanned models to the CT images in every condition. The main factor behind its superior accuracy is the selective use of regions unaffected by noise and the exclusion of noisy regions from consideration in the subsequent process (Fig. 4).

III-D Parameter Study

In this section, we study the radius parameter of a cluster. The size of a cluster affects the accuracy, run-time, and number of clusters. Fig. 11 shows the relationship between the radius, accuracy, and run-time. The optimum radius, which attained the highest accuracy, was 10 mm (Fig. 11). The accuracy decreased steeply for larger radii, demonstrating that bigger clusters have a higher probability of containing noisy regions; the noise may be either metal artifacts or irrelevant portions without corresponding features in the CT scan. The run-time decreases monotonically as the radius increases beyond the optimum, at the cost of accuracy, since a bigger radius leads to fewer clusters and vice versa. The most time-consuming procedure of our proposed algorithm is the optimal cluster-based matching (i.e., local optimization), and local optimization with many small clusters is computationally costly (Fig. 11). In addition, the iterative coherency analysis of transformations (7) is another performance bottleneck.

The cluster-based registration can be viewed as a special case of the ICP method. We iteratively removed outlying points (i.e., cluster) and performed correspondence matching via optimal vector alignments. We implicitly applied noise removal procedures by removing the clusters. The radius parameter does affect the performance of the proposed method. The results showed stable accuracy when using 10mm radius. The stability of the algorithm is mainly achieved by taking many cluster candidates into account (i.e., coherency evaluation in (7)).

Fig. 11: The distance and run-time evaluation with respect to the radius of clusters. The mean Euclidean distance errors of landmarks are used for distance calculations.

IV Discussion and Conclusion

Dental cone-beam CT image analysis is a challenging problem because of the large variety of cases and the presence of metal artifacts. To overcome the taxing manual labor of registering CT images to optically scanned models, we propose fully automated CNN-based deep-pose estimation followed by cluster-based matching. Our work suggests the applicability of neural networks in simplifying the manual tasks involved in registration. Applying pose regression with neural networks allowed the use of traditional methods in the subsequent steps; as a result, we could avoid the pitfall of handling the entire registration task with complex neural networks. In fact, it is not currently feasible to solve this 3D registration problem directly with a neural network; the main limitations arise from the difficulty of generating ground-truth data and the unclear similarity criteria between images of different modalities. For fine registration, optimal cluster-based similarity matching was performed to accomplish confident region matching. Our method reduced the distance error by 30.77% to 70% compared to other state-of-the-art registration methods. The primary factor in this improvement was the optimal local clusters, which helped rule out the regions affected by metal artifacts.

The proposed algorithm is applicable to full-arch scanned models in clinical practice. The proposed initial registration by deep regression networks is stable enough to be used for the subsequent fine registration process. Although working with full-arch scanned models is common practice, the occasional use of partially scanned models may be problematic for the algorithm, because we employ the PCA method to obtain the projection axis of the scanned model.


  • [1] F. Alam, S. U. Rahman, S. Ullah, and K. Gulati, “Medical image registration in image guided surgery: Issues, challenges and research opportunities,” Biocybernetics and Biomedical Engineering, 2017.
  • [2] O. Haas Jr, O. Becker, and R. de Oliveira, “Computer-aided planning in orthognathic surgery—systematic review,” International journal of oral and maxillofacial surgery, vol. 44, no. 3, pp. 329–342, 2015.
  • [3] L. H. Cevidanes, L. Bailey, G. Tucker Jr, M. Styner, A. Mol, C. Phillips, W. Proffit, and T. Turvey, “Superimposition of 3d cone-beam ct models of orthognathic surgery patients,” Dentomaxillofacial Radiology, vol. 34, no. 6, pp. 369–375, 2005.
  • [4] K. Stokbro, E. Aagaard, P. Torkov, R. Bell, and T. Thygesen, “Surgical accuracy of three-dimensional virtual planning: a pilot study of bimaxillary orthognathic procedures including maxillary segmentation,” International journal of oral and maxillofacial surgery, vol. 45, no. 1, pp. 8–18, 2016.
  • [5] F. Z. Jamjoom, D.-G. Kim, E. A. McGlumphy, D. J. Lee, and B. Yilmaz, “Positional accuracy of a prosthetic treatment plan incorporated into a cone-beam computed tomography scan using surface scan registration,” The Journal of prosthetic dentistry, 2018.
  • [6] T. Flügge, W. Derksen, J. Te Poel, B. Hassan, K. Nelson, and D. Wismeijer, “Registration of cone beam computed tomography data and intraoral surface scans–a prerequisite for guided implant surgery with cad/cam drilling guides,” Clinical oral implants research, vol. 28, no. 9, pp. 1113–1118, 2017.
  • [7] G. Eggers, J. Mühling, and R. Marmulla, “Image-to-patient registration techniques in head surgery,” International journal of oral and maxillofacial surgery, vol. 35, no. 12, pp. 1081–1095, 2006.
  • [8] J. M. Plooij, T. J. Maal, P. Haers, W. A. Borstlap, A. M. Kuijpers-Jagtman, and S. J. Bergé, “Digital three-dimensional image fusion processes for planning and evaluating orthodontics and orthognathic surgery. a systematic review,” International journal of oral and maxillofacial surgery, vol. 40, no. 4, pp. 341–352, 2011.
  • [9] J. Gateno, J. J. Xia, J. F. Teichgraeber, A. M. Christensen, J. J. Lemoine, M. A. Liebschner, M. J. Gliddon, and M. E. Briggs, “Clinical feasibility of computer-aided surgical simulation (cass) in the treatment of complex cranio-maxillofacial deformities,” Journal of Oral and Maxillofacial Surgery, vol. 65, no. 4, pp. 728–734, 2007.
  • [10] S. A.-H. Centenero and F. Hernández-Alfaro, “3d planning in orthognathic surgery: Cad/cam surgical splints and prediction of the soft and hard tissues results–our experience in 16 cases,” Journal of Cranio-Maxillofacial Surgery, vol. 40, no. 2, pp. 162–168, 2012.
  • [11] F. Ritto, A. Schmitt, T. Pimentel, J. Canellas, and P. Medeiros, “Comparison of the accuracy of maxillary position between conventional model surgery and virtual surgical planning,” International journal of oral and maxillofacial surgery, vol. 47, no. 2, pp. 160–166, 2018.
  • [12] F. A. Rangel, T. J. Maal, M. J. de Koning, E. M. Bronkhorst, S. J. Bergé, and A. M. Kuijpers-Jagtman, “Integration of digital dental casts in cone beam computed tomography scans—a clinical validation study,” Clinical oral investigations, vol. 22, no. 3, pp. 1215–1222, 2018.
  • [13] H. Popat and S. Richmond, “New developments in: three-dimensional planning for orthognathic surgery,” Journal of orthodontics, vol. 37, no. 1, pp. 62–71, 2010.
  • [14] K. Becker, B. Wilmes, C. Grandjean, and D. Drescher, “Impact of manual control point selection accuracy on automated surface matching of digital dental models,” Clinical oral investigations, vol. 22, no. 2, pp. 801–810, 2018.
  • [15] B. Zitova and J. Flusser, “Image registration methods: a survey,” Image and vision computing, vol. 21, no. 11, pp. 977–1000, 2003.
  • [16] P. J. Besl and N. D. McKay, “Method for registration of 3-d shapes,” in Sensor Fusion IV: Control Paradigms and Data Structures, vol. 1611.   International Society for Optics and Photonics, 1992, pp. 586–607.
  • [17] S. Rusinkiewicz and M. Levoy, “Efficient variants of the icp algorithm,” in 3-D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on.   IEEE, 2001, pp. 145–152.
  • [18] S. Bouaziz, A. Tagliasacchi, and M. Pauly, “Sparse iterative closest point,” in Proceedings of the Eleventh Eurographics/ACMSIGGRAPH Symposium on Geometry Processing.   Eurographics Association, 2013, pp. 113–123.
  • [19] J. Ahn, A. Park, J. Kim, B. Lee, and J. Eom, “Development of three-dimensional dental scanning apparatus using structured illumination,” Sensors, vol. 17, no. 7, p. 1634, 2017.
  • [20] A. J. Herline, J. L. Herring, J. D. Stefansic, W. C. Chapman, R. L. Galloway Jr, and B. M. Dawant, “Surface registration for use in interactive, image-guided liver surgery,” Computer Aided Surgery: Official Journal of the International Society for Computer Aided Surgery (ISCAS), vol. 5, no. 1, pp. 11–17, 2000.
  • [21] G. Jin, S.-J. Lee, J. K. Hahn, S. Bielamowicz, R. Mittal, and R. Walsh, “3d surface reconstruction and registration for image guided medialization laryngoplasty,” in International Symposium on Visual Computing.   Springer, 2006, pp. 761–770.
  • [22] N. Bolandzadeh, W. Bischof, C. Flores-Mir, and P. Boulanger, “Multimodal registration of three-dimensional maxillodental cone beam ct and photogrammetry data over time,” Dentomaxillofacial Radiology, vol. 42, no. 2, p. 22027087, 2013.
  • [23] H.-H. Lin, W.-C. Chiang, L.-J. Lo, and C.-H. Wang, “A new method for the integration of digital dental models and cone-beam computed tomography images,” in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).   IEEE, 2013, pp. 2328–2331.
  • [24] Y. Fan, D. Jiang, M. Wang, and Z. Song, “A new markerless patient-to-image registration method using a portable 3D scanner,” Medical physics, vol. 41, no. 10, 2014.
  • [25] K. Jung, S. Jung, I. Hwang, T. Kim, and M. Chang, “Registration of dental tomographic volume data and scan surface data using dynamic segmentation,” Applied Sciences, vol. 8, no. 10, p. 1762, 2018.
  • [26] J. Gateno, J. Xia, J. F. Teichgraeber, and A. Rosen, “A new technique for the creation of a computerized composite skull model,” Journal of oral and maxillofacial surgery, vol. 61, no. 2, pp. 222–227, 2003.
  • [27] M. Tsuji, N. Noguchi, M. Shigematsu, Y. Yamashita, K. Ihara, M. Shikimori, and M. Goto, “A new navigation system based on cephalograms and dental casts for oral and maxillofacial surgery,” International journal of oral and maxillofacial surgery, vol. 35, no. 9, pp. 828–836, 2006.
  • [28] W.-M. Yang, C.-T. Ho, and L.-J. Lo, “Automatic superimposition of palatal fiducial markers for accurate integration of digital dental model and cone beam computed tomography,” Journal of Oral and Maxillofacial Surgery, vol. 73, no. 8, pp. 1616–e1, 2015.
  • [29] F. A. Rangel, T. J. Maal, S. J. Bergé, and A. M. Kuijpers-Jagtman, “Integration of digital dental casts in cone-beam computed tomography scans,” ISRN dentistry, vol. 2012, 2012.
  • [30] L. Fieten, K. Schmieder, M. Engelhardt, L. Pasalic, K. Radermacher, and S. Heger, “Fast and accurate registration of cranial CT images with A-mode ultrasound,” International journal of computer assisted radiology and surgery, vol. 4, no. 3, p. 225, 2009.
  • [31] O. de Waard, F. Baan, L. Verhamme, H. Breuning, A. M. Kuijpers-Jagtman, and T. Maal, “A novel method for fusion of intra-oral scans and cone-beam computed tomography scans for orthognathic surgery planning,” Journal of Cranio-Maxillofacial Surgery, vol. 44, no. 2, pp. 160–166, 2016.
  • [32] L. Sun, H.-S. Hwang, and K.-M. Lee, “Registration area and accuracy when integrating laser-scanned and maxillofacial cone-beam computed tomography images,” American Journal of Orthodontics and Dentofacial Orthopedics, vol. 153, no. 3, pp. 355–361, 2018.
  • [33] J. Jiang, P. Trundle, and J. Ren, “Medical image analysis with artificial neural networks,” Computerized Medical Imaging and Graphics, vol. 34, no. 8, pp. 617–631, 2010.
  • [34] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Medical image analysis, vol. 42, pp. 60–88, 2017.
  • [35] X. Yang, R. Kwitt, and M. Niethammer, “Fast predictive image registration,” in Deep Learning and Data Labeling for Medical Applications.   Springer, 2016, pp. 48–57.
  • [36] M. Simonovsky, B. Gutiérrez-Becker, D. Mateus, N. Navab, and N. Komodakis, “A deep metric for multimodal registration,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2016, pp. 10–18.
  • [37] S. Miao, Z. J. Wang, and R. Liao, “A CNN regression approach for real-time 2D/3D registration,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1352–1363, 2016.
  • [38] X. Yang, R. Kwitt, M. Styner, and M. Niethammer, “Quicksilver: Fast predictive image registration–a deep learning approach,” NeuroImage, vol. 158, pp. 378–396, 2017.
  • [39] Y. Hu, M. Modat, E. Gibson, W. Li, N. Ghavami, E. Bonmati, G. Wang, S. Bandula, C. M. Moore, M. Emberton et al., “Weakly-supervised convolutional neural networks for multimodal image registration,” Medical image analysis, vol. 49, pp. 1–13, 2018.
  • [40] J.-J. Hwang, Y.-H. Jung, B.-H. Cho, and M.-S. Heo, “An overview of deep learning in the field of dentistry,” Imaging science in dentistry, 2019.
  • [41] R. A. Horn and C. R. Johnson, Matrix analysis.   Cambridge university press, 1990.
  • [42] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and intelligent laboratory systems, vol. 2, no. 1-3, pp. 37–52, 1987.
  • [43] G. H. Golub and C. Reinsch, “Singular value decomposition and least squares solutions,” Numerische mathematik, vol. 14, no. 5, pp. 403–420, 1970.
  • [44] G. Strang, Introduction to linear algebra.   Wellesley-Cambridge Press, Wellesley, MA, 1993, vol. 3.
  • [45] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [46] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.
  • [47] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [48] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, “Convergence properties of the nelder–mead simplex method in low dimensions,” SIAM Journal on optimization, vol. 9, no. 1, pp. 112–147, 1998.
  • [49] J. Yang, H. Li, D. Campbell, and Y. Jia, “Go-ICP: A globally optimal solution to 3D ICP point-set registration,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 11, pp. 2241–2254, 2016.
  • [50] D. Aiger, N. J. Mitra, and D. Cohen-Or, “4-points congruent sets for robust pairwise surface registration,” in ACM Transactions on Graphics (TOG), vol. 27, no. 3.   ACM, 2008, p. 85.
  • [51] C. E. Misch, Contemporary implant dentistry.   Elsevier Health Sciences, 2007.