I Introduction
Computerized registration of medical images is a challenging problem in many clinical applications (e.g., follow-up studies, surgical planning, and augmented reality [1, 2, 3, 4]). In dental applications, registration between maxillofacial cone-beam computed tomography (CT) images and a scanned model is an essential prerequisite in surgical planning for dental implants or orthognathic surgery [5, 6, 7, 8, 9, 10, 11, 12, 3, 13]. Rigid transformation, which has a relatively small number of parameters, is a viable model for this application. However, registration between two different imaging protocols (i.e., an optically scanned surface model and a CT image) is a challenging task [6]. Moreover, metal artifacts in cone-beam CT images hinder the accuracy of the registration [14]. Manual or initial registration between the CT image and the scanned dental model is time-consuming. Most clinical applications require three manually depicted points for the initial registration; we refer to this step as "three-points" registration. Thus, even state-of-the-art clinical applications relying on the three-points step cannot be fully automated.
Many methods have been proposed to address the rigid registration problem in medical imaging applications. Intensity-based methods attempt to attain optimal registration parameters (i.e., a transformation matrix) using intensity similarity criteria [15]. Feature-based methods, on the other hand, extract useful features (e.g., spatial points or descriptors) to match correspondences [15]. The iterative closest point (ICP) method [16, 17, 18] is widely used to perform point-to-point registration. In the dental field, ICP is an effective approach in many applications, such as real-time registration during optical scanning [19]. However, owing to the different protocols of the target and moving images, ICP is not directly applicable to the CT-model registration task. Some studies extract corresponding surface points from the CT image to apply the ICP method, with or without features [20, 21, 22, 23, 24, 6, 25]. Markers (i.e., manual landmarks) have also been used to register CT images and optically scanned dental models [26, 27, 28, 29, 30, 31, 14]. It is also known that registration performance can be increased by confident region priors [32]. The limitations of the previously proposed methods are twofold: 1) the initial registration is either not automated or lacks robustness, and 2) the matching performance is seriously degraded by metal artifacts, so robustness cannot be guaranteed in the majority of cases.
In recent years, many studies have used deep neural networks to address automatic registration problems in the field [33, 34, 35, 36, 37, 38, 39]. Some propose unsupervised learning frameworks with certain similarity criteria (e.g., intensity and landmarks), while others propose supervised [35, 36] or weakly supervised [39] neural networks to perform registration. However, most previous works rely on intensity-based similarity or suffer from a deficiency of annotated data; the lack of ground-truth data makes it hard to train a neural network in a fully supervised manner. Consequently, although deep neural networks have been actively applied to various dental applications [40], the manual depiction of three points is still required for registration in the clinic.

In this paper, we propose a novel and robust method to perform fully automatic registration between cone-beam CT images and the scanned dental model without any fragile priors (e.g., iso-surface extraction or ICP). We first extract alignment cues (i.e., pose) from the given CT image and model via deep convolutional neural networks (CNNs), followed by a rough model alignment. Finally, fine matching is performed using optimal clusters obtained according to a similarity measurement. The key achievements of our method are a fully automated initial alignment and robust fine registration results independent of the presence of metal artifacts in the CT images.
The remainder of this paper is structured as follows. Section II describes our proposed method in detail. Section III presents the experimental results, and Section IV concludes with a discussion.
II Methodology
Our method consists of two steps: 1) deep-pose regression and 2) optimal cluster-based matching (Fig. 1). The first step roughly aligns the model to the CT image (i.e., initial registration). The second step performs fine registration via optimal clusters. The details of each step are described in the following sections.
II-A Deep-Pose Regression
In this step, initial registration between the CT image and the model is performed. Directly using the 3-dimensional (3D) information is overly complex and noise-sensitive; we therefore reduce the dimension to 2D for a robust initial alignment. Let $I$ denote the CT volume. A maximum intensity projection (MIP) image with respect to the x-axis direction, $I_{mip}$, is generated from $I$ (indicated by yellow arrows in Fig. 1(a)). As for the scanned model, a synthetic depth image $I_{d}$ is generated along the main axis (i.e., the full-arch visible axis). The main axis can easily be obtained by principal component analysis (PCA) [41, 42]. Given a triangular mesh model $M = (V, E)$, where $V$ and $E$ are the sets of vertices and edges, let the 3D vector $\mathbf{v}$ be a positional vector in $V$. Defining the mean vector $\bar{\mathbf{v}} = \frac{1}{|V|} \sum_{\mathbf{v} \in V} \mathbf{v}$, the covariance matrix of the input data can be defined by

$C = \frac{1}{|V|} \sum_{\mathbf{v} \in V} (\mathbf{v} - \bar{\mathbf{v}})(\mathbf{v} - \bar{\mathbf{v}})^{T}.$

PCA can subsequently be performed via eigendecomposition or singular value decomposition [41, 43, 44]:

$C = U \Lambda U^{T}, \quad \Lambda = \mathrm{diag}(\lambda_{1}, \lambda_{2}, \lambda_{3}),$   (1)

where $\lambda_{1} \geq \lambda_{2} \geq \lambda_{3}$ are the eigenvalues of $C$. The depth image $I_{d}$ is then generated by projecting all vertices onto a tight bounding plane whose normal is the principal axis associated with $\lambda_{3}$. Images are normalized to the range $[0, 1]$. We then use the trained CNN models to acquire corresponding point and line pairs for each image (Fig. 1(a)). The point and line pairs are subsequently reconstructed (i.e., positioned) in the 3D domain. Since the projected bounding plane of the scanned model is originally defined in the 3D domain, the points and lines of $I_{d}$ are automatically positioned in 3D space. In the case of $I_{mip}$, the x-coordinate is set to the center of the x-axis (i.e., half the width of the input CT image). Finally, the model is spatially matched to the 3D CT image with the 3D points and lines (i.e., the bounding plane of the model is overlapped with the corresponding plane in the CT, sliced along the projection direction (Fig. 1(a))). Whether the scanned model is the upper jaw (i.e., maxilla) or the lower jaw (i.e., mandible) is given as a prior.

For training, we used the scanned models and CT images, manually annotating the point and the angle of the line for each $I_{mip}$ and $I_{d}$. The overall loss is formulated as follows:
$L(I; W) = \lambda_{p} \, \lVert \hat{\mathbf{p}} - \mathbf{p} \rVert_{2}^{2} + \lambda_{\theta} \, (\hat{\theta} - \theta)^{2},$   (2)

where $I$, $\mathbf{p}$, and $\theta$ are the input 2D image, the ground-truth 2D point, and the angle of the line, respectively. $W$ represents the weights of the network, and $\hat{\mathbf{p}}$ and $\hat{\theta}$ are the network outputs. The network is trained according to the weighting parameters $\lambda_{p}$ and $\lambda_{\theta}$. For training and inference, we used the traditional VGG-16 network [45] with a slight modification in the final layer to output a 3D tensor (i.e., a 2D point and an angle). Two identical neural networks were used, one for each projection image (i.e., $I_{mip}$ and $I_{d}$). The only difference is the final output tensor, which is 6D for $I_{mip}$ (i.e., a pair of a 2D point and an angle). 'Xavier' initialization [46] was used for all the weights of the network. While training the network, we fixed the loss-weighting parameters. We used the Adam optimizer [47] with a batch size of 64 and a learning rate of 0.001, decayed by a factor of 0.1 every 20 epochs. We trained the network for 100 epochs on an Intel i7-7700K desktop system with a 4.2 GHz processor, 32 GB of memory, and an Nvidia Titan Xp GPU. All the training procedures took about 1 h.
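As a concrete illustration, the main-axis extraction and depth-image generation described in this section can be sketched with NumPy. This is a minimal sketch under our own assumptions: `depth_image` uses simple nearest-vertex splatting rather than a full mesh rasterizer, and the function names and resolution are illustrative, not from the paper.

```python
import numpy as np

def main_axis(vertices):
    """Estimate the projection (main) axis of a mesh by PCA on its vertices.

    vertices: (N, 3) array of 3D vertex positions.
    Returns the eigenvector of the covariance matrix with the smallest
    eigenvalue, i.e., the normal of the tight bounding plane.
    """
    centered = vertices - vertices.mean(axis=0)
    cov = centered.T @ centered / len(vertices)   # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return eigvecs[:, 0]                          # smallest-variance direction

def depth_image(vertices, axis, resolution=64):
    """Project vertices onto the bounding plane with normal `axis` and keep
    the nearest depth per pixel (simplified synthetic depth image)."""
    # Build an orthonormal in-plane basis (u, v) for the projection plane.
    u = np.cross(axis, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:                  # axis parallel to z
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    uv = vertices @ np.stack([u, v], axis=1)      # (N, 2) in-plane coordinates
    depth = vertices @ axis                       # distance along the normal
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    px = ((uv - lo) / (hi - lo + 1e-12) * (resolution - 1)).astype(int)
    img = np.full((resolution, resolution), np.inf)
    for (x, y), d in zip(px, depth):
        img[y, x] = min(img[y, x], d)             # keep nearest vertex per pixel
    return img
```

For a vertex cloud that is wide in two directions and flat in the third (as a dental arch roughly is), `main_axis` returns the flat direction, which serves as the plane normal for the depth projection.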
II-B Optimal Cluster-based Matching
A cluster $c$ is defined by a center vertex and a radius $r$, i.e., the connected local vertices of $M$ within distance $r$ of the center. We use the shorthand $c$ in the remaining text for simplicity. Multiple clusters are automatically generated after the initial registration (Fig. 1(b)). We limit the minimum distance between cluster centers (i.e., allowing overlaps). Since the projection vector of the mesh model is known, we prioritize vertices located in the crown region (i.e., lower depth values in $I_{d}$ (Fig. 1(b))). Finally, we additionally add clusters in a stochastic manner: for each cluster, we add three clusters by randomly positioning the center within a fixed distance threshold (in mm) and applying random rotations about each axis. This stochastic procedure enhances the stability of the results by improving the probability of accurate local matching.
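The stochastic cluster generation above can be sketched as follows. The offset bound, angle range, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def augment_clusters(centers, max_offset=2.0, max_angle=np.pi / 18, copies=3, seed=0):
    """For each cluster center, add `copies` stochastic clusters: each new
    center is offset by less than `max_offset` (mm) and carries a small
    random per-axis Euler rotation as its starting pose.

    centers: (N, 3) array of cluster centers.
    Returns (centers', rotations') with N * (copies + 1) entries;
    the original clusters keep identity (zero-angle) poses.
    """
    rng = np.random.default_rng(seed)
    new_centers = [centers]
    new_rots = [np.zeros((len(centers), 3))]          # identity poses
    for _ in range(copies):
        # Uniform random direction with radius < max_offset.
        d = rng.normal(size=centers.shape)
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        radius = max_offset * rng.random((len(centers), 1)) ** (1 / 3)
        new_centers.append(centers + d * radius)
        new_rots.append(rng.uniform(-max_angle, max_angle, (len(centers), 3)))
    return np.concatenate(new_centers), np.concatenate(new_rots)
```

With 17 initial clusters and three stochastic copies each, this yields the 68 clusters reported in the experiments.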
Local optimization (i.e., registration) is performed for each cluster $c_{i}$ ($i = 1, \ldots, N$, where $N$ is the number of clusters) according to the vector alignment-based similarity criterion:

$S(c_{i}; T) = \sum_{\mathbf{v} \in c_{i}} \langle \mathbf{n}_{\mathbf{v}}, \nabla I(T(\mathbf{v})) \rangle,$   (3)

where $\mathbf{n}_{\mathbf{v}}$ is the normal vector at vertex $\mathbf{v}$, $\nabla I(T(\mathbf{v}))$ is the image gradient at $T(\mathbf{v})$, and $T$ is the transformation with respect to the parameters $\theta$. We used the downhill simplex method [48] for local cluster optimization until convergence within the six-dimensional parameter space (i.e., three parameters each for translation and rotation):

$T_{i} = \arg\max_{T} \, S(c_{i}; T).$   (4)
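A minimal NumPy sketch of this vector-alignment similarity follows. The rigid-transform parameterization (per-axis Euler angles) and the nearest-neighbor gradient sampling are our own assumptions for illustration.

```python
import numpy as np

def rigid_transform(params):
    """6-DOF rigid transform: params = (tx, ty, tz, rx, ry, rz) with
    per-axis Euler rotations (an assumed parameterization)."""
    t = np.asarray(params[:3], dtype=float)
    rx, ry, rz = params[3:]
    def rot(a, i, j):
        R = np.eye(3)
        R[i, i] = R[j, j] = np.cos(a)
        R[i, j], R[j, i] = -np.sin(a), np.sin(a)
        return R
    R = rot(rz, 0, 1) @ rot(ry, 0, 2) @ rot(rx, 1, 2)
    return lambda v: v @ R.T + t

def alignment_score(vertices, normals, grad_field, spacing, params):
    """Eq. (3)-style score: sum over the cluster vertices of the inner
    product between the vertex normal and the CT gradient sampled at the
    transformed position (nearest-neighbor lookup)."""
    T = rigid_transform(params)
    moved = T(vertices)
    idx = np.clip((moved / spacing).round().astype(int), 0,
                  np.array(grad_field.shape[:3]) - 1)
    g = grad_field[idx[:, 0], idx[:, 1], idx[:, 2]]   # (N, 3) sampled gradients
    return float(np.einsum('ij,ij->i', normals, g).sum())
```

Per the paper, these six parameters are optimized per cluster with the downhill simplex (Nelder-Mead) method until convergence, e.g., `scipy.optimize.minimize(lambda p: -alignment_score(...), x0, method='Nelder-Mead')`.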
After performing the local optimizations, we have one scattered transformation matrix per cluster. Let $T_{i}$ represent the locally optimized global transformation matrix of cluster $c_{i}$. Defining the distance function of a cluster with respect to two different transformations as

$d(c_{i}; T_{a}, T_{b}) = \frac{1}{|c_{i}|} \sum_{\mathbf{v} \in c_{i}} \lVert T_{a}(\mathbf{v}) - T_{b}(\mathbf{v}) \rVert_{2},$   (5)

the mutual coherency error of two clusters' transformations is approximated by

$e(c_{i}, c_{j}) = \frac{1}{2} \left( d(c_{i}; T_{i}, T_{j}) + d(c_{j}; T_{j}, T_{i}) \right).$   (6)

The final coherency error for a given cluster is defined by

$E(c_{i}) = e(c_{i}, c_{m_{1}}) + e(c_{i}, c_{m_{2}}),$   (7)

where $c_{m_{1}}$ and $c_{m_{2}}$ are the clusters with the first and second minimum coherency errors with respect to $c_{i}$ (Fig. 2(a)). The outlying cluster is then defined as

$c_{out} = \arg\max_{c_{i}} \, E(c_{i}).$   (8)
A schematic view of the coherency evaluation is presented in Fig. 2. We subsequently remove the least coherently transformed cluster and iterate until the optimal cluster set contains three clusters. The detailed procedure is given in Algorithm 1. The final three optimally chosen clusters, $c^{*}_{1}, c^{*}_{2}, c^{*}_{3}$, again iterate over the fine optimization process. In the fine registration stage, we integrate the three optimal clusters and optimize similarly to (4):

$T^{*} = \arg\max_{T} \sum_{k=1}^{3} S(c^{*}_{k}; T),$   (9)

where $S$ is the vector alignment-based similarity of (3).
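The pruning loop of Algorithm 1 (eqs. (5)-(8)) can be sketched as follows. The data layout (per-cluster vertex arrays and 4x4 homogeneous transformation matrices) and the function names are our assumptions.

```python
import numpy as np

def cluster_distance(verts, Ta, Tb):
    """Eq. (5): mean vertex displacement of a cluster under two transforms.
    Ta, Tb: 4x4 homogeneous matrices; verts: (N, 3)."""
    h = np.c_[verts, np.ones(len(verts))]             # homogeneous coordinates
    # h @ (Ta - Tb).T[:, :3] gives the first three components of (Ta - Tb) v.
    return np.linalg.norm(h @ (Ta - Tb).T[:, :3], axis=1).mean()

def select_optimal_clusters(cluster_verts, transforms, keep=3):
    """Iteratively remove the least coherent cluster (eqs. (6)-(8))
    until `keep` clusters remain; returns the surviving indices."""
    alive = list(range(len(cluster_verts)))
    while len(alive) > keep:
        # Mutual coherency errors, eq. (6).
        e = {}
        for i in alive:
            for j in alive:
                if i < j:
                    e[i, j] = e[j, i] = 0.5 * (
                        cluster_distance(cluster_verts[i], transforms[i], transforms[j])
                        + cluster_distance(cluster_verts[j], transforms[j], transforms[i]))
        # Eq. (7): sum of the two smallest mutual errors per cluster.
        def score(i):
            return sum(sorted(e[i, j] for j in alive if j != i)[:2])
        # Eq. (8): drop the outlying (maximum-error) cluster.
        alive.remove(max(alive, key=score))
    return alive
```

A cluster whose local transform disagrees with all its peers (e.g., one sitting on a metal artifact) accumulates large mutual errors and is pruned first.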
III Experiments
In this section, we present visualizations of the deep-pose regressions and the initial registration results, together with the optimal clusters. Accuracy and time complexity were the primary criteria for performance evaluation. Accuracy is evaluated against the ground-truth registration results obtained by clinical experts in the field.
We acquired data from 145 subjects, each sample including both a CT image and the scanned models of the subject's upper and lower jaws (i.e., 290 possible registration pairs). The images were sourced from four different centers. Each model was acquired by optically scanning the surface of the patient's cast, forming a triangular mesh with thousands of triangles and vertices. In the CT dataset, the slice thickness ranged from 0.2 to 2.0 mm and the pixel size from 0.2 to 1.0 mm. We used 100 subjects for training and 45 subjects for testing.
Table I: Mean and standard deviation of the Euclidean distance error for 8-fold cross-validation of the point and line regressions

Input  Point [mm]  Line [°]
Table II: Mean Euclidean distance error of the landmarks and elapsed time for each method

Methods  Mean [mm]  Duration [s]
Three-points  —  N/A
Three-points + ICP [16]  —  —
Three-points + Sparse ICP [18]  —  —
Three-points + Go-ICP [49]  —  —
Ours (only initial)  —  —
Ours  —  —
Ours (w/o stochastic)  —  —
III-A Deep-Pose Regressions
In this section, we visualize the point and line regressions and the corresponding initial registration results. Fig. 3 shows each result along with its corresponding MIP and depth images. The initial registration results using the points and lines are clearly shown in Fig. 3(d). For quantitative analysis, we used 8-fold cross-validation over the 100 training subjects. Table I shows the Euclidean distance errors for the point and line regressions. The results show no significant variation at inference (i.e., testing) time. Even if the initial deep-pose regression results in a slight misalignment, the subsequent optimal cluster-based fine registration step can compensate for the transformation biases.
III-B Optimal Clusters
The cluster-based fine registration algorithm is visualized in Fig. 4. As shown in Algorithm 1, crown-prioritized clusters are generated for local optimization, and the three optimal clusters with the most coherent local transformations are selected (Fig. 4(c)). The final registration procedure harnesses the three remaining optimal clusters, as demonstrated in (9). The metal artifact regions, whose crown boundaries are difficult to delineate, are automatically neglected owing to their non-coherent transformations (Fig. 4). Consequently, the clusters with relatively clear boundaries in the corresponding CT image are used for the final registration. We used a radius of 10 mm for all the experiments. The average number of initial clusters was 17, resulting in 68 clusters in total including the stochastic clusters.
III-C Performance Evaluation
The evaluation results were compared with the ground-truth registration results provided by experts in the field. To assess accuracy, we used the distance between landmarks on the surface models, which were also marked by the experts. Table II shows the mean Euclidean distance errors of the landmarks and the elapsed time for performing registration on the 45 test subjects (i.e., 90 registration results in total). Distance errors were calculated from ten landmark points:

$\epsilon = \frac{1}{10} \sum_{k=1}^{10} \lVert T^{*}(\mathbf{l}_{k}) - T^{gt}(\mathbf{l}_{k}) \rVert_{2},$   (10)

where $\mathbf{l}_{k}$ is a landmark point on the surface model, $T^{*}$ is the optimal transformation obtained by each method, and $T^{gt}$ is the ground-truth transformation. We compared our proposed algorithm with three-points manual registration followed by ICP-based methods [16, 17, 49, 50, 18]. These methods are state-of-the-art approaches frequently used in industry and research. To obtain the target points from a CT image, we first extracted an iso-surface of the crown at a fixed threshold value; it is well known that bone structures can be segmented by their Hounsfield units in CT images [51]. The three-points method is used as a baseline, and the basic ICP [16], sparse ICP [18], and Go-ICP [49] methods are used for the subsequent fine registration. We also experimented with eliminating the stochastic clusters from our method.
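The evaluation metric in (10) can be sketched as follows; representing both registrations as 4x4 homogeneous matrices is our assumed interpretation of the expert-provided ground truth.

```python
import numpy as np

def landmark_error(landmarks, T_est, T_gt):
    """Eq. (10)-style metric: mean Euclidean distance between the landmark
    positions mapped by the estimated and by the ground-truth rigid
    registrations (both 4x4 homogeneous matrices)."""
    h = np.c_[landmarks, np.ones(len(landmarks))]   # homogeneous coordinates
    p_est = h @ T_est.T[:, :3]                      # (N, 3) transformed points
    p_gt = h @ T_gt.T[:, :3]
    return float(np.linalg.norm(p_est - p_gt, axis=1).mean())
```

For example, if the estimated registration differs from the ground truth only by a 2 mm translation, the reported error is exactly 2 mm regardless of where the landmarks lie.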
The results indicate that our proposed method achieved the highest accuracy among all the compared methods. The ICP-based methods showed poor accuracy compared to three-points manual registration carefully performed by the experts. This degradation is reasonable because the iso-surfaces extracted from the CT images were severely affected by noise (Fig. 10). The variant of our method that skips the stochastic addition of clusters also outperformed the ICP-based methods. Since the randomly added clusters have a higher chance of avoiding metal artifact regions, the stochastic procedure further improved our registration accuracy; the stochastic rotations also help the clusters match the model better in the initial state. The analysis of local regions (i.e., local optimization and coherency analysis) makes our method slower than basic ICP. However, unlike the other methods, the complete automation of the procedure and the superior accuracy may compensate for the time complexity in dental applications.
Some of the samples are visualized in Fig. 10. Fig. 10(a) shows manual registration with three points. The first row of Fig. 10 demonstrates that all the methods perform accurate matching when there is no metal artifact in the CT image. For crown regions with metal artifacts, the ICP-based methods fail to precisely match the regions even when provided with the three points as a prior (second and third rows in Fig. 10(b)-(d)). The critical drawback of the ICP-based methods is that noisy regions (i.e., metal artifacts) strongly affect the target (i.e., the iso-surface from the CT image), and there is no explicit way to remove the metal noise from the iso-surface. Our proposed method (Fig. 10(e)), on the other hand, robustly matched the scanned models to the CT images in every condition. The main factor behind its superior accuracy is the selective use of regions unaffected by noise and the exclusion of noisy regions from consideration in the subsequent process (Fig. 4).
III-D Parameter Study
In this section, we study the radius parameter of a cluster. The size of the cluster affects the accuracy, the runtime, and the number of clusters. Fig. 11 shows the relationship between the radius, accuracy, and runtime. The optimal radius, which attained the highest accuracy, was 10 mm (Fig. 11). The accuracy decreased steeply with larger radii, demonstrating that bigger clusters have a higher probability of containing noisy regions. The noise may be metal artifacts or irrelevant portions that have no corresponding features in the CT scan. The runtime monotonically decreases as the radius increases beyond the optimum, at the cost of accuracy; a bigger radius leads to fewer clusters and vice versa. The most time-consuming part of our algorithm is the optimal cluster-based matching (i.e., local optimization), and local optimization with many small clusters is computationally costly (Fig. 11). In addition, the iterative coherency analysis of the transformations (7) is another performance bottleneck.
The cluster-based registration can be viewed as a special case of the ICP method: we iteratively removed outlying points (i.e., clusters) and performed correspondence matching via optimal vector alignment, implicitly applying noise removal by discarding clusters. The radius parameter does affect the performance of the proposed method; the results showed stable accuracy with a 10 mm radius. The stability of the algorithm is mainly achieved by taking many cluster candidates into account (i.e., the coherency evaluation in (7)).
IV Discussion and Conclusion
Dental cone-beam CT image analysis is a challenging problem because of the large variety of cases and the presence of metal artifacts. To overcome the taxing manual labor of registering CT images to optically scanned models, we propose fully automated CNN-based deep-pose estimation followed by cluster-based matching. Our work suggests the applicability of neural networks in simplifying the manual tasks involved in registration. Applying pose regression with neural networks allowed the use of traditional methods in the subsequent steps; as a result, we avoided the pitfall of handling the entire registration task with complex neural networks. Indeed, solving the 3D registration problem directly with a neural network is not yet feasible; the main limitations arise from the difficulty of generating ground-truth data and the unclear similarity criteria between images of different modalities. For fine registration, optimal cluster-based similarity matching was performed to accomplish confident region matching. Our method reduced the distance error by 30.77% to 70% compared to other state-of-the-art registration methods. The primary factor behind this improvement was the optimal local clusters, which helped rule out the regions affected by metal artifacts.
The proposed algorithm is applicable to full-arch scanned models in clinical practice. The proposed initial registration by deep regression networks is stable enough to be used for the subsequent fine registration process. Although working with full-arch scanned models is common practice, the occasional use of partially scanned models may be problematic for the algorithm, because we employ the PCA method to obtain the projection axis of the scanned model.
References
 [1] F. Alam, S. U. Rahman, S. Ullah, and K. Gulati, “Medical image registration in image guided surgery: Issues, challenges and research opportunities,” Biocybernetics and Biomedical Engineering, 2017.
 [2] O. Haas Jr, O. Becker, and R. de Oliveira, “Computeraided planning in orthognathic surgery—systematic review,” International journal of oral and maxillofacial surgery, vol. 44, no. 3, pp. 329–342, 2015.
 [3] L. H. Cevidanes, L. Bailey, G. Tucker Jr, M. Styner, A. Mol, C. Phillips, W. Proffit, and T. Turvey, “Superimposition of 3d conebeam ct models of orthognathic surgery patients,” Dentomaxillofacial Radiology, vol. 34, no. 6, pp. 369–375, 2005.
 [4] K. Stokbro, E. Aagaard, P. Torkov, R. Bell, and T. Thygesen, “Surgical accuracy of threedimensional virtual planning: a pilot study of bimaxillary orthognathic procedures including maxillary segmentation,” International journal of oral and maxillofacial surgery, vol. 45, no. 1, pp. 8–18, 2016.
 [5] F. Z. Jamjoom, D.G. Kim, E. A. McGlumphy, D. J. Lee, and B. Yilmaz, “Positional accuracy of a prosthetic treatment plan incorporated into a conebeam computed tomography scan using surface scan registration,” The Journal of prosthetic dentistry, 2018.
 [6] T. Flügge, W. Derksen, J. Te Poel, B. Hassan, K. Nelson, and D. Wismeijer, “Registration of cone beam computed tomography data and intraoral surface scans–a prerequisite for guided implant surgery with cad/cam drilling guides,” Clinical oral implants research, vol. 28, no. 9, pp. 1113–1118, 2017.
 [7] G. Eggers, J. Mühling, and R. Marmulla, “Imagetopatient registration techniques in head surgery,” International journal of oral and maxillofacial surgery, vol. 35, no. 12, pp. 1081–1095, 2006.
 [8] J. M. Plooij, T. J. Maal, P. Haers, W. A. Borstlap, A. M. KuijpersJagtman, and S. J. Bergé, “Digital threedimensional image fusion processes for planning and evaluating orthodontics and orthognathic surgery. a systematic review,” International journal of oral and maxillofacial surgery, vol. 40, no. 4, pp. 341–352, 2011.
 [9] J. Gateno, J. J. Xia, J. F. Teichgraeber, A. M. Christensen, J. J. Lemoine, M. A. Liebschner, M. J. Gliddon, and M. E. Briggs, “Clinical feasibility of computeraided surgical simulation (cass) in the treatment of complex craniomaxillofacial deformities,” Journal of Oral and Maxillofacial Surgery, vol. 65, no. 4, pp. 728–734, 2007.
 [10] S. A.H. Centenero and F. HernándezAlfaro, “3d planning in orthognathic surgery: Cad/cam surgical splints and prediction of the soft and hard tissues results–our experience in 16 cases,” Journal of CranioMaxillofacial Surgery, vol. 40, no. 2, pp. 162–168, 2012.
 [11] F. Ritto, A. Schmitt, T. Pimentel, J. Canellas, and P. Medeiros, “Comparison of the accuracy of maxillary position between conventional model surgery and virtual surgical planning,” International journal of oral and maxillofacial surgery, vol. 47, no. 2, pp. 160–166, 2018.
 [12] F. A. Rangel, T. J. Maal, M. J. de Koning, E. M. Bronkhorst, S. J. Bergé, and A. M. KuijpersJagtman, “Integration of digital dental casts in cone beam computed tomography scans—a clinical validation study,” Clinical oral investigations, vol. 22, no. 3, pp. 1215–1222, 2018.
 [13] H. Popat and S. Richmond, “New developments in: threedimensional planning for orthognathic surgery,” Journal of orthodontics, vol. 37, no. 1, pp. 62–71, 2010.
 [14] K. Becker, B. Wilmes, C. Grandjean, and D. Drescher, “Impact of manual control point selection accuracy on automated surface matching of digital dental models,” Clinical oral investigations, vol. 22, no. 2, pp. 801–810, 2018.
 [15] B. Zitova and J. Flusser, “Image registration methods: a survey,” Image and vision computing, vol. 21, no. 11, pp. 977–1000, 2003.
 [16] P. J. Besl and N. D. McKay, “Method for registration of 3d shapes,” in Sensor Fusion IV: Control Paradigms and Data Structures, vol. 1611. International Society for Optics and Photonics, 1992, pp. 586–607.
 [17] S. Rusinkiewicz and M. Levoy, “Efficient variants of the icp algorithm,” in 3D Digital Imaging and Modeling, 2001. Proceedings. Third International Conference on. IEEE, 2001, pp. 145–152.
 [18] S. Bouaziz, A. Tagliasacchi, and M. Pauly, “Sparse iterative closest point,” in Proceedings of the Eleventh Eurographics/ACMSIGGRAPH Symposium on Geometry Processing. Eurographics Association, 2013, pp. 113–123.
 [19] J. Ahn, A. Park, J. Kim, B. Lee, and J. Eom, “Development of threedimensional dental scanning apparatus using structured illumination,” Sensors, vol. 17, no. 7, p. 1634, 2017.
 [20] A. J. Herline, J. L. Herring, J. D. Stefansic, W. C. Chapman, R. L. Galloway Jr, and B. M. Dawant, “Surface registration for use in interactive, imageguided liver surgery,” Computer Aided Surgery: Official Journal of the International Society for Computer Aided Surgery (ISCAS), vol. 5, no. 1, pp. 11–17, 2000.
 [21] G. Jin, S.J. Lee, J. K. Hahn, S. Bielamowicz, R. Mittal, and R. Walsh, “3d surface reconstruction and registration for image guided medialization laryngoplasty,” in International Symposium on Visual Computing. Springer, 2006, pp. 761–770.
 [22] N. Bolandzadeh, W. Bischof, C. FloresMir, and P. Boulanger, “Multimodal registration of threedimensional maxillodental cone beam ct and photogrammetry data over time,” Dentomaxillofacial Radiology, vol. 42, no. 2, p. 22027087, 2013.
 [23] H.H. Lin, W.C. Chiang, L.J. Lo, and C.H. Wang, “A new method for the integration of digital dental models and conebeam computed tomography images,” in Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE. IEEE, 2013, pp. 2328–2331.
 [24] Y. Fan, D. Jiang, M. Wang, and Z. Song, “A new markerless patienttoimage registration method using a portable 3d scanner,” Medical physics, vol. 41, no. 10, 2014.
 [25] K. Jung, S. Jung, I. Hwang, T. Kim, and M. Chang, “Registration of dental tomographic volume data and scan surface data using dynamic segmentation,” Applied Sciences, vol. 8, no. 10, p. 1762, 2018.
 [26] J. Gateno, J. Xia, J. F. Teichgraeber, and A. Rosen, “A new technique for the creation of a computerized composite skull model,” Journal of oral and maxillofacial surgery, vol. 61, no. 2, pp. 222–227, 2003.
 [27] M. Tsuji, N. Noguchi, M. Shigematsu, Y. Yamashita, K. Ihara, M. Shikimori, and M. Goto, “A new navigation system based on cephalograms and dental casts for oral and maxillofacial surgery,” International journal of oral and maxillofacial surgery, vol. 35, no. 9, pp. 828–836, 2006.
 [28] W.M. Yang, C.T. Ho, and L.J. Lo, “Automatic superimposition of palatal fiducial markers for accurate integration of digital dental model and cone beam computed tomography,” Journal of Oral and Maxillofacial Surgery, vol. 73, no. 8, pp. 1616–e1, 2015.
 [29] F. A. Rangel, T. J. Maal, S. J. Bergé, and A. M. KuijpersJagtman, “Integration of digital dental casts in conebeam computed tomography scans,” ISRN dentistry, vol. 2012, 2012.
 [30] L. Fieten, K. Schmieder, M. Engelhardt, L. Pasalic, K. Radermacher, and S. Heger, “Fast and accurate registration of cranial ct images with amode ultrasound,” International journal of computer assisted radiology and surgery, vol. 4, no. 3, p. 225, 2009.
 [31] O. de Waard, F. Baan, L. Verhamme, H. Breuning, A. M. KuijpersJagtman, and T. Maal, “A novel method for fusion of intraoral scans and conebeam computed tomography scans for orthognathic surgery planning,” Journal of CranioMaxillofacial Surgery, vol. 44, no. 2, pp. 160–166, 2016.
 [32] L. Sun, H.S. Hwang, and K.M. Lee, “Registration area and accuracy when integrating laserscanned and maxillofacial conebeam computed tomography images,” American Journal of Orthodontics and Dentofacial Orthopedics, vol. 153, no. 3, pp. 355–361, 2018.
 [33] J. Jiang, P. Trundle, and J. Ren, “Medical image analysis with artificial neural networks,” Computerized Medical Imaging and Graphics, vol. 34, no. 8, pp. 617–631, 2010.

 [34] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. Van Der Laak, B. Van Ginneken, and C. I. Sánchez, “A survey on deep learning in medical image analysis,” Medical image analysis, vol. 42, pp. 60–88, 2017.
 [35] X. Yang, R. Kwitt, and M. Niethammer, “Fast predictive image registration,” in Deep Learning and Data Labeling for Medical Applications. Springer, 2016, pp. 48–57.
 [36] M. Simonovsky, B. GutiérrezBecker, D. Mateus, N. Navab, and N. Komodakis, “A deep metric for multimodal registration,” in International Conference on Medical Image Computing and ComputerAssisted Intervention. Springer, 2016, pp. 10–18.
 [37] S. Miao, Z. J. Wang, and R. Liao, “A cnn regression approach for realtime 2d/3d registration,” IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1352–1363, 2016.
 [38] X. Yang, R. Kwitt, M. Styner, and M. Niethammer, “Quicksilver: Fast predictive image registration–a deep learning approach,” NeuroImage, vol. 158, pp. 378–396, 2017.
 [39] Y. Hu, M. Modat, E. Gibson, W. Li, N. Ghavami, E. Bonmati, G. Wang, S. Bandula, C. M. Moore, M. Emberton et al., “Weaklysupervised convolutional neural networks for multimodal image registration,” Medical image analysis, vol. 49, pp. 1–13, 2018.
 [40] J.J. Hwang, Y.H. Jung, B.H. Cho, and M.S. Heo, “An overview of deep learning in the field of dentistry,” in Imaging science in dentistry, 2019.
 [41] R. A. Horn, R. A. Horn, and C. R. Johnson, Matrix analysis. Cambridge university press, 1990.
 [42] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and intelligent laboratory systems, vol. 2, no. 13, pp. 37–52, 1987.
 [43] G. H. Golub and C. Reinsch, “Singular value decomposition and least squares solutions,” Numerische mathematik, vol. 14, no. 5, pp. 403–420, 1970.
 [44] G. Strang, Introduction to linear algebra. Wellesley-Cambridge Press, Wellesley, MA, 1993, vol. 3.
 [45] K. Simonyan and A. Zisserman, “Very deep convolutional networks for largescale image recognition,” arXiv preprint arXiv:1409.1556, 2014.

 [46] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the thirteenth international conference on artificial intelligence and statistics, 2010, pp. 249–256.
 [47] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
 [48] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, “Convergence properties of the nelder–mead simplex method in low dimensions,” SIAM Journal on optimization, vol. 9, no. 1, pp. 112–147, 1998.
 [49] J. Yang, H. Li, D. Campbell, and Y. Jia, “Go-ICP: A globally optimal solution to 3D ICP point-set registration,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 11, pp. 2241–2254, 2016.
 [50] D. Aiger, N. J. Mitra, and D. CohenOr, “4points congruent sets for robust pairwise surface registration,” in ACM Transactions on Graphics (TOG), vol. 27, no. 3. ACM, 2008, p. 85.
 [51] C. E. Misch, Contemporary implant dentistryEBook. Elsevier Health Sciences, 2007.