Accurate registration and tracking are essential for any surgical navigation system. Recently, the HoloLens has been shown to be systematically superior to comparable devices for surgical applications [3, 7]. Computer vision approaches have been used to improve anatomic 3D model tracking and registration, but most rely on custom image-based markers, such as QR codes or printed patterns, that cannot be used in sterile environments or during procedures. Ideally, markers should be unique and easily detectable both by computer vision and on medical imaging. VisiMarkers (Clear Guide Medical, Baltimore, MD) are sterile, radiopaque markers bearing AprilTags that are used for image fusion and registration. To our knowledge, no other available image-based markers are simultaneously sterile, radiopaque, and easily detectable on CT scans.
In this paper, we propose a workflow that uses VisiMarkers to perform automatic registration for AR-assisted surgical navigation. We developed a method to segment optical markers from CT scans, together with a fast and accurate algorithm for marker detection and registration. Compared to common registration methods, such as the Iterative Closest Point (ICP) algorithm [5, 1] and the Vuforia library, our algorithm offers either faster registration or higher registration accuracy, while remaining capable of performing registration from sparse and noisy point correspondences.
2.1 CT Segmentation
VisiMarkers were placed on a subject’s skin prior to a routine CT exam. The CT scan was then anonymized and exported as a DICOM image set. The image set was then imported into GNU Octave, where segmentation and analysis were performed. The segmentation procedure is depicted in Figure 1. To produce a 3D object for display with the HoloLens, the DICOM image set was imported into the 3D viewer plugin of FIJI, from which it was exported as an STL file and then converted into an OBJ file using Blender.
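To illustrate the core of the segmentation step: radiopaque markers appear as high-intensity voxels on CT, so they can be isolated by thresholding and grouped into connected components whose centroids serve as fiducial points. The sketch below is an assumption-laden stand-in, not our Octave implementation; the 2000 HU threshold and the function name `marker_centroids` are illustrative choices.

```python
import numpy as np
from collections import deque

def marker_centroids(volume_hu, threshold=2000.0):
    """Label connected voxels above `threshold` (6-connectivity) and
    return one centroid (z, y, x) per connected component."""
    mask = volume_hu > threshold
    visited = np.zeros_like(mask, dtype=bool)
    centroids = []
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        # breadth-first flood fill of one connected component
        queue, voxels = deque([seed]), []
        visited[seed] = True
        while queue:
            z, y, x = queue.popleft()
            voxels.append((z, y, x))
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
        centroids.append(np.mean(voxels, axis=0))
    return np.array(centroids)
```

In practice the centroids would also be mapped from voxel indices to patient coordinates using the DICOM spacing and orientation metadata.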
2.2 HoloLens Calibration
The calibration process for the HoloLens device involves two steps. First, following a standard camera calibration procedure developed for a pinhole camera, the intrinsic parameters of the HoloLens are estimated using a standard ChArUco board [8, 4]. Second, to correct the misalignment between the hologram and the corresponding marker, the position shift of the hologram is measured, modelled, and corrected.
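The pinhole model underlying this calibration maps a point in camera coordinates to a pixel via the intrinsic matrix K. The sketch below shows only the model, not the ChArUco-based estimation itself; the focal lengths and principal point values used are illustrative, not calibrated HoloLens values.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Assemble the 3x3 pinhole intrinsic matrix K from focal lengths
    (fx, fy) and principal point (cx, cy), all in pixels."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(point_cam, K):
    """Project a 3D point in camera coordinates to pixel coordinates:
    form the homogeneous image point K @ p, then divide by depth."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[:2] / p[2]
```

Calibration estimates the entries of K (and lens distortion, omitted here) by observing the ChArUco board from multiple poses; once K is known, detected tag corners can be back-projected into rays for 3D marker localization.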
2.3 Registration Algorithm
One of the main challenges of marker registration in a surgical setting is the limited number of available point correspondences. In such scenarios, ICP and Vuforia become highly sensitive to noise in the position measurements. We developed an algorithm that is robust under sparse point correspondences while achieving high accuracy and registration speed. In the following experiments, we test the most extreme condition, in which only three tags are detected on the CT scan. Our registration algorithm is based on aligning triangles formed by the segmented markers on the CT scan with those formed by the detected markers on the patient’s body. The method keeps track of all detected triangles, finds the one most similar to the triangle on the CT scan, and aligns the corresponding triangles. Each triangle is encoded in a k-d tree keyed by the ratios of its edge lengths, ordered from the longest edge to the shortest edge. Figure 3 illustrates the main steps of the algorithm.
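The matching and alignment steps can be sketched as follows. This is a hedged illustration rather than our implementation: the k-d tree lookup is replaced by a brute-force search over candidate triangles, the sorted edge-ratio key is one plausible choice of triangle descriptor, and the final alignment uses a closed-form SVD-based rigid fit in the spirit of the absolute-orientation solutions cited above.

```python
import numpy as np
from itertools import combinations

def triangle_key(pts):
    """Descriptor of a 3-point triangle: edge-length ratios
    (e_mid/e_max, e_min/e_max), invariant to rigid motion."""
    e = sorted(np.linalg.norm(pts[i] - pts[j])
               for i, j in combinations(range(3), 2))
    return np.array([e[1] / e[2], e[0] / e[2]])

def best_matching_triangle(ct_tri, detected_pts):
    """Among all 3-subsets of detected points, return the one whose
    edge-ratio key is closest to the CT triangle's key (brute force
    here; a k-d tree over the keys makes this lookup fast)."""
    target = triangle_key(ct_tri)
    best, best_d = None, np.inf
    for idx in combinations(range(len(detected_pts)), 3):
        tri = detected_pts[list(idx)]
        d = np.linalg.norm(triangle_key(tri) - target)
        if d < best_d:
            best, best_d = tri, d
    return best

def rigid_align(src, dst):
    """Closed-form rigid transform (R, t) minimizing ||R @ src + t - dst||
    over corresponded points, via SVD (Kabsch / Horn-style solution)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```

Once the best triangle is found, its vertices are corresponded with the CT markers and `rigid_align` produces the transform that anchors the hologram to the patient.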
3.1 Registration Speed
We compared our registration algorithm with two other popular methods, Vuforia and ICP. The registration times for our algorithm and ICP were evaluated using the CPU utilization rate reported in the Windows Device Portal connected to the HoloLens. First, we measured the CPU utilization rate during idle app runtime for one minute, and found that the average CPU utilization rate is about . Second, we ran the registration algorithm and recorded the elapsed time during which the CPU utilization rate stayed above the previously measured average. The remaining registration speed data, including the registration speed of Vuforia, are taken from the paper by Park et al.
3.2 Registration Accuracy
Registration accuracy was measured by overlaying the hologram of a reference image onto a physical copy of that image and measuring the distance between the centers of the two images, as shown in Figure 5. The registration accuracy of Vuforia was measured with the same method but a different marker, namely the one employed by Park et al., because the reference image used by our algorithm contains too few features for Vuforia to track. The left subplot of Figure 6 shows that, for ICP and our algorithm, registration accuracy depends on the number of detected markers. Note that the registration error of our algorithm decreases from cm to cm as the number of markers increases from 4 to 10. The right subplot of Figure 6 compares the performance of Vuforia and our algorithm under distance variations. To evaluate the effect of distance on registration accuracy, we placed the markers at six different distances from the HoloLens and measured the registration accuracy using both Vuforia and our algorithm. The data suggest that Vuforia has a shorter detection range than our algorithm, and that its performance improves as the distance decreases. Our algorithm’s registration error, on the other hand, reaches a local minimum at a distance of 51.5 cm. This distance is approximately one arm’s length from the tag, allowing a natural pose for the surgeon in an operating room.
4 New or Breakthrough Work
Our goal is to implement a complete pipeline to align a patient’s CT scan with the patient in real time. In this report, we describe specifically the segment of our pipeline for registering cross-sectional imaging data back onto the patient. The key insight is the use of radio-opaque markers. By placing these markers on the patient before a scan, fiducial markers become embedded directly into the image data, providing two sets of points to be aligned. Using these point clouds, we can now produce accurate registrations rapidly, since we eliminate the time-consuming manual refinement step that was previously required to achieve our designated level of accuracy.
We developed a method that uses radio-opaque optical markers to perform accurate CT scan registration, and evaluated its registration speed and accuracy against common alternatives such as Vuforia and the ICP algorithm. This method can be applied in a real-world surgical field, potentially allowing physicians to “see” inside a patient and significantly reduce procedure time.
-  (1975) Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9), pp. 509–517.
-  (2017) GNU Octave version 4.2.1 manual: a high-level interactive language for numerical computations.
-  (2018) Augmenting Microsoft’s HoloLens with Vuforia tracking for neuronavigation. Healthcare Technology Letters 5(5), pp. 221–225.
-  Generation of fiducial marker dictionaries using mixed integer linear programming. Pattern Recognition 51.
-  (1988) Closed-form solution of absolute orientation using orthonormal matrices. JOSA A 5(7), pp. 1127–1135.
-  (2019) Registration methods to enable augmented reality-assisted 3D image-guided interventions. In 15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Vol. 11072, pp. 110721A.
-  (2017) Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D display. International Journal of Computer Assisted Radiology and Surgery 12(6), pp. 901–910.
-  (2018) Speeded up detection of squared fiducial markers. Image and Vision Computing 76.
-  (2012) Fiji: an open-source platform for biological-image analysis. Nature Methods 9(7), pp. 676.
-  (2000) A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, pp. 1330–1334.