Recent Developments and Future Challenges in Medical Mixed Reality

08/03/2017 ∙ by Long Chen, et al. ∙ Bournemouth University ∙ University of Chester

Mixed Reality (MR) is of increasing interest within technology-driven modern medicine but is not yet used in everyday practice. This situation is changing rapidly, however, and this paper explores the emergence of MR technology and the importance of its utility within medical applications. A classification of medical MR has been obtained by applying an unbiased text mining method to a database of 1,403 relevant research papers published over the last two decades. The classification results reveal a taxonomy for the development of medical MR research during this period as well as suggesting future trends. We then use the classification to analyse the technology and applications developed in the last five years. Our objective is to aid researchers to focus on the areas where technology advancements in medical MR are most needed, as well as providing medical practitioners with a useful source of reference.




1 Classification of Medical MR

Bibliometric methods are the most common approach to identifying research trends through the analysis of scientific publications [7] [8] [9] [10]. These methods typically make predictions by measuring indicators such as the geographical distribution of research institutes, the annual growth of publications, and citation counts [11]. Usually a manual classification process is carried out [10], which is inefficient and can be affected by personal experience. We instead make use of a generative probabilistic model for text mining, Latent Dirichlet Allocation (LDA) [12], to automatically classify the articles and generate the categories, achieving an unbiased review process.

Medical MR is a very broad topic that can be viewed as a multidimensional domain with a crossover of many technologies (e.g. camera tracking, visual displays, computer graphics, robotic vision, and computer vision) and applications (e.g. medical training, rehabilitation, intra-operative navigation, and guided surgery). The challenge is to identify research trends across this complex technological and application domain. Our approach is to analyse the relevant papers retrieved from different periods, whilst introducing a novel method to automatically decompose the overarching topic (medical mixed reality) into relevant sub-topics that can be analysed separately.

Topic 1 “Treatment”: treatment 0.007494, clinical 0.007309, primary 0.004333, qualitative 0.003793, focus 0.004165
Topic 2 “Education”: learning 0.01602, development 0.00877, education 0.00854, potential 0.00812, different 0.00793
Topic 3 “Rehabilitation”: physical 0.01383, rehabilitation 0.01147, environment 0.01112, game 0.00837, therapy 0.00729
Topic 4 “Surgery”: surgical 0.05450, surgery 0.02764, surgeon 0.01176, invasive 0.01167, minimally 0.01148
Topic 5 “Training”: training 0.03044, performance 0.01361, laparoscopic 0.01332, skills 0.01208, simulator 0.01198
Topic 6 “Interaction”: human 0.019901, interaction 0.014849, haptic 0.01439, feedback 0.013308, interface 0.009382
Topic 7 “Mobile”: software 0.01684, mobile 0.01556, support 0.00905, online 0.00874, social 0.00835
Topic 8 “Display”: visualization 0.03260, data 0.03177, display 0.00984, navigation 0.01278, planning 0.01225
Topic 9 “Registration”: registration 0.01417, segmentation 0.00942, accurate 0.00765, deformation 0.00762, motion 0.00754
Topic 10 “Tracking”: tracking 0.02418, accuracy 0.01584, camera 0.01454, target 0.01347, registration 0.01186

Table 1: Topic clustering results from the LDA model (top five terms and their weights per topic; assigned labels in quotation marks)

1.1 Data Source

The source database used in this analysis is Scopus, the largest abstract and citation database of peer-reviewed literature, covering more than 5,000 international publishers [13]. Scopus indexes articles published after 1995 [14], thereby encompassing the main period of growth in MR, and it supports both keyword searching and citation analysis.

1.2 Selection Criteria

The search query “(mixed OR augmented) reality medic*” is used to retrieve all articles related to augmented reality and mixed reality in medicine, capturing “augmented reality medicine”, “augmented reality medical”, “mixed reality medicine”, and “mixed reality medical”. A total of 1,425 articles were retrieved within the 21-year period from 1995 to 2015, of which 1,403 abstracts were accessed. We initially categorised these articles into seven chronological periods, one for every three years. The abstracts of these articles were then used to generate topics and for trend analysis, as they provide more comprehensive information about an article than its title and keywords alone. The whole selection process was carried out automatically with no manual intervention.
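As an illustration (our sketch, not the authors' code), the selection criterion above can be mirrored as a regular expression that accepts exactly the four phrase variants listed:

```python
import re

# Either "mixed" or "augmented", then "reality", then any word starting with
# "medic" (medicine, medical, ...), case-insensitively.
QUERY = re.compile(r"\b(mixed|augmented)\s+reality\s+medic\w*", re.IGNORECASE)

def matches_query(text: str) -> bool:
    """Return True if a title/abstract matches the selection criterion."""
    return QUERY.search(text) is not None

examples = [
    "A survey of augmented reality medical training systems",  # match
    "Mixed reality medicine for rehabilitation",               # match
    "Virtual reality in medicine",                             # no match
]
print([matches_query(t) for t in examples])  # → [True, True, False]
```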

1.3 Text Mining

To identify the key topics discussed in a large number of articles, we employ the Latent Dirichlet Allocation (LDA) [12] method to automatically interpret and cluster words and documents into different topics. This text mining method has been widely used in recommendation systems such as web search engines and advertising applications. LDA is a generative probabilistic model of a corpus. It regards documents (d) as random mixtures over latent topics (z), where every topic is characterized by a distribution over words (w). The method uses the following formula:

P(w | d) = Σ_z P(w | z) · P(z | d)

where P(w | z) represents the probability of a certain word under a certain topic and P(z | d) the probability of that topic in a certain document; their product gives the probability of a certain word in a certain document under that topic. The word-topic distribution P(w | z) and the topic-document distribution P(z | d) are randomly initialised, and LDA then iteratively updates and estimates these probabilities until the system converges. As a result, we identify the relationships amongst documents, topics and words. We input all of the downloaded abstracts into the LDA model and tested a range of parameters to use with it. We empirically derived the value of ten topics as optimal for the best encapsulation of the field.
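A minimal sketch of such an LDA pipeline using scikit-learn (a toy four-abstract corpus with two topics standing in for the paper's 1,403 abstracts and ten topics):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for the downloaded Scopus abstracts.
abstracts = [
    "surgical navigation for minimally invasive surgery",
    "tracking accuracy of camera based registration",
    "rehabilitation therapy game for stroke patients",
    "laparoscopic training simulator improves surgical skills",
]

# Bag-of-words term counts per document.
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)

# Two topics for the toy corpus (the paper empirically chose ten).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)  # topic-document distributions
word_topic = lda.components_           # unnormalised word-topic weights

# Each row of doc_topic is a distribution over topics for one abstract.
print(doc_topic.shape)  # → (4, 2)
```

The top-weighted terms per row of `word_topic` are what Table 1 reports for each topic.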

1.4 Topic Generation

Table 1 summarizes the output, showing the ten topics identified with the associated term and weight distributions after convergence. We manually assign one word (shown in quotation marks) that best represents each topic. The general methodology uses the weighting as the primary selection parameter but also takes into account how descriptive the keyword is for that topic. Topics 1, 5, 9 and 10 simply use the keyword with the highest weighting. For Topic 2, although “education” did not have the highest weighting, we consider it the more representative term. For Topic 3, “physical” is a sub-category of “rehabilitation” and so we use the latter as the more generic term. In Topic 4, “surgical” and “surgery” are fundamentally the same. For Topic 6, “interaction” is the most representative keyword, and the same principle applies to Topics 7 and 8.

Figure 1: Hierarchical taxonomy of Medical MR

Figure 1 represents a hierarchical taxonomy of the results. The overarching “Medical MR” topic with 1,403 articles has been divided into two main sub-categories: applications and technologies, with 667 and 736 articles respectively. Within applications, the surgical topic has the largest number of articles (167), followed by rehabilitation (137), education (134), treatment (125) and training (104). Within technologies, registration is the most discussed topic (184 articles), followed by tracking (176), interaction (161), displays (148) and mobile technologies (67).

2 Trend Analysis

Each of the ten topics of interest identified by the LDA model has a list of articles associated with it. An article matrix was constructed based on the topics and the attributes for the seven chronological periods being analysed. Figure 2 summarizes the trends identified, subdivided into three-year periods (1995-97, 1998-2000, 2001-03, 2004-06, 2007-09, 2010-12, and 2013-15). Figure 2(a) plots the total number of publications over the seven periods. The number of publications related to MR in medicine has increased more than tenfold, from only 41 publications in 1995-1997 to 440 publications in 2013-2015. In the early 21st century (periods 2001-2003 and 2004-2006), the number of publications on MR in medicine more than doubled from 93 to 191, coinciding with the rapid development of many enabling technologies such as marker-based tracking techniques [15] and advances in Head Mounted Display (HMD) technology [16] and mobile AR devices [17].

Figure 2: Trend analysis: (a) Publication Trends. (b) Application Trends. (c) Technology Trends.

Based on the observed growth pattern between 1995 and 2015, a trend line has been produced using a quadratic polynomial fit with a high coefficient of determination. Extrapolating the trend line forecasts that in the next two three-year periods (2016 to 2018, and 2019 to 2021), the number of scientific papers on the topic of MR in medicine will continue to accelerate, reaching around 550 and 700 respectively. The following section looks in more detail at the topic trends, and we then analyse the research trends in each area.
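The extrapolation can be reproduced as a sketch from the publication counts quoted in the text (the counts for the other three periods are not given here, so only the four quoted values are used; this is our illustration, not the authors' exact fit):

```python
import numpy as np

# x indexes the three-year periods, with 1995-97 = 0.
x = np.array([0, 2, 3, 6])        # 1995-97, 2001-03, 2004-06, 2013-15
y = np.array([41, 93, 191, 440])  # publication counts quoted in the text

coeffs = np.polyfit(x, y, deg=2)  # quadratic least-squares fit
trend = np.poly1d(coeffs)

# Extrapolate to 2016-18 (x=7) and 2019-21 (x=8).
print(round(trend(7)), round(trend(8)))
```

Even with only these four points, the fit lands close to the ~550 and ~700 forecasts given above.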

2.1 Applications Trends

There are a growing number of medical application areas exploring the use of MR. Fig. 2(b) plots the percentage of articles published for the five most popular application categories: patient treatment, medical and patient education, rehabilitation, surgery, and procedures training:

  • Patient treatment was the most targeted application of MR in the earlier period with almost 20% of published articles. It remains a constant topic of interest with around 10% of articles continuing to investigate this topic. The fall in percentage is mostly due to the parallel rise in interest in the other medical MR categories. Education and rehabilitation topics have both fluctuated but remain close to 10% of articles published.

  • A surge of interest in surgical applications can be observed between 2004 and 2009 when 16% of all articles published on medical MR addressed this topic. However, the comparative dominance of surgical applications has dropped off as activity within other categories has increased.

  • Training applications in medical MR first emerged between 1998-2000. Interest in this topic has grown steadily and is now at a similar level of interest to the other topics. Together with education, continuation of the current trends suggests that these two topics will be the most popular in the next few years. These are areas where there is no direct involvement with patients, and so ethical approval may be easier to gain.

2.2 Technologies Trends

Within the ten topics generated by the LDA model, five key technologies have been identified: interaction, mobile, display, registration and tracking (the percentage of articles that refer to these technologies is plotted in Fig. 2(c)):

  • Real-time interaction is a crucial component of any MR application in medicine, especially when interaction with patients is involved. The percentage of articles that discuss interaction in the context of medical MR increased steadily from 5% in 1995-1997 to 10% in 2013-2015.

  • The use of mobile technologies is an emerging trend, growing from 0% to 7% of articles across the seven periods. The main surge so far came in 2004-2006, when advances in micro-electronics first enabled mobile devices to run fast enough to support AR applications. The use of mobile technologies has been more or less constant from that point onwards. Smartphones and tablets can significantly increase mobility and improve the user experience, as the user is not tethered to a computer, or even to one fixed place as was the case with Sutherland's [18] first AR prototype in the 1960s.

  • Innovation in the use of display technologies was the most discussed MR topic in the early part of the time-line. However, there has been a subsequent dramatic drop in such articles, falling from 33% of articles to only 5%. This may indicate the maturity of the display technologies currently in use. The Microsoft HoloLens and other new devices are expected to disrupt this trend, however.

  • Tracking and registration are important enabling components of MR that directly impact the usability and performance of a system. These areas continue to be explored and are yet to mature sufficiently for complex scenarios, as reflected by the steady percentage (around 10%) of articles on tracking and registration technology from 1995 to 2015.

In the next two sections we summarise the latest research in medical MR using the classification scheme identified above. We restrict our analysis to publications in the last five years, citing recent survey papers wherever possible.

3 Applications

3.1 Treatment

There is no doubt that the use of MR can assist in a variety of patient treatment scenarios. Radiotherapy treatment is one example and Cosentino et al.  [19] have provided a recent review. Most of the studies identified in this specialty area indicate that MR in radiotherapy still needs to address limitations around patient discomfort, ease of use and sensible selection and accuracy of the information to be displayed, although the required accuracy for most advanced radiotherapy techniques is of the same order of magnitude as that which is already being achieved with MR.

Treatment guidance in general can be expected to benefit from MR. One of the only commercial systems currently exploiting AR in a hospital setting is VeinViewer Vision (Christie Medical Holdings, Inc, Memphis, TN), a system to assist with vascular access procedures. It projects near-infrared light onto the skin of a patient; the light is absorbed by blood and reflected by surrounding tissue. This information is captured in real time and allows an image of the vascular network to be projected back onto the skin, providing an accurate map of the patient's blood vessel pattern - see Figure 3. This makes many needle puncture procedures, such as gaining IV access, easier to perform successfully [20]. However, another study with skilled nurses found that the success of first attempts with VeinViewer actually worsened [21], highlighting the need for further clinical validation. The ability to identify targets for needle puncture or insertion of other clinical tools is an active area of development.

Figure 3: The VeinViewer® Vision system projects the patient’s vascular network onto the skin to help with needle insertion. Image courtesy of Christie Medical Holdings.

3.2 Rehabilitation

Patient rehabilitation is a broad term that covers many different types of therapy to assist patients in recovering from a debilitating mental or physical ailment. Many studies report that MR provides added motivation for stroke patients and those recovering from injuries, who often get bored by the repetition of the exercises they need to complete (e.g. [22], [23], [24]). Often solutions are based on affordable technologies that can be deployed in patients' homes. MR systems can also be used to monitor that the patient is performing exercises correctly (e.g. [25]).

Neurological rehabilitation is another area where MR is being applied. Caraiman et al.  [26] present a neuromotor rehabilitation system that provides augmented feedback as part of a learning process for the patient, helping the brain to create new neural pathways to adapt.

Rehabilitation is probably the most mature of the medical application areas currently using MR, and will continue to flourish as more home use deployment becomes possible.

3.3 Surgery

Surgical guidance can benefit from MR by providing information from medical scan images to a surgeon in a convenient and intuitive manner. Kersten-Oertel et al. [27] provide a useful review of mixed reality image-guided surgery and highlight four key issues: the need to choose appropriate information for the augmented visualization; the use of appropriate visualization processing techniques; addressing the user interface; and evaluation/validation of the system. These remain important areas of research. For example, a recent study using 50 experienced otolaryngology surgeons found that those using an AR display were less able to detect unexpected findings (such as a foreign body) than those using the standard endoscopic monitor with a submonitor for augmented information [28]. Human computer interaction (HCI) therefore remains an unsolved problem for this and many other aspects of medical MR.

Most surgical specialties areas are currently experimenting with the use of MR. One of the most active areas is within neurosurgery and Meola et al.  [29] provide a systematic review. They report that AR is a versatile tool for minimally invasive procedures for a wide range of neurological diseases and can improve on the current generation of neuronavigation systems. However, more prospective randomized studies are needed, such as  [30] and  [31]. There is still a need for further technological development to improve the viability of AR in neurosurgery, and new imaging techniques should be exploited.

Minimally invasive surgical (MIS) procedures are another growth area for the use of MR. Such procedures may be within the abdominal or pelvic cavities (laparoscopy), or the thoracic or chest cavity (thoracoscopy). They are typically performed far from the target location through small incisions (usually 0.5-1.5 cm) elsewhere in the body. The surgeon's field of view (FOV) is limited to the endoscope's camera view and their depth perception is usually dramatically reduced. Nicolau et al. [32] discuss the principles of AR in the context of laparoscopic surgical oncology and the benefits and limitations for clinical use. Using patient images from medical scanners, AR can improve the visual capabilities of the surgeon and compensate for their otherwise restricted field of view. Two main processes are important in AR: 3D visualization of the patient data, and registration of the visualization onto the patient. The dynamic tracking of organ motion and deformation has been identified as a limitation of AR in MIS [33], however. Vemuri et al. [34] have proposed a deformable 3D model architecture that has been tested in twelve surgical procedures with positive feedback from surgeons. A good overview of techniques that can be used to improve depth perception is given by Wang et al. [35]. They compared several techniques qualitatively: transparent overlay, virtual window, random-dot mask, ghosting, and depth-aware ghosting; but have not yet obtained quantitative results from a study with subject experts.

Recent uses of MR in surgery have also been reported for: hepatic surgery [36] [37] [38]; eye surgery [39]; oral and maxillofacial surgery [40] [41]; nephrectomy (kidney removal) [42]; and transpyloric feeding tube placement [43]. It is apparent that all surgical specialties can gain value from the use of MR. However, many of the applications reported are yet to be validated on patients in a clinical setting, which is a time-consuming task but a vital next step.

Industry adoption of the technology will also be key and is starting to occur. In early 2017, Philips announced their first augmented reality system designed for image-guided spine, cranial and trauma surgery. It uses high-resolution optical cameras mounted on the flat panel X-ray detector of a C-arm to image the surface of the patient. It then combines the external view captured by the cameras with the internal 3D view of the patient acquired by the X-ray system to construct an MR view of the patient's external and internal anatomy. Clinical cases using this system have been successfully reported [44] [45]. It can be expected that the support of major medical equipment manufacturers will greatly accelerate the acceptance and adoption of mixed reality in the operating room.

3.4 Training

The training of medical/surgical procedures is often a complex task requiring high levels of perceptual, cognitive, and sensorimotor skill. Training on patients has been the accepted practice for centuries, but today the use of intelligent mannequins and/or virtual reality simulators has become a widely accepted alternative. MR can offer added value to these tools, and MR-based training systems are starting to appear in training curricula. For example, the PalpSim system [46] is an augmented virtuality system in which the trainee can see their own hands palpating the virtual patient and holding a virtual needle - see Figure 4. This was an important factor for the interventional radiologists involved in the validation of the system, as it contributed to the fidelity of the training experience. PalpSim and other examples (e.g. [47] [48] [49] [50] [51]) demonstrate that MR can be used in training to improve the accuracy of carrying out a task.

Figure 4: Training simulator for needle puncture guided by palpation. An example of augmented virtuality where the scene of the virtual patient is augmented with the trainee's real hands.

3.5 Education

As well as procedures training, medical education encompasses basic knowledge acquisition such as learning human anatomy, the operation of equipment, communication skills with patients, and much more.

Recent reviews of MR and AR in medical education ([52], [53]) highlight the traditional reliance on workplace learning to master complex skills. However, safety issues, cost implications and didactics sometimes mean that training in a real-life context is not possible. Kamphuis et al. [54] discuss the use of AR via three case studies: visualizing human anatomy; visualizing 3D lung dynamics; and training laparoscopy skills. The latest work published in these areas is summarised and the important research questions that need to be addressed are identified as:

  • To what extent does an AR training system use a representative context, task, and behaviour compared with the real world?

  • What learning effects does the AR training system generate?

  • What factors influence the implementation of an AR training system in a curriculum and how does that affect learning?

  • To what extent do learning results acquired with the AR training system transfer to the professional context?

Currently most of the emphasis on AR in education is not on these questions but on the development, usability and initial implementation of the AR systems themselves. This indicates a significant knowledge gap in AR for medical education, which will become more important as medical schools move away from cadaveric dissection and embrace digital alternatives.

4 Technologies

All of the applications discussed in the previous section rely on a core set of technologies to enable an MR environment, enhanced with application-specific tools and devices.

4.1 Interaction

Perhaps the most natural interaction is to use our own hands to manipulate virtual objects in the augmented environment. Computer vision based tracking using markers attached to the fingertips has been used to drive virtual fingers with physical properties such as friction, density, surface, volume and collision detection for grasping and lifting interactions [55]. Wireless instrumented data gloves can also capture finger movements and have been used to perform zoom and rotation tasks, select 3D medical images, and even type on a floating virtual keyboard in MR [56]. An alternative is to use infrared (IR) emitters and LEDs to generate optical patterns that can be detected by IR cameras. This type of technology offers touchless interaction suitable for medical applications [57].

The sense of touch also provides important cues in medical procedures. One advantage of MR over virtual reality applications is that the physical world is still available and can be touched. Any haptic feedback (tactile or force feedback) from virtual objects, however, must make use of specialized hardware [58]. It may also be useful to augment haptics onto real objects (haptic augmented reality). Examples include simulating the feel of a tumour inside a silicone model [59] and simulating breast cancer palpation [60].

The deployment of interaction technologies within a clinical setting such as an operating theatre is a particular challenge. The equipment used must not be obtrusive to the procedure being performed, and often has to be sterilised.

4.2 Mobile AR

Article | Purpose | SDK | Device
Andersen et al (2016) [61] | Surgical Telementoring | OpenCV | Project Tango & Samsung Tablet
Rantakari et al (2015) [62] | Personal Health Poster | Vuforia | Samsung Galaxy S5
Kilgus et al (2015) [63] | Forensic Pathological Autopsy | MITK* | Apple iPad 2
Soeiro et al (2015) [64] | Brain Visualization | Metaio | Samsung Galaxy S4
Juanes et al (2014) [65] | Human Anatomy Education | Vuforia | Apple iPad
Kramers et al (2014) [66] | Neurosurgical Guidance | Vuforia | HTC Smartphone
mARble® (2014) [67] | Dermatology Education | Not Mentioned | Apple iPhone 4
Mobile RehAppTM (2014) [68] | Ankle Sprain Rehabilitation | Vuforia | Apple iPad
Virag et al (2014) [69] | Medical Image Visualization | JSARToolKit | Any device with browser
Grandi et al (2014) [70] | Surgery Planning | Vuforia | Apple iPad 3
Debarba et al (2012) [71] | Hepatectomy Planning | ARToolKit | Apple iPod Touch
Ubi-REHAB (2011) [72] | Stroke Rehabilitation | Not Mentioned | Android Smartphone
* Medical Imaging Interaction Toolkit

Table 2: Mobile Medical MR Applications

The rapid development of smartphones and tablets with high-quality in-built cameras is providing new opportunities for MR, particularly affordable AR applications. Software Development Kits (SDKs) such as ARToolKit [73] and Vuforia [74] are enabling more and more applications to appear. Mobile AR is expected to play a major role in medical/patient education and rehabilitation applications where the accuracy of tracking is not critical. Table 2 provides a summary of mobile medical AR apps currently available. Most use marker-based tracking, which relies on the position and focus of the markers and will not work in poor lighting conditions. The display of patient-specific data and 3D anatomical models will also be restricted on very small displays. Although some mobile AR applications are used in surgical planning and guidance, these are currently only prototypes built to demonstrate the feasibility of using AR, and have yet to gain regulatory approval.

4.3 Displays

The HMD has been used in many medical MR applications - see Table 3. A video see-through HMD captures video via a mono- or stereo-camera, and then overlays computer-generated content onto the real-world video. An optical see-through HMD allows users to directly see the real world while virtual objects are displayed through a transparent lens; users receive light from both the real world and the transparent lens and form a composite view of real and virtual objects. The Google Glass and Microsoft HoloLens (see Figure 5) are recent examples of optical see-through HMDs. Hybrid solutions that use optical see-through technology for display and video technology for object tracking may play a key role in future developments.
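The video see-through compositing step described above can be sketched as a per-pixel alpha blend; the frame, overlay and mask below are synthetic stand-ins for a camera image and rendered graphics:

```python
import numpy as np

def composite(frame: np.ndarray, overlay: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Per-pixel blend: alpha=1 shows the virtual overlay, alpha=0 the real frame."""
    a = alpha[..., None]  # broadcast the mask over the colour channels
    return (a * overlay + (1.0 - a) * frame).astype(frame.dtype)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)    # "camera" image
overlay = np.full((4, 4, 3), 200, dtype=np.uint8)  # "rendered" content
alpha = np.zeros((4, 4))
alpha[1:3, 1:3] = 1.0                              # region covered by a virtual object

out = composite(frame, overlay, alpha)
print(out[0, 0, 0], out[1, 1, 0])  # → 100 200
```

An optical see-through HMD performs the analogous combination optically, so occlusion of the real world by virtual content cannot be controlled per pixel in the same way.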

Figure 5: An MR brain anatomy lesson using the Microsoft HoloLens (see inset), an optical see-through HMD.
Article | Purpose | HMD Type | HMD Device | Tracking
Meng et al (2015) [75] | Vein Localization | Optical | Vuzix STAR 1200XL (modified) | Manually Aligned
Chang et al (2015) [76] | Remote Surgical Assistance | Video | Vuzix iWear VR920 | Optical Tracker + KLT*
Hsieh et al (2015) [77] | Head CT Visualization | Video | Vuzix Wrap 1200DXAR | KLT + ICP**
Wang et al (2015) [78] | Screw Placement Navigation | Optical | NVIS nVisor ST60 | Optical Tracker
Vigh et al (2014) [79] | Oral Implantology | Video | NVIS nVisor SX60 | Optical Tracker
Hu et al (2013) [80] | Surgery Guidance | Video | eMagin Z800 3D Visor | Marker
Abe et al (2013) [81] | Percutaneous Vertebroplasty | Video | HMD by Epson (model unknown) | Marker
Azimi et al (2012) [82] | Navigation in Neurosurgery | Optical | Goggles by Juxtopia | Marker
Blum et al (2012) [83] | Anatomy Visualization | Video | Not Mentioned | Gaze-tracker
Toshiaki (2010) [84] | Cognitive Disorder Rehabilitation | Video | Canon GT270 | No Tracking
Wieczorek et al (2010) [85] | MIS Guidance | Video | Not Mentioned | Optical Marker
Breton et al (2010) [86] | Treatment of Cockroach Phobia | Video | HMD by 5DT (model unknown) | Marker
Alamri et al (2010) [87] | Poststroke-Patient Rehabilitation | Video | Vuzix iWear VR920 | Marker
* Kanade-Lucas-Tomasi algorithm [88].
** Iterative Closest Point algorithm [89].

Table 3: HMD-based Medical MR Applications

An alternative solution is to make use of projectors, half-silvered mirrors, or screens to augment information directly onto a physical space without the need to carry or wear any additional display device. This is referred to as Spatial Augmented Reality (SAR) [90]. By augmenting information in an open space, SAR enables sharing and collaboration. Video see-through SAR has been used for MIS guidance, augmenting the video output from an endoscope. Optical see-through SAR is less common, but examples using semi-transparent mirrors for surgical guidance have been built (e.g. [91] [40]), as well as a magic mirror system for anatomy education [92]. Another SAR approach is to insert a beam splitter into a surgical microscope to allow users to see the microscope view with an augmented image from a pico-projector [93]. Finally, direct augmentation SAR employs a projector or laser transmitter to project images directly onto the physical object's surface. For example, the Spatial Augmented Reality on Person (SARP) system [94] projects anatomical structures directly onto a human subject.

Whatever display technology is used, particularly if it is a monocular display, a problem for MR is to provide an accurate perception of depth for the augmented information. Depth perception can significantly affect surgical performance [95]. In MIS, the problem is further compounded by large changes in luminance and the motion of the endoscope. Stereo endoscopes can be used to address this problem and they are available commercially. 3D depth information can then be recovered using a disparity map obtained from rectified stereo images during surgery [96]. Surgeons have to wear 3D glasses, HMDs, or use binocular viewing interfaces in order to observe the augmented video stream.
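The depth recovery step mentioned above follows from the rectified-stereo relation Z = f·B/d, where f is the focal length (pixels), B the baseline and d the disparity. A minimal sketch with illustrative values (not real endoscope parameters):

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_px: float, baseline_mm: float) -> np.ndarray:
    """Convert a disparity map (pixels) to a depth map (mm); zero disparity maps to inf."""
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0, focal_px * baseline_mm / disparity, np.inf)

disparity = np.array([[8.0, 16.0], [32.0, 0.0]])  # toy disparity map
depth = depth_from_disparity(disparity, focal_px=800.0, baseline_mm=4.0)
print(depth)  # closer tissue has larger disparity; zero disparity -> inf
```

In practice the disparity map itself comes from a stereo matcher run on the rectified endoscope images, which is the hard part under the luminance changes and texture-poor tissue noted above.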

4.4 Registration

Once the location for the augmented content has been determined, this content (often computer-generated graphics) is overlaid, or registered, into the real-world scene. Registration techniques usually involve an optimization step (minimizing an energy function) to reduce the difference between the virtual objects and the real objects; for example, using the 3D-3D Iterative Closest Point (ICP) [89] technique, or 2D-3D algorithms [97]. The latter are also effective for registering preoperative 3D data such as CT and MRI images with intra-operative 2D data such as ultrasound, projective X-ray (fluoroscopy), CT-fluoroscopy, and optical images. These methods usually involve marker-based (external fiducial markers), feature-based (internal anatomical landmarks) or intensity-based approaches that find a geometric transformation bringing the projection of a 3D image into the best possible spatial correspondence with the 2D images by optimizing a registration criterion.
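The optimization at the heart of ICP can be illustrated with the closed-form best-fit rigid transform for matched point pairs (the Kabsch algorithm, via SVD); full ICP alternates this step with nearest-neighbour matching until convergence. A minimal sketch with a toy point cloud:

```python
import numpy as np

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimising ||dst - (src @ R.T + t)||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)  # centre both clouds
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

# Toy check: rotate a small cloud 90 degrees about z, shift it, and recover the motion.
src = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([0.5, 0.0, 0.0])

R, t = best_rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))  # → True
```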

Figure 6: The monocular SLAM system used in MIS [98]. Left: Camera trajectory, 3D map and ellipses in 3D; Right: SLAM AR measurement, Map and ellipses over a sequence frame. Image courtesy of Óscar G. Grasa, Ernesto Bernal, Santiago Casado, Ismael Gil and J. M. M. Montiel.

Registration of virtual anatomical structures within MIS video is a much discussed topic [99] [100] [101] [32] [102]. However, due to the problems of a limited field of vision (FOV), organ deformation, occlusion, and the impossibility of marker-based tracking, registration in MIS is still an unsolved problem. One possible approach is to use the Simultaneous Localisation And Mapping (SLAM) algorithm [103] that was originally developed for autonomous robot navigation in an unknown space. AR faces a very similar challenge to robot navigation, i.e. both need to build a map of the surrounding environment and locate the current position and pose of the cameras [104]. However, applying SLAM to a single hand-held camera (such as an endoscopy camera) is more complicated than robot navigation, as a robot is usually equipped with odometry tools and moves more steadily and slowly than an endoscopy camera [105]. Grasa et al. [106] proposed a monocular SLAM 3D model in which they created a sparse 3D map of the abdominal cavity, with the motion of the endoscope computed in real time. This work was later improved to deal with the high outlier rate that occurs in real time and to reduce computational complexity [107]; refer to Figure 6. A Motion Compensated SLAM (MC-SLAM) framework for image-guided MIS has also been proposed [108], which predicted not only camera motions but also employed a high-level model for compensating periodic organ motion. This enabled estimation and compensation of tissue motion even outside of the camera's FOV. Based on this work, Mountney and Yang [109] implemented an AR framework for soft tissue surgery that used intra-operative cone beam CT and fluoroscopy as bridging modalities to register pre-operative CT images to stereo laparoscopic images through non-rigid biomechanically driven registration. In this way, manual alignment and fiducial markers were not required, and tissue deformation caused by insufflation and respiration during MIS was taken into account. Chen et al. [110] presented a geometry-aware AR environment in MIS by combining a SLAM algorithm with dense stereo reconstruction to provide a global surface mesh for intra-operative AR interactions such as area highlighting and measurement (see Figure 7).

Figure 7: Geometry information acquired by stereo vision provides a geometry-aware AR environment in MIS, enabling interactive AR applications such as intra-operative area highlighting and measurement. Note that the highlighted area (labeled in red) accurately follows the curved surface.  [110].
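The idea behind compensating periodic organ motion can be illustrated with a toy model: fit a sinusoid of known respiratory frequency to observed tissue displacements by linear least squares, then extrapolate it to predict motion at times (or locations) the camera cannot observe. This is a simplified stand-in for the high-level motion model in MC-SLAM, not the authors' implementation; the function names and the assumption of a known frequency are ours.

```python
import numpy as np

def fit_periodic_motion(t, d, freq):
    """Fit d(t) ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c by least squares.

    With the respiratory frequency f assumed known, the model is linear
    in (a, b, c), so an exact least-squares solution exists.
    """
    w = 2.0 * np.pi * freq
    A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    return coeffs

def predict_motion(t, coeffs, freq):
    """Evaluate the fitted periodic model at (possibly unseen) times t."""
    w = 2.0 * np.pi * freq
    a, b, c = coeffs
    return a * np.sin(w * t) + b * np.cos(w * t) + c
```

Once fitted on a few seconds of observed displacement, `predict_motion` can be queried at future times, mirroring how a periodic motion prior lets a system estimate tissue position outside the camera's FOV.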

4.5 Tracking

The tracking of real objects in a scene is an essential component of MR. However, occlusions (from instruments, smoke, blood), organ deformations (respiration, heartbeat)  [111] and the lack of texture (smooth regions and reflection of tissues) are all specific challenges for the medical domain.

Optical tracking, magnetic tracking, and planar markers (with patterns or bar codes) have all been used in medical applications of MR. Some optical markers are specially made with iodine and gadolinium elements so that they display high intensity in both X-ray/CT and MR images. However, an optical tracking system requires a free line of sight between the optical marker and the camera, making this technology unusable for MIS once the camera is inside the patient's body. Magnetic tracking in medical applications also lacks robustness, owing to interference caused by diagnostic devices and other ferromagnetic objects. Planar markers have been the most popular approach for tracking in AR to date and have been used successfully in medical applications (see, for example, Figure 8). However, care must be taken, as planar markers are also prone to occlusion and have a limited detection range and orientation.

Figure 8: A marker-based AR 3D guidance system for percutaneous vertebroplasty; the augmented red line and yellow-green line indicate the ideal insertion point and needle trajectory. Image courtesy of Yuichiro Abe, Shigenobu Sato, Koji Kato, Takahiko Hyakumachi, Yasushi Yanagibashi, Manabu Ito and Kuniyoshi Abumi [81].

Markerless tracking is an alternative approach that utilizes the real-world scene itself, employing computer vision algorithms to extract image features that serve as markers. Figure 9 depicts one example from an endoscopic video of a liver. The quality of markerless tracking depends on lighting conditions, viewing angle and image distortion, as well as on the robustness of the computer vision algorithm used. The performance of well-known feature detection descriptors used in computer vision was evaluated in the context of tracking deformable soft tissue during MIS [112]. The authors present a probabilistic framework that combines multiple descriptors and could reliably match significantly more features (even in the presence of large tissue deformation) than any individual descriptor.

Figure 9: The Speed-up Robust Feature (SURF) descriptor applied to endoscopic video of a liver. Image courtesy of Rosalie Planteféve, Igor Peterlik, Nazim Haouchine, Stéphane Cotin [113].
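The benefit of combining several descriptors can be sketched with a toy fusion rule: treat each descriptor's normalized similarity scores over the same candidate matches as independent likelihoods and take their product (a naive-Bayes combination). This is a simplification for illustration, not the probabilistic framework of [112]; the function name is ours.

```python
import numpy as np

def fuse_descriptor_scores(score_lists):
    """Fuse per-candidate match scores from several descriptors.

    Each row of `score_lists` holds one descriptor's similarity scores
    over the same set of candidate matches. Normalizing each row and
    taking the per-candidate product favours candidates that several
    descriptors agree on, which is the intuition behind probabilistic
    descriptor fusion.
    """
    S = np.asarray(score_lists, dtype=float)
    P = S / S.sum(axis=1, keepdims=True)   # per-descriptor normalization
    fused = P.prod(axis=0)                 # naive-Bayes style product
    return fused / fused.sum()             # renormalize to a distribution
```

A candidate that scores well under both descriptors dominates the fused distribution, whereas a candidate favoured by only one descriptor is down-weighted.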

If images are acquired over a series of time steps, it is also possible to use optical flow to compute the camera motion and track feature points [101]. Optical flow is defined as the distribution of apparent velocities of brightness patterns in an image and can be used to track the movement of each pixel based on changes in brightness. Some recent work in medical MR [114] [115] combines computationally expensive feature tracking with lightweight optical flow tracking to overcome performance issues. If a stereoscopic display is being used, techniques have been developed to provide more robust and precise tracking, such as a real-time visual odometry system using dense quadrifocal tracking [116]. In general, however, the accurate real-time tracking of tissue surfaces and realistic modelling of soft tissue deformation [117] remain active research areas.
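The brightness-constancy idea behind optical flow can be shown in a few lines: under the assumption Ix*u + Iy*v + It = 0, a single flow vector (u, v) for a window is the least-squares solution of a 2x2 linear system (the Lucas-Kanade method). The sketch below uses one window covering the whole image and wraps borders with `np.roll`, which is harmless for a centred pattern; this is an illustration, not the pipeline of [114] or [115].

```python
import numpy as np

def lucas_kanade(I1, I2):
    """Single-window Lucas-Kanade flow: one (u, v) for the whole image.

    Solves [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] [u, v]^T
         = -[sum IxIt, sum IyIt]
    under the brightness-constancy assumption Ix*u + Iy*v + It = 0.
    """
    # central differences for spatial gradients (borders wrap via roll)
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1                       # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # (u, v)
```

Applied to a smooth blob translated by half a pixel, the estimate lands close to the true sub-pixel shift, which is why optical flow is attractive as a lightweight complement to feature tracking.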

5 Research Challenges

The feasibility of using MR has been demonstrated by an ever-increasing number of research projects. This section summarises the ongoing challenges that currently prevent MR from becoming an accepted tool in everyday clinical use.

5.1 Enabling Technologies

Real-time performance, high precision and minimal latency are prerequisites in most medical applications. A typical MR system consists of multiple modules working together (image capture, detection and tracking, content rendering, image fusion, display, etc.), each of which has its own computational demands and can contribute to latency. Improvements to hardware and software continue to address these technology challenges. New-generation hardware devices such as the HoloLens, Magic Leap, and the next-generation Google Glass will encourage further investigation and expose new problems; the FOV of the HoloLens, for example, is already considered far too narrow for many applications.
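Because every module adds delay, end-to-end latency is a simple sum that must fit within a motion-to-photon budget. The sketch below makes this explicit; all module names and timings are illustrative assumptions, not measured figures from any system.

```python
# Hypothetical per-module latencies (ms) for an MR pipeline.
# The figures are illustrative only, not measurements.
pipeline_ms = {
    "image capture": 16.7,        # one 60 Hz camera frame
    "detection/tracking": 8.0,
    "content rendering": 11.1,
    "image fusion": 2.0,
    "display scan-out": 11.1,
}

total = sum(pipeline_ms.values())
budget = 50.0  # an example motion-to-photon budget, in ms
print(f"end-to-end latency: {total:.1f} ms (budget {budget} ms)")
```

Even with these optimistic per-module numbers the pipeline consumes almost the entire budget, which is why shaving milliseconds from any single stage matters.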

Many clinical procedures could benefit greatly from MR if patient-specific data can be accurately delivered to the clinician to help guide the procedure. In such a system, if the augmented content were superimposed in the wrong position, the clinician could be misled, causing a serious medical accident. Many researchers are achieving accuracy to within a few millimetres, which may be sufficient for some procedures and applications (an anatomy education tool, for example); other procedures will need sub-millimetre accuracy. Automatic setup and calibration then become even more critical. It remains challenging to find a balance between speed and accuracy, as both are crucial in medical applications of MR.

Accurate patient-specific data modelling is also required to provide fully detailed information. Offline high-fidelity image capture and 3D reconstruction can provide some of the patient-specific data needed for augmentation, but real-time high-fidelity online model reconstruction is also needed, for example, to handle tissue deformation. There is an urgent need for more research into real-time, high-fidelity 3D model reconstruction using online modalities such as video from endoscopic cameras, which would reduce the disparity between offline and real-time reconstruction performance. One way to address this challenge is to augment existing low-resolution real-time trackers with local information. Developing a large database of 3D patient models is another step towards bridging the gap. Artificial intelligence algorithms could be used to learn the relationship between online and offline image details so as to reconstruct 3D information from the endoscopic camera: the goal is to train a model on data acquired by the high-resolution offline modality that can then predict the 3D model details given online image capture. Breakthroughs in this area would provide both robustness and flexibility, as the training can be performed offline and then applied to any new patient.

5.2 MR for MIS

Currently a MIS surgeon has to continually look away from the patient to see the video feed on a nearby monitor. They are also hampered by occlusions from instruments, smoke and blood, as well as the lack of depth perception on video from the endoscope camera, and the indirect sense of touch received via the instruments being used.

The smooth surfaces and reflections of tissue, together with a lack of texture, make it hard to extract valid and robust feature points for tracking. The endoscope light source and its movement, as well as organ deformation caused by respiration and heartbeat, change the appearance of each key point over time and disrupt the feature-matching process. A general model cannot be used for tracking, as the shape and texture of an organ vary from person to person; furthermore, the limited FOV of endoscopic images restricts the use of model-based tracking and registration methods. Texture features are therefore often the only information available for tracking, yet it is impossible to photograph the tracking target (such as a liver) before the operation, as the target organs are located inside the body. Some approaches [97] [118] have shown that it is possible to use pre-operative CT/MRI images to perform 3D-to-2D registration against intra-operative X-ray fluoroscopy videos. However, these methods must transform the 3D CT/MRI image iteratively to find the best transformation, which consumes precious time. In addition, the patient's abdominal cavity is inflated with carbon dioxide gas to create the pneumoperitoneum, which deforms the original shape of the organs and complicates deformable registration. Nevertheless, these methods demonstrate that matching features of 3D CT/MRI images with endoscopy video may be possible, although it remains greatly challenging.
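The iterate-and-score loop at the heart of such 3D-to-2D registration can be reduced to a toy: project candidate poses of the 3D model through a pinhole camera and keep the one whose reprojection best matches the 2D observations. To stay short, the sketch searches a single unknown (a depth offset) on a grid; real intensity-based methods optimise a full 6-DoF pose against fluoroscopy images, which is exactly why they are iterative and slow. All names and parameters here are ours.

```python
import numpy as np

def project(pts3d, f=500.0):
    """Pinhole projection with focal length f, principal point at origin."""
    return f * pts3d[:, :2] / pts3d[:, 2:3]

def register_depth(pts3d, pts2d, z_range=(50.0, 200.0), steps=300):
    """Toy 3D-to-2D registration over one parameter (a depth offset tz).

    Each candidate tz is scored by the sum of squared reprojection
    errors; the loop mirrors the transform-project-compare iteration
    of intensity-based 3D/2D registration, on a single parameter.
    """
    best_tz, best_err = None, np.inf
    for tz in np.linspace(*z_range, steps):
        err = np.sum((project(pts3d + [0.0, 0.0, tz]) - pts2d) ** 2)
        if err < best_err:
            best_tz, best_err = tz, err
    return best_tz
```

Even this one-dimensional search needs hundreds of project-and-compare evaluations; a 6-DoF version multiplies that cost dramatically, illustrating why these methods struggle to run intra-operatively.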

The latest MR systems in many application domains use SLAM techniques, and this approach is the current gold standard. However, real-time SLAM requires 3D points from a rigid scene to estimate camera motion from the image sequence, and such a rigid scene is not available in the MIS scenario. Research is therefore needed into coping with the tracking and mapping of deforming soft tissue, and with sudden camera motions.

Although stereo endoscopes are available, the majority of MIS procedures still use monoscopic devices and depth perception remains a challenge. Recent work by Chen et al.  [119] attempts to tackle this problem by providing depth cues for a monocular endoscope using the SLAM algorithm but more work is required, particularly when large tissue deformations occur.

5.3 Medical HCI

HCI is an important aspect of MR. It encompasses speech and gesture recognition, which can be used to issue commands for controlling the augmented scene, and haptic interfaces in MR are also starting to emerge. All of these require constant, highly accurate system calibration, stability and low latency. One approach that explored visuo-haptic setups in medical training [120] took visual sensory thresholds into account to allow 100 ms of latency and one pixel of registration accuracy, minimizing the time-lagged movements that result from the differing delays of the data streams. With the new generation of portable AR headsets, in which mobile computing is central to the entire system setup, achieving the real-time computation needed for seamless haptics-AR integration will continue to be a challenge.

5.4 Computing Power

The capability of mobile graphics, with the move towards wearable wireless devices, is likely to be indispensable to the future of MR in medicine. Limited computing power, memory and energy budgets, however, will continue to be bottlenecks for real-time 3D mobile graphics, even with the advanced GPUs and hardware in current mobile devices. For example, the theoretical performance of the next generation of mobile GPUs is estimated at 400-500 GFLOPS, but power and heat limitations will reduce this by around 50%. A major requirement of future mobile applications will be extremely high-resolution displays (4K and beyond). For the realistic rendering, physics and haptic responses required by medical MR applications, a minimum of 30 fps (5000-6000 GFLOPS) is the performance target, representing an increase of more than 20 times over what is available from current state-of-the-art technology. The future of MR in medicine may therefore rely on cloud operations and parallelisation on a massive scale.
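The throughput gap quoted above can be sanity-checked with a few lines of arithmetic; the midpoints of the quoted ranges are our own simplifying assumption.

```python
# Sanity-check the mobile-GPU throughput arithmetic quoted above,
# taking the midpoint of each quoted range as a working assumption.
peak_gflops = (400 + 500) / 2      # next-gen mobile GPU, theoretical peak
effective = peak_gflops * 0.5      # ~50% lost to power and heat limits
target = (5000 + 6000) / 2         # GFLOPS needed for 30 fps medical MR
gap = target / effective
print(f"effective: {effective} GFLOPS, required: {target} GFLOPS, "
      f"gap: ~{gap:.0f}x")
```

The midpoint estimate gives a gap of roughly 24x, consistent with the "more than 20 times" figure in the text.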

5.5 Validation Studies

Any use of technology within a medical scenario must be proven to work, particularly where patients are involved. Nearly all of the surgical applications cited in this paper have carried out such studies on phantom models or cadavers, and the reported results are promising; the step of using MR for surgical guidance on live patients has yet to be made. The same is true of using MR for other forms of patient treatment. One of the few commercial tools currently available in the clinic, VeinViewer Vision, has produced mixed results in patient validation studies. For rehabilitation applications the situation is different: MR has already been shown to work with patients on both physical and neurological tasks, and rehabilitation is therefore a promising area for the adoption of MR and AR techniques in the near future.

As well as the technical challenges that have been the focus of this paper, there are other factors to overcome, for example, the frequent reluctance of the medical profession to embrace changes in their field [121]. Considerable momentum is now building, however, and we predict that the medical domain is on the cusp of adopting MR technologies into everyday practice.


  • [1] Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. Augmented reality: A class of displays on the reality-virtuality continuum. In Photonics for industrial applications, pages 282–292. International Society for Optics and Photonics, 1995.
  • [2] Paul Milgram and Fumio Kishino. A taxonomy of mixed reality visual displays. IEICE TRANSACTIONS on Information and Systems, 77(12):1321–1329, 1994.
  • [3] Ronald T Azuma. A survey of augmented reality. Presence: Teleoperators and virtual environments, 6(4):355–385, 1997.
  • [4] H. Fuchs, M. A. Livingston, R. Raskar, D. Colucci, K. Keller, A. State, J. R. Crawford, P. Rademacher, S. H. Drake, and A. A. Meyer. Augmented reality visualization for laparoscopic surgery. In W. M. Wells, A. Colchester, and S. Delp, editors, Medical Image Computing and Computer-Assisted Intervention - Miccai’98, volume 1496, pages 934–943. Springer Berlin Heidelberg, 1998.
  • [5] Feng Zhou, Henry Been-Lirn Duh, and Mark Billinghurst. Trends in augmented reality tracking, interaction and display: A review of ten years of ismar. In Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 193–202. IEEE Computer Society, 2008.
  • [6] DWF Van Krevelen and R Poelman. A survey of augmented reality technologies, applications and limitations. International Journal of Virtual Reality, 9(2):1, 2010.
  • [7] Ling-Li Li, Guohua Ding, Nan Feng, Ming-Huang Wang, and Yuh-Shan Ho. Global stem cell research trend: Bibliometric analysis as a tool for mapping of trends from 1991 to 2006. Scientometrics, 80(1):39–58, 2009.
  • [8] Jui-Long Hung and Ke Zhang. Examining mobile learning trends 2003–2008: A categorical meta-trend analysis using text mining techniques. Journal of Computing in Higher education, 24(1):1–17, 2012.
  • [9] Sai Kranthi Vanga, Ashutosh Singh, Brinda Harish Vagadia, and Vijaya Raghavan. Global food allergy research trend: a bibliometric analysis. Scientometrics, 105(1):203–213, 2015.
  • [10] Arindam Dey, Mark Billinghurst, Robert W. Lindeman, and J. Edward Swan II. A systematic review of usability studies in augmented reality between 2005 and 2014. 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Sep 2016.
  • [11] A Ganjihal Gireesh, MP Gowda, et al. Acm transactions on information systems (1989–2006): A bibliometric study. Information Studies, 14(4):223–234, 2008.
  • [12] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
  • [13] Elsevier. Content - Scopus - Elsevier.
  • [14] Thomson Reuters. Web of Knowledge - Real Facts - IP & Science - Thomson Reuters, 2016.
  • [15] Hirokazu Kato and Mark Billinghurst. Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In Augmented Reality, 1999.(IWAR’99) Proceedings. 2nd IEEE and ACM International Workshop on, pages 85–94. IEEE, 1999.
  • [16] Jannick P Rolland and Henry Fuchs. Optical versus video see-through head-mounted displays in medical visualization. Presence, 9(3):287–309, 2000.
  • [17] Thomas Olsson and Markus Salo. Online user survey on current mobile augmented reality applications. In Mixed and Augmented Reality (ISMAR), 2011 10th IEEE International Symposium on, pages 75–84. IEEE, 2011.
  • [18] Ivan E Sutherland. A head-mounted three dimensional display. In Proceedings of the December 9-11, 1968, fall joint computer conference, part I, pages 757–764. ACM, 1968.
  • [19] F. Cosentino, N. W. John, and J. Vaarkamp. An overview of augmented and virtual reality applications in radiotherapy and future developments enabled by modern tablet devices. Journal of Radiotherapy in Practice, 13:350–364, 2014.
  • [20] Min Joung Kim, Joon Min Park, Nuga Rhee, Sang Mo Je, Seong Hee Hong, Young Mock Lee, Sung Phil Chung, and Seung Ho Kim. Efficacy of veinviewer in pediatric peripheral intravenous access: a randomized controlled trial. European journal of pediatrics, 171(7):1121–1125, 2012.
  • [21] Peter Szmuk, Jeffrey Steiner, Radu B Pop, Alan Farrow-Gillespie, Edward J Mascha, and Daniel I Sessler. The veinviewer vascular imaging system worsens first-attempt cannulation rate for experienced nurses in infants and children with anticipated difficult intravenous access. Anesthesia & Analgesia, 116(5):1087–1092, 2013.
  • [22] Yee Mon Aung, Adel Al-Jumaily, and Khairul Anam. A novel upper limb rehabilitation system with self-driven virtual arm illusion. In Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE, pages 3614–3617. IEEE, 2014.
  • [23] M Shamim Hossain, Sandro Hardy, Atif Alamri, Abdulhameed Alelaiwi, Verena Hardy, and Christoph Wilhelm. Ar-based serious game framework for post-stroke rehabilitation. Multimedia Systems, pages 1–16, 2015.
  • [24] Carlos Vidrios-Serrano, Isela Bonilla, Flavio Vigueras-Gomez, and Marco Mendoza. Development of a haptic interface for motor rehabilitation therapy using augmented reality. In Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pages 1156–1159. IEEE, 2015.
  • [25] Benedict Tan and Oliver Tian. Short paper: Using bsn for tele-health application in upper limb rehabilitation. In Internet of Things (WF-IoT), 2014 IEEE World Forum on, pages 169–170. IEEE, 2014.
  • [26] Simona Caraiman, Andrei Stan, Nicolae Botezatu, Paul Herghelegiu, Robert Gabriel Lupu, and Alin Moldoveanu. Architectural design of a real-time augmented feedback system for neuromotor rehabilitation. In Control Systems and Computer Science (CSCS), 2015 20th International Conference on, pages 850–855. IEEE, 2015.
  • [27] Marta Kersten-Oertel, Pierre Jannin, and D Louis Collins. The state of the art of visualization in mixed reality image guided surgery. Computerized Medical Imaging and Graphics, 37(2):98–112, 2013.
  • [28] Benjamin J Dixon, Michael J Daly, Harley HL Chan, Allan Vescan, Ian J Witterick, and Jonathan C Irish. Inattentional blindness increased with augmented reality surgical navigation. American journal of rhinology & allergy, 28(5):433–437, 2014.
  • [29] Antonio Meola, Fabrizio Cutolo, Marina Carbone, Federico Cagnazzo, Mauro Ferrari, and Vincenzo Ferrari. Augmented reality in neurosurgery: a systematic review. Neurosurgical Review, pages 1–12, 2016.
  • [30] Ivan Cabrilo, Philippe Bijlenga, and Karl Schaller. Augmented reality in the surgery of cerebral aneurysms: a technical report. Operative Neurosurgery, 10(2):252–261, 2014.
  • [31] D Inoue, B Cho, M Mori, Y Kikkawa, T Amano, A Nakamizo, K Yoshimoto, M Mizoguchi, M Tomikawa, J Hong, et al. Preliminary study on the clinical application of augmented reality neuronavigation. Journal of Neurological Surgery Part A: Central European Neurosurgery, 74(02):071–076, 2013.
  • [32] Stéphane Nicolau, Luc Soler, Didier Mutter, and Jacques Marescaux. Augmented reality in laparoscopic surgical oncology. Surgical oncology, 20(3):189–201, 2011.
  • [33] Masahiko Nakamoto, Osamu Ukimura, Kenneth Faber, and Inderbir S Gill. Current progress on augmented reality visualization in endoscopic surgery. Current opinion in urology, 22(2):121–126, 2012.
  • [34] Anant S Vemuri, Jungle Chi-Hsiang Wu, Kai-Che Liu, and Hurng-Sheng Wu. Deformable three-dimensional model architecture for interactive augmented reality in minimally invasive surgery. Surgical endoscopy, 26(12):3655–3662, 2012.
  • [35] Rong Wang, Zheng Geng, Zhaoxing Zhang, and Renjing Pei. Visualization techniques for augmented reality in endoscopic surgery. In International Conference on Medical Imaging and Virtual Reality, pages 129–138. Springer, 2016.
  • [36] N. Haouchine, J. Dequidt, I. Peterlik, E. Kerrien, M. O.Berger, and S. Cotin. Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery. IEEE International Symposium on Mixed and Augmented Reality, 2013.
  • [37] N. Haouchine, J. Dequidt, L. Peterlik, E.Kerrien, M.-O.Berger, and S.Cotin. Deformation-based augmented reality for hepatic surgery. NextMed/Medicine Meets Virtual Reality, 20(284):182–188, 2013.
  • [38] Kate A Gavaghan, Matthias Peterhans, Thiago Oliveira-Santos, and Stefan Weber. A portable image overlay projection device for computer-aided open liver surgery. IEEE Transactions on Biomedical Engineering, 58(6):1855–1864, 2011.
  • [39] Ee Ping Ong, Jimmy Addison Lee, Jun Cheng, Beng Hai Lee, Guozhen Xu, Augustinus Laude, Stephen Teoh, Tock Han Lim, Damon WK Wong, and Jiang Liu. An augmented reality assistance platform for eye laser surgery. In Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pages 4326–4329. IEEE, 2015.
  • [40] Junchen Wang, Hideyuki Suenaga, Kazuto Hoshi, Liangjing Yang, Etsuko Kobayashi, Ichiro Sakuma, and Hongen Liao. Augmented reality navigation with automatic marker-free image registration using 3-d image overlay for dental surgery. Biomedical Engineering, IEEE Transactions on, 61(4):1295–1304, 2014.
  • [41] Giovanni Badiali, Vincenzo Ferrari, Fabrizio Cutolo, Cinzia Freschi, Davide Caramella, Alberto Bianchi, and Claudio Marchetti. Augmented reality as an aid in maxillofacial surgery: validation of a wearable system allowing maxillary repositioning. Journal of Cranio-Maxillofacial Surgery, 42(8):1970–1976, 2014.
  • [42] Philip Edgcumbe, Rohit Singla, Philip Pratt, Caitlin Schneider, Christopher Nguan, and Robert Rohling. Augmented reality imaging for robot-assisted partial nephrectomy surgery. In International Conference on Medical Imaging and Virtual Reality, pages 139–150. Springer, 2016.
  • [43] Jordan Bano, Tomohiko Akahoshi, Ryu Nakadate, Byunghyun Cho, and Makoto Hashizume. Augmented reality guidance with electromagnetic tracking for transpyloric tube insertion. In International Conference on Medical Imaging and Virtual Reality, pages 198–207. Springer, 2016.
  • [44] John M Racadio, Rami Nachabe, Robert Homan, Ross Schierling, Judy M Racadio, and Draženko Babić. Augmented reality on a c-arm system: A preclinical assessment for percutaneous needle localization. Radiology, 281(1):249–255, 2016.
  • [45] Adrian Elmi-Terander, Halldor Skulason, Michael Söderman, John Racadio, Robert Homan, Drazenko Babic, Nijs van der Vaart, and Rami Nachabe. Surgical navigation technology based on augmented reality and integrated 3d intraoperative imaging: A spine cadaveric feasibility and accuracy study. Spine, 41(21):E1303, 2016.
  • [46] Timothy R Coles, Nigel W John, Derek A Gould, and Darwin G Caldwell. Integrating haptics with augmented reality in a femoral palpation and needle insertion training simulation. Haptics, IEEE Transactions on, 4(3):199–209, 2011.
  • [47] Caitlin T Yeo, Tamas Ungi, U Paweena, Andras Lasso, Robert C McGraw, Gabor Fichtinger, et al. The effect of augmented reality training on percutaneous needle placement in spinal facet joint injections. IEEE Transactions on Biomedical Engineering, 58(7):2031–2037, 2011.
  • [48] Andrew Strickland, Katherine Fairhurst, Chris Lauder, Peter Hewett, and Guy Maddern. Development of an ex vivo simulated training model for laparoscopic liver resection. Surgical endoscopy, 25(5):1677–1682, 2011.
  • [49] Wen Tang, Tao Ruan Wan, Derek A. Gould, Thien How, and Nigel W. John. A stable and real-time nonlinear elastic approach to simulating guidewire and catheter insertions based on cosserat rod. IEEE Trans. Biomed. Engineering, 59(8):2211–2218, 2012.
  • [50] Kamyar Abhari, John SH Baxter, Elvis Chen, Ali R Khan, Terry M Peters, Sandrine de Ribaupierre, and Roy Eagleson. Training for planning tumour resection: augmented reality and human factors. Biomedical Engineering, IEEE Transactions on, 62(6):1466–1477, 2015.
  • [51] Irene Cheng, Rui Shen, Richard Moreau, Vicenzo Brizzi, Nathaniel Rossol, and Anup Basu. An augmented reality framework for optimization of computer assisted navigation in endovascular surgery. In Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE, pages 5647–5650. IEEE, 2014.
  • [52] Helen Monkman and Andre W Kushniruk. A see through future: augmented reality and health information systems. In ITCH, pages 281–285, 2015.
  • [53] Egui Zhu, Arash Hadadgar, Italo Masiello, and Nabil Zary. Augmented reality in healthcare education: an integrative review. PeerJ, 2:e469, 2014.
  • [54] C. Kamphuis, E. Barsom, M. Schijven, and N. Christoph. Augmented reality in medical education? Perspectives on Medical Education, pages 300–311, 2014.
  • [55] Poonpong Boonbrahm and Charlee Kaewrat. Assembly of the virtual model with real hands using augmented reality technology. In Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments, pages 329–338. Springer, 2014.
  • [56] Maria De Marsico, Stefano Levialdi, Michele Nappi, and Stefano Ricciardi. Figi: floating interface for gesture-based interaction. Journal of Ambient Intelligence and Humanized Computing, 5(4):511–524, 2014.
  • [57] Edimo Sousa Silva and Maria Andreia Formico Rodrigues. Gesture interaction and evaluation using the leap motion for medical visualization. In Virtual and Augmented Reality (SVR), 2015 XVII Symposium on, pages 160–169. IEEE, 2015.
  • [58] Timothy Coles, Dwight Meglan, and Nigel W John. The role of haptics in medical training simulators: a survey of the state of the art. Haptics, IEEE Transactions on, 4(1):51–66, 2011.
  • [59] Seokhee Jeon, Seungmoon Choi, and Matthias Harders. Rendering virtual tumors in real tissue mock-ups using haptic augmented reality. Haptics, IEEE Transactions on, 5(1):77–84, 2012.
  • [60] Seokhee Jeon, Benjamin Knoerlein, Matthias Harders, and Seungmoon Choi. Haptic simulation of breast cancer palpation: A case study of haptic augmented reality. In Mixed and Augmented Reality (ISMAR), 2010 9th IEEE International Symposium on, pages 237–238. IEEE, 2010.
  • [61] Daniel Andersen, Voicu Popescu, Maria Eugenia Cabrera, Aditya Shanghavi, Gerardo Gpmez, Sherri Marley, Brian Mullis, and Juan Wachs. Avoiding focus shifts in surgical telementoring using an augmented reality transparent display. In MMVR 22, pages 9–14, 2016.
  • [62] Juho Rantakari, Ashley Colley, and Jonna Häkkilä. Exploring ar poster as an interface to personal health data. In Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia, pages 422–425. ACM, 2015.
  • [63] Thomas Kilgus, Eric Heim, Sven Haase, Sabine Prüfer, Michael Müller, Alexander Seitel, Markus Fangerau, Tamara Wiebe, Justin Iszatt, Heinz-Peter Schlemmer, et al. Mobile markerless augmented reality and its application in forensic medicine. International journal of computer assisted radiology and surgery, 10(5):573–586, 2015.
  • [64] José Soeiro, Ana Paula Cláudio, Maria Beatriz Carmo, and Hugo Alexandre Ferreira. Visualizing the brain on a mixed reality smartphone application. In Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pages 5090–5093. IEEE, 2015.
  • [65] Juan A Juanes, Daniel Hernández, Pablo Ruisoto, Elena García, Gabriel Villarrubia, and Alberto Prats. Augmented reality techniques, using mobile devices, for learning human anatomy. In Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality, pages 7–11. ACM, 2014.
  • [66] Matthew Kramers, Ryan Armstrong, Saeed M Bakhshmand, Aaron Fenster, Sandrine de Ribaupierre, and Roy Eagleson. Evaluation of a mobile augmented reality application for image guidance of neurosurgical interventions. Stud Health Technol Inform, 196:204–8, 2014.
  • [67] Christoph Noll, Bernhard Häussermann, Ute von Jan, Ulrike Raap, and Urs-Vito Albrecht. Demo: Mobile augmented reality in medical education: An application for dermatology. In Proceedings of the 2014 Workshop on Mobile Augmented Reality and Robotic Technology-based Systems, MARS ’14, pages 17–18, New York, NY, USA, 2014. ACM.
  • [68] Jaime Andres Garcia and Karla Felix Navarro. The mobile rehapp: an ar-based mobile game for ankle sprain rehabilitation. In Serious Games and Applications for Health (SeGAH), 2014 IEEE 3rd International Conference on, pages 1–6. IEEE, 2014.
  • [69] Ioan Virag, Lacramioara Stoicu-Tivadar, and Elena Amaricai. Browser-based medical visualization system. In Applied Computational Intelligence and Informatics (SACI), 2014 IEEE 9th International Symposium on, pages 355–359. IEEE, 2014.
  • [70] Jeronimo Grandi, Anderson Maciel, Henrique Debarba, and Dinamar Zanchet. Spatially aware mobile interface for 3d visualization and interactive surgery planning. In Serious Games and Applications for Health (SeGAH), 2014 IEEE 3rd International Conference on, pages 1–8. IEEE, 2014.
  • [71] Henrique Galvan Debarba, Jerônimo Grandi, Anderson Maciel, and Dinamar Zanchet. Anatomic hepatectomy planning through mobile display visualization and interaction. In MMVR, pages 111–115, 2012.
  • [72] Younggeun Choi. Ubi-rehab: An android-based portable augmented reality stroke rehabilitation system using the eglove for multiple participants. In Virtual Rehabilitation (ICVR), 2011 International Conference on, pages 1–2. IEEE, 2011.
  • [73] DAQRI. Open source augmented reality sdk., 2016. [Accessed 29 4 2016].
  • [74] PTC. Vuforia. [Accessed 29 4 2016].
  • [75] Goh Chuan Meng, A Shahzad, NM Saad, Aamir Saeed Malik, and Fabrice Meriaudeau. Prototype design for wearable veins localization system using near infrared imaging technique. In Signal Processing & Its Applications (CSPA), 2015 IEEE 11th International Colloquium on, pages 112–115. IEEE, 2015.
  • [76] Tzyh-Chyang Chang, Chung-Hung Hsieh, Chung-Hsien Huang, Ji-Wei Yang, Shih-Tseng Lee, Chieh-Tsai Wu, and Jiann-Der Lee. Interactive medical augmented reality system for remote surgical assistance. Appl. Math, 9(1L):97–104, 2015.
  • [77] Chung-Hung Hsieh and Jiann-Der Lee. Markerless augmented reality via stereo video see-through head-mounted display device. Mathematical Problems in Engineering, 2015, 2015.
  • [78] Huixiang Wang, Fang Wang, Anthony Peng Yew Leong, Lu Xu, Xiaojun Chen, and Qiugen Wang. Precision insertion of percutaneous sacroiliac screws using a novel augmented reality-based navigation system: a pilot study. International orthopaedics, pages 1–7, 2015.
  • [79] Balázs Vigh, Steffen Müller, Oliver Ristow, Herbert Deppe, Stuart Holdstock, Jürgen den Hollander, Nassir Navab, Timm Steiner, and Bettina Hohlweg-Majert. The use of a head-mounted display in oral implantology: a feasibility study. International journal of computer assisted radiology and surgery, 9(1):71–78, 2014.
  • [80] Liang Hu, Manning Wang, and Zhijian Song. A convenient method of video see-through augmented reality based on image-guided surgery system. In Internet Computing for Engineering and Science (ICICSE), 2013 Seventh International Conference on, pages 100–103. IEEE, 2013.
  • [81] Yuichiro Abe, Shigenobu Sato, Koji Kato, Takahiko Hyakumachi, Yasushi Yanagibashi, Manabu Ito, and Kuniyoshi Abumi. A novel 3d guidance system using augmented reality for percutaneous vertebroplasty: technical note. Journal of Neurosurgery: Spine, 19(4):492–501, 2013.
  • [82] Ehsan Azimi, Jayfus Doswell, and Peter Kazanzides. Augmented reality goggles with an integrated tracking system for navigation in neurosurgery. In Virtual Reality Short Papers and Posters (VRW), 2012 IEEE, pages 123–124. IEEE, 2012.
  • [83] Troels Blum, Ralf Stauder, Ekkehard Euler, and Nassir Navab. Superman-like x-ray vision: towards brain-computer interfaces for medical augmented reality. In Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on, pages 271–272. IEEE, 2012.
  • [84] Toshiaki Tanaka. Clinical application for assistive engineering–mixed reality rehabilitation. Journal of Medical and Biological Engineering, 31(4):277–282, 2010.
  • [85] Matthias Wieczorek, André Aichert, Oliver Kutter, Christoph Bichlmeier, Jürgen Landes, Sandro Michael Heining, Ekkehard Euler, and Nassir Navab. Gpu-accelerated rendering for medical augmented reality in minimally-invasive procedures. In Bildverarbeitung für die Medizin, pages 102–106, 2010.
  • [86] Juani Bretón-López, Soledad Quero, Cristina Botella, Azucena García-Palacios, Rosa Maria Baños, and Mariano Alcañiz. An augmented reality system validation for the treatment of cockroach phobia. Cyberpsychology, Behavior, and Social Networking, 13(6):705–710, 2010.
  • [87] Atif Alamri, Jongeun Cha, and Abdulmotaleb El Saddik. Ar-rehab: An augmented reality framework for poststroke-patient rehabilitation. IEEE Transactions on Instrumentation and Measurement, 59(10):2554–2563, 2010.
  • [88] Carlo Tomasi and Takeo Kanade. Detection and tracking of point features. School of Computer Science, Carnegie Mellon Univ. Pittsburgh, 1991.
  • [89] Paul J Besl and Neil D McKay. A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992.
  • [90] Ramesh Raskar, Greg Welch, and Henry Fuchs. Spatially augmented reality. In First IEEE Workshop on Augmented Reality (IWAR’98), pages 11–20, 1998.
  • [91] Hongen Liao, Takashi Inomata, Ichiro Sakuma, and Takeyoshi Dohi. 3-d augmented reality for mri-guided surgery using integral videography autostereoscopic image overlay. IEEE Transactions on Biomedical Engineering, 57(6):1476–1486, 2010.
  • [92] Ma Meng, Pascal Fallavollita, Troels Blum, Ulrich Eck, Christian Sandor, Simon Weidert, Jens Waschke, and Nassir Navab. Kinect for interactive ar anatomy learning. In Mixed and Augmented Reality (ISMAR), 2013 IEEE International Symposium on, pages 277–278. IEEE, 2013.
  • [93] Chen Shi, Brian C Becker, and Cameron N Riviere. Inexpensive monocular pico-projector-based augmented reality display for surgical microscope. In Computer-Based Medical Systems (CBMS), 2012 25th International Symposium on, pages 1–6. IEEE, 2012.
  • [94] Adrian S Johnson and Yu Sun. Exploration of spatial augmented reality on person. In Virtual Reality (VR), 2013 IEEE, pages 59–60. IEEE, 2013.
  • [95] Patrick Honeck, Gunnar Wendt-Nordahl, Jens Rassweiler, and Thomas Knoll. Three-dimensional laparoscopic imaging improves surgical performance on standardized ex-vivo laparoscopic tasks. Journal of Endourology, 26(8):1085–1088, Aug 2012.
  • [96] Danail Stoyanov, Ara Darzi, and Guang-Zhong Yang. A practical approach towards accurate dense 3d depth recovery for robotic laparoscopic surgery. Computer Aided Surgery, 10(4):199–208, 2005.
  • [97] Primoz Markelj, Dejan Tomaževič, Bostjan Likar, and Franjo Pernuš. A review of 3d/2d registration methods for image-guided interventions. Medical image analysis, 16(3):642–661, 2012.
  • [98] Oscar G Grasa, Ernesto Bernal, Santiago Casado, Iñigo Gil, and JMM Montiel. Visual slam for handheld monocular endoscope. IEEE Transactions on Medical Imaging, 33(1):135–146, 2014.
  • [99] Pablo Lamata, Adinda Freudenthal, Alicia Cano, Denis Kalkofen, Dieter Schmalstieg, Edvard Naerum, Eigil Samset, Enrique J Gómez, Francisco M Sánchez-Margallo, Hugo Furtado, et al. Augmented reality for minimally invasive surgery: overview and some recent advances. InTech Open Access Publisher, 2010.
  • [100] Jacques Marescaux and Michele Diana. Next step in minimally invasive surgery: hybrid image-guided surgery. Journal of pediatric surgery, 50(1):30–36, 2015.
  • [101] Daniel J Mirota, Masaru Ishii, and Gregory D Hager. Vision-based navigation in image-guided interventions. Annual review of biomedical engineering, 13:297–319, 2011.
  • [102] Lucio Tommaso De Paolis and Giovanni Aloisio. Augmented reality in minimally invasive surgery. In Advances in Biomedical Sensing, Measurements, Instrumentation and Systems, pages 305–320. Springer, 2010.
  • [103] MWMG Dissanayake, Paul Newman, Steven Clark, Hugh F Durrant-Whyte, and Michael Csorba. A solution to the simultaneous localization and map building (slam) problem. IEEE Transactions on Robotics and Automation, 17(3):229–241, 2001.
  • [104] Robert Castle, Georg Klein, and David W Murray. Video-rate localization in multiple maps for wearable augmented reality. In Wearable Computers, 2008. ISWC 2008. 12th IEEE International Symposium on, pages 15–22. IEEE, 2008.
  • [105] Georg Klein and David Murray. Parallel tracking and mapping for small ar workspaces. In Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on, pages 225–234. IEEE, 2007.
  • [106] Oscar G Grasa, J Civera, A Guemes, V Munoz, and JMM Montiel. Ekf monocular slam 3d modeling, measuring and augmented reality from endoscope image sequences. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 2, 2009.
  • [107] Oscar G Grasa, Javier Civera, and JMM Montiel. Ekf monocular slam with relocalization for laparoscopic sequences. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 4816–4821. IEEE, 2011.
  • [108] Peter Mountney and Guang-Zhong Yang. Motion compensated slam for image guided surgery. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2010, pages 496–504. Springer, 2010.
  • [109] Peter Mountney, Johannes Fallert, Stephane Nicolau, Luc Soler, and Philip W. Mewes. An augmented reality framework for soft tissue surgery. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014, 2014.
  • [110] Long Chen, Wen Tang, and Nigel W. John. Real-time geometry-aware augmented reality in minimally invasive surgery. Healthcare Technology Letters, 2017.
  • [111] Gustavo A Puerto-Souza and Gian-Luca Mariottini. A fast and accurate feature-matching algorithm for minimally-invasive endoscopic images. IEEE Transactions on Medical Imaging, 32(7):1201–1214, 2013.
  • [112] Peter Mountney, Benny Lo, Surapa Thiemjarus, Danail Stoyanov, and Guang-Zhong Yang. A probabilistic framework for tracking deformable soft tissue in minimally invasive surgery. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007, 10(Pt 2):34–41, 2007.
  • [113] Rosalie Plantefève, Igor Peterlik, Nazim Haouchine, and Stéphane Cotin. Patient-specific biomechanical modeling for guidance during minimally-invasive hepatic surgery. Annals of biomedical engineering, 44(1):139–153, 2016.
  • [114] Nazim Haouchine, Stephane Cotin, Igor Peterlik, Jeremie Dequidt, Mario Sanz Lopez, Erwan Kerrien, and Marie-Odile Berger. Impact of soft tissue heterogeneity on augmented reality for liver surgery. IEEE Transactions on Visualization and Computer Graphics, 21(5):584–597, 2015.
  • [115] Danail Stoyanov. Stereoscopic scene flow for robotic assisted minimally invasive surgery. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2012, pages 479–486. Springer, 2012.
  • [116] Ping-Lin Chang, Ankur Handa, Andrew J. Davison, Danail Stoyanov, and Philip “Eddie” Edwards. Robust real-time visual odometry for stereo endoscopy using dense quadrifocal tracking. In Information Processing in Computer-Assisted Interventions, pages 11–20. Springer Science + Business Media, 2014.
  • [117] Wen Tang and Tao Ruan Wan. Constraint-based soft tissue simulation for virtual surgical training. IEEE Transactions on Biomedical Engineering, 61(11):2698–2706, 2014.
  • [118] Xin Chen, Lejing Wang, Pascal Fallavollita, and Nassir Navab. Precise x-ray and video overlay for augmented reality fluoroscopy. International journal of computer assisted radiology and surgery, 8(1):29–38, 2013.
  • [119] Long Chen, Wen Tang, Nigel W. John, Tao Ruan Wan, and Jian Jun Zhang. Augmented reality for depth cues in monocular minimally invasive surgery. arXiv preprint, abs/1703.01243, 2017.
  • [120] M. Harders, G. Bianchi, B. Knoerlein, and G. Szekely. Calibration, registration, and synchronization for high precision augmented reality haptics. IEEE Transactions on Visualization and Computer Graphics, 15(1):138–149, January 2009.
  • [121] Michael Gillam, Craig Feied, Jonathan Handler, Eliza Moody, Ben Shneiderman, Catherine Plaisant, Mark S Smith, and John Dickason. The healthcare singularity and the age of semantic medicine. The Fourth Paradigm: The Future of Information Science and Data Intensive Computing, pages 57–64, 2009.