Augmented-Reality-Based Visualization of Navigation Data of Mobile Robots on the Microsoft Hololens – Possibilities and Limitations

The demand for mobile robots has rapidly increased in recent years due to their flexibility and the wide variety of application fields compared to static robots. To deal with complex tasks such as navigation, they work with large amounts of different sensor data, making them difficult to operate for non-experts. To enhance user understanding and human-robot interaction, we propose an approach to visualize the navigation stack within a cutting-edge 3D Augmented Reality device, the Microsoft Hololens. Relevant navigation stack data, including laser scan, environment map, and path planning data, are visualized in 3D within the head-mounted device. Based on this prototype, we evaluate the Hololens in terms of its computational capabilities and limitations when dealing with large amounts of real-time data. The results show that the Hololens is capable of properly visualizing large amounts of navigation stack sensor data in 3D. However, there are limitations when transferring and displaying different kinds of data simultaneously.


I Introduction

The usage of mobile robots has increased in recent years across all sectors of society, including industry, due to their flexibility and the variety of use cases they can operate in. Future factories will rely on mobile robots for tasks such as component procurement, transportation, or commissioning [19] [21]. Due to their complexity, operating and understanding mobile robots is still a privilege of experts [11]. Nonetheless, understanding and operation, not just for researchers and developers but also for employees with little technical background or prior instruction, is paramount to making future interaction and collaboration processes more time- and cost-efficient, easy to operate, and safe [14]. Accordingly, Augmented Reality (AR) has been the subject of various scientific publications which have demonstrated its great potential to enhance efficiency in human-robot interaction (HRI) and collaboration and to support understanding [12], [13], [6]. AR can aid the user with spatial information combined with intuitive interaction technology, e.g. gestures [4]. Our previous work focused on AR-based simplification of robot programming with the help of visualized spatial information and intuitive gesture commands [16]. Other work used AR for enhanced visualization of robot data or for multimodal teleoperation [8] [9]. Due to limited computational power, most works provided AR on handheld devices or external monitors in 2D. However, 3D visualization has proven to be more intuitive and to support understanding even further compared to 2D display [7] [24].
Head-mounted displays (HMDs) can visualize 3D data directly within the user's gaze, and this spatial display of information supports understanding. Furthermore, HMDs have the advantage that the user keeps both hands free for other tasks. This could, for instance, help surgeons view relevant data within their gaze while performing surgery. Regarding mobile robotics, navigation is one important aspect to consider. It relies on numerous kinds of sensor data, including laser scans and the environment map. The visualization of 3D sensor data for robot navigation within an HMD can increase user understanding, reduce operation cycles, and make monitoring and maintenance more effective.
One of the main bottlenecks of state-of-the-art HMDs is their limited computational power. The visualization of data used for mobile robot navigation is particularly demanding, as real-time data such as laser scans change continuously. On this account, this paper's motivation is to evaluate the possibility of displaying navigation stack data on a state-of-the-art head-mounted AR device within the real operating environment of a mobile robot. To this end, we propose a way to display relevant navigation stack data on the Microsoft Hololens. Finally, we evaluate the Hololens in terms of its computational capabilities to give insight into whether it can appropriately visualize navigation stack sensor data for mobile robots.
The rest of the paper is structured as follows. Sec. II gives an overview of related work. Sec. III presents the conceptual design of our approach, while Sec. IV describes the implementation of the prototype, with a demonstration in Sec. V. Sec. VI describes the experimental setup, followed by Sec. VII, where the results are evaluated. Sec. VIII discusses the findings, and Sec. IX concludes the paper.

II Related Work

Due to the huge potential of AR, its integration into various processes and scenarios has been evaluated. Hashimoto et al. [12] proposed a teleoperation use case for mobile robots using gesture commands on a tablet. The researchers conducted a study where users controlled a robot with gestures to solve tasks. The study found enhanced understanding and simplified operation when using the proposed AR prototype to control the robot with gesture commands compared to conventional methods. However, participants requested richer data visualization to enhance robot understanding even further. Webel et al. [25] introduced an AR application for training in maintenance and assembly tasks and conducted a case study observing how AR-based training of manufacturing workers improved on training without AR. The results show that AR has great potential for reducing error rates and performance time due to tactile feedback and rich virtual information display for guidance. Owing to computational constraints, the majority of the aforementioned work achieved AR through handheld devices or external monitors in 2D. Various works, including those of Fuchs et al. [7] and Velayutham et al. [24], have shown the advantages of 3D visualization compared to 2D. Fuchs et al. proved this through a study on medical surgery, where participants using 3D visualization of instruction sets were considerably faster and more accurate. Velayutham et al. conclude that 3D visualization for medical surgery results in faster operation times due to better understanding. Head-mounted devices (HMDs) have emerged in recent years, providing the possibility of displaying 3D data in the real environment. Furthermore, they have the advantage of leaving the user with both hands free for other tasks while spatial information is displayed directly in the user's gaze. The Hololens, introduced by Microsoft in 2016, was the first stand-alone AR HMD. It contains its own CPU as well as a holographic processing unit (HPU), along with 2 GB of RAM and 64 GB of flash memory. Because it can work remotely without any external entity, the Hololens has been widely considered for integration to enhance HRI; this also brings advantages such as freedom of navigation and independent operation. Previous work, including that of Liu et al. [17], Coppens [5], and Vassallo et al. [23], has evaluated general technical aspects of the Hololens such as the accuracy of spatial mapping or gaze and gesture commands. Vassallo et al. focus on evaluating hologram placement accuracy and spatial mapping and conclude that the device has great potential in industrial setups. Guhl et al. [10] proposed a framework design for working with robots and head-mounted 3D AR devices. The challenge of establishing communication between the head-mounted device and the robot was solved by using a 'middleman' instance to transfer information between the two entities. Krupke et al. [15] proposed a multimodal framework using the ROSSharp framework to achieve communication between the robot and the AR device without an external entity. This was demonstrated for the UR5 and the Hololens using gaze and gesture input. The authors demonstrate the enhancement and advantages of using HMDs to work with robots. In the context of robot sensor data visualization, only Thorstensen [22] dealt with visualization within an AR device. The author used the HTC Vive to stream 2D camera frames into the HMD and observed positive effects on user understanding. Sauer et al. [20] proposed an integration of the Hololens for application in surgical interventions. The authors overlaid 3D anatomical models directly onto real organs and showed high potential for improving surgeons' actions. In the area of sensor data visualization for mobile robots, no paper was found that evaluates the capabilities of the Hololens. Considering the state of the art in head-mounted device integration into robotics for sensor data visualization, this paper's purpose is to evaluate the possibilities and limitations of the Hololens as a stand-alone AR device for sensor data visualization of mobile robots. This is done in the context of visualizing the navigation stack for better user understanding.

III Conceptual Design

As stated before, the main purpose of this paper is to evaluate the Hololens' capability of visualizing the navigation stack, which contains large and continuously changing sensor data such as laser scan data, the environment map, and path planning information. This is done in the context of more intuitive human-robot interaction and better robot understanding. The following data are key parameters of the navigation stack: first, laser scan data; second, environment data; third, navigation data. Laser scan data is evaluated for position and obstacle tracking. This is done in combination with an internal robot map: the robot continuously compares its laser scan data with the existing internal map to localize itself. For global and local path planning, the combination of laser scan and map is used. Referring to the work of Thorstensen [22], visualizing laser scan data has proven to help user understanding. The internal robot map should be visualized to provide additional understanding of the robot environment for obstacle awareness and to aid navigation planning. Monferrer et al. [18] conclude that visualizing navigation path data improves the quality of human-robot interaction and understanding because future robot movement is spatially displayed. In our case, navigation data includes the path the robot will take. This helps the user understand the robot's future intentions, which is of great importance, e.g., in environments with many robots. For a correct visualization, two important things have to be considered: first, the correct alignment of the coordinate systems of the two entities, robot and Hololens; second, the data transmission from ROS to the Hololens and vice versa.

III-A Hardware Setup

The hardware setup consists of a mobile robot and a head-mounted AR device, the Microsoft Hololens. We work with a KUKA youBot mobile robot running ROS Hydro on Ubuntu 12.04. As the development environment for the Hololens, we use a Windows notebook with Unity3D 2018 installed, on which the application was developed. All preprocessing of sensor data is done directly on the ROS side. Both entities, robot and Hololens, are connected to the same network and communicate via WiFi.

III-B Coordinate System Alignment

Fig. 1: Concept of the Coordinate System Alignment

The majority of previous work, including [15] and [10], relied on marker detection for pose estimation of the robot. In our case, however, the mobile robot also moves, so continuous localization of the robot must be ensured. The main challenge is that both entities are mobile, and a single marker-tracking approach as used in similar works is not sufficient. As stated before, working with the navigation stack requires a map. Thus, the ROS map produced by RViz has to be considered as a third entity, because all sensor data retrieved from ROS topics is expressed with respect to the map coordinate system. We propose an alignment composed of two separate steps: an initial alignment of the robot and the Hololens via marker detection, followed by continuous alignment using the spatial anchor capability of the Hololens. For this, the following relation is considered:

T_map^Hololens = T_robot^Hololens · T_map^robot   (1)

Here, T_map^Hololens is the transformation between the Hololens position and the map, T_robot^Hololens is the pose estimate between the robot and the Hololens, and T_map^robot is the transformation from the robot to the origin of the map. The conception is depicted in Fig. 1.

IV Implementation

IV-A Communication

Since ROS and the Hololens run on different operating systems, an appropriate communication channel for message exchange is needed. Therefore, the Rosbridge protocol was used for external communication; it follows a specific JSON notation for message exchange.
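To illustrate this notation, the sketch below builds Rosbridge-style subscribe and publish messages. The topic and message-type names are illustrative assumptions, not values taken from the paper.

```python
import json

# Sketch of the JSON operations the Rosbridge protocol expects.
# Topic and message-type names below are illustrative assumptions.

def make_subscribe(topic, msg_type):
    """Build a rosbridge 'subscribe' request."""
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def make_publish(topic, msg):
    """Build a rosbridge 'publish' request carrying a ROS message as JSON."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Subscribe to the laser scanner and publish a navigation goal.
sub = make_subscribe("/scan", "sensor_msgs/LaserScan")
goal = make_publish("/move_base_simple/goal",
                    {"pose": {"position": {"x": 1.0, "y": 2.0, "z": 0.0}}})
```

In a running system, such strings would be sent over a WebSocket connection to the rosbridge server.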

Fig. 2: ROS topics and Hololens handling scripts

ROSSharp [3], an open-source project by Siemens, was used to simplify the process. It provides a framework for seamless communication between the two systems using WebSockets. It offers specific publisher and subscriber classes which publish or subscribe to ROS topics, sending and receiving messages in the JSON format required by Rosbridge. Handling scripts can be attached to these classes to further process the received data. The topics used in this work as well as the associated handling scripts are depicted in Fig. 2.

IV-B Coordinate System Alignment

As stated before, the challenge is to align the two coordinate systems of the robot and the Hololens to continuously ensure correct visualization of all data. Since both entities can move, we propose an alignment composed of two steps: first, initial pose detection with ArUco markers [1]; second, using the spatial anchor capability of the Hololens to place a virtual anchor for continuous alignment of the Hololens and the map. Spatial anchors can memorize the exact position of any point in the environment, even after the application is terminated. This is achieved with an internal SLAM algorithm based on the multiple environment cameras; the internal SLAM starts scanning the whole environment when the Hololens starts up. The anchor acts as a common reference point and is placed based on the transformation between robot and map. For ArUco marker tracking on the Hololens, we used the open-source implementation of [2], which provides appropriate marker tracking for the initial pose alignment between robot and Hololens. For the robot-to-map transformation, we use the Adaptive Monte Carlo Localization (AMCL) package of ROS, which provides the exact position of the robot with respect to the robot map. The spatial anchor is placed at that position, providing a reference point for the Hololens.
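The anchor placement amounts to composing planar rigid transforms: the marker detection gives the robot pose in the Hololens frame, AMCL gives the robot pose in the map frame, and their composition yields the map frame in Hololens coordinates. A minimal sketch, with made-up numeric poses for illustration:

```python
import math

def pose_to_matrix(x, y, theta):
    """Homogeneous 3x3 matrix for a planar pose (translation x, y; rotation theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a · b of two 3x3 homogeneous transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(t):
    """Inverse of a planar rigid transform: (R, p) -> (R^T, -R^T p)."""
    c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

# Robot pose in the Hololens frame, from the initial ArUco detection (assumed values).
holo_T_robot = pose_to_matrix(0.5, 0.0, 0.0)
# Robot pose in the map frame, from AMCL (assumed values).
map_T_robot = pose_to_matrix(2.0, 1.0, math.pi / 2)
# Map frame expressed in the Hololens frame: the spatial anchor is placed here.
holo_T_map = compose(holo_T_robot, invert(map_T_robot))
```

This mirrors Eq. (1): the map-to-Hololens transform is the composition of the robot-to-Hololens estimate with the inverted map-to-robot localization.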

IV-C Sensor Visualization

After correct alignment of the coordinate systems is ensured, the sensor data can be visualized within the Hololens. For that, data from ROS has to be converted, interpreted, and visualized by the Hololens. The application is developed in Unity. To visualize the incoming data, 3D shapes, the so-called GameObjects, are used within Unity; we used 3D spheres to visualize all data. A back end converts the raw sensor data from the laser scanner of the youBot; this is done directly on the ROS side. First, the raw laser scan data is visualized by a Unity script which interprets the angles and corresponding ranges and generates a GameObject at each resulting position. For efficient processing, Unity meshes are used. The environment map of RViz is an occupancy grid which must be converted into a point cloud, since occupancy grid data only provides three different values indicating whether each pixel is occupied, free, or unknown. A script to interpret and transform these values into a point cloud was written and executed on the ROS side. For the exact positions, the size of the whole map is needed, which can be extracted from the YAML file created along with RViz maps. In our case, the map size was 1060x448 pixels, and each pixel represents 0.02 meters. The preprocessed data is then sent to the Hololens for visualization.
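The two conversions described above, laser scan angles and ranges to Cartesian points, and occupancy grid cells to a point cloud, can be sketched as follows. The field names mirror the ROS LaserScan and OccupancyGrid messages, but the occupancy threshold and origin handling are simplifying assumptions:

```python
import math

def scan_to_points(angle_min, angle_increment, ranges, range_max):
    """Convert LaserScan-style (angle, range) pairs into 2D points."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r < range_max:          # drop invalid / out-of-range readings
            a = angle_min + i * angle_increment
            points.append((r * math.cos(a), r * math.sin(a)))
    return points

def grid_to_points(data, width, resolution, origin=(0.0, 0.0), occupied=50):
    """Convert a row-major OccupancyGrid (values -1 / 0..100) into points
    at the centers of occupied cells."""
    points = []
    for idx, value in enumerate(data):
        if value >= occupied:
            col, row = idx % width, idx // width
            points.append((origin[0] + col * resolution,
                           origin[1] + row * resolution))
    return points

# The paper's map: 1060 x 448 cells at 0.02 m per cell, i.e. about 21.2 m x 8.96 m.
map_width_m = 1060 * 0.02
```

On the Hololens side, each returned point would then be instantiated as a sphere or mesh vertex in the aligned coordinate frame.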

Fig. 3: Application Workflow

The navigation data is visualized in the form of the robot's navigation path. This information is extracted from the global path planner topic of the youBot navigation stack. The path is downsampled to fewer positions and put into an array which is then sent to the Hololens for visualization.
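The downsampling step can be sketched as picking a fixed number of evenly spaced poses along the planner's dense path; the limit of 50 points is an assumed value, not one stated in the paper:

```python
def downsample_path(poses, max_points=50):
    """Reduce a dense planner path to at most max_points evenly spaced poses,
    always keeping the first and last pose."""
    if len(poses) <= max_points:
        return list(poses)
    step = (len(poses) - 1) / (max_points - 1)
    return [poses[round(i * step)] for i in range(max_points)]

# A dense path of 200 planner poses, thinned before transmission to the Hololens.
dense = [(0.01 * i, 0.0) for i in range(200)]
sparse = downsample_path(dense, max_points=50)
```

Thinning the path keeps the transmitted array small while preserving the start, goal, and overall shape of the planned route.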
Fig. 3 depicts the workflow of the whole application. First, the initial alignment of Hololens and robot is done via ArUco marker tracking, placing a virtual robot at the position of the real one. Afterwards, a spatial anchor is placed based on information from the robot pose subscriber, which subscribes to the Adaptive Monte Carlo Localization topic, thus aligning the two coordinate systems for further visualization. After these initial steps, data visualization and robot movement control can be executed using the built-in interactive user interface containing virtual buttons for each sensor type. For robot navigation, the user input is published to the navigation topic. Robot movement is tracked via the odometry topic, and based on that, the virtual robot moves along with the real one.

V Prototype

Fig. 4: Environment Map Visualization

This section demonstrates the implemented prototype. Different operation modes of sensor data visualization are shown to validate the functionality of the prototype. Fig. 4 shows the visualization of the environment map, while Fig. 5 shows the path visualization after placement of a goal position. A goal position is defined by dragging the blue arrow to a location within the room, as illustrated in Fig. 5. For that, the spatial mapping capability of the Hololens is turned on to determine possible placement locations within the room. After goal definition, the location is sent to ROS, triggering robot movement.

Fig. 5: Navigation Goal Definition and Path Visualization
Fig. 6: Laser scan (green) and environment map (magenta) visualization

Fig. 6 shows the visualization of laser scan data in green; environment map data is shown in magenta. The path information is visualized once the user defines a destination by dragging the blue arrow to a 3D position in the room. As Fig. 6 depicts, the laser scan data traces nearby objects as well as distant walls, while the environment map (magenta) outlines the shape of the whole room.

VI Experiment

To acquire relevant data for evaluating performance aspects, we defined three parameters: first, frames per second (fps); second, computational load; third, time to execution of robot movement. These parameters were selected because they give relevant insight into the performance of the navigation visualization application. Frames per second is relevant because it reflects the visual quality of the observed scene and whether large amounts of data affect the visual quality of the application. To evaluate computational load, we observe the CPU usage. Lastly, time to execution is an indicator of proper and fast data transmission between the two entities. It covers the time from defining the goal on the Hololens to the moment when the path is visible to the user on the AR headset. Our hypothesis is that if the application demands too much computational power, communication will also be slowed down.
To acquire relevant data, we defined certain positions with different views in order to get different robot poses and cover all possible laser scan setups. Then the application is started and the above parameters are measured for every position. Application CPU usage as well as fps can be accessed through the Hololens developer portal. The time until execution of robot movement after the user defines a navigation goal is measured. For each measurement, five different visualization modes are used:

  1. Without any sensor data visualization

  2. With laser scan visualization

  3. With environment map visualization

  4. With laser scan and environment map visualization

  5. With laser scan, environment and navigation visualization

The robot is driven to different positions, and each time the aforementioned parameters are collected and an average value is calculated. This is done a total of 20 times to obtain meaningful results.
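The averaging over repeated runs can be sketched as follows; the raw values below are invented placeholders, not the paper's measurements:

```python
from statistics import mean, stdev

# Hypothetical time-to-execution samples (seconds) for two of the five modes.
runs = {
    "no_visualization": [0.5, 0.4, 0.6],
    "laser_scan": [1.8, 1.7, 2.0],
}

# Mean and standard deviation per visualization mode, as reported per run set.
summary = {mode: (mean(values), stdev(values)) for mode, values in runs.items()}
```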

VII Evaluation

In this section, we evaluate the performance aspects of the Hololens for data visualization and the communication between ROS and the Hololens. We evaluate the performance of the Hololens by displaying the different kinds of sensor data, each on its own and afterwards simultaneously. The visual quality of the application is evaluated by observing the fps. For the overall performance of the Hololens, we evaluate the CPU usage for each visualization mode. To evaluate to what extent the communication between the two entities is affected, we measure the time to movement execution of the robot after defining a navigation goal on the Hololens.

1) Frames per second

As an important parameter for the visual performance of the Hololens, the fps was evaluated for the different visualization modes. The results show that the fps ranges between 57 and 60, even when displaying laser scan and environment map data. This leads to the conclusion that the Hololens is capable of visualizing large numbers of scene objects without any visual degradation.

2) CPU usage

Fig. 7: CPU usage for different visualization modes

The results in Fig. 7 show a rise in CPU usage as more data is visualized. Without any visualization, the CPU usage ranges between 20 and 30 percent. CPU usage rises to 40 percent when map data is visualized. Displaying laser scan data results in an increase to an average of 66 percent. CPU usage is highest when displaying map and laser scan data while defining a navigation goal, with an average of 79 percent. This is due to the spatial mapping of the whole environment, which is necessary to determine an exact 3D position in the room. The data transfer from the Hololens to ROS also demands computational power. This results in a lag between navigation goal placement and the triggering of robot movement. To evaluate this lag, the box chart in Fig. 8 illustrates the rising time to execution for the different visualization modes.

3) Time to movement execution

The box chart in Fig. 8 illustrates the duration between the user interaction and the robot movement for the different visualization modes. This includes the visualization of the navigation path planning data. Once again, laser scan data strongly affects the overall performance, as the time increases dramatically, sometimes even resulting in a crash of the application. When no sensor data is visualized, the average time for command execution is 0.5 s. With mapping data added to the display, the time rises to 1.5 s on average. Displaying laser scan data results in a rise to 1.8 s, and with laser scan and mapping data displayed simultaneously, to 2.2 s. The last mode increases the execution time to 2.6 s.

Fig. 8: Time to movement execution for different visualization modes

VIII Discussion

The fps was constant and robust throughout all visualization modes, showing that the Hololens is capable of visualizing large numbers of game objects. Visualization of the environment map does not considerably affect the fps or the CPU usage of the Hololens, despite the large number of scene objects to be visualized. Path information likewise contains few scene objects and does not greatly affect frames per second or CPU usage. However, the evaluation has shown that visualization of constantly incoming data clearly affects the overall Hololens performance, as demonstrated by the CPU usage. Only a small increase in CPU usage was observed when displaying the environment map, although it contains more scene objects than the laser scan data. This is due to the fact that the map information is transferred and visualized only once, whereas laser scan data changes continuously. The box plots clearly illustrate the rise in computational demand when displaying laser scan data. In the case of goal placement, the overall performance drops even further due to the additional computational demand of communicating different information simultaneously. Furthermore, for goal definition, spatial mapping is executed to extract the 3D location in the room. A considerably increased time was observed when using goal definition together with data visualization.

IX Conclusion

We demonstrated that the Hololens, a cutting-edge AR device, is capable of visualizing important navigation stack sensor data. We proposed a prototype application which successfully displays key parameters of the navigation stack within the Microsoft Hololens. The device was evaluated in terms of performance to give relevant insight into the feasibility of integrating an AR HMD into mobile robotics. The evaluation results show that the Hololens is capable of displaying a large amount of sensor data without any visual degradation. However, it struggles with real-time data visualization. This is especially the case for laser scan data, since the data flow changes continuously and produces large amounts of data in parallel. This affects the overall performance of the application, as shown by the fact that goal definition takes considerably longer with laser scan visualization enabled. CPU usage peaked at 80 percent when all sensor data was displayed simultaneously, and navigation goal definition took considerably longer with sensor data enabled, but the application was not crucially affected in terms of accuracy and robustness. Further work must include optimizing the preprocessing of the laser scan to reduce CPU usage.

References

  • [1] OpenCV: detection of ArUco markers. https://docs.opencv.org/3.1.0/d5/dae/_aruco/_detection.html. Cited by: §IV-B.
  • [2] (2017) ArUco integration into the Hololens. https://github.com/KeyMaster-/HoloLensArucoTracking. Cited by: §IV-B.
  • [3] M. Bischoff (2018) ROS Sharp. https://github.com/siemens/ros-sharp/wiki. Cited by: §IV-A.
  • [4] J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic (2011) Augmented reality technologies, systems and applications. Multimedia tools and applications 51 (1), pp. 341–377. Cited by: §I.
  • [5] A. Coppens (2017) Merging real and virtual worlds: an analysis of the state of the art and practical evaluation of microsoft hololens. arXiv preprint arXiv:1706.08096. Cited by: §II.
  • [6] H. Fang, S. Ong, and A. Nee (2014) Novel ar-based interface for human-robot interaction and visualization. Advances in Manufacturing 2 (4), pp. 275–288. Cited by: §I.
  • [7] H. Fuchs, M. A. Livingston, R. Raskar, K. Keller, J. R. Crawford, P. Rademacher, S. H. Drake, A. A. Meyer, et al. (1998) Augmented reality visualization for laparoscopic surgery. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 934–943. Cited by: §I, §II.
  • [8] B. Giesler, T. Salb, P. Steinhaus, and R. Dillmann (2004) Using augmented reality to interact with an autonomous mobile platform. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04. 2004, Vol. 1, pp. 1009–1014. Cited by: §I.
  • [9] S. A. Green, X. Q. Chen, M. Billinghurst, and J. G. Chase (2008) Collaborating with a mobile robot: an augmented reality multimodal interface. IFAC Proceedings Volumes 41 (2), pp. 15595–15600. Cited by: §I.
  • [10] J. Guhl, J. Hügle, and J. Krüger (2018) Enabling human-robot-interaction via virtual and augmented reality in distributed control systems. Procedia CIRP 76, pp. 167–170. Cited by: §II, §III-B.
  • [11] Y. Guo, X. Hu, B. Hu, J. Cheng, M. Zhou, and R. Y. Kwok (2018) Mobile cyber physical systems: current challenges and future networking applications. IEEE Access 6, pp. 12360–12368. Cited by: §I.
  • [12] S. Hashimoto, A. Ishida, M. Inami, and T. Igarashi (2011) Touchme: an augmented reality based remote robot manipulation. In The 21st International Conference on Artificial Reality and Telexistence, Proceedings of ICAT2011, Vol. 2. Cited by: §I, §II.
  • [13] H. Hedayati, M. Walker, and D. Szafir (2018) Improving collocated robot teleoperation with augmented reality. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 78–86. Cited by: §I.
  • [14] C. Heyer (2010) Human-robot interaction and future industrial robotics applications. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4749–4754. Cited by: §I.
  • [15] D. Krupke, F. Steinicke, P. Lubos, Y. Jonetzko, M. Görner, and J. Zhang (2018) Comparison of multimodal heading and pointing gestures for co-located mixed reality human-robot interaction. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–9. Cited by: §II, §III-B.
  • [16] J. Lambrecht and J. Krüger (2012) Spatial programming for industrial robots based on gestures and augmented reality. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 466–472. Cited by: §I.
  • [17] Y. Liu, H. Dong, L. Zhang, and A. El Saddik (2018) Technical evaluation of hololens for multimedia: a first look. IEEE MultiMedia 25 (4), pp. 8–18. Cited by: §II.
  • [18] A. Monferrer and D. Bonyuet (2002) Cooperative robot teleoperation through virtual reality interfaces. In Proceedings Sixth International Conference on Information Visualisation, pp. 243–248. Cited by: §III.
  • [19] V. Paelke (2014) Augmented reality in the smart factory: supporting workers in an industry 4.0. environment. In Proceedings of the 2014 IEEE emerging technology and factory automation (ETFA), pp. 1–4. Cited by: §I.
  • [20] I. M. Sauer, M. Queisner, P. Tang, S. Moosburner, O. Hoepfner, R. Horner, R. Lohmann, and J. Pratschke (2017) Mixed reality in visceral surgery: development of a suitable workflow and evaluation of intraoperative use-cases. Annals of surgery 266 (5), pp. 706–712. Cited by: §II.
  • [21] R. Siegwart, I. R. Nourbakhsh, D. Scaramuzza, and R. C. Arkin (2011) Introduction to autonomous mobile robots. MIT press. Cited by: §I.
  • [22] M. C. Thorstensen (2017) Visualization of robotic sensor data with augmented reality. Master’s Thesis. Cited by: §II, §III.
  • [23] R. Vassallo, A. Rankin, E. C. Chen, and T. M. Peters (2017) Hologram stability evaluation for microsoft hololens. In Medical Imaging 2017: Image Perception, Observer Performance, and Technology Assessment, Vol. 10136, pp. 1013614. Cited by: §II.
  • [24] V. Velayutham, D. Fuks, T. Nomi, Y. Kawaguchi, and B. Gayet (2016) 3D visualization reduces operating time when compared to high-definition 2d in laparoscopic liver resection: a case-matched study. Surgical endoscopy 30 (1), pp. 147–153. Cited by: §I, §II.
  • [25] S. Webel, U. Bockholt, T. Engelke, N. Gavish, M. Olbrich, and C. Preusche (2013) An augmented reality training platform for assembly and maintenance skills. Robotics and Autonomous Systems 61 (4), pp. 398–403. Cited by: §II.