Omnipotent Virtual Giant for Remote Human-Swarm Interaction

03/24/2019
by Inmo Jang, et al. (The University of Manchester)

This paper proposes an intuitive human-swarm interaction framework inspired by our childhood memory of interacting with living ants by changing their positions and environments as if we were omnipotent relative to them. Analogously, in virtual reality we can become a super-powered virtual giant who supervises a swarm of mobile robots in a vast and remote environment by flying over or resizing the world, and coordinates them by picking and placing individual robots or creating virtual walls. This work implements the idea using Virtual Reality along with Leap Motion, and validates it through proof-of-concept experiments with real and virtual mobile robots in mixed reality. We conduct a usability analysis to quantify the effectiveness of the overall system as well as the individual interfaces proposed in this work. The results reveal that the proposed method is intuitive and feasible for interaction with swarm robots, but may require appropriate training on the new end-user interface device.


I Introduction

Swarm robotics [1] is a promising robotic solution for complex and dynamic tasks thanks to the inherent system-level robustness that comes from its large cardinality. Swarm robotics research mostly considers small and individually incapable robots, for example Kilobots [2] and MONAs [3], but the resulting technologies and knowledge are also transferable to a swarm of individually capable robots (e.g. legged robots), which will be deployed for important and safety-critical missions in extreme environments such as nuclear facility inspection.

Human-Swarm Interaction (HSI) is a relatively new research area that “aims at investigating techniques and methods suitable for interaction and cooperation between humans and robot swarms” [4]. One of the main differences between HSI and typical Human-Robot Interaction (HRI) is that a large number of robots must be involved efficiently; otherwise a human operator can easily be overwhelmed by the enormous workload of control and situational awareness. In addition, swarm robots are expected to be controlled by decentralised local decision-making algorithms [5, 6, 7] that generate a desired emergent group behaviour. Therefore, HSI should be synergistic with such self-organised behaviours by offering interfaces not only for individual-level teleoperation but also for subgroup-level and mission-level interactions. Furthermore, in practice, e.g. in an extreme environment, swarm robots will be deployed to a mission arena beyond the line of sight of a human operator. Considering such scenarios, HSI should be addressed differently from typical HRI.

In this paper, we propose an Omnipotent Virtual Giant for HSI: a super-powered user avatar that interacts with a robot swarm via virtual reality, as shown in Fig. 1. This was inspired by our childhood memory in which most of us have played with living ants, relocating them and putting obstacles on their paths as if we were omnipotent relative to them. Analogously, through the omnipotent virtual giant, a human operator can directly control individual robots by picking and placing them; can alter the virtual environment (e.g. by creating virtual walls) to indirectly guide the robots; and can be omniscient by flying around or resizing the virtual world and supervising the entire swarm or a subgroup of it. We implement this idea using Leap Motion with Virtual Reality (Sec. III), and validate the proposed HSI framework through proof-of-concept real-robot experiments and usability tests (Sec. IV).

Fig. 1: The system architecture of the proposed Human-Swarm Interaction using Omnipotent Virtual Giant

II Related Work

This section reviews existing HSI methodologies and their suitability for remote operations. Gesture-based interactions have been widely studied [8, 4, 9, 10, 11, 12]. A human’s body, arm, or hand gestures are recognised by Kinect [9, 10], electromyography sensors [11, 12], or onboard cameras [8, 4], and then translated into corresponding commands to the robots. Such gesture-based languages typically require a human operator to memorise mappings from predefined gestures to their intended commands, even though some of the gestures may be intuitive.

Augmented Reality (AR) has also been utilised in [13, 14]. This approach generally uses a tablet computer which, through its rear-view camera, recognises robots and objects in the environment. Using the touchscreen, a user can control the robots shown on the screen, for example by swipe gestures. In [14], an AR-based method was tested for cooperative transport tasks with multiple robots. However, this type of interface is only usable in close proximity to the robots.

Tangible interaction is another methodology for certain types of swarm robots. The work in [15] presented tiny tabletop mobile robots with which a human can interact by physically touching them. By relocating a few of the robots, the whole swarm eventually exhibits a different collective behaviour. This tangible interface inherently prevents interfacing errors when changing a robot’s position. Nevertheless, beyond position modification, it is not straightforward to add other types of interfaces.

All the aforementioned interfaces require a human operator to be within proximity of the robots. Instead, virtual reality (VR)-based interaction can be considered as an alternative for beyond-line-of-sight robotic operations. In a virtual space where a human operator interacts with swarm robots, the operator is able to violate the laws of physics, teleporting [16] or resizing the virtual world (as will be shown in this paper) to observe the situation macroscopically. This facilitates perceiving and controlling a large number of robots in a vast and remote environment. However, most existing VR-based interfaces rely on default hand-held controllers, which are not only less intuitive than using bare hands but may also place considerable load on the user’s arms over prolonged use.

III Methodology: Omnipotent Virtual Giant

In this paper, we propose a novel HSI framework using an omnipotent virtual giant: a resizable user avatar that can perceive situations macroscopically in virtual space and also interact with swarm robots using its bare hands, e.g. by simply picking and placing them. Technically, this concept is implemented by integrating virtual reality (VR) and Leap Motion (LM). The proposed method combines the intuitiveness of tangible interaction with the remote operability of VR-based interaction.

III-A Preliminaries

III-A1 Virtual Reality

VR is considered one of the suitable user interfaces for interacting with remote swarm robots [16]. On top of the advantages described in the previous section, using VR as the main interface device provides practical efficiency in the research and development (R&D) process. In general, developing user interfaces requires extensive human trials and iterative feedback through numerous beta tests. With VR, this process can be accelerated by using simulated swarm robots in the initial phase of R&D, a critical period for exploring various design options within a relatively short time. For example, real swarm robotic tests may require a long time to prepare such a large number of robots (e.g. charging batteries), which can be avoided when simulated robots are used instead. In addition, by using robot simulators (e.g. Gazebo, V-REP, ARGoS [17]) along with communication protocols such as rosbridge and ROS#, we can construct a mixed reality [16, 18] where real and simulated robots coexist, and then perform hardware-in-the-loop tests with reduced R&D resources (e.g. human effort, time, and cost). Obviously, the final phase of R&D should involve proper real-robot field tests; nevertheless, thanks to VR, unnecessary effort can be reduced over the whole development period.
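As an illustration of the kind of bridging mentioned above, the minimal sketch below uses the Python roslibpy client to publish a user command through a rosbridge server, so that both real robots and simulated robots subscribed to the same topic can react to a single user input. The host address, topic name, and message fields are illustrative placeholders, not the configuration used in this work.

```python
# Minimal sketch: forwarding a VR user command through rosbridge with roslibpy.
# The host, topic name, and message schema below are assumptions for illustration.
import roslibpy

client = roslibpy.Ros(host='192.168.0.10', port=9090)  # address of rosbridge_server
client.run()

# Real robots and the simulator can both subscribe to this topic,
# so one user input drives the whole mixed-reality swarm.
goal_topic = roslibpy.Topic(client, '/swarm/user_goal', 'geometry_msgs/Point')

# Example command: the position where the user placed a robot's target object.
goal_topic.publish(roslibpy.Message({'x': 1.2, 'y': 0.4, 'z': 0.0}))

client.terminate()
```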

III-A2 Leap Motion

LM is a vision-based hand motion capture sensor. Its performance has been significantly improved in the latest SDK, Orion (see this comparison video: https://youtu.be/7HnfG0a6Gfg). In particular, when used with Unity (https://unity3d.com/), we can exploit useful modules (e.g. the Leap Motion Interaction Engine) that facilitate interaction with virtual objects using bare hands, without any hand-held equipment. In our previous work [19], hands sensed by LM were tracked reasonably accurately and felt much more natural to use than hand-held devices.

III-B System Overview

The architecture of the proposed HSI framework, as illustrated in Fig. 1, consists of the following subsystems:

  • Mobile robots: Swarm robots are deployed to a remote mission area. The robots are assumed to have capabilities of decentralised decision making [5, 6, 7], navigation and control (e.g. path planning, collision avoidance, low-level control) [20], remote inspection [21], manipulation [22], and inter-agent communication. They behave autonomously based on their local information and interactions with neighbouring robots.

  • Data collection from the robots and visualisation: The status of the robots and of the environment they are inspecting is transmitted to the master control station, where this information is assumed to be dynamically rendered in virtual reality. This communication may happen in a multi-hop fashion, since the network topology of the robots is unlikely to be fully connected.

  • Interactions via an omnipotent virtual giant: A user wearing a VR head-mounted display can perceive the remote situation through virtual reality. The user’s bare hands are tracked by the LM mounted on the front of the VR headset and then rendered as the hands of the avatar in the virtual space. The user avatar is resizable, becoming a giant, and can fly around to oversee the overall situation. The user interacts with the robots by touching them in the virtual space. The details of the user interfaces currently implemented are described in Sec. III-C.

  • User input transmission to the robots: When an interaction happens in the virtual space, corresponding user inputs are sent to the real robots, and they react accordingly.

This work mainly focuses on the user interaction part of the system. All the other subsystems are assumed to be provided and are beyond the scope of this paper.

III-C Proposed User Interfaces

This section describes the user interfaces proposed in this work. The main hand gestures are as follows:

  • Pinching: This gesture is activated when the thumb and index fingertips of a hand are spatially close, as shown in Fig. 2(a). The PinchDetector in the LM SDK detects this gesture.

  • Closing hand: This is triggered when all five fingers are fully closed, as in Fig. 2(b). When this happens, the variable GrabStrength in the SDK class Leap::Hand becomes one.

  • Grasping: This begins when the thumb and index finger are both in contact with an object, which can then be grasped. One example is shown in Fig. 3. This gesture can be implemented via the Leap Motion Interaction Engine.

  • Touching: Using an index finger, virtual buttons can be pushed as in Fig. 4(a).

Combinations of these gestures are used for perception and control of the swarm robots; a minimal sketch of how such gestures could be classified from tracked hand data is given below.
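The sketch below expresses the gesture definitions above as simple geometric tests on generic tracked hand data. It mimics, but does not use, the LM SDK’s PinchDetector and GrabStrength; the distance threshold is an assumed value.

```python
import numpy as np

PINCH_DISTANCE_THRESHOLD = 0.03  # metres; assumed value, tune per setup


def is_pinching(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """Pinch: the thumb and index fingertips are spatially close."""
    return np.linalg.norm(thumb_tip - index_tip) < PINCH_DISTANCE_THRESHOLD


def is_closed_hand(grab_strength: float) -> bool:
    """Closing hand: all five fingers fully closed (grab strength saturated)."""
    return grab_strength >= 1.0


def is_grasping(thumb_touches_object: bool, index_touches_object: bool) -> bool:
    """Grasp: thumb and index finger are both in contact with the same object."""
    return thumb_touches_object and index_touches_object
```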

III-C1 Perception Interfaces

Given a robot swarm spread over a vast arena, both overall situational awareness and robot-level perception are crucial for HSI. To this end, this paper proposes the following two interfaces: Resizing the world and Flying.

Resizing the world: When two pinching hands are spread apart or drawn together, as shown in Fig. 2(a), the virtual world is scaled up or down, respectively, while the size of the user avatar remains unchanged. In other words, the user avatar can become a virtual giant to oversee the situation macroscopically (Fig. 2(d)) or become as small as an individual robot to scrutinise a specific area (Fig. 2(c)).
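A minimal sketch of this scaling rule, assuming the world scale is multiplied by the ratio of the current distance between the two pinch points to the distance at the moment the two-handed pinch started (variable names are illustrative):

```python
import numpy as np


def update_world_scale(initial_scale: float,
                       left_pinch_start: np.ndarray, right_pinch_start: np.ndarray,
                       left_pinch_now: np.ndarray, right_pinch_now: np.ndarray) -> float:
    """Scale the virtual world by the ratio of current to initial hand separation.
    Spreading the hands apart scales the world up; drawing them together scales it down."""
    d_start = np.linalg.norm(right_pinch_start - left_pinch_start)
    d_now = np.linalg.norm(right_pinch_now - left_pinch_now)
    return initial_scale * (d_now / d_start)
```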

Flying like Superman: The user avatar hovers in the virtual world, unaffected by gravity. Furthermore, it can fly in any direction by closing both hands and slightly stretching the arms in that direction. The vector from the midpoint of the two hands at the moment the closing-hand gesture starts to their current midpoint is used as the intended flying direction.
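The flying command can be sketched as the displacement of the hands’ midpoint from where the closing-hand gesture began; the gain below is an assumed tuning parameter.

```python
import numpy as np


def flying_velocity(mid_at_gesture_start: np.ndarray,
                    mid_now: np.ndarray,
                    speed_gain: float = 1.0) -> np.ndarray:
    """Velocity command for the avatar: the vector from the hands' midpoint at
    gesture start to the current midpoint sets the direction, and its length
    (scaled by a gain) sets the speed."""
    return speed_gain * (mid_now - mid_at_gesture_start)
```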

Fig. 2: Perception interfaces: (a) resizing the world; (b) flying like Superman. Using these interfaces, a user can have (c) an ordinary perception or (d) a macroscopic perception, for which the avatar becomes a virtual giant. In (c) and (d), the white oval object indicates the avatar’s head, and the upper-right subfigures show the user’s view.

III-C2 Control Interfaces

User interactions to guide and control multiple robots can be summarised in the following four categories [10, 14, 11]: robot-oriented, swarm-oriented, mission-oriented, and environment-oriented. In robot-oriented interaction, a human operator overrides an individual robot’s autonomy, giving an explicit direct command, e.g. teleoperation. Swarm-oriented interaction uses a set of simplified degrees of freedom to control the swarm, for example, controlling a leader robot that is followed by some of the other robots. In mission-oriented interaction, a human user provides a mission statement or plan to the swarm as a higher-level interaction. For swarm- or mission-oriented interactions, collective autonomy or swarm intelligence plays a crucial part in achieving the desired emergent behaviour. Environment-oriented interaction does not affect the autonomy of any single robot, but instead modifies the environment with which the robots interact, for example, by placing artificial pheromones [23].

In this work, we present one interface for each interaction mode except the mission-oriented one: Pick-and-Place a Robot (robot-oriented), Multi-robot Control Cube (swarm-oriented), and Virtual Wall (environment-oriented).

Pick-and-Place a Robot: When the user avatar grasps a mobile robot, the robot’s holographic body, which serves as its target-to-go object, is picked up and detached from the robot object, as shown in Fig. 3. Once the target-to-go object is placed at a new position, the robot moves towards it, overriding its existing autonomy.
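On the robot side, this command can be realised by replacing the robot’s autonomous behaviour with a simple go-to-goal controller aimed at the relocated target-to-go position. The sketch below is a generic unicycle-style controller, not necessarily the controller running on MONA; the gains are assumed values.

```python
import math


def go_to_target(x: float, y: float, heading: float,
                 target_x: float, target_y: float,
                 k_lin: float = 0.5, k_ang: float = 2.0):
    """Return (linear, angular) velocity commands steering the robot towards the
    relocated target-to-go position, ignoring its normal autonomy."""
    dx, dy = target_x - x, target_y - y
    distance = math.hypot(dx, dy)
    # Heading error wrapped to [-pi, pi]
    angle_error = math.atan2(dy, dx) - heading
    angle_error = math.atan2(math.sin(angle_error), math.cos(angle_error))
    return k_lin * distance, k_ang * angle_error
```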

Multi-robot Control Cube: The user can bring up a small hand-held menu by rotating the left palm to face upwards, as shown in Fig. 4(a). On top of the menu is a pickable cube, which serves as a virtual guide point for multi-robot coordination, e.g. the virtual centre of rotating formation control [25]. In this work, the formation control is activated once the cube is placed on the floor.
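The cube’s role as a virtual centre can be illustrated by a simple orbiting rule: each robot is pulled onto a circle of a given radius around the cube’s position while also moving tangentially around it. This is only a schematic stand-in, not the distributed formation controller of [25]; radius and gains are assumed parameters.

```python
import numpy as np


def orbit_velocity(robot_pos: np.ndarray, cube_pos: np.ndarray,
                   radius: float = 0.3, k_radial: float = 1.0,
                   angular_rate: float = 0.5) -> np.ndarray:
    """Planar velocity that attracts the robot to a circle of `radius` around the
    cube and drives it tangentially, producing a rotating formation."""
    offset = robot_pos - cube_pos
    dist = np.linalg.norm(offset) + 1e-9
    radial_dir = offset / dist
    tangent_dir = np.array([-radial_dir[1], radial_dir[0]])  # 90-degree rotation
    radial_vel = -k_radial * (dist - radius) * radial_dir    # converge to the circle
    tangential_vel = angular_rate * radius * tangent_dir     # rotate about the cube
    return radial_vel + tangential_vel
```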

Virtual Wall: The hand-held menu shown in Fig. 4(a) has two buttons: Draw Wall and Undo Wall. Touching the former toggles the wall-drawing mode on, and a red sign appears on the VR display. In this mode, a pinching gesture creates a linear virtual wall, as shown in Fig. 5. Such a wall, which no real robot is allowed to pass through, indirectly guides a robot’s path or confines it within a certain area. Walls can be cleared with the Undo Wall button in last-in, first-out order. To reduce communication costs to the real robots, only a wall’s two end positions are broadcast when it is created; each robot then computes additional intermediate points along the wall, spaced according to its collision avoidance radius, and performs collision avoidance against all of these points (see the sketch below).
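A minimal sketch of this wall handling, assuming each robot interpolates points along the broadcast segment at a spacing tied to its own collision avoidance radius and then treats every point as an obstacle (function and parameter names are illustrative):

```python
import numpy as np


def wall_avoidance_points(end_a: np.ndarray, end_b: np.ndarray,
                          avoidance_radius: float) -> list:
    """Expand a virtual wall, broadcast as two end points, into a list of points
    spaced so that avoiding each point with `avoidance_radius` leaves no gap."""
    length = np.linalg.norm(end_b - end_a)
    # Spacing no larger than the avoidance radius keeps the avoidance discs overlapping.
    n_segments = max(1, int(np.ceil(length / avoidance_radius)))
    return [end_a + (end_b - end_a) * i / n_segments for i in range(n_segments + 1)]
```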

Fig. 3: Robot-oriented interface: picking and placing a robot
Fig. 4: (a) The hand-held menu for switching interaction modes; (b) Swarm-oriented interface: multi-robot control cube
Fig. 5: Environment-oriented interface: creating a virtual wall

IV Experimental Analysis

IV-A Experimental Validation using Mixed Reality

Proof-of-concept experiments to validate the proposed HSI framework use a mixed reality environment in which three real MONA robots [3] (whose height and diameter are 40 mm and 74 mm, respectively) and six virtual robots move around an arena. In the experiments, the robots perform a random walk unless a human operator intervenes in their behaviour. The robots are capable of simple collision avoidance against virtual (or real) walls and other robots. Their localisation relies on a low-cost USB-camera-based tracking system [24], which obtains the planar positions and heading angles of the real robots in the arena and sends this information to the master control computer. The master system consists of a computer running the implemented Unity application on Windows 10, which renders the virtual world, and another computer running ROS on Ubuntu 16.04, which sends user inputs to the real robots via an antenna.

An experimental demonstration of the pick-and-place interface is presented in Fig. 6. Once the target-to-go object of a robot was picked up and placed in virtual reality, as in Fig. 6(a), the robot reliably moved towards the destination. Fig. 7 shows another demonstration, of the multi-robot control cube interface. In this test, only the real robots were used, and the distributed rotating formation control algorithm of [25] was implemented on each robot. Once the human operator placed the cube object down, the robots started to rotate around it. As soon as the cube was relocated, as in Fig. 7(a), their formation changed accordingly, as in Fig. 7(b). A demonstration of the virtual wall interface is shown in Fig. 8. Regardless of whether the robots were real or virtual, their behaviours were restricted by the virtual walls created by the user. All the demonstrations were recorded and can be found in the supplementary material.

Fig. 6: Experimental validation of the pick-and-place interface: (a) once the holographic objects of robots are relocated, (b) the real robots move towards them. The left subfigures show the real robots, and the right subfigures show their visualisation in the virtual space and the other virtual robots. The dashed arrows indicate the remaining journey to the target objects.
Fig. 7: Experimental validation of the multi-robot control cube interface for rotating formation control: (a) once the purple cube object is placed down, (b) the robots form a circular formation around the cube’s position.
Fig. 8: Experimental validation of the virtual wall interface: due to the virtual walls (i.e. the green linear objects in the right subfigure), the real or virtual robots are confined within certain spaces.

IV-B Usability Study

We conducted a usability analysis to study (i) how useful the proposed HSI is for interacting with swarm robots, and (ii) how effective it is to provide multiple types of user interfaces.

IV-B1 Mission Scenario

The swarm robotic mission designed for the usability study is a multi-robot task allocation problem. The objective is to distribute 50 virtual mobile robots over three task areas according to their demands (i.e. 25, 15, and 10 robots, respectively), as shown in Fig. 9. The local behaviour of each robot is to move forward until it faces an obstacle, and then to perform a collision avoidance routine by rotating by a random angle (a sketch of this behaviour is given below). This simplistic intelligence requires a human operator’s intervention to address the given mission efficiently. For this test, all the perception and control interfaces in Sec. III-C were available except the one for formation control.
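The robots’ local behaviour in the study can be summarised by the hedged sketch below: drive forward until an obstacle is detected, then rotate in place by a random angle before continuing. The forward speed and rotation range are assumed placeholder values, not the parameters used in the study.

```python
import random


def local_behaviour_step(obstacle_ahead: bool):
    """One control step of the simplified local behaviour used in the study:
    move forward, and on detecting an obstacle rotate by a random angle.
    Returns (linear_velocity_m_s, turn_angle_deg) as placeholder commands."""
    if obstacle_ahead:
        turn_angle = random.uniform(90.0, 270.0)  # assumed range for the random rotation
        return 0.0, turn_angle
    return 0.05, 0.0  # assumed forward speed
```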

IV-B2 Experimental Setup

We recruited 10 participants aged between 20 and 35 from the engineering discipline. Half of them had a little experience with VR, and the other half had none at all. Since none of them had used LM before, they were given a five-minute trial of an introductory application called Blocks (https://gallery.leapmotion.com/blocks/) before starting the main test. Then, the mission scenario and the available user interfaces were explained.

Each participant was provided with two strategies to address the mission. In Strategy 1, only the pick-and-place interface was allowed for controlling the robots. In Strategy 2, the participants could also use the virtual wall interface. We instructed them to use virtual walls to block off any task area that already had the required number of robots, so that no redundant robots would enter; otherwise the results would have been affected by the individuals’ preferred approaches to the mission. All the perception interfaces were available in both strategies.

Each participant performed the mission with both strategies, with two trials per strategy. The trial with the shorter completion time was taken as the participant’s best performance, and its completion time and number of interactions were recorded. The participants were also asked to fill in a Likert-scale survey to quantify their experience of the individual interfaces as well as the overall system.

Fig. 9: The mission arena for the usability study: each participant has to allocate 50 mobile robots according to the task demands (i.e. 25, 15, and 10 for Task 1, 2, and 3, respectively). The white oval shape represents the user avatar at the time when the mission starts.

IV-B3 Results and Discussion

TABLE I: Average performance

                              Strategy 1 (PP)        Strategy 2 (PP+VW)
                              Ave       Std          Ave       Std
Completion time (sec)         269.9     60.8         312.8     70.6
Number of interactions        27.5      6.8          21.4      6.9

PP: Pick-and-Place interface; VW: Virtual Wall interface
Fig. 10: Qualitative Comparative Result of HSI

Table I shows that Strategy 1 (i.e. the pick-and-place interface only) requires on average less time (43 sec less) but more interactions (6.1 more) than Strategy 2 (i.e. virtual walls also in use). This indicates that environment-oriented controls can reduce the need for explicit one-by-one guidance of individual robots, lowering the total number of interactions. On the other hand, the increase in completion time implies that a user may be confused by multiple modalities, especially when the interfaces are similar to each other. This was also the case for experienced users (i.e. the developers of the proposed system), because toggling the wall-drawing mode on and off itself increases the mission completion time.

Fig. 10 presents the user experience results for the proposed system. The average answers to Q3 imply that users may need more training to use LM. In fact, during the test, participants were often observed to unconsciously stretch their hands out of LM’s sensing range. This is unsurprising given that the end-user interface combining VR and LM was unfamiliar to the participants. The answers to Q6 indicate that the world-resizing interface needs comparatively more training, whereas the virtual wall interface is easier to use. In contrast, the virtual wall interface was selected as the most confusing one, as seen in the results for Q4. This seems to be related to the increased completion time of Strategy 2 in Table I, because the pinching gesture is used both to create virtual walls and to resize the world, in different toggle modes.

However, it was mostly agreed that the proposed HSI framework would be useful for interaction with swarm robots, as in the result for Q5. The pick-and-place interface was rated the most fun and intuitive, according to the results for Q1 and Q2.

V Conclusions

This paper proposed an intuitive human-swarm interaction framework based on a super-powered user avatar who can supervise and interact with a swarm of mobile robots in virtual reality, implemented using VR and Leap Motion. This work presented two perception interfaces, by which a user can resize the virtual world or fly around the scene, and three control interfaces for robot-oriented, swarm-oriented, and environment-oriented interactions, respectively. We conducted proof-of-concept experiments to validate the proposed HSI framework using three real MONA robots and six virtual ones in a mixed reality environment. A usability study on a multi-robot task allocation mission was used to evaluate the proposed framework. The results indicate that the proposed system is suitable for swarm robots in a vast and remote environment, and that the individual bare-hand interfaces are intuitive. It was also shown that multiple modalities can reduce the number of human interventions, but may increase the mission completion time due to their inherent complexity, especially if users are not sufficiently trained.

For real-world applications, the communication between swarm robots and a human operator will be one of the major challenges. Under any practical network topology, the large number of robots will impose a heavy communication load on the near-end robots and cause bottlenecks in the information flow, eventually leading to latency in the remote visualisation for the operator. Therefore, the near-end robots, or any robots in between, may need to decide which information from which robots should be transferred with priority, in order to maximise the operator’s perception while reducing the communication load imposed on the near-end robots.

References

  • [1] H. Hamann, Swarm Robotics: A Formal Approach.   Springer, 2018.
  • [2] V. Trianni, M. Trabattoni, A. Antoun, E. Hocquard, B. Wiandt, G. Valentini, M. Dorigo, and Y. Tamura, “Kilogrid: a novel experimental environment for the Kilobot robot,” Swarm Intelligence, vol. 12, no. 3, pp. 245–266, 2018.
  • [3] F. Arvin, J. Espinosa, B. Bird, A. West, S. Watson, and B. Lennox, “Mona: an affordable open-source mobile robot for education and research,” Journal of Intelligent and Robotic Systems, pp. 1–15, 2018.
  • [4] J. Nagi, H. Ngo, L. M. Gambardella, and G. A. Di Caro, “Wisdom of the swarm for cooperative decision-making in human-swarm interaction,” in IEEE Intl. Conf. on Robotics and Automation, 2015, pp. 1802–1808.
  • [5] H. L. Choi, L. Brunet, and J. P. How, “Consensus-based decentralized auctions for robust task allocation,” IEEE Transactions on Robotics, vol. 25, no. 4, pp. 912–926, 2009.
  • [6] I. Jang, H.-S. Shin, and A. Tsourdos, “Anonymous hedonic game for task allocation in a large-scale multiple agent system,” IEEE Transactions on Robotics, vol. 34, no. 6, pp. 1534–1548, 2018.
  • [7] I. Jang, H.-S. Shin, and A. Tsourdos, “Local information-based control for probabilistic swarm distribution guidance,” Swarm Intelligence, vol. 12, no. 4, pp. 327–359, 2018.
  • [8] J. Nagi, A. Giusti, L. M. Gambardella, and G. A. Di Caro, “Human-swarm interaction using spatial gestures,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2014, pp. 3834–3841.
  • [9] G. Podevijn, R. O’Grady, Y. S. G. Nashed, and M. Dorigo, “Gesturing at Subswarms: Towards Direct Human Control of Robot Swarms,” in Towards Autonomous Robotic Systems 2013. LNCS, pp. 390–403.
  • [10] J. Alonso-Mora, S. Haegeli Lohaus, P. Leemann, R. Siegwart, and P. Beardsley, “Gesture based human-multi-robot swarm interaction and its application to an interactive display,” in IEEE Intl. Conf. on Robotics and Automation, 2015, pp. 5948–5953.
  • [11] A. Stoica, T. Theodoridis, H. Hu, K. McDonald-Maier, and D. F. Barrero, “Towards human-friendly efficient control of multi-robot teams,” in Intl. Conf. on Collaboration Technologies and Systems, 2013, pp. 226–231.
  • [12] B. Gromov, L. M. Gambardella, and G. A. Di Caro, “Wearable multi-modal interface for human multi-robot interaction,” in Intl. Symp. on Safety, Security and Rescue Robotics, 2016, pp. 240–245.
  • [13] J. A. Frank, S. P. Krishnamoorthy, and V. Kapila, “Toward mobile mixed-reality interaction with multi-robot systems,” IEEE Robotics and Automation Letters, vol. 2, no. 4, pp. 1901–1908, 2017.
  • [14] J. Patel, Y. Xu, and C. Pinciroli, “Mixed-Granularity Human-Swarm Interaction,” in IEEE Intl. Conf. on Robotics and Automation, 2019 (in press). [Online]. Available: http://arxiv.org/abs/1901.08522
  • [15] M. Le Goc, L. H. Kim, A. Parsaei, J.-D. Fekete, P. Dragicevic, and S. Follmer, “Zooids: building blocks for swarm user interfaces,” in Proc. of the 29th Annual Symp. on User Interface Software and Technology, 2016, pp. 97–109.
  • [16] J. J. Roldán, E. Peña-Tapia, D. Garzón-Ramos, J. de León, M. Garzón, J. del Cerro, and A. Barrientos, “Multi-robot Systems, virtual reality and ROS: developing a new generation of operator interfaces,” in Robot Operating System (ROS), Studies in Computational Intelligence.   Springer International Publishing, 2019, vol. 778, pp. 29–64.
  • [17] C. Pinciroli, V. Trianni, and R. O’Grady, “ARGoS: a modular, multi-engine simulator for heterogeneous swarm robotics,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2011, pp. 5027–5034.
  • [18] D. Whitney, E. Rosen, D. Ullman, E. Phillips, and S. Tellex, “ROS reality: a virtual reality framework using consumer-grade hardware for ROS-enabled robots,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2018.
  • [19] I. Jang, J. Carrasco, A. Weightman, and B. Lennox, “Intuitive bare-hand teleoperation of a robotic manipulator using virtual reality and leap motion,” in Towards Autonomous Robotic Systems 2019 (submitted).
  • [20] I. Jang, H.-S. Shin, A. Tsourdos, J. Jeong, S. Kim, and J. Suk, “An integrated decision-making framework of a heterogeneous aerial robotic swarm for cooperative tasks with minimum requirements,” Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 2018.
  • [21] B. Bird, A. Griffiths, H. Martin, E. Codres, J. Jones, A. Stancu, B. Lennox, S. Watson, and X. Poteau, “Radiological monitoring of nuclear facilities: using the continuous autonomous radiation monitoring assistance robot,” IEEE Robotics & Automation Magazine, 2018.
  • [22] A. Martinoli, K. Easton, and W. Agassounon, “Modeling swarm robotic systems: A case study in collaborative distributed manipulation,” The International Journal of Robotics Research, vol. 23, no. 4-5, pp. 415–436, 2004.
  • [23] F. Arvin, T. Krajnik, A. E. Turgut, and S. Yue, “COS: Artificial pheromone system for robotic swarms research,” in IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems, 2015, pp. 407–412.
  • [24] T. Krajník, M. Nitsche, J. Faigl, P. Vaněk, M. Saska, L. Přeučil, T. Duckett, and M. Mejail, “A Practical Multirobot Localization System,” Journal of Intelligent & Robotic Systems, vol. 76, no. 3-4, pp. 539–562, 2014.
  • [25] J. Hu and A. Lanzon, “An innovative tri-rotor drone and associated distributed aerial drone swarm control,” Robotics and Autonomous Systems, vol. 103, pp. 162–174, 2018.