
Towards Bi-Directional Communication in Human-Swarm Teaming: A Survey

by   Aya Hussein, et al.

Swarm systems consist of large numbers of robots that collaborate autonomously. With an appropriate level of human control, swarm systems could be applied in a variety of contexts ranging from search-and-rescue situations to cyber defence. The two decision making cycles of swarms and humans operate on two different time-scales, where the former is normally orders of magnitude faster than the latter. Closing the loop at the intersection of these two cycles will create fast and adaptive human-swarm teaming networks. This paper brings disparate pieces of the ground work in this research area together to review this multidisciplinary literature. We conclude with a framework to synthesize the findings and summarize the multi-modal indicators needed for closed-loop human-swarm adaptive systems.



I Introduction

Human-machine teaming (HMT) involves concurrent interactions among humans and machines. A swarm is a group of distributed machines able to self-organize and generate group-level emergent behaviors from decentralized local interactions. Human-swarm teaming (HST) extends HMT to interactions with a swarm. We reserve the concept of human-swarm interaction (HSI) for situations where the emphasis is on the interaction dimension, use HMT where the emphasis is on the teaming dimension, and use HST where the context necessitates both swarm and teaming interactions.

HMT operates within a context defined by a mission. For clarity, a mission is defined by a set of objectives to be pursued by both humans and machines. Each of these objectives can be optimized through the completion of a number of tasks, which can be recursively divided into sub-tasks. These tasks could be determined adaptively while pursuing the mission's objectives, allowing the team to negotiate plans through their interactions. Within the context of this paper, we will assume that the overall mission has fixed and definitive objectives. Examples of such fixed objectives are saving the maximum number of lives in a search and rescue scenario or maximizing the area coverage by a swarm of air vehicles surveying a mine site. These objectives are on the HMT level.

Dyer [1] offers the basic ontological constituents of a ‘team’ as “social members”, “task interdependency”, and “shared goals”. Subsequent literature [2, 3] added “adaptive interaction” and “commitment” from different members in the team with distinctive skills towards performance improvement and accountability. Teaming relies on teamwork skills such as clarifying interdependencies, establishing trust, and finding out means for coordination [4]. Castellan [5] discussed the functional requirements for team members as clearly defined roles and responsibilities, task-related knowledge, and interdependent connection between one another. These three dimensions of a team can be used to distinguish teams from swarms in which members are homogeneous regarding expertise, roles, and responsibilities. The dynamic concept of teaming involves coordination and collaboration activities with flexible team structures.

Bringing together the concept of “teaming” in HST raises a number of scientific challenges. Some of these challenges rest on the design of appropriate artificial intelligence algorithms that allow the swarm to be smart enough to collaborate with the human. Others are epistemological in nature and call for a better understanding of distributed cognition and the form of distributed situation awareness within a swarm.

This paper focuses, however, on a third form of challenge: the bidirectional communication that needs to take place between the humans and the swarms, and the artificial intelligence agents that need to adapt and orchestrate this interaction. This third challenge sits at the core of the first two. Without bidirectional communication, human-swarm teams will fall short in their ability to collaborate effectively and efficiently. Without smart agents for bi-directional communication, the swarm and the humans will find it difficult to collaborate and/or coordinate actions. Even in the simplest HST systems, a basic form of these agents is needed and may take different forms, from a pre-programmed graphical user interface to human-friendly natural communication mediums such as voice and gesture.

In section II we discuss the concept of HMT and its properties, then we review possible autonomy configurations for HST, discussing risks and remedies. In section III, we distill five groups of indicators that are needed at the human-swarm interface, namely mission performance, interaction, mission complexity, swarm automation, and human states indicators. These indicators are complementary to each other and can reflect the states of the human, the swarm, the interaction, and mission effectiveness. In section IV we synthesize these five groups of indicators into the MICAH framework, representing a synthesis of the literature to enable flexible and intelligent adaptive control. The closed-loop adaptive system based on MICAH for human-swarm teaming is then discussed. The paper concludes with the challenges and open research questions towards achieving effective HST in section V.

II Human-Swarm Teaming

The field of human-robot interaction (HRI) has offered a wide range of approaches towards effective and efficient control interfaces among a group of multiple agents. Extending HRI to HMT, decisions get jointly made by both the human operator(s) and machines. When the teaming relationship evolves to shared control tasks involving mixed-initiatives, it becomes pertinent to develop natural and seamless bi-directional communication between the humans and the machines. Such a requirement comes with a few challenges including how to dynamically adjust the level of autonomy of different players, how to strengthen mutual trust, and what mechanisms are required to facilitate situational awareness (SA) of team members [6].

In this paper, we investigate the teaming of human operators with swarms of multiple agents, or HST for short. In particular, the paper will focus on HST contexts where humans and swarms need to work together smartly and communicate sufficient information to achieve a stable bi-directional communication for shared situation awareness, shared mental models, team mutual predictability, and adaptability. At the interface of this bi-directional communication sits a smart agent that needs to manage the states of different entities in the system. We will first start with the criteria needed to create an effective team.

II-A Human-Machine Teaming: Criteria for Effective Teaming

We can borrow some of the key factors for effective HMT from social science research conducted on effective human teams [7]. Examples of these factors include shared cognition, team training, and collective orientation. Shared cognition includes shared mental models, team situation awareness, and comprehensive communication. Team training promotes teamwork and enhances team performance. Developing a collective orientation unifies team coherence and intent.

Similar factors were found in HMT studies, including mutual predictability, shared understanding, and adaptability to other team members [8]. These key properties of an HMT system facilitate the coordination and cooperation of machines with human teammates. Both parties have to recognize their teammate's actions to infer their intentions in the given context. Developing such a capability on the machine side requires sociocognitive mechanisms that support the interpretation of the mental processes of the partners during interactions, improving the effectiveness and efficiency of dynamic planning and behavioral adaptation to achieve shared goals [9].

Sycara and Sukthankar [10] identified that information exchange, communication, supporting behaviour, and team leadership play vital roles in effective HMT. More recent literature expanded the list of factors for an effective HMT to seven: belief in the concept of a team, communication, leadership, performance monitoring and feedback, coordination and interdependence, situational awareness, and awareness of the individual's unique cognitive model and the shared cognitive model [11].

II-B Autonomy in Human-Swarm Teaming

In HST, the human and the swarm are assigned complementary roles with the aim of combining their skills efficiently and in a manner that achieves mission success and efficient overall team performance. Fixing the level of autonomy within the swarm produces a rigid system with little smartness. Such a setting can overload the human, who needs to fill the gap between task requirements and the swarm's fixed design. This applies regardless of the level of autonomy the swarm exhibits. If the level of autonomy is low, the human carries most of the load and ends up overloaded. If the level of swarm autonomy is high, the human could become under-loaded. Both situations are undesirable and can lead to difficulties in sustaining human situational awareness (SA) and to a drop in human and system-level performance [12] and engagement. Fixed autonomy has been criticized for its negative consequences on cognitive load, SA, and performance. Furthermore, fixed autonomy has been associated with human complacency and skill degradation [13]. For example, in a complacency state, the human might fail to detect automation failures in multi-tasking environments [14].

Flexible autonomy calls for a degree of smartness. Chen et al. [15] classified flexible autonomy approaches into three classes: adjustable, adaptive, and mixed-initiative, based on whether the adaptation decision is taken by the human, the software, or both.

In adjustable autonomy, the human initiates the joint human-system task adaptivity [6], [16]. One of the disadvantages associated with adjustable autonomy is that task delegation by the human can be very time consuming and potentially unsafe in situations where the human operator is already overloaded by task demands [6]. Although adjustable autonomy could be improved using feedback through the human-machine interface, belated decisions made by a human could decrease the overall performance of the mission.

Task delegation in a human's hands is also subjective and depends on human factors that reach beyond the realm of workload (e.g. emotional stress). Some operators place more constraints on the automation at the expense of time, while others place fewer constraints at the risk that the automation's behavior diverges from the operator's intent [17]. In addition, the management of task allocation by humans is vulnerable to automation-induced complacency. It is worth noting that the human operator is less likely to detect automation failure in constant-reliability conditions [14]. High workload and fatigue can also contribute to automation-induced complacency. If task delegation is adjustable in human-swarm teaming and the swarm initially performs consistently well, the human is unlikely to detect a subsequent failure of the swarm due to this complacency state.

In adaptive autonomy, the software decides on whether the level of autonomy should be raised or lowered based on some input indicator(s). The accuracy of this decision is important to avoid sudden changes that can be inappropriate or annoying [15]. Results from different studies indicate that many factors can determine the appropriate level of autonomy. Abbass et al. [18] used task complexity and human workload to adapt the level of autonomy in air traffic control tasks. Rusnock et al. [19] studied the use of different workload thresholds in adaptive autonomy. They found that the proper threshold depends on both the human and the task, such that in some tasks increasing this threshold results in an increase in both workload and SA, while in others increasing the threshold results in an increase in workload without benefiting SA. Feigh et al. [20] proposed a taxonomy of triggers that can be used for adaptation. They categorized the triggers into five categories: operator, system, environment, task/mission, and spatio-temporal triggers.
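As an illustration of a single-trigger adaptive scheme in the spirit of the workload-threshold studies above, one might sketch the adaptation rule as follows; the function name, thresholds, and autonomy scale are hypothetical assumptions, not taken from the cited studies:

```python
def adapt_autonomy(workload, level, low=0.3, high=0.7, max_level=5):
    """Single-trigger adaptive autonomy sketch: raise the swarm's autonomy
    when the operator's estimated workload (on an assumed 0..1 scale)
    exceeds `high`, lower it when workload drops below `low`;
    otherwise keep the current integer level in [0, max_level]."""
    if workload > high and level < max_level:
        return level + 1   # offload work to the swarm
    if workload < low and level > 0:
        return level - 1   # hand work back to the operator
    return level           # within the band: no change
```

In practice, as the section notes, the thresholds themselves would need to be tuned per human and per task, and additional triggers (system, environment, task/mission, spatio-temporal) could be combined into the decision.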

The third class of flexible autonomy is mixed-initiative systems in which the adaptation decision is taken collaboratively including input from both the human and the software agent. These systems combine the ideas of both adaptable and adaptive autonomy as they allow the human to intervene with the adaptation decision taken by the agent [15].

To generalize the review of this paper, we will focus on mixed-initiative systems, where both humans and machines control the level of autonomy; thus, both need to be smart enough to recognize their mutual states in order to make timely decisions on when and how to adjust the level of autonomy.

Our work focuses on triggers that can be used for adaptation in either adaptive autonomy or mixed-initiative systems. This is similar to Feigh et al. [20], in that both works provide a taxonomy of triggers. However, this paper differs from that of Feigh et al. in four ways. First, this paper focuses on how each of the adaptation triggers can be quantified using indicators synthesized from the literature. Second, our main interest is HST; thus, this paper extends Feigh et al.'s work through a swarm lens. Third, we discuss how HSI indicators can be included in the adaptation decision to ensure that the selected autonomy configuration results in effective teaming between the human and the swarm, i.e. to ensure that human interventions are constructive and add value to performance. Finally, mission-level performance indicators are included with the aim of ensuring that the benefits of the autonomy configuration translate to mission-level improvements.

III Indicators for Adaptive HST Systems

III-A Mission Performance

Automation equips an automaton with functions to process and/or execute tasks. The level of automation therefore represents an agent's capacity to perform a task, while autonomy expresses “the freedom to make decisions” [21] afforded by the opportunity that exists to allow an agent to act. Autonomy carries risks when the capacity of an agent, that is, its automation, is less than the capacity required to perform a task given an opportunity within a mission. HMT brings humans, as biological autonomous systems, together with autonomous machines to work on optimizing mission objectives.

The primary aim of the team composed of the human and the swarm is to perform the mission successfully. It is therefore extremely important to allow the team to monitor progress towards the mission objective(s) in order to take corrective actions and/or adapt accordingly. In this section, we review the literature through the lens of indicators for mission success. We distinguish between how to measure the effectiveness (achieving mission success) and the efficiency (achieving that success using minimum resources/time) of system performance in human-swarm teaming. A comprehensive set of metrics can only be defined in terms of the specific tasks composing the mission at hand. Therefore, we will review some examples of mission types from the literature and the metrics that have been defined to evaluate them.

III-A1 Mission Performance in HRI

Jacoff et al. [22, 23], under the umbrella of the National Institute of Standards and Technology (NIST), proposed a list of quantitative and qualitative metrics to evaluate the performance of autonomous ground vehicles in a search-and-rescue mission (e.g. the number of victims localized, the number of obstacles found, and the recovery rate).

Howard et al. [24] proposed a deployment approach to achieve broad area coverage. They experimented with 100 robots that are repelled by other robots and obstacles, thus spreading through the entire environment. A different approach is given by Ganguli [25], where each robot can sense distances to the environmental boundary and to other robots, and is then deployed to cover the entire environment. The covered area is the metric used in these situations to measure mission success.
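Area coverage as a success metric can be approximated on a discretized environment. The following grid-based sketch is ours, not drawn from [24] or [25]; the cell representation and function name are illustrative assumptions:

```python
def coverage_fraction(visited, free):
    """Fraction of traversable grid cells visited by at least one robot.

    visited: set of (row, col) cells any swarm member has occupied.
    free:    set of (row, col) traversable cells in the environment.
    """
    if not free:
        return 0.0
    # Only visits inside the free space count towards coverage.
    return len(visited & free) / len(free)

# e.g. a 2x2 free space of which the swarm has jointly covered 3 cells;
# the cell (2, 2) lies outside the free space and is ignored
free_cells = {(0, 0), (0, 1), (1, 0), (1, 1)}
visited_cells = {(0, 0), (0, 1), (1, 0), (2, 2)}
```

A continuous-space variant would replace the set intersection with, for example, the union of sensor footprints divided by the environment's area.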

Olsen and Goodrich [26] suggest differentiating between overall mission effectiveness and current task effectiveness. Crandall and Cummings [27] evaluate mission performance using two factors: the number of objects collected and the number of robots remaining. Efficiency is measured in terms of time using two quantities: the time taken to achieve the mission and the average time to complete all subtasks.

The operator is almost absent from the metrics discussed above. To overcome this problem, Scholtz [28] designed the operator-centric indicators below, called critical incidents:

  • Global Navigation: The operator needs to have a view of the area in which the robots are working;

  • Local Navigation: The operator needs to know the environmental factors close to the robots in order to help them avoid hazards, such as doorways or trees, during the interaction;

  • Obstacle Encounter: The robots need to deal with obstacles while moving to the goal;

  • Victim Identification: The operator has to identify a victim. In some cases, inaccurate sensor data may lead the operator to misidentify a victim; and

  • Vehicle State: Informing the operator of the robots' device status (e.g. battery, broken sensors). If this information is correctly provided, the operator may be able to overcome these challenges and achieve the mission.

Based on these critical incidents, Steinfeld et al. [29] define five key measures of effectiveness:

  • Percentage of navigation tasks successfully completed

  • Coverage of area

  • Deviation from planned route

  • Obstacles that were successfully avoided

  • Obstacles that were not avoided, but could be overcome

III-A2 Mission Performance in HSI

HRI metrics could be transferred to human-swarm teaming. Nevertheless, HST comes with distinct properties as discussed by Hayes and Adams [30].

Nunnally et al. [31] show that mission effectiveness and efficiency metrics degrade when information on the swarm is lacking. As in HRI, some factors are identified, such as the mission completion percentage or the completion time. Harriott et al. [32] suggest the use of resource depletion as an objective measure. Here, resource depletion quantifies the irreversible consumption of limited resources by members of the swarm. Manning et al. [33] also rely on resource depletion as a metric and extend the concept with timing to capture mutual delay time, which affects the behaviour of the entire swarm. Two other metrics discussed in the research are micro-level movements and macro-level movements. The level of overlap between neighborhoods is used for the first measure, and the elongation, which represents the rectangular structure of the swarm, is used for the second.
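Resource depletion can be operationalized in several ways; one simple per-resource formulation, which is our sketch rather than the cited authors' formula, is the fraction of each limited resource consumed since mission start:

```python
def resource_depletion(initial, remaining):
    """Per-resource fraction irreversibly consumed by the swarm.

    initial, remaining: dicts mapping a resource name to its quantity
    at mission start and at the time of measurement, respectively.
    Returned values lie in [0, 1] when remaining <= initial.
    """
    return {k: (initial[k] - remaining[k]) / initial[k] for k in initial}

# e.g. a swarm that has consumed 60% of its battery and 30% of its fuel
start = {"battery_Wh": 100.0, "fuel_L": 10.0}
now = {"battery_Wh": 40.0, "fuel_L": 7.0}
```

Keeping the resources separate, rather than summing them, avoids mixing incommensurable units; an aggregate indicator could then weight the per-resource fractions by mission-specific criticality.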

The indicators discussed above are distilled into the tree presented in Fig. 1, where measures of effectiveness and measures of efficiency form the two dimensions of Mission Performance.

Fig. 1: Examples of the indicators useful for the evaluation of Mission Performance in different mission conditions, and their relations shown as a tree-graph.

III-B Interaction Indicators

The interaction between the human and the swarm refers to the communication approach and the control methods that allow them to exchange intent information and actions. Factors that influence the interaction include the level of autonomy, input timing, and neglect benevolence. A major challenge in HSI is the escalating complexity that could result from increases in swarm size and task demands.

As the size of the swarm increases, the human has to monitor and control a larger group with a massive number of interactions. For example, the human ability to control the swarm in a supervisory control task would be severely limited by the limited cognitive capacity of human operators [34]. Unfolding indicators for the effectiveness and efficiency of the interaction is important, both as a detection tool for when more or less automation is needed and as a diagnostic tool to understand the success or otherwise of the team. The remainder of this section reviews interaction indicators.

III-B1 Basic Interaction Indicators

In HST, it is necessary to identify the set of key metrics to represent the performance of the interaction, as well as the ability to predict the effectiveness of the interaction [27]. These key metrics can serve as the interaction indicators for an adaptive framework of HST.

Fig. 2: HSI with a single human and a swarm under supervisory control.

Three fundamental metric classes used in HSI were introduced by Crandall et al. [27]: interaction efficiency, neglect efficiency, and attention allocation efficiency. Figure 2 shows the basic interaction loop for an HSI system. The efficiency of the interaction is commonly evaluated through interaction efficiency. The bottom loop describes the entities in the swarm: they sense the environment and produce appropriate actions corresponding to a certain level of autonomy. The efficiency of the entities performing the task without the attention of the human operator is assessed by neglect efficiency. Attention allocation efficiency is a metric class used to capture the efficiency with which the human operator allocates his/her attention among multiple entities. These three metric classes depend on one another and on the level of autonomy that the swarm component possesses.

Interaction Efficiency

The interaction efficiency metric class comprises different metrics discussed in the literature. The most popular one is the interaction time, which is the amount of time needed for a human to manage one single entity in the swarm [35]. When dealing with multiple entities in the environment, this metric can be extended to [36]:

IE_m = f(N(t))

where IE_m is the Interaction Efficiency for multiple entities and N(t) is the number of agents the human interacts with at time t. The term f(·) denotes a function describing the relationship between the swarm size and the time needed to manage the swarm. In the simplest case, this relationship might be linear in the number of agents in the swarm: f(N(t)) = c · N(t).
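Under the linear assumption, the time to manage the swarm simply scales with the number of agents currently under interaction. A toy sketch, in which the function name and the per-agent constant c are illustrative assumptions rather than values from [36]:

```python
def interaction_time_multi(n_agents, c=2.5):
    """Linear model f(N) = c * N: time (in seconds) needed to manage
    n_agents simultaneously, where c is an assumed per-agent
    interaction time."""
    return c * n_agents
```

A sub-linear f would capture scalable control methods (one command steers many agents), while a super-linear f would capture per-agent tasking whose coordination overhead grows with swarm size.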

Neglect Efficiency

The neglect efficiency can be assessed by the neglect tolerance, expressed as the time an agent can be ignored before its error exceeds a threshold [37]. The neglect time has a direct relationship to the preservation of acceptable performance. Improving the neglect time is one goal of a successful HRI system, whereby the agent has enough capability to deal with the task. Neglect tolerance is not exactly an indicator we place under the class of interaction indicators, but rather under the class of automation indicators. However, we still mention this metric here because it has an indirect impact on reducing the interaction effort.

Interaction Effort

This metric can provide information on how a particular interface design affects the overall effectiveness of the interaction. Interaction effort is not only physically defined by the interaction time, but also includes the cognitive effort [26] of subtask choice, the information requirements of the new situation after a choice, planning, and intent translation. When interacting with multiple agents, the interaction effort can be estimated indirectly via the neglect tolerance (NT) and the Fan-out (FO), the maximum number of agents the human is able to control effectively:

IT = NT / (FO − 1)
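This estimate rearranges the classical Fan-out relation FO = NT/IT + 1 from the HRI literature to recover the per-agent interaction time IT. A minimal sketch; the function and variable names are ours, not from the cited works:

```python
def interaction_effort(neglect_tolerance, fan_out):
    """Estimate per-agent interaction time IT from the neglect
    tolerance NT and the Fan-out FO, via the relation
    FO = NT / IT + 1, rearranged to IT = NT / (FO - 1)."""
    if fan_out <= 1:
        raise ValueError("Fan-out must exceed 1 for a finite estimate")
    return neglect_tolerance / (fan_out - 1)

# e.g. agents tolerating 30 s of neglect under a Fan-out of 4
# imply roughly 10 s of interaction effort per agent
```

Note that this indirect estimate captures only the temporal component; the cognitive components of the effort listed above would need separate measurement.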

Attention Allocation Efficiency

When a human operates a swarm of multiple entities, the human must neglect some agents to focus his/her attention on controlling one agent. Therefore, the effectiveness of the HSI also relies on another metric class called attention allocation efficiency. This metric class includes SA of the entire swarm and environment, the switching time, and the time the human takes to decide which agent to switch his/her attention to.

III-B2 Interventions

Intervention metrics are used to estimate the cognitive and physical efforts of the human when interacting with an autonomous robot in HRI. There are two kinds of interactions: planned and unplanned; unplanned interactions are understood as interventions [38].

Steinfeld et al. [29] referred to intervention metrics as non-planned interaction metrics, which can be used in robot navigation tasks. The intervention metrics include: the average number of interventions over a time period, the time required for interventions, and the effectiveness of interventions [39]. The efficiency of the interaction can also be evaluated through the ratio of intervention time to autonomy time [40]. For example, if the operator takes 1 minute to give a navigation instruction to the robots, and the robots then complete the navigation task in 10 minutes, the ratio is 1:10.
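The intervention-to-autonomy ratio from the example above can be computed directly; this helper is an illustrative sketch, not an implementation from [40]:

```python
from fractions import Fraction

def intervention_ratio(intervention_s, autonomy_s):
    """Reduced ratio of operator intervention time to autonomous
    operation time, both given in the same unit (e.g. seconds)."""
    r = Fraction(intervention_s, autonomy_s)  # reduces automatically
    return f"{r.numerator}:{r.denominator}"

# 1 minute of tasking followed by 10 minutes of autonomous navigation
print(intervention_ratio(60, 600))  # -> 1:10
```

A lower ratio indicates that each unit of operator effort buys more autonomous operation, which is one simple proxy for interaction efficiency.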

Again, in the case of HSI, this category of metrics has a strong connection to the level of autonomy that the swarm component possesses. When there is a shared control initiative and the possibility of negotiation between the human and the automation, it is essential to identify extra measures such as the percentage of requests for assistance created by the robot, the percentage of requests for assistance created by the human, and the number of insignificant interventions by the human operator [29].

III-B3 Communication

One practical problem in HSI concerns the factors impacting the communication channels between the human and the swarm, including latency and bandwidth, especially in the case of teleoperation or remote interaction with a large swarm. The problem of limited bandwidth was addressed in [41] in an attempt to design an effective interface for HSI, in which a centralized user interface is responsible for broadcasting human commands as well as integrating the information of the whole swarm to visualize it for the human operator. Kolling et al. [42] reported a series of HSI experiments with different bandwidths. Their conclusion supported the claim that higher bandwidth offers larger capacity for acquiring multiple robots' states in a time step. An increase in latency caused degradation of interactions [29, 43]. These problems affect the effectiveness and the efficiency of the HSI because they cause asynchrony of interaction among swarm members and delays in the bidirectional interactions. A solution to these problems can be a predictive display using swarm dynamics and bandwidth information.

The concepts introduced in this section are summarized in Figure 3, which presents the relationship between the top two interaction indicators, namely effectiveness and efficiency, with the sub-metrics and factors from human, automation and interface components.

Fig. 3: A synthesized tree of interaction indicators for adaptive HSI system.

III-C Mission Complexity

Mission complexity can be defined as the amount of mental workload a mission will potentially exert on a human. Human mental workload can hinder the success of a mission that relies on a collaboration between the human and the swarm. The persistent premise in the related literature is that the nature of the mission impacts human mental workload. Both objective and subjective factors have an impact on the perceived mission complexity and the human performance [44]. This section discusses only objective factors. We will divide the factors contributing to the complexity of a mission into three groups, depending on the component that generates them: the swarm, the interface, or the structure of the mission.

Swarm characteristics can have a direct impact on human cognitive load. The level of autonomy of the swarm has been shown to be an important factor in mission complexity. Ruff et al. [45] studied the workload associated with different levels of autonomy while controlling a group of four UAVs. They found that manual control resulted in the highest workload. Similarly, Riley et al. [46] found that manual control of a robot resulted in considerable workload in a search and rescue task. Mi et al. [47] also argued that manual operation dramatically increases the workload on the operator. However, semi-autonomous swarms also require considerable cognitive resources, as the human has to understand a plethora of information arriving from the swarm [48] in order to maintain a high level of situation awareness.

The size of the swarm can also increase workload requirements. Ruff et al. [45] found that increasing the number of UAVs increases the perceived workload, and that this increase is sharp in the case of manual control. However, by providing scalable control methods rather than controlling individual members, the workload can remain constant. Kolling et al. [49] proposed controlling the swarm in a foraging task using two control methods: selection and beacon. They showed that the number of human instructions did not change significantly across different swarm sizes. Pendleton [50] used three control methods in a foraging task: leader, predator, or stakeholder. They found that using these control methods does not result in a significant change in workload across different swarm sizes.

The interface between the human and the swarm can also contribute to mission difficulty. Pendleton [50] found that the operator's workload is affected by the control method, such that both control by a leader and control by a stakeholder result in a lower workload than control by a predator.

Some research studies the effect of information visualization on mission complexity. The amount of information presented can affect cognitive load: too little information increases uncertainty and leads humans to integrate information from other sources, such as their own assumptions, which can increase cognitive load [51]. Too much information, on the other hand, causes information overload and overwhelms the human with an amount of information that may exceed their cognitive capacity. Van der Land et al. [52] argued that low-level information negatively impacts operators' cognitive load, as they have to process it to build higher levels of SA [53].

The method for presenting mission related information is equally important. This can be explained using cognitive fit theory which indicates that the efficiency of problem solving is enhanced when there is a match between information presentation and the mission [54], in which case the human uses the information directly without the need to mentally convert it into a representation that fits the mission.

The choice of the display technology may have implications for SA and workload. Ruiz et al. [55] compared the use of different display technologies in multi-UAV operations. They found that a virtual reality (VR) based immersive screen results in the best operator SA and the lowest cognitive load. They also found that while VR glasses outperform a standard screen in terms of improving SA, this improvement comes at the cost of increased workload.

Mission complexity can also result from the structure of the mission and how it is executed. For instance, the existence of sub-tasks that are executed concurrently adds to the human workload. Liu et al. [56] pointed out that time constraints can result in task concurrency which leads to higher mission complexity by increasing the information load. Chen [57] argued that switching between tasks can degrade performance as there can be interference between task related information. This interference increases if tasks are similar with respect to stimuli, processing stages, or required responses [58]. Moreover, the number of alerts or interruptions can have an impact on the cognitive load. Humans may need long time (up to 7 seconds) to recover from interruptions and switch back to the interrupted task [57].

The concepts introduced in this section are composed in Fig 4, showing how they can be combined to build an estimation of the Mission Complexity.

Fig. 4: A portfolio of Mission Complexity metrics.

Iii-D Swarm Automation Level

Finding effective metrics for the analysis of a swarm, as stated in [42] is still an open problem. In order to define the degree of automation of a swarm we will start from the literature for Human-Robot Interaction, where automation is usually considered in terms of Neglect Tolerance [26, 27].

Neglect Tolerance is often regarded as a static metric for the quality of an HRI system. Crandall et al. [35] presented a concept of Neglect Tolerance which is slightly different from the one commonly found in literature, along with a method for evaluating it while the system is running. In order to preserve consistency both inside this paper and with the literature, we will report that idea with a slightly different terminology. For this purpose we introduce the concept of Human Dependence, a measure of how much a robot is in immediate need of human intervention. This concept corresponds to the composition of two different sub-metrics: Neglect Tolerance, which describes how the performance of the robot decreases while it is being neglected, and Interaction Efficiency, which describes how the performance of the robot increases when a human starts interacting with it after a period of neglect. Both of these values are affected by the level of autonomy of the robot (e.g. a highly autonomous robot will not suffer much from being neglected, but will also experience reduced gains from human interaction), the complexity of the current situation, and previous history of interaction/neglect. The performance of a robot can then be described by the following equation:


where denotes performance, denotes performance while the human is interacting with the robot, denotes performance while the human is neglecting the robot, denotes current level of autonomy, denotes the complexity of the situation, and denote the times since the start of the current interaction/neglect, and denote the time the robot had been neglected before the start of the current interaction.

In the Human Swarm Interaction literature, it is possible to find metrics and techniques to estimate the complexity the swarm is currently dealing with. Some useful metrics are described in [33] as follows:

  • Cohesion: Evaluating the connectivity level of swarm.

  • Diffusion: Assessing the convergence and separation of swarm members.

  • Center of gravity: Aiming to minimize the distance from the central point to other points in the spatial distribution of the swarm.

  • Directional Accuracy: Measuring the accuracy between the swarm’s movement and the desired travelling path.

  • Flock thickness: Measuring the swarm’s density.

  • Resource depletion: Qualifying the irreversible consumption of limited resources by swarm members.

  • Swarm health: Evaluating the current status of the swarm.

In particular, swarm health is an important aspect for determining the difficulties faced by the swarm, and it can be decomposed following the analysis in [32] into the following sub-components:

  • Number of stragglers: studied in [59] as the number of fish of a school distant at least 5 body lengths from any other fish. This can reflect difficulties encountered by the swarm like obstacles in the environment or conflicting commands.

  • Subgroups number and size: as explained in [60], the number and size of subgroups can vary due to obstacles or as a way to perform the task more efficiently. In a swarm, subgroups can be identified and measured using clustering algorithms.

  • Collision count: also from [59], this is the number of collisions between members of the swarm. If a collision avoidance system is in place, this could be the number of times this system had to intervene.

Another factor that should be accounted for in an HSI system is Neglect Benevolence, which is a consequence of the fact that a swarm needs some time to stabilize after receiving an instruction before being ready to receive further instructions. In [61] this concept is formally defined and analyzed, leading to a complex algorithm for finding the optimal intervention time that requires computing the convergence time for the swarm with input given at different times. In practice, it may be possible to estimate the current value of Neglect Benevolence empirically utilizing the time since the last human intervention and the factors that [62] reported to be influenced by Neglect Benevolence: Directional Accuracy and Cohesion.

The concepts introduced in this section are composed in Fig 5, showing how they can be combined to build an estimation of the Swarm Automation Level.

Fig. 5: Metrics for Swarm Automation Level.

Iii-E Human States

An effective adaptive system for HST should be able to appropriately adjust its behaviour to fit the current situation, based on information from humans, swarm, and context. We will not consider possible physical interaction between a human and a swarm to be within the scope of this paper. Instead, we will limit our scope to problem solving tasks, where the physical interaction (through keyboard, mouse, joysticks, or the alike) imposes minimum load on the human that could be neglected, but where the cognitive load plays more substantial role on human performance.

Integrating human cognitive states into adaptive systems is a critical step towards effective and efficient human-swarm teaming for two reasons. First, real-time assessment of human’s cognitive states, such as cognitive workload, fatigue and attention, enables the system to adjust itself to maintain the human states within a safe envelope. This is particularly useful in scenarios where human mistakes caused by overload/underload or fatigue could potentially result in hazardous consequences. Second, human cognitive states can be translated into meaningful guidance for adaptation (e.g., swarm level of autonomy). It becomes pertinent to the adaptive HST system to have a clear understanding of human cognitive states. While there is a myriad of studies that rely on subjective metrics for workload, they are unsuitable for real-time adaptation. For example, NASA-TLX ([63]) is a very well-known subjective method to estimate workload. However, this method can not be used in real-time applications despite its common use in off-line analysis of experiments.

In this section, we focus real-time assessment of human’s cognitive states using psycho-physiological markers. There exist several modalities for the objective measurement of human cognitive states. Electroencephalography (EEG) can be considered as one of the most common modality to estimate cognitive demands. In many studies frequency domain of the EEG signals has been used to estimate human’s mental workload [64]. Cognitive loads can affect power (e.g., power spectral density) of EEG spectrum in different frequency bands (e.g., theta). Event related potential (ERP) is also sensitive to the changes of mental workload. Heart rate (HR) is another form of physiological signal that can be used as a mental workload indicator. However, HR can be vulnerable to detect other forms of workload as well (e.g., physiological activity). Electromyography (EMG) has been applied to analyze the mental workload of the human operator monitoring traffic density [65]. Transcranial Doppler sonography and functional near infrared (fNIR) are two hemodynamic methods that have been considered in mental workload analysis. In the Doppler sonography technique, cerebral blood flow velocity (CBFV) is correlated with the mental activity required by tasks. Eye-related measures such as pupillometry, fixations, and blinks have been applied to determine the driver mental workload. For instance, it has been shown that there is a positive relationship between the workload and blink latency (for review refer to [66]).

EEG is emerging is the type of modality that will likely offer a more robust and practical estimates of workload in a real-time environment. The temporal resolution of the EEG output is appropriate for real-time applications. Zhao et al. [67] demonstrated the effect of mental fatigue on EEG signals recorded during a simulated driving task. They found that relative power (ratio between the power of each band and the power of total band) in theta band increased in the occipital, frontal and central regions as an effect of fatigue (Note that the relative power in each region was calculated by averaging relative powers of all electrodes in the corresponding region). In addition the relative power of alpha band increased in four regions namely parietal, temporal, central, and occipital. On the other hand, Beta rhythm saw a decrement in the temporal, frontal, and central regions. The work also investigated the effect of mental fatigue on event related potential (ERP). They reported a significant decrease in the amplitude of P300 in Fz and Cz electrodes.

EEG rhythms have been used to investigate the difference between strategies taken by expert and novice shooters [68]. It was demonstrated that the power of theta band in frontal midline is related to focused attention. In the study conducted by Matthews et al. [69] workload indicators were extracted from subjects performing threat and change detection task scenarios. These two scenarios were also combined to analyze the workload metrics in dual task scenarios. Based on their results, EEG Task Load Index (TLI, ratio of theta power at Fz to alpha power at Pz) metric can be used to distinguish between the dual-task and single-task performances. In another study, the entropy of discrete wavelet transform (DWT) coefficients extracted from EEG signals was used to estimate seven levels of the cognitive workload ([70]). However, this method of mental workload estimation could not be appropriate for real-time applications and as a result is not considered in our proposed framework.

Moreover, EEG and ERP indicators were used to switch between two modes of performing tasks, manual and automated, as well as to explore effectiveness of adaptive autonomy in the task performance ([71]). EEG engagement index () was used to estimate the mental workload of participants performing multi attribute battery tracking task, and in turn to determine the task mode. Participants involved in the adaptive autonomy group showed better performance and lower level of subjective workload scores compared to the control groups, which indicates the efficacy of this EEG index for the estimation of mental workload. In the same study P300 was also analyzed when participants were asked to perform a second task (auditory oddball task) while performing the first task. It has been shown that the amplitude of P300 is sensitive to the mental resources available to perform a task ([72]). Therefore, P300 was used as another indicator of effectiveness of the adaptive autonomy. The adaptive autonomy group had a better performance in auditory oddball task compared to control groups and showed a larger P300 amplitude. This results prove that performing task in adaptive autonomy mode can free more attentional resources for performing a secondary task. In our proposed framework these two mental workload metrics, namely and P300, can be used to change the level of autonomy in multitask scenarios. The former index, along with other related indicators, will be used to adapt the interaction related to the main task (ongoing task), while the latter one can be used to adapt the interaction related to secondary tasks (interrupting tasks).

Figure 6 summarizes the relevant brain regions and sensor locations on the scalp and significant frequency bands and peaks to measure three human cognitive state indicators, namely focused attention, workload and fatigue from the existing literature.

Fig. 6: The human cognitive state indicators.

Iv Synthesis: The MICAH Framework

In the previous sections we described five different types of indicators, showed how they can be computed with methods already known in the literature, and highlighted why they should be relevant for an adaptive HST system. We believe that an effective adaptive HST system should contain five components, each addressing one of the five types of indicators; we named this structure with the acronym MICAH, and a visual summary of it is presented in Figure 7.

Fig. 7: MICAH: categories of indicators used for adaptation in HST

The five types of indicators can be summarized in the following way with the letter contributing to the abbreviation MICAH underlined:

  • Mission Performance: composed of effectiveness and efficiency, it is the primary objective of the system and should never be disregarded;

  • Interaction: describes how productive the interaction between the human and the swarm is; monitoring this indicator gives insight into the current interaction mode;

  • Mission Complexity: studies how the task, interface and swarm contribute to the workload for the human; it is an important factor in determining the performance of the human and the complexity of the mission at a particular point of time to trigger appropriate level of automation;

  • Automation level: analyzes the performance of the swarm and its need for human intervention, which are fundamental inputs to correctly set the level of autonomy;

  • Human cognitive states: assesses the mental conditions of the human, determining for example if they are overloaded or underloaded and allowing the system to adapt accordingly

The main purpose of MICAH is being a synthesis of indicators needed to design adaptive Human-Swarm Teaming systems. A practical system does not need to use the exact indicators described in this paper, but the five components should all be considered by the adaptation manager.

An example of adaptive HST could be a setup where a swarm needs to patrol an area containing a number of checkpoints, and to succeed each checkpoint needs to be visited repeatedly with a delay no higher than a fixed threshold. The human interacts with the swarm by teleoperating one or more robots, and the autonomy of the swarm is determined by the number of robots under direct control of the human. In this case, the adaptive manager could consider the following indicators:

  • Mission Performance: Number of checkpoints reached in the time limit, average time to revisit a checkpoint

  • Interaction: Performance of robots guided by the human compared to the autonomous ones

  • Mission Complexity: Number of subswarms and current level of autonomy

  • Automation: Alignment and cohesion of the swarm

  • Human cognitive states: Workload perceived by the human

By assessing these indicators all the components of MICAH are considered, and the system has all the information needed to correctly deliberate on the best adaptation.

The closed-loop adaptive systems this paper targets are the ones that can appropriately modify their behaviors to fit the needs of the current context, to meet the changing needs of the human operators, while maintaining good performance, without explicit instructions from the human operators. The purpose of designing such adaptive systems for human-swarm interaction is to combine the capabilities of human and swarms to maximize their potentials, in order to achieve effective and efficient teamwork.

Fig. 8: Diagram of the closed-loop adaptive system for human-swarm teaming.

To illustrate the framework of the adaptive system for HST, Figure 8 depicts the system flow diagram, taking a pattern of four steps: context assessment, MICAH indicators, adaptive control and adaptation. With the four-step cycle, the framework is able to sense the context information, including the human’s cognitive states, the swarm states, the mission, system and environment states; integrate and translate these information into representative indicators, the MICAH indicators; update adaptive control dynamically based on the indicators; then finally adopt appropriate adaptations in five aspects, namely swarm setting, control modes, autonomy level, interaction modes and visualization.

The dynamic updates of these aspects enables the system to adjust itself to meet the human’s cognitive requirements, while ensuring mission success. For example, the psychophysiological sensors in the context assessment module collects the human operator’s cognitive information, which is translated into a series of human states indicators, including workload indicator, fatigue indicator and focused attention indicator. Integrated with information of the current task and system states, these human states indicators (workload, fatigue and attention) are used by the adaptive controller to decide adaptations. For example, high workload and fatigue might compromise the human operator’s performance so the system raises the autonomy level of swarm to let the human operator only focus on the most critical task. While, lack of attention might cause serious consequence in the case of emergencies therefore the system lowers autonomy level and updates interaction modes and visualization to counterbalance the human states within safe range.

Conventional adaptive systems trigger adaptations in specific situations or for particular tasks, return to regular operation and disengage the activation once the situation or task is finished. However, as the situation and context evolve, the once adequate adaptation strategy applied by conventional adaptive systems might become inadequate [73]. To address this problem, the MICAH framework provides dynamic adaptations based on indicators from extensive aspects, therefore is capable of adjusting itself to the situation, the human operator and the context. The MICAH framework acts as an intelligent assistant that unobtrusively observes and comprehends the human operator’s actions, cognitive states and the evolving context and situations, and provides appropriate adjustments automatically. The adaptations are flexible and dynamic and can counterbalance the human states and maximize the overall performance.

V Conclusion

This paper leverages literature from human machine interaction, swarm intelligence and adaptive systems, focusing on identifying significant information for designing closed-loop adaptive systems for effective and efficient bi-direction communication in human-swarm teaming. It defines the core concepts of HST and proposes an integrated model, the MICAH framework, for bringing together the multitude of indicators required to design closed-loop adaptive systems. We began with a discussion defining human-swarm teaming and its properties, then identified some major challenges in HST with particular attention to autonomy and closed-loop adaptive systems. Five groups of indicators, summarized as MICAH, were proposed for adaptive control to monitor and balance human mental states while maximizing the human-swarm teaming potential to complete the mission successfully.

The MICAH framework extends existing concepts of adaptive systems, which mainly focused on task allocation, to fit swarm systems in order to achieve effective and efficient human-swarm teaming. The main features of this line of research are summarized as follows.

  1. We take the evolving context, human’s changing cognitive states and overall performance into consideration for adaptive control, with a particular focus on human-swarm teaming. Mission performance factors, interaction factors, complexity factors, swarm-related factors, and human-related factors are for the first time integrated together for adaptation decisions.

  2. We identified five groups of indicators, summarized as MICAH, for HST. MICAH is capable of sensing the context information, including human’s cognitive states, swarm states, mission states, and system states; integrating and translating this information into representative indicators for adaptation in swarm setting, control modes, autonomy level, interaction modes, and visualization.

  3. From the perspective of autonomy, MICAH supports dynamic and flexible adaptations that can be used in both adaptive autonomy systems and mixed-initiative systems. From the perspective of adaptive control, MICAH supports seamless supervision or monitoring of the human states and system that it can adapt itself to fit the evolving context, human’s changing cognitive states and overall performance. This flexible adaptive control is an extension to current once adequate adaptations.


  • [1] J. L. Dyer, “Team research and team training: A state-of-the-art review,” Human factors review, vol. 26, pp. 285–323, 1984.
  • [2] B. B. Morgan Jr, E. Salas, and A. S. Glickman, “An analysis of team evolution and maturation,” The Journal of General Psychology, vol. 120, no. 3, pp. 277–291, 1993.
  • [3] J. R. Katzenbach and D. K. Smith, The discipline of teams.   Harvard Business Press, 1993.
  • [4] A. C. Edmondson, Teaming: How organizations learn, innovate, and compete in the knowledge economy.   John Wiley & Sons, 2012.
  • [5] N. J. Castellan, Individual and group decision making: current issues.   Psychology Press, 2013.
  • [6] J. Y. Chen and M. J. Barnes, “Human–agent teaming for multirobot control: A review of human factors issues,” IEEE Transactions on Human-Machine Systems, vol. 44, no. 1, pp. 13–29, 2014.
  • [7] E. Salas, N. J. Cooke, and M. A. Rosen, “On teams, teamwork, and team performance: Discoveries and developments,” Human factors, vol. 50, no. 3, pp. 540–547, 2008.
  • [8] K. Sycara and M. Lewis, “Integrating intelligent agents into human teams.” 2004.
  • [9] T. J. Wiltshire, D. Barber, and S. M. Fiore, “Towards modeling social-cognitive mechanisms in robots to facilitate human-robot teaming,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 57, no. 1.   SAGE Publications Sage CA: Los Angeles, CA, 2013, pp. 1278–1282.
  • [10] K. Sycara and G. Sukthankar, “Literature review of teamwork models,” Robotics Institute, Carnegie Mellon University, vol. 31, 2006.
  • [11] J. C. Joe, J. O’Hara, H. D. Medema, and J. H. Oxstrand, “Identifying requirements for effective human-automation teamwork,” Idaho National Laboratory (INL), Tech. Rep., 2014.
  • [12] M. Endsley and W. Jones, “Situation awareness, information warfare and information dominance,” Endsley Consulting, Belmont, MA, Tech. Rep, pp. 97–01, 1997.
  • [13] B. Hilburn, “19 dynamic decision aiding: the impact of adaptive automation on mental workload,” Engineering Psychology and Cognitive Ergonomics: Volume 1: Transportation Systems, 2017.
  • [14] R. Parasuraman, R. Molloy, and I. L. Singh, “Performance consequences of automation-induced’complacency’,” The International Journal of Aviation Psychology, vol. 3, no. 1, pp. 1–23, 1993.
  • [15] J. Y. Chen and M. J. Barnes, “Human–agent teaming for multirobot control: A review of human factors issues,” IEEE Transactions on Human-Machine Systems, vol. 44, no. 1, pp. 13–29, 2014.
  • [16] C. A. Miller and R. Parasuraman, “Designing for flexible interaction between humans and automation: Delegation interfaces for supervisory control,” Human factors, vol. 49, no. 1, pp. 57–75, 2007.
  • [17] C. Miller, H. Funk, P. Wu, R. Goldman, J. Meisner, and M. Chapman, “The playbook approach to adaptive automation,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 49, no. 1.   SAGE Publications Sage CA: Los Angeles, CA, 2005, pp. 15–19.
  • [18] H. A. Abbass, J. Tang, R. Amin, M. Ellejmi, and S. Kirby, “Augmented cognition using real-time eeg-based adaptive strategies for air traffic control,” in Proceedings of the human factors and ergonomics society annual meeting, vol. 58, no. 1.   SAGE Publications Sage CA: Los Angeles, CA, 2014, pp. 230–234.
  • [19] C. F. Rusnock and C. D. Geiger, “The impact of adaptive automation invoking thresholds on cognitive workload and situational awareness,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 57, no. 1.   SAGE Publications Sage CA: Los Angeles, CA, 2013, pp. 119–123.
  • [20] K. M. Feigh, M. C. Dorneich, and C. C. Hayes, “Toward a characterization of adaptive systems: A framework for researchers and system designers,” Human Factors, vol. 54, no. 6, pp. 1008–1024, 2012.
  • [21] H. A. Abbass, E. Petraki, K. Merrick, J. Harvey, and M. Barlow, “Trusted autonomy and cognitive cyber symbiosis: Open challenges,” Cognitive computation, vol. 8, no. 3, pp. 385–408, 2016.
  • [22] A. Jacoff, E. Messina, and J. Evans, “A reference test course for autonomous mobile robots,” in Proceedings of the SPIE-AeroSense Conference, 2001.
  • [23] ——, “A standard test course for urban search and rescue robots,” NIST special publication SP, pp. 253–259, 2001.
  • [24] A. Howard, M. J. Mataric, and G. S. Sukhatme, “Mobile sensor network deployment using potential fields: A distributed, scalable solution to the area coverage problem,” Distributed autonomous robotic systems, vol. 5, pp. 299–308, 2002.
  • [25] A. Ganguli, J. Cortés, and F. Bullo, “Visibility-based multi-agent deployment in orthogonal environments,” in American Control Conference, 2007. ACC’07.   IEEE, 2007, pp. 3426–3431.
  • [26] D. R. Olsen and M. A. Goodrich, “Metrics for evaluating human-robot interactions,” in Proceedings of PERMIS, vol. 2003, 2003, p. 4.
  • [27] J. W. Crandall and M. L. Cummings, “Identifying predictive metrics for supervisory control of multiple robots,” IEEE Transactions on Robotics, vol. 23, no. 5, pp. 942–951, 2007.
  • [28] J. Scholtz, B. Antonishek, and J. Young, “Evaluation of a human-robot interface: Development of a situational awareness methodology,” in System Sciences, 2004. Proceedings of the 37th Annual Hawaii International Conference on.   IEEE, 2004, pp. 9–pp.
  • [29] A. Steinfeld, T. Fong, D. Kaber, M. Lewis, J. Scholtz, A. Schultz, and M. Goodrich, “Common metrics for human-robot interaction,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction.   ACM, 2006, pp. 33–40.
  • [30] S. T. Hayes and J. A. Adams, “Human-swarm interaction: Sources of uncertainty,” in Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction.   ACM, 2014, pp. 170–171.
  • [31] S. Nunnally, P. Walker, A. Kolling, N. Chakraborty, M. Lewis, K. Sycara, and M. Goodrich, “Human influence of robotic swarms with bandwidth and localization issues,” in Systems, Man, and Cybernetics (SMC), 2012 IEEE International Conference on.   IEEE, 2012, pp. 333–338.
  • [32] C. E. Harriott, A. E. Seiffert, S. T. Hayes, and J. A. Adams, “Biologically-inspired human-swarm interaction metrics,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 58, no. 1.   SAGE Publications Sage CA: Los Angeles, CA, 2014, pp. 1471–1475.
  • [33]

    M. D. Manning, C. E. Harriott, S. T. Hayes, J. A. Adams, and A. E. Seiffert, “Heuristic evaluation of swarm metrics’ effectiveness,” in

    Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction extended abstracts.   ACM, 2015, pp. 17–18.
  • [34] D. R. Olsen Jr and S. B. Wood, “Fan-out: measuring human control of multiple robots,” in Proceedings of the SIGCHI conference on Human factors in computing systems.   ACM, 2004, pp. 231–238.
  • [35] J. W. Crandall, M. A. Goodrich, D. R. Olsen, and C. W. Nielsen, “Validating human-robot interaction schemes in multitasking environments,” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 35, no. 4, pp. 438–449, 2005.
  • [36] S. C. Kerman, “Methods and metrics for human interaction with bio-inspired robot swarms,” 2013.
  • [37] M. A. Goodrich and D. R. Olsen, “Seven principles of efficient human robot interaction,” in Systems, Man and Cybernetics, 2003. IEEE International Conference on, vol. 4.   IEEE, 2003, pp. 3942–3948.
  • [38] H.-M. Huang, E. Messina, and J. Albus, “Toward a generic model for autonomy levels for unmanned systems (alfus),” NATIONAL INST OF STANDARDS AND TECHNOLOGY GAITHERSBURG MD, Tech. Rep., 2003.
  • [39] J. Scholtz, B. Antonishek, and J. Young, “Evaluation of operator interventions in autonomous off-road driving,” NATIONAL INST OF STANDARDS AND TECHNOLOGY GAITHERSBURG MD, Tech. Rep., 2003.
  • [40] H. A. Yanco, J. L. Drury, and J. Scholtz, “Beyond usability evaluation: Analysis of human-robot interaction at a major robotics competition,” Human–Computer Interaction, vol. 19, no. 1-2, pp. 117–149, 2004.
  • [41] J. McLurkin, J. Smith, J. Frankel, D. Sotkowitz, D. Blau, and B. Schmidt, “Speaking swarmish: Human-robot interface design for large swarms of autonomous mobile robots.” in AAAI spring symposium: to boldly go where no human-robot team has gone before, 2006, pp. 72–75.
  • [42] A. Kolling, P. Walker, N. Chakraborty, K. Sycara, and M. Lewis, “Human interaction with robot swarms: A survey,” IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 9–26, 2016.
  • [43] P. Walker, S. Nunnally, M. Lewis, A. Kolling, N. Chakraborty, and K. Sycara, “Neglect benevolence in human control of swarms in the presence of latency,” in Systems, Man, and Cybernetics (SMC), 2012 IEEE International Conference on.   IEEE, 2012, pp. 3009–3014.
  • [44] D. C. Maynard and M. D. Hakel, “Effects of objective and subjective task complexity on performance,” Human Performance, vol. 10, no. 4, pp. 303–330, 1997.
  • [45] H. A. Ruff, S. Narayanan, and M. H. Draper, “Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles,” Presence: Teleoperators and virtual environments, vol. 11, no. 4, pp. 335–351, 2002.
  • [46] J. M. Riley and M. R. Endsley, “The hunt for situation awareness: Human-robot interaction in search and rescue,” in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 48, no. 3.   SAGE Publications Sage CA: Los Angeles, CA, 2004, pp. 693–697.
  • [47] Z.-Q. Mi and Y. Yang, “Human-robot interaction in uvs swarming: a survey,” Int. J. Comput. Sci. Issues, vol. 10, no. 2, pp. 273–280, 2013.
  • [48] M. L. Cummings, “Human supervisory control of swarming networks,” in 2nd annual swarming: autonomous intelligent networked systems conference, 2004, pp. 1–9.
  • [49] A. Kolling, S. Nunnally, and M. Lewis, “Towards human control of robot swarms,” in Proceedings of the seventh annual ACM/IEEE international conference on human-robot interaction.   ACM, 2012, pp. 89–96.
  • [50] B. Pendleton and M. Goodrich, “Scalable human interaction with robotic swarms,” in proceedings of the AIAA Infotech@ Aerospace Conference, 2013.
  • [51] S. Van Der Land, A. P. Schouten, F. Feldberg, B. Van Den Hooff, and M. Huysman, “Lost in space? cognitive fit and cognitive load in 3d virtual environments,” Computers in Human Behavior, vol. 29, no. 3, pp. 1054–1064, 2013.
  • [52] J. M. Riley, L. D. Strater, S. L. Chappell, E. S. Connors, and M. R. Endsley, “Situation awareness in human-robot interaction: Challenges and user interface requirements,” Human-Robot Interactions in Future Military Operations, pp. 171–192, 2010.
  • [53] J. M. Riley and L. D. Strater, “Effects of robot control mode on situation awareness and performance in a navigation task,” in Proceedings of the Human Factors and Ergonomics Society annual meeting, vol. 50, no. 3.   SAGE Publications Sage CA: Los Angeles, CA, 2006, pp. 540–544.
  • [54] I. Vessey, “Cognitive fit: A theory-based analysis of the graphs versus tables literature,” Decision Sciences, vol. 22, no. 2, pp. 219–240, 1991.
  • [55] J. Ruiz, A. Viguria, J. Martinez-de Dios, and A. Ollero, “Immersive displays for building spatial knowledge in multi-uav operations,” in Unmanned Aircraft Systems (ICUAS), 2015 International Conference on.   IEEE, 2015, pp. 1043–1048.
  • [56] P. Liu and Z. Li, “Task complexity: A review and conceptualization framework,” International Journal of Industrial Ergonomics, vol. 42, no. 6, pp. 553–568, 2012.
  • [57] J. Y. Chen and M. J. Barnes, “Human–agent teaming for multirobot control: A review of human factors issues,” IEEE Transactions on Human-Machine Systems, vol. 44, no. 1, pp. 13–29, 2014.
  • [58] C. E. Harriott, “Workload and task performance in human-robot peer-based teams,” Ph.D. dissertation, Vanderbilt University, 2015.
  • [59] J. K. Parrish, S. V. Viscido, and D. Grunbaum, “Self-organized fish schools: an examination of emergent properties,” The biological bulletin, vol. 202, no. 3, pp. 296–305, 2002.
  • [60] I. Navarro and F. Matía, “A proposal of a set of metrics for collective movement of robots,” in Proc. Workshop on Good Experimental Methodology in Robotics, 2009.
  • [61] S. Nagavalli, L. Luo, N. Chakraborty, and K. Sycara, “Neglect benevolence in human control of robotic swarms,” in Robotics and Automation (ICRA), 2014 IEEE International Conference on.   IEEE, 2014, pp. 6047–6053.
  • [62] P. M. Walker, A. Kolling, S. Nunnally, N. Chakraborty, M. Lewis, and K. P. Sycara, “Investigating neglect benevolence and communication latency during human-swarm interaction.” in AAAI Fall Symposium: Human Control of Bioinspired Swarms, 2012.
  • [63] S. G. Hart and L. E. Staveland, “Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research,” Advances in Psychology, vol. 52, pp. 139–183, 1988.
  • [64] G. Borghini, L. Astolfi, G. Vecchiato, D. Mattia, and F. Babiloni, “Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness,” Neuroscience and Biobehavioral Reviews, vol. 44, pp. 58–75, 2014.
  • [65] M. Fallahi, M. Motamedzade, R. Heidarimoghadam, A. R. Soltanian, and S. Miyake, “Effects of mental workload on physiological and subjective responses during traffic density monitoring: a field study,” Applied ergonomics, vol. 52, pp. 95–103, 2016.
  • [66] G. Marquart, C. Cabrall, and J. de Winter, “Review of eye-related measures of drivers’ mental workload,” Procedia Manufacturing, vol. 3, pp. 2854–2861, 2015.
  • [67] C. Zhao, M. Zhao, J. Liu, and C. Zheng, “Electroencephalogram and electrocardiograph assessment of mental fatigue in a driving simulator,” Accident Analysis & Prevention, vol. 45, pp. 83–90, 2012.
  • [68] M. Doppelmayr, T. Finkenzeller, and P. Sauseng, “Frontal midline theta in the pre-shot phase of rifle shooting: differences between experts and novices,” Neuropsychologia, vol. 46, no. 5, pp. 1463–1467, 2008.
  • [69] G. Matthews, L. E. Reinerman-Jones, D. J. Barber, and J. Abich IV, “The psychometrics of mental workload: Multiple measures are sensitive but divergent,” Human factors, vol. 57, no. 1, pp. 125–143, 2015.
  • [70] P. Zarjam, J. Epps, F. Chen, and N. H. Lovell, “Estimating cognitive workload using wavelet entropy-based features during an arithmetic task,” Computers in biology and medicine, vol. 43, no. 12, pp. 2186–2195, 2013.
  • [71] L. J. Prinzel III, F. G. Freeman, M. W. Scerbo, P. J. Mikulka, and A. T. Pope, “Effects of a psychophysiological system for adaptive automation on performance, workload, and the event-related potential p300 component,” Human factors, vol. 45, no. 4, pp. 601–614, 2003.
  • [72] J. B. Israel, G. L. Chesney, C. D. Wickens, and E. Donchin, “P300 and tracking difficulty: Evidence for multiple resources in dual-task performance,” Psychophysiology, vol. 17, no. 3, pp. 259–273, 1980.
  • [73] S. Fuchs and J. Schwarz, “Towards a dynamic selection and configuration of adaptation strategies in augmented cognition,” in International Conference on Augmented Cognition.   Springer, 2017, pp. 101–115.