Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

08/19/2018 ∙ by Aya Hussein, et al. ∙ UNSW Canberra

Human-swarm interaction (HSI) involves a number of human factors impacting human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet, the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims at trading off these effects by changing the level of autonomy within the interaction when required, with mixed-initiative systems combining human preferences and automation recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions on how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the cognitive states of a human are. We explore open challenges that hamper the process of developing effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they provide HSI designers with an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.


I Introduction

Human Swarm Interaction (HSI) is a growing research field [1][2] that deals with the interface between a human and a swarm of robots within a mission. This interface is not limited to the graphical user interface; it encompasses the overall interaction interface, including the protocols for interaction. A swarm of robots consists of a group of individual robots, typically with limited processing capabilities [3], whose local interactions can result in a complex behaviour [4], e.g. flocking. The ability to generate complex behaviour from local interactions results in two advantageous properties: scalability and flexibility [5]. In addition, the distributed nature of the swarm makes it robust to failures. Thus, swarm robotics has the potential to be used in different applications including material transportation [6], agriculture [7], urban search and rescue [8], monitoring and surveillance [9], and space exploration [10].

Nevertheless, the human element is necessary in swarm operations so as to make sure that swarm behaviours align with the objectives of the mission [4]. Moreover, humans are more capable of working in complex and dynamic environments. Thus, the interest in HSI has been growing in recent years [1].

As humans become part of the interaction, it becomes crucial to consider human factors that affect human performance throughout the mission. Therefore, designing an interaction paradigm that takes human factors into consideration improves not only human performance but also the overall mission performance [11]. A realisation of such an interaction scheme exists in flexible autonomy systems, in which task distribution and interface customisation can be contingent on the state of the mission, including the human. Although systems with flexible autonomy have been shown to be superior to rigid systems with fixed autonomy, many aspects of flexible autonomy are still poorly understood. The limited understanding of these aspects can be partially attributed to the difficulty of setting up human experiments for evaluating potential design options. In this work, we discuss how modelling systems with flexible autonomy can be beneficial for investigating the impacts of different automation strategies on human states and mission performance.

The rest of the paper is organised as follows: we start by describing possible roles a human can take within a mission, in Section 2. Then, in Section 3, we discuss human issues that can impact and be impacted by the state of the mission. Next, in Section 4, we discuss two properties of the swarm that are relevant to flexible autonomy. Next, we investigate how flexible autonomy can cater for effective interaction by changing the level of autonomy within a mission, in Section 5. In Section 6, we elaborate on a few open research problems confronting the formation of a concrete understanding of aspects of flexible autonomy as well as the potential benefits of using system modelling within the context of flexible autonomy in HSI. Finally, conclusions are given in Section 7.

II Human Roles in HSI

Many studies have shown that the human element can be beneficial or even essential for the success of swarm operations [2] [9]. While humans have superior cognitive abilities that enable them to deal with dynamic and unstructured environments, robots can perform specific and repetitive tasks precisely and quickly. Combining the capabilities of humans and robots can improve the success rate of complex missions.

However, humans can be assigned different roles in each mission. According to Scholtz [12], there are five roles that humans can take in human-robot interaction (HRI): supervisor, operator, teammate, bystander, and mechanic. As a supervisor, the human is mainly responsible for evaluating the overall situation against mission objectives. Thus, a supervisor would be expected to get involved in mission-level decisions or modify high-level plans. At the other extreme, a human operator is required to control and monitor low-level tasks at the action level, as in the case of motor control. In such a case, robots are considered extensions of humans’ physical capabilities that can be remotely deployed in harsh or risky environments. Human teammates work with robots towards achieving mission goals. During the interaction, they can provide the robots with high-level commands without modifying the overall goal. Obviously, there is no strict boundary between the different human roles. For example, a human can work with the robot as a teammate, but may switch to a more supervisory role to modify a mission-level plan or objective.

Finally, and less related to our scope, come the roles of bystander and mechanic. A bystander does not interact explicitly with the robot, but their presence may result in changes to low-level actions, for example to avoid a collision. A mechanic interacts physically with the robot to modify abnormal behaviours or adjust its physical components.

Previous research in HSI showed that humans perform better when they act as supervisors than as operators [13]. For example, Kolling et al. [14] found that in simulated foraging missions, even naïve robot swarms acting in a fully autonomous mode outperform human operators controlling the swarm’s low-level actions. Nevertheless, they found that humans are better at adapting to unstructured environments. Given that real environments are usually complex and dynamic, the merits of involving a human in the loop to guide swarm operations are clear. Thus, assigning more supervisory roles to the human could be a prudent choice to improve mission success.

III Human Factors in HSI

Discussing the different roles of the human in HSI provides primitive guidelines on how humans can be designated within a mission, but leaves the details of task assignment unspecified. While the details of task distribution and interface design are mission specific, human factors within the interaction should be considered when designing a certain interaction scheme. The human factors community has devoted considerable effort to studying human situational awareness, workload, and level of trust towards the automation, as these have been identified as significant factors for human performance within a mission. The details of these factors are discussed in the following subsections.

III-A Situational Awareness

Human interaction with a robot swarm can be proximate, in which the human shares the same physical environment as the swarm, e.g. [3], or remote, in which the human is in a different environment and interacts with the swarm typically through a computer terminal [9]. In both situations, the human has to maintain contextual and situational awareness and an understanding of the current state of the swarm, the relevant aspects of the surrounding environment, and the progress of the mission.

According to Endsley [15], situational awareness (SA) is “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future”. Many studies have shown that SA has a significant impact on human decision making and ultimately on task performance [15] [11]. For instance, Riley et al. [16] found a significant positive correlation between human monitoring performance and their level of SA. Poor SA was found to be the reason behind many problems in robot assisted tasks [17][8] [18] as it limits the human ability to detect and intervene to solve emergent problems. These studies raised the profile of SA in semi-autonomous systems.

A widely used model for SA is Endsley’s three-level model [15] in which the first level describes human knowledge of the state of relevant elements in the environment, the second level reflects the degree to which he/she integrates this data to understand the overall current situation, and the third level describes his/her ability to make relevant predictions in the near future.

A number of techniques have been used to measure human SA. The Situation Awareness Global Assessment Technique (SAGAT) [15] is a widely used knowledge-based method in which the human takes part in a simulated mission. The simulation is frozen at random points in time so that the operator can answer questions measuring the different levels of SA. The provided answers can then be evaluated against the correct answers. Despite their ability to measure SA directly, knowledge-based techniques cannot be applied in real missions because they interrupt mission execution and because the correct answers are not known in advance.
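
As an illustration of this freeze-probe procedure, the sketch below scores answers against simulation ground truth per SA level. The probe questions and the `sim`/`operator` interfaces it assumes are hypothetical, not part of SAGAT itself; only the freeze, query, and score-against-ground-truth loop follows the procedure described above.

```python
# Minimal sketch of a SAGAT-style freeze-probe session.
# The probe questions and the `sim`/`operator` interfaces are hypothetical;
# only the freeze -> query -> score-against-ground-truth loop follows [15].
import random

# Each probe targets one of Endsley's three SA levels:
# 1 = perception, 2 = comprehension, 3 = projection.
PROBES = [
    (1, "How many robots are currently inside the search area?"),
    (2, "Which sub-region has the lowest coverage so far?"),
    (3, "Will the swarm complete coverage before the battery limit?"),
]

def run_sagat(sim, operator, n_freezes=3):
    """Freeze the simulation at random times and score the operator's answers."""
    scores = {1: [], 2: [], 3: []}
    for _ in range(n_freezes):
        sim.freeze_at(random.uniform(0.0, sim.duration))   # displays are blanked
        for level, question in PROBES:
            answer = operator.ask(question)                # operator's recollection
            truth = sim.ground_truth(question)             # known only in simulation
            scores[level].append(1.0 if answer == truth else 0.0)
        sim.resume()
    # Mean score per SA level, in [0, 1]
    return {level: sum(vals) / len(vals) for level, vals in scores.items()}
```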

A useful computational model for SA should consider the main factors that form and affect it. A taxonomy of factors affecting SA in HRI is provided in [18]. These factors can belong to the task (e.g. its level of autonomy), the system (e.g. communication characteristics), the environment (e.g. complexity), personal skills (e.g. experience and cognitive abilities), or the interface (e.g. level of information fusion). Furthermore, the dynamic nature of SA should also be captured, for example the relationship between SA and the level of workload on the human [17].

III-B Workload

Another important factor that influences human performance within a mission is the level of workload imposed on the human. Firstly, the human ability to develop and maintain the desired level of SA was found to be affected by the level of workload. Endsley et al. [17] argued that in scenarios characterised by very high levels of workload that exceed human cognitive resources, humans may not be able to attend to all the available information, which can result in a significant drop in the level of SA.

Moreover, the effect of workload on human performance has been investigated in many studies [19] [20] [9] [21]. Findings suggest that both very low and very high levels of workload can cause human performance degradation. Very low levels of workload can result in arousal decrements that cause the "out-of-the-loop" problem [11]. When humans are "out of the loop", they become more like observers than supervisors, and their ability to intervene to correct system failures decreases substantially [22]. On the other hand, as the workload exceeds human cognitive capacity, human performance is expected to decline [23]. Thus, it is important that the workload is maintained within an acceptable range to increase the effectiveness of the human in the operation.

Workload is a well understood human factor that has been receiving considerable attention from researchers in different fields. Early studies considered human workload as consisting of objective and subjective components [24]. While the objective component consists of factors stemming from the task structure, the interface, or the environment, the subjective component is made up of factors belonging to the human performer of the task, including their experience, cognitive abilities, skills, and self-confidence.

Studies on the objective factors of workload in HRI showed that workload is affected by the level of autonomy of the robot [25] [26]: at low levels, the human becomes responsible for planning and executing low-level actions, which results in considerable workload [25]. As the level of autonomy increases, human functions are not eliminated, but the nature of human tasks becomes more supervisory. For these supervisory roles, the human becomes responsible for monitoring performance and making mission-level decisions that may require him/her to attend to and integrate large amounts of information [27].

In addition, the number of interruptions (e.g. alerts or threats) and the frequency of task switching can increase the workload imposed on the human. Interruptions can hinder the smooth execution of the task at hand [22] and increase the probability of errors [28]. It has been shown that it can take humans a long time to recover from interruptions and restore main-task related SA [20]. Frequent task switching has also been associated with significantly slower responses and higher error rates [20]. Thus, multitasking can incur substantial increases in workload [22], particularly when the similarity between tasks increases in terms of their presentation or their demands on similar cognitive resources [29].

A number of computational models have been proposed for describing workload in HRI and related fields. In [30], Donmez et al. proposed a model for a human supervising multiple heterogeneous unmanned vehicles (UVs) using discrete event simulation (DES). In their model, the human is represented as a serial server that processes a queue of tasks generated by the UVs or by the external environment. A model for server characteristics was used in which the human attention allocation strategy and the effect of the level of workload on attention efficiency determine server performance in terms of service time. The level of autonomy of the UVs is represented as the arrival rate of UV tasks. Rusnock et al. [31] also used DES to model cognitive workload in HRI in military applications. They found a high correlation between the predicted levels of workload and those measured by the NASA Task Load Index (NASA-TLX).
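
A minimal sketch of this queueing abstraction is given below. It treats the operator as a single serial server and uses assumed arrival rates, service times, and a crude backlog-based workload penalty; it is not the calibrated model from [30] or [31], only an illustration of the idea that task arrival rate can stand in for the level of autonomy.

```python
# Discrete-event sketch of the operator as a serial server (illustrative only;
# arrival rate, service time, and the workload penalty are assumed numbers).
import random

def simulate(arrival_rate=0.5, base_service=1.5, overload_len=3,
             penalty=1.5, horizon=480.0, seed=0):
    """arrival_rate is tasks per minute pushed to the human; as in the
    abstraction of [30], a higher rate stands in for lower swarm autonomy."""
    rng = random.Random(seed)
    arrivals, t = [], rng.expovariate(arrival_rate)
    while t < horizon:                       # generate UV task arrival times
        arrivals.append(t)
        t += rng.expovariate(arrival_rate)

    free_at, waits = 0.0, []
    for i, arr in enumerate(arrivals):       # FIFO service by the operator
        start = max(arr, free_at)
        # Crude workload effect: service slows when the backlog is long,
        # mimicking reduced attention efficiency under high workload.
        backlog = sum(1 for a in arrivals[i + 1:] if a <= start)
        service = base_service * (penalty if backlog >= overload_len else 1.0)
        waits.append(start - arr)
        free_at = start + service
    return {"tasks": len(arrivals),
            "mean_wait": sum(waits) / len(waits) if waits else 0.0}

print(simulate(arrival_rate=0.8))            # e.g. compare low vs high autonomy
```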

The subjective component of workload has received much less interest in HRI and HSI. However, there is some evidence that certain skills and traits can mitigate the subjective workload on the human. These skills include effective use of working memory [17], attention allocation [18], multi-tasking [32], task-related experience [18], and spatial abilities [33]. A variety of psycho-physiological measures, such as electroencephalography (EEG) and heart-rate variability, have been used to assess the overall workload on the human. A recent review of different techniques for workload assessment can be found in [34].

III-C Trust

In their seminal paper [35], Lee and See defined human trust in an agent as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability”. This attitude has a major impact on the human tendency to delegate tasks to the agent so as to lessen the complexity of the task [36]. However, both overtrust and distrust are detrimental to mission performance [20]. Overtrust can result in over-relying on the automation despite its limitations, which may lead to catastrophic consequences [35]. For example, in [37] participants opted to follow robot instructions in emergency evacuation scenarios even when the robot provided circuitous routes to the exit. Distrust, on the other hand, may lead people to reject the automation, hence missing its potential benefits [35]. It is therefore evident that trust calibration according to the real capabilities of the swarm can be crucial to mission success and enhanced performance.

Many factors were found to impact trust development in HRI; these factors can be related to the human (e.g. prior experience), the robot (e.g. performance), or the environment (e.g. team shared mental models). However, a quantitative meta-analysis of factors affecting trust revealed that robot performance is the foremost contributor to trust [38].

The persistent premise in the relevant literature [35] [39] [40] is that human trust in automation is of a dynamic nature, with the dynamic component being mainly influenced by automation performance. The most widespread method for assessing human trust in automation is the use of surveys, e.g. [41] [39]. Nonetheless, the intrusive nature of surveys makes them impractical for measuring real-time trust within a mission. Computational models can offer a convenient and non-intrusive way of predicting the level of trust during the mission. Clare [42] used system dynamics (SD) modelling to develop a model of human trust in automation in a mission with multiple UVs. In this model, trust was represented as a state that changes positively with increases in the automation performance perceived by the human. Based on the value of trust, the rate of human interventions in automation operation is calculated, such that higher trust leads to fewer interventions. The effect of workload on the human’s added value was incorporated using a workload-performance table. The model was also used in [43] to predict performance in urban search and rescue (USAR) tasks; it was able to predict performance to within 2.3%. A similar endeavour is found in [44], in which Boubin et al. used system dynamics to model human compliance and reliance behaviours as impacted by the levels of trust and stress. Other probabilistic [45] and linear [46] [40] models have also been proposed for estimating real-time trust.
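
The core feedback loop of such trust models can be sketched as a simple difference equation, shown below. The gain, time step, and the linear trust-to-intervention mapping are assumptions made for illustration, not the calibrated model from [42].

```python
# Illustrative difference-equation sketch of trust dynamics: trust drifts toward
# the automation performance the human currently perceives, and the rate of
# human interventions falls as trust grows. Gains and mappings are assumed.
def update_trust(trust, perceived_performance, dt=1.0, gain=0.1,
                 max_interventions_per_min=2.0):
    """trust and perceived_performance are normalised to [0, 1]."""
    trust += gain * (perceived_performance - trust) * dt
    trust = min(max(trust, 0.0), 1.0)
    # Higher trust -> fewer interventions (assumed linear mapping).
    intervention_rate = max_interventions_per_min * (1.0 - trust)
    return trust, intervention_rate

# Example: trust climbing over ten steps of consistently good performance.
trust = 0.5
for _ in range(10):
    trust, rate = update_trust(trust, perceived_performance=0.9)
```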

IV Swarm Levels of Automation and Levels of Autonomy

A swarm’s contribution to mission performance can be attributed to both its level of automation and its level of autonomy [47]. Lee and See [35] define automation as “technology that actively selects data, transforms information, makes decisions, or controls processes”. Thus, automation refers to the capabilities of the swarm and its capacity to perform a given task. The level of automation of a swarm for a certain task can be measured using its level of dependence on the human, as proposed in [47].

The level of autonomy, on the other hand, refers to the degree of freedom given to the swarm. Abbass et al. [36] defined autonomy as “the freedom to make decisions subject to, and sometimes in spite of, environmental constraints according to the internal laws and values that govern the autonomous agent”. Mi et al. [26] defined four levels of autonomy for UV swarms: full autonomy, machine-oriented semi-autonomy, human-oriented semi-autonomy, and manual operation. A fully autonomous swarm is given the freedom to perform the task without any intervention from the human. However, studies showed that achieving this level of autonomy for a swarm performing real applications is still far from reach [1] [13] [26]. Both semi-autonomous levels imply shared task performance. While in the machine-oriented setup the swarm performs autonomously most of the time and just informs the human of important events, in a human-oriented setting the swarm often relies on human instructions for decision making. The lowest level of autonomy, manual operation, requires the human operator to make all the decisions and perform the actions of the swarm.

A careful selection of the level of autonomy of a swarm is crucial for mission performance [47]. Levels of autonomy that exceed the level of automation can lead to a performance drop due to overreliance that ignores the limitations of the swarm. On the contrary, levels of autonomy inferior to the swarm’s level of automation result in underutilisation of the swarm, hence missing some of its benefits.

Nevertheless, it is not only swarm characteristics that determine the appropriate level of autonomy; human factors have their say as well. Higher levels of autonomy can be beneficial or even necessary to mitigate high workload on the human [13]. Yet, such increased autonomy has been criticised for its possible negative impact on SA [16] [48]. For instance, Gombolay et al. [49] found that although humans prefer highly autonomous robot teammates, their SA of team actions decreases as autonomy increases. Human trust towards the swarm is also an issue to be considered, as selecting an autonomy level that exceeds the level of trust may lead the human to reject the automation.
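
As a compact illustration of how these constraints interact, the sketch below encodes the four levels from [26] as an ordered scale and picks a level that never exceeds the swarm's automation capability or the operator's trust, while raising autonomy when workload is high. The thresholds and the ordering of the checks are assumptions made for illustration, not taken from the cited studies.

```python
from enum import IntEnum

class LOA(IntEnum):                      # four levels of autonomy from [26]
    MANUAL = 0
    HUMAN_ORIENTED_SEMI = 1
    MACHINE_ORIENTED_SEMI = 2
    FULL = 3

def select_loa(level_of_automation: LOA, workload: float,
               situational_awareness: float, trust: float) -> LOA:
    """workload, situational_awareness and trust are normalised to [0, 1].
    Thresholds below are illustrative assumptions."""
    # Push autonomy up when the human is overloaded, otherwise stay human-oriented.
    candidate = LOA.MACHINE_ORIENTED_SEMI if workload > 0.7 else LOA.HUMAN_ORIENTED_SEMI
    # Keep the human in the loop if SA is already poor.
    if situational_awareness < 0.3:
        candidate = min(candidate, LOA.HUMAN_ORIENTED_SEMI)
    # Never exceed the swarm's capability or the level the human trusts.
    cap = min(level_of_automation, LOA(round(trust * LOA.FULL)))
    return LOA(min(candidate, cap))

print(select_loa(LOA.MACHINE_ORIENTED_SEMI, workload=0.9,
                 situational_awareness=0.6, trust=0.8))
```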

V Flexible Autonomy for Effective Interaction

Fixing the level of autonomy throughout the interaction can be problematic as it results in a rigid system that does not adapt to real-time changes [47]. Such a setting was found to be associated with undesirable ramifications including complacency and skill degradation in the long term [47]. Consequently, designing an interaction scheme that takes mission requirements, swarm characteristics, and human factors into consideration will likely lead to enhanced performance.

With flexible autonomy, smarter systems that respect the dynamics of the interaction can be achieved. Systems with flexible autonomy can invoke different levels of autonomy during the mission based on the current state. Chen et al. [20] classified these systems into three classes based on who invokes the decision to change the autonomy: adaptable, adaptive, and mixed-initiative systems. In adaptable systems, humans are designated to invoke the appropriate level of autonomy during the interaction. However, adaptable autonomy has been criticised for adding the workload associated with autonomy decisions to the human. In adaptive systems, on the other hand, these decisions are made by the automation based on its estimation of the current state, so that the associated management load on the human is waived. Nevertheless, the main drawback of this arrangement is that the delegation authority is in the hands of the automation rather than the human. An eclectic solution that combines the advantages of the adaptable and adaptive approaches is found in mixed-initiative systems, which allow for decision making that is shared between the human and the automation. These systems can integrate adaptable and adaptive components such that the adaptive component is activated under special circumstances like hard time constraints, while the adaptable component is active otherwise. An exemplar mixed-initiative system for HSI is depicted in Fig. 1. The system is based on the work in [47].

Fig. 1: A mixed-initiative system for HSI.
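
A hedged sketch of the arbitration idea just described is given below: the adaptive component takes the initiative only in time-critical situations, while the adaptable component (the human's choice) is authoritative otherwise. The function and its arguments are illustrative and are not the implementation behind Fig. 1.

```python
# Illustrative arbitration between the adaptable (human) and adaptive
# (automation) components of a mixed-initiative system; names are assumptions.
def arbitrate_loa(human_preference, adaptive_recommendation, time_critical):
    """Both preference and recommendation are levels of autonomy (e.g. LOA)."""
    if time_critical:
        # Adaptive component takes the initiative; the human is informed afterwards.
        return adaptive_recommendation, "adaptive"
    # Adaptable component: the human's choice stands, with the recommendation
    # available as advice on the interface.
    return human_preference, "adaptable"
```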

Invoking a certain level of autonomy can be realised not only by applying changes to task allocation (between the human and the automation), but also by changing the interface [28]. Different interfaces were found to result in different levels of SA [18] [16] [50], workload [50] [51], trust [52] [53], and performance [9] [52]. Elements that can be changed within the interface include the display method [50], interface modality and design [28], amount of information [47], level of information [18], and degree of information fusion [54].

The decision to change the level of autonomy of the swarm in adaptive autonomy can be triggered by relevant changes in the state of the system as evaluated by the automation. A number of research works have studied the virtues of adaptive systems triggered by the workload on the human. In [55], Hilburn found that workload-triggered adaptive systems used in air traffic control delivered the largest benefit to human mental workload, as they avoided very low and very high workload in low and high traffic scenarios, respectively. Abbass et al. [56] compared the effectiveness of workload-based adaptive systems using different techniques for evaluating workload: EEG signals, task complexity cues, or both. They found that participants perceived task complexity to be lower and their performance to be better when the workload was evaluated using EEG only than when it was evaluated using task complexity or both EEG and task complexity together. Rusnock et al. [57] studied the effects of using different workload thresholds for invoking the adaptive autonomy on workload and SA. They found that the proper selection of the threshold depends on the task: in some cases increasing the threshold can increase the workload but improve SA, while in others increasing the threshold can increase the workload without benefiting SA.
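
The kind of threshold-based trigger studied in these works can be sketched as follows; the thresholds and the dead band between them are assumed values, included only to show why the choice of threshold matters.

```python
# Workload-triggered adaptation with an assumed dead band to avoid oscillation
# when workload hovers near a single threshold (illustrative values only).
def workload_trigger(workload, current_loa, low=0.3, high=0.8, max_loa=3):
    """workload is a normalised estimate in [0, 1]; loa is an integer level."""
    if workload > high:
        return min(current_loa + 1, max_loa)   # offload work to the swarm
    if workload < low:
        return max(current_loa - 1, 0)         # re-engage the human
    return current_loa                         # inside the dead band: no change
```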

Besides workload, other indicators can also be useful in determining the proper level of autonomy. Feigh et al. [28] proposed a taxonomy of triggers that can be used in adaptive systems. These triggers can be spatio-temporal or based on changes in the state of the human, the automation, the environment, or the task. Recently, Hussein et al. [47] identified five classes of indicators that can be used for assessing the state of the system in HSI: performance indicators, interaction indicators, complexity indicators, swarm automation indicators, and human state indicators. Despite the potential effectiveness of these indicators in accurately evaluating the relevant state of the mission, it is not well understood how they could be combined to select the proper level of autonomy.

Compared with the question of which state indicators should be used to assess the state of the system, the implementation details of adaptive autonomy are much less understood. Designing an effective adaptive autonomy component requires an established understanding of the following issues:

  • How should values from different indicators be combined?

  • How can the adaptive autonomy invocation threshold be set?

  • Which tasks within the mission should have their level of autonomy changed?

  • How can automated tasks be handed back to the human?

  • How can the interface be customised to help the human restore the SA of a previously automated task?

  • What are the effects of a certain autonomy decision on the human’s future state?

Mixed-initiative systems, which include adaptive and adaptable components, should be based on an understanding of some additional issues as well. These issues are:

  • When should the adaptive component be activated?

  • How should the adaptable and adaptive components be combined?

  • How should the human be informed of the adaptation decision to avoid undesired confusion?

The next section discusses how system modelling can contribute to this understanding.

VI Modelling Mixed-Initiative in HSI: Challenges and Opportunities

A number of challenges can encumber the design and implementation of a practical mixed-initiative system. We discuss these challenges and investigate how modelling techniques can contribute to the solution.

While using a variety of indicators can lead to a more precise assessment of the state of the mission, it can also lead to less effective systems than those using single indicators, e.g. [56]. Thus, the question of how values of different indicators can be fused to decide whether autonomy adaptation should be triggered becomes worthy of careful inspection. Different techniques can be applied to solve this problem. For instance, a rule-based approach can be deployed in which certain ranges of values are associated with certain autonomy levels. Alternatively, machine learning approaches can be used. Both alternatives need sufficient data representing different scenarios in order to be validated and/or trained. It is theoretically possible to perform human experiments to generate the required data. However, as participants would have to be exposed to scenarios representing the space of possible situations, the temporal and financial requirements of these experiments can make this process impractical [31]. Additionally, with some safety-critical scenarios being difficult to replicate within human experiments, the adequacy of simulated missions should be carefully assessed, as they can be perceived differently by participants [37].
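
As a concrete instance of the rule-based alternative mentioned above, the sketch below maps normalised indicator values from the five classes in [47] to an adaptation recommendation. The rules and cut-offs are invented for illustration and would themselves need the kind of data discussed here to be validated.

```python
# Rule-based fusion of the five indicator classes from [47] (illustrative rules
# and cut-offs only; all inputs are assumed to be normalised to [0, 1]).
def fuse_indicators(performance, interaction, complexity, automation, human_load):
    """Higher complexity/human_load mean a more demanding situation; higher
    performance/interaction/automation mean a healthier one."""
    if human_load > 0.8 or complexity > 0.8:
        return "raise_autonomy"        # protect an overloaded human
    if performance < 0.4 and automation < 0.5:
        return "lower_autonomy"        # the swarm is struggling: bring the human in
    if human_load < 0.2 and performance > 0.6:
        return "lower_autonomy"        # underloaded human: keep them engaged
    return "keep_current_level"
```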

Understanding when adaptive autonomy should be triggered is among the first steps towards designing the adaptive component. In addition, the design of a mixed-initiative system needs to determine when the adaptive and the adaptable components should be activated. Both components can be active throughout the mission while a negotiation algorithm is used to select the final decision. Another alternative is to activate the adaptive component only during temporally critical scenarios. These design decisions should also be based on some empirical evidence.

Once the adaptation decision is taken, changes to task autonomy, the interface, or both must be activated. This can be challenging as it depends on understanding the issues mentioned in Section 5. Unless mixed-initiative systems are designed to consider these issues, undesired ramifications can take place. For instance, a decision to fully automate a highly dynamic and mentally demanding task can relieve the high workload on the human, yet the difficulty of restoring SA when the task is handed back to the human can counteract the benefits of flexible autonomy. Similarly, automating a task that requires higher capability than the swarm’s level of automation can mitigate the human’s workload; however, the expected performance drop may eventually lead the human to distrust the automation and hence become less willing to rely on it. Thus, an adaptation decision that is based solely on the current state of the system, without taking the probable consequences into consideration, can eventually lead to performance degradation.

Different modelling techniques have been successfully applied to model some aspects of HRI: workload, allocation strategy, and attention [30]; trust, reliance, automation performance, and human performance [42]; and workload [31]. These models were able to replicate data from real experiments with high accuracy, indicating the potential benefits of using system modelling techniques within HRI and HSI. As these models were designed to capture only some facets of the interaction, they can be used as a seed for an improved and comprehensive model that considers more aspects relevant to mixed-initiative systems, as shown in Fig. 1.

Models for mixed-initiative systems need to be based on a concrete understanding of the following areas:

  • How subjective factors like skills, cognitive abilities, experience, and personality traits can affect human SA, workload, and trust.

  • How human factors (SA, workload, and trust) can affect and be affected by the state of the mission.

  • Swarm levels of automation and performance for different levels of autonomy.

  • Different interaction styles humans deploy in HSI.

  • How different aspects of the interface affect human SA, workload, and trust.

Although the task of designing such a model requires considerable knowledge across different domains and is far from trivial, its potential benefits make the effort worthwhile. Models for mixed-initiative autonomy would allow designers to investigate the merits of different design decisions, such as how to combine human preferences and adaptive autonomy recommendations to select a certain level of autonomy, when to trigger the adaptive autonomy algorithm, and how to realise a certain level of autonomy in terms of task assignment and interface customisation. Different strategies can then be evaluated under different scenarios with respect to both the resulting mission performance and the cognitive and skill requirements on the human.

By considering the effects of the interface on different human factors, the virtues of different interface features (e.g. levels of transparency and information fusion) can also be examined. Modelling interface issues would benefit from the literature on task complexity, e.g. [24] and [29].

Most of the proposed models for human performance capture only the objective factors (e.g. automation performance and task complexity) that affect human SA, workload, and trust. Yet, subjective factors can also lead to significant changes in human performance, as argued earlier. Although subjective factors can be harder to quantify and incorporate within a model, they can beget considerable benefits by providing insight into whether and how different people would exhibit different behaviours during a mission. Sophisticated models for SA and its relation with workload and trust can help designers examine the degree to which hiring humans with certain levels of experience and skills can be significant to mission success. Besides, these models make it possible to anticipate the performance gain and implications on the human state resulting from upgrading to a swarm with higher levels of automation.

VII Conclusion

This paper discussed open challenges that hinder the design of mixed-initiative systems in HSI. We investigated how system modelling can contribute to the solution.

Properly designed mixed-initiative HSI systems provide an effective interaction scheme that respects the dynamic nature of human factors and the capabilities of the swarm. However, the design of these systems is based on some factors that are still poorly understood, as discussed in Section 5. Understanding these issues could be achieved using extensive human experiments, which can be impractical in terms of time and monetary requirements. Fortunately, the accurate results achieved by previous models of human factors suggest that system modelling techniques can be used to establish the required understanding while minimising the need for human experiments.

While previous systems tended to focus on some human factors while neglecting others, models for mixed-initiative systems need to consider the three factors together. The adaptation decision can be based on the current levels of these factors as well as the expected effects on their future values.

As swarm characteristics, task structure, and the interface affect human factors within a mission, these aspects need to be incorporated within models for mixed-initiative systems. Requirements for these models, in terms of the knowledge base and the aspects to be considered within the model, have been discussed in Section 6. Models satisfying these requirements would provide system designers with a handy tool for investigating a wide range of design options and their implications for the skills and abilities required of the human.

Acknowledgement

This work was funded by the Australian Research Council Discovery Grant number DP140102590 and UNSW-Canberra.

References

  • [1] Eduardo Castelló Ferrer. A wearable general-purpose solution for human-swarm interaction. arXiv preprint arXiv:1704.08393, 2017.
  • [2] A. Kolling, P. Walker, N. Chakraborty, K. Sycara, and M. Lewis. Human interaction with robot swarms: A survey. IEEE Transactions on Human-Machine Systems, 46(1):9–26, Feb 2016.
  • [3] Alessandro Giusti, Jawad Nagi, Luca M Gambardella, and Gianni A Di Caro. Distributed consensus for interaction between humans and mobile robot swarms. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 3, pages 1503–1504. International Foundation for Autonomous Agents and Multiagent Systems, 2012.
  • [4] Jacob W Crandall, Nathan Anderson, Chace Ashcraft, John Grosh, Jonah Henderson, Joshua McClellan, Aadesh Neupane, and Michael A Goodrich. Human-swarm interaction as shared control: Achieving flexible fault-tolerant systems. In International Conference on Engineering Psychology and Cognitive Ergonomics, pages 266–284. Springer, 2017.
  • [5] Brian Pendleton and Michael Goodrich. Scalable human interaction with robotic swarms. In proceedings of the AIAA Infotech@ Aerospace Conference, 2013.
  • [6] Jianing Chen, Melvin Gauci, and Roderich Groß. A strategy for transporting tall objects with a swarm of miniature mobile robots. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 863–869. IEEE, 2013.
  • [7] Timo Blender, Thiemo Buchner, Benjamin Fernandez, Benno Pichlmaier, and Christian Schlegel. Managing a mobile agricultural robot swarm for a seeding task. In Industrial Electronics Society, IECON 2016-42nd Annual Conference of the IEEE, pages 6879–6886. IEEE, 2016.
  • [8] Yugang Liu and Goldie Nejat. Robotic urban search and rescue: A survey from the control perspective. Journal of Intelligent & Robotic Systems, 72(2):147, 2013.
  • [9] Carmine Tommaso Recchiuto, Antonio Sgorbissa, and Renato Zaccaria. Visual feedback with multiple cameras in a UAVs human–swarm interface. Robotics and Autonomous Systems, 80:43–54, 2016.
  • [10] W. Truszkowski, M. Hinchey, J. Rash, and C. Rouff. NASA’s swarm missions: the challenge of building autonomous software. IT Professional, 6(5):47–52, Sept 2004.
  • [11] Mica R Endsley. From here to autonomy: lessons learned from human–automation research. Human factors, 59(1):5–27, 2017.
  • [12] Jean Scholtz. Theory and evaluation of human robot interactions. In System Sciences, 2003. Proceedings of the 36th Annual Hawaii International Conference on, pages 10–pp. IEEE, 2003.
  • [13] Amy Hocraffer and Chang S Nam. A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management. Applied Ergonomics, 58:66–80, 2017.
  • [14] Andreas Kolling, Steven Nunnally, and Michael Lewis. Towards human control of robot swarms. In Proceedings of the seventh annual ACM/IEEE international conference on human-robot interaction, pages 89–96. ACM, 2012.
  • [15] Mica R Endsley. Situation awareness global assessment technique (SAGAT). In Aerospace and Electronics Conference, 1988. NAECON 1988., Proceedings of the IEEE 1988 National, pages 789–795. IEEE, 1988.
  • [16] Jennifer M Riley and Laura D Strater. Effects of robot control mode on situation awareness and performance in a navigation task. In Proceedings of the Human Factors and Ergonomics Society annual meeting, volume 50, pages 540–544. SAGE Publications Sage CA: Los Angeles, CA, 2006.
  • [17] MR Endsley and WM Jones. Situation awareness, information warfare and information dominance. Endsley Consulting, Belmont, MA, Tech. Rep, pages 97–01, 1997.
  • [18] Jennifer M Riley, Laura D Strater, Sheryl L Chappell, Erik S Connors, and Mica R Endsley. Situation awareness in human-robot interaction: Challenges and user interface requirements. Human-Robot Interactions in Future Military Operations, pages 171–192, 2010.
  • [19] Sara E McBride, Wendy A Rogers, and Arthur D Fisk. Understanding the effect of workload on automation use for younger and older adults. Human factors, 53(6):672–686, 2011.
  • [20] Jessie YC Chen and Michael J Barnes. Human–agent teaming for multirobot control: A review of human factors issues. IEEE Transactions on Human-Machine Systems, 44(1):13–29, 2014.
  • [21] Grace Teo, Lauren Reinerman-Jones, Gerald Matthews, James Szalma, Florian Jentsch, and Peter Hancock. Enhancing the effectiveness of human-robot teaming with a closed-loop system. Applied Ergonomics, 67(Supplement C):91 – 103, 2018.
  • [22] Choon Yue Wong and Gerald Seet. Workload, awareness and automation in multiple-robot supervision. International Journal of Advanced Robotic Systems, 14(3):1729881417710463, 2017.
  • [23] H Abbass, Jiangjun Tang, Rubai Amin, Mohamed Ellejmi, and Stephen Kirby. The computational air traffic control brain: computational red teaming and big data for real-time seamless brain-traffic integration. J Air Traffic Control, 56(2):10–17, 2014.
  • [24] Donald J Campbell. Task complexity: A review and analysis. Academy of management review, 13(1):40–52, 1988.
  • [25] Jennifer M Riley and Mica R Endsley. The hunt for situation awareness: Human-robot interaction in search and rescue. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 48, pages 693–697. SAGE Publications Sage CA: Los Angeles, CA, 2004.
  • [26] Zhen-Qiang Mi and Yang Yang. Human-robot interaction in UVs swarming: a survey. Int. J. Comput. Sci. Issues, 10(2):273–280, 2013.
  • [27] Missy L Cummings. Human supervisory control of swarming networks. In 2nd annual swarming: autonomous intelligent networked systems conference, pages 1–9, 2004.
  • [28] Karen M Feigh, Michael C Dorneich, and Caroline C Hayes. Toward a characterization of adaptive systems: A framework for researchers and system designers. Human Factors, 54(6):1008–1024, 2012.
  • [29] Christopher Wickens. Processing resources in attention, dual task performance, and workload assessment. page 59, 07 1981.
  • [30] Birsen Donmez, Carl Nehme, and Mary L Cummings. Modeling workload impact in multiple unmanned vehicle supervisory control. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 40(6):1180–1190, 2010.
  • [31] Christina F Rusnock and Christopher D Geiger. Using discrete-event simulation for cognitive workload modeling and system evaluation. In IIE Annual Conference. Proceedings, page 2485. Institute of Industrial and Systems Engineers (IISE), 2013.
  • [32] Gary Kay, D Dolgin, B Wasel, M Langelier, and C Hoffman. Identification of the cognitive, psychomotor, and psychosocial skill demands of uninhabited combat aerial vehicle (UCAV) operators. 30:16, 06 1999.
  • [33] Florian Jentsch. Human-robot interactions in future military operations. CRC Press, 2016.
  • [34] Jamison Heard, Caroline E Harriott, and Julie A Adams. A survey of workload assessment algorithms. IEEE Transactions on Human-Machine Systems, 2018.
  • [35] John D Lee and Katrina A See. Trust in automation: Designing for appropriate reliance. Human factors, 46(1):50–80, 2004.
  • [36] Hussein A Abbass, Eleni Petraki, Kathryn Merrick, John Harvey, and Michael Barlow. Trusted autonomy and cognitive cyber symbiosis: Open challenges. Cognitive computation, 8(3):385–408, 2016.
  • [37] Paul Robinette, Wenchen Li, Robert Allen, Ayanna M Howard, and Alan R Wagner. Overtrust of robots in emergency evacuation scenarios. In Human-Robot Interaction (HRI), 2016 11th ACM/IEEE International Conference on, pages 101–108. IEEE, 2016.
  • [38] Peter A Hancock, Deborah R Billings, Kristin E Schaefer, Jessie YC Chen, Ewart J De Visser, and Raja Parasuraman. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5):517–527, 2011.
  • [39] Kristin E Schaefer. Measuring trust in human robot interactions: Development of the "Trust Perception Scale-HRI". In Robust Intelligence and Trust in Autonomous Systems, pages 191–218. Springer, 2016.
  • [40] X Jessie Yang, Vaibhav V Unhelkar, Kevin Li, and Julie A Shah. Evaluating effects of user experience and system transparency on trust in automation. In HRI, pages 408–416, 2017.
  • [41] Jiun-Yin Jian, Ann M Bisantz, and Colin G Drury. Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1):53–71, 2000.
  • [42] Andrew S Clare. Modeling real-time human-automation collaborative scheduling of unmanned vehicles. Technical report, Massachusetts Inst of Tech Cambridge Dept of Aeronautics and Astronautics, 2013.
  • [43] Fei Gao, Andrew S Clare, Jamie C Macbeth, and ML Cummings. Modeling the impact of operator trust on performance in multiple robot control. AAAI, 2013.
  • [44] Jayson G Boubin, Christina F Rusnock, and Jason M Bindewald. Quantifying compliance and reliance trust behaviors to influence trust in human-automation teams. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 61, pages 750–754. SAGE Publications Sage CA: Los Angeles, CA, 2017.
  • [45] Anqi Xu and Gregory Dudek. Optimo: Online probabilistic trust inference model for asymmetric human-robot collaborations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 221–228. ACM, 2015.
  • [46] Kumar Akash, Wan-Lin Hu, Tahira Reid, and Neera Jain. Dynamic modeling of trust in human-machine interactions. In American Control Conference (ACC), 2017, pages 1542–1548. IEEE, 2017.
  • [47] Aya Hussein, Leo Ghignone, Tung Nguyen, Nima Salimi, Hung Nguyen, Min Wang, and Hussein A Abbass. Towards bi-directional communication in human-swarm teaming: A survey. arXiv preprint arXiv:1803.03093, 2018.
  • [48] Jennifer M Riley and Laura D Strater. Assessing effects of robot control mode on performance and situation awareness in a maze navigation task. In Lecture presentation at the 3rd Annual Workshop on Human Factors of Unmanned Aerial Vehicles, Mesa, AZ, pages 24–25, 2006.
  • [49] Matthew Gombolay, Anna Bair, Cindy Huang, and Julie Shah. Computational design of mixed-initiative human–robot teaming that considers human factors: situational awareness, workload, and workflow preferences. The International Journal of Robotics Research, page 0278364916688255, 2017.
  • [50] JJ Ruiz, A Viguria, JR Martinez-de Dios, and A Ollero. Immersive displays for building spatial knowledge in multi-UAV operations. In Unmanned Aircraft Systems (ICUAS), 2015 International Conference on, pages 1043–1048. IEEE, 2015.
  • [51] James Baumeister, Seung Youb Ssin, Neven AM ElSayed, Jillian Dorrian, David P Webb, James A Walsh, Timothy M Simon, Andrew Irlitti, Ross T Smith, Mark Kohler, et al. Cognitive cost of using augmented reality displays. IEEE Transactions on Visualization and Computer Graphics, 2017.
  • [52] Ning Wang, David V Pynadath, and Susan G Hill. The impact of POMDP-generated explanations on trust and performance in human-robot teams. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pages 997–1005. International Foundation for Autonomous Agents and Multiagent Systems, 2016.
  • [53] Michael C Dorneich, Rachel Dudley, Emmanuel Letsu-Dake, William Rogers, Stephen D Whitlow, Michael C Dillard, and Erik Nelson. Interaction of automation visibility and information quality in flight deck information automation. IEEE Transactions on Human-Machine Systems, 47(6):915–926, 2017.
  • [54] Holly A Yanco, Adam Norton, Willard Ober, David Shane, Anna Skinner, and Jack Vice. Analysis of human-robot interaction at the DARPA Robotics Challenge trials. Journal of Field Robotics, 32(3):420–444, 2015.
  • [55] Brian Hilburn. 19 dynamic decision aiding: the impact of adaptive automation on mental workload. Engineering Psychology and Cognitive Ergonomics: Volume 1: Transportation Systems, 2017.
  • [56] Hussein A Abbass, Jiangjun Tang, Rubai Amin, Mohamed Ellejmi, and Stephen Kirby. Augmented cognition using real-time EEG-based adaptive strategies for air traffic control. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 58, pages 230–234. SAGE Publications Sage CA: Los Angeles, CA, 2014.
  • [57] Christina F Rusnock and Christopher D Geiger. The impact of adaptive automation invoking thresholds on cognitive workload and situational awareness. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 57, pages 119–123. SAGE Publications Sage CA: Los Angeles, CA, 2013.