Toward Designing Social Human-Robot Interactions for Deep Space Exploration

05/18/2021 ∙ by Huili Chen, et al. ∙ MIT

In planning for future human space exploration, it is important to consider how to design for uplifting interpersonal communications and social dynamics among crew members. What if embodied social robots could help to improve the overall team interaction experience in space? On Earth, social robots have been shown to be effective in providing companionship, relieving stress and anxiety, fostering connection among people, enhancing team performance, and mediating conflicts in human groups. In this paper, we introduce a set of novel research questions exploring social human-robot interactions in long-duration space exploration missions.


1. Introduction

1.1. Human Factors in LDSE Missions

NASA is actively planning for long-duration space exploration (LDSE) missions such as an anticipated manned mission to Mars in the 2030s (NASA, n.d.). In planning for future LDSE missions, one major risk area pertaining to human factors concerns the ability of astronauts to adapt to isolated, confined, and extreme (ICE) conditions (NASA, 2020). Living in ICE conditions for a long duration is highly stressful. The psycho-environmental factors of living and working in such ICE environments, as documented by studies of ICE analogues and by evidence from past spaceflights, include crowding, lack of privacy, social isolation, and sensory restriction (Palinkas and Browner, 1995; Raybeck, 1991). The sense of isolation will be further heightened by the greater distance from Earth and the resulting communication lag (NASA, 2020).

To ensure the success of a LDSE mission, adaptation is needed both by individual astronauts and by the crew as a whole. On the individual level, astronauts will need to cope with a variety of observed behavioral, physiological, and psychological problems, including anger, anxiety, interpersonal conflict, social withdrawal, sleep deprivation, decreased group cohesion, and decreased motivation (Flynn, 2005). On the team level, factors related to interpersonal communication and group dynamics among astronauts also decisively impact mission success. For example, astronauts will have to live in ICE conditions for the entirety of the mission, necessitating group living skills to combat potential interpersonal problems (Galarza and Holland, 1999). Their diverse cultural backgrounds may additionally affect coping in an ICE environment and the behavior of an interplanetary crew (Tafforin and Giner Abati, 2017). Unlike shorter-duration space missions, LDSE missions require astronauts to operate with an unprecedented level of autonomy, making interpersonal communication among crew members even more important for mission success. Real-time support and interventions from human specialists on Earth (e.g., psychologists, doctors, conflict mediators) will be reduced to a minimum due to the cost of, and inherent delays in, communication between space and Earth. All of these human factor risks could seriously harm a LDSE mission if left unmitigated. To address them, socially interactive technologies such as social robots offer great opportunities for delivering effective interventions.

1.2. Human-robot Interaction (HRI) in Space Exploration

In past decades, most research on HRI in space exploration focused on the engineering and cognitive aspects of space robotics. Space robots were often envisioned with advanced cognitive systems (e.g., CARACaS) capable of model building, continuous planning/re-planning, self-diagnosis, and novel situation understanding (Huntsberger and Stoica, 2010). These space robots could assist crew members with a variety of physical and cognitive tasks, but they did not necessarily support social and affective communication with crew members. In recent years, sociable space robots able to display and recognize social cues have started to emerge. An in-cabin flying robot named Astronaut Assistant Robot (AAR) (Liu et al., 2016) allows astronauts to communicate with it face-to-face using hand gestures (Gao et al., 2017). The Crew Interactive Mobile Companion (CIMON), the first free-floating, sphere-shaped interactive companion robot in space (Murphy, 2018), can display human-like facial expressions on its screen and respond to spoken questions or directions without the need for a tablet or computer while assisting astronauts in their daily work.

The potential benefits of social robots in space are also suggested by prior HRI work conducted on Earth, which found social robots effective in improving people’s mental health and well-being (Scoglio et al., 2019) as well as human group dynamics and team performance (Sebo et al., 2020b). In LDSE missions, astronauts’ need for social robots will very likely be heightened by the LDSE-related human factor risks listed in Section 1.1, providing a greater motivation for more research on social HRI in space. In this paper, we focus on the opportunity of using social robots as a multimodal medium for improving interpersonal communications and group dynamics of crew members in LDSE missions.

2. Social HRI on Earth

2.1. Robot Social Roles in Human Groups

Social robots can play a variety of roles, ranging from highly knowledgeable agents to social-emotional companions, across human group interaction contexts. When engaging with a human group as a highly knowledgeable agent and expert in a task area, a social robot can, for example, provide counseling to couples (Utami and Bickmore, 2019), offer directions in public spaces (Fraune et al., 2019), host group game activities (Żarkowski, 2019), and allocate turns to players (Claure et al., 2020). In contrast to an expert role, a robot can take a peer role that strategically displays its vulnerability or emotions with the goal of improving human group dynamics. For example, the robot Kip1 was designed as a peripheral conversation companion that promotes non-aggressive conversation between people by expressing either curious interest or fear via its gesture cues (Hoffman et al., 2015). Similarly, vulnerable statements made by a robot in a human group can positively impact the dynamics and collaboration among the group members (e.g., more equal conversation distribution and more positive group perception (Traeger et al., 2020)).

In addition to the expert and social-emotional companion roles, a robot can take a mediator role to directly resolve human team conflicts. For example, a social robot can improve people’s interpersonal conflict resolution skills by flagging a conflict onset and offering prompts for conflict resolution (Shen et al., 2018). Mediation via the Telenoid robot was also found to produce more agreements and more integrative agreements among human teammates in comparison with both a screen mediator and a human mediator (Druckman et al., 2020). The last social role category that a robot often takes in a human group is a moderator/supporter role. When taking this role, a robot could improve a human group’s social dynamics (Short et al., 2016) and a dyadic human team’s performance on a collaborative task by posing questions (Strohkorb et al., 2016).

Overall, a robot’s social role in a human group interaction distinctly shapes how humans interact with each other and how they perceive the robot.

2.2. Robot Social Behaviors in Human Groups

A robot’s verbal and nonverbal social behaviors can both positively influence human groups when displayed appropriately, though in distinct ways. Specifically, a robot’s verbal communication can support its expression of emotion (Leite et al., 2013; Correia et al., 2018) and the sharing of informational content with humans (Sabelli and Kanda, 2016). It can also improve a variety of affective phenomena in human-human interactions, such as trust building (Correia et al., 2018), group engagement (Matsuyama et al., 2015), the psychological safety of outgroup team members (Sebo et al., 2020a), equality and positivity within conversational groups (Traeger et al., 2020), and the inclusion of human members within a team (Sebo et al., 2020).

In addition to its verbal utterances, a robot’s nonverbal behaviors such as gestures (Liu et al., 2013), gaze (Mutlu et al., 2009; Skantze, 2017), navigation (Kidokoro et al., 2013; Mavrogiannis et al., 2019), and physical orientation (Vázquez et al., 2017; Shiomi et al., 2010) can influence people’s responses in a group interaction and their perception of the group (Sebo et al., 2020b). For example, robots have been designed, in some interactions (Tennent et al., 2019; Fink et al., 2014), as peripheral or passive entities that shape human group dynamics via implicit nonverbal behaviors without eliciting users’ awareness. Often inspired by Ju’s theory of implicit interactions (Ju and Leifer, 2008), this alternative design situates a robot as a passive entity in a group. For example, a robotic microphone that exhibited implicit engagement behaviors (e.g., nonverbal backchanneling) in a group problem-solving context encouraged more active participation from passive human members and promoted more effective group collaboration (Tennent et al., 2019). As shown above, a robot’s social influence on human groups can be achieved through both speech and nonverbal social cues.

Nevertheless, an excessive or inappropriate display of social cues may distract or interrupt humans in some circumstances (Kennedy et al., 2015; Yadollahi et al., 2018). For example, the frequency and length of a robot’s speech affected human interaction outcomes more significantly than its content did, due to the disruptive effects of ill-timed speech (Jung et al., 2015; Short and Mataric, 2017). Penalizing a high frequency of speech as a “communication cost” can yield more effective human-robot communication (Unhelkar et al., 2020). For nonverbal behaviors, a robot’s congruent display of social cues, e.g., contingent gaze and pointing gestures, can elicit greater human participation in an interaction (Lohan et al., 2012), while asynchronous social cues, e.g., head-gazes and pointing gestures made in different directions, may produce interference effects and slow human processing time (Langton, 2000; Langton and Bruce, 2000).
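The communication-cost idea can be illustrated with a minimal sketch. This is a hypothetical decision rule, not the model from Unhelkar et al. (2020): the robot weighs an utterance’s expected benefit against a penalty that grows with how often it has spoken recently, so a chatty robot must clear a higher bar before speaking again. All names and weights here are illustrative assumptions.

```python
# Hypothetical sketch: gate a robot's speech by penalizing recent speech
# frequency as a "communication cost" (weights are illustrative).
from collections import deque

class SpeechGate:
    def __init__(self, cost_weight=0.5, window=10):
        self.cost_weight = cost_weight       # penalty per unit of recent speech rate
        self.history = deque(maxlen=window)  # 1 = spoke, 0 = stayed silent

    def should_speak(self, expected_benefit):
        # Fraction of recent decision points at which the robot spoke.
        recent_rate = sum(self.history) / max(len(self.history), 1)
        utility = expected_benefit - self.cost_weight * recent_rate
        decision = utility > 0
        self.history.append(1 if decision else 0)
        return decision

gate = SpeechGate()
# With no speech history, any positive-benefit utterance passes the gate.
print(gate.should_speak(0.3))
```

After several consecutive utterances, `recent_rate` rises and only higher-benefit utterances pass, which is one way a designer could operationalize “speak less, but speak when it matters.”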

Hence, understanding the relations between the display of robot social behaviors and interaction contexts is crucial for designing successful multi-party HRI.

2.3. Human Trust in Robots

Trust is a result of dynamic interactions (Mayer et al., 1995). Successful interactions lead to feelings of security, trust, and optimism, while failed interactions may result in feelings of insecurity or mistrust. Much empirical evidence shows that trust is essential for successful human-human interactions (McAllister, 1995). In human-robot interactions, humans’ trust in robots likewise plays a critical role, influencing their willingness to accept information from robots (Hancock et al., 2011) and to cooperate with them (Freedy et al., 2007). A person’s trust in a robot is often intertwined with the robot’s task performance, its display of nonverbal behaviors, and interaction personalization. A single error by the robot can impact humans’ trust in it, especially in critical situations (Robinette et al., 2017). If a robot displays nonverbal signals that humans often exhibit to indicate distrust, the robot is also perceived as less trustworthy (DeSteno et al., 2012). In the context of workplace-based long-term HRI, a personalized human-robot discussion was found to increase a person’s rapport and cooperation with the robot compared with a social but non-personalized discussion (Lee et al., 2012). Given its importance in HRI, humans’ trust in robots is often used as an evaluation measure for robot decision making (Chen et al., 2018) and human-robot team effectiveness (Freedy et al., 2007) across contexts and tasks.

3. Future Social HRI in Space Exploration

Despite extensive HRI research conducted on Earth, it remains largely unknown to what extent these prior findings can readily be applied to the deep space exploration context. Contextualizing HRI research in LDSE missions would unlock the full potential of using social robots to foster positive interpersonal communications and social dynamics among crew members in future space exploration. Hence, we propose a set of key design questions (DQ) pertaining to social HRI in space.

3.1. Robot Social Role Design

When interacting with a human group, social robots have taken on a variety of roles, from resource providers to listeners, each of which offers unique benefits contingent on the interaction context. However, hardly any prior work has focused on the design and impact of robot roles on the group dynamics and processes of space crew teams, making it urgent to investigate the nuanced challenges of social HRI in space. For example, a robot mediator has been empirically shown to resolve interpersonal conflicts more effectively than a screen mediator or a human mediator (Druckman et al., 2020), but the design considerations for robot mediation of astronaut-astronaut interactions remain unexplored. Understanding astronauts’ perception, acceptance, and needs of social robots in different LDSE-related interaction contexts would help design more astronaut-centered HRI. Therefore, we propose the first two research questions as follows:

  • DQ1: What robot interactions would astronauts perceive to be socially, cognitively and affectively beneficial to the crew team in a LDSE mission?

  • DQ2: How would individual characteristics and cultural backgrounds of astronauts affect their perception, acceptance and preference of social HRI in space?

In LDSE missions, crew members engage in a variety of group interactions, ranging from highly critical team cooperation and urgent problem-solving meetings to leisure and recreational activities. Across these activities, the crew team’s social dynamics may vary as members’ social roles adapt to the context, e.g., superior-subordinate communication versus peer interaction. The way a social robot engages with the team should thus be contingent on the team’s social dynamics. Designing a diverse set of robot social roles customized to different group interaction contexts could promote a positive overall astronaut-robot interaction experience in space. We summarize this research topic as follows:

  • DQ3: What robot role(s) could be designed for each crew team interaction context in a LDSE mission?

Since all group interactions take place for a long duration in an ICE environment with limited physical space and resources, the different robot roles designed for specific group contexts, e.g., coach, peer, and mediator, will likely have to share one robot hardware embodiment rather than each having its own physical embodiment. This constraint on robot hardware resources poses additional design challenges pertaining to robot role and identity switching, as summarized below.

  • DQ4: Should a robot’s social identity (e.g., name, memory and personality) remain consistent while its social role switches across interaction contexts, as illustrated in Figure 1(a)? Or should its social identity switch across contexts so that each robot role is uniquely associated with a distinct robot identity, as illustrated in Figure 1(b)?

  • DQ5: How should a physical robot embodiment identify group contexts and dynamics, and proactively switch its social role or identity to effectively promote positive crew team experience?

  • DQ6: How would the aforementioned role-switching and identity-switching approaches influence astronauts’ perception and acceptance of robot interactions such as their trust in robots?

(a) Robot Role Switching
(b) Robot Identity Switching
Figure 1. Two design approaches to address the constraint on robot hardware resources in LDSE missions.
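To make the contrast in DQ4 concrete, the two approaches in Figure 1 can be sketched as a minimal toy model: one physical robot maps each crew interaction context to a social role, either keeping a single persistent identity with shared memory (role switching) or binding a distinct identity and isolated memory to each role (identity switching). The context names, role names, and robot names below are purely illustrative assumptions.

```python
# Hypothetical sketch of Figure 1's two designs; all names are illustrative.
CONTEXT_TO_ROLE = {
    "problem_solving": "moderator",
    "conflict": "mediator",
    "recreation": "peer",
}

class RoleSwitchingRobot:
    """Figure 1(a): one identity (name, memory, personality); only the role changes."""
    def __init__(self, name="Robin"):
        self.name = name
        self.memory = []  # a single memory shared across all roles

    def engage(self, context):
        role = CONTEXT_TO_ROLE[context]
        self.memory.append((context, role))
        return self.name, role

class IdentitySwitchingRobot:
    """Figure 1(b): each role carries its own identity with isolated memory."""
    IDENTITIES = {"moderator": "Max", "mediator": "Mia", "peer": "Pip"}

    def __init__(self):
        self.memories = {name: [] for name in self.IDENTITIES.values()}

    def engage(self, context):
        role = CONTEXT_TO_ROLE[context]
        name = self.IDENTITIES[role]
        self.memories[name].append(context)
        return name, role

robot = RoleSwitchingRobot()
print(robot.engage("recreation"))  # ('Robin', 'peer')
```

In the first design, astronauts always face “Robin” regardless of role; in the second, each context surfaces a distinct agent with its own history, and DQ6 asks how these choices would affect astronauts’ trust differently.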

3.2. Robot Social Behavior Design

Robots can positively influence a human group via diverse combinations of social cues, from actively using animated speech to relying solely on nonverbal cues that do not elicit human awareness. To be interpreted correctly and efficiently, robot social cues need to be contingent on the specific interaction as well as congruent with the other social cues being used. Unlike the Earth environments where most prior HRI studies were conducted, the outer space environment imposes far more stressful challenges on human bodies and minds, resulting in a variety of expected physiological, psychological, cognitive, and physical changes in astronauts. Investigating how astronauts encode and decode multimodal social cues when interacting with a robot in various physical and mental conditions would help design space robots that can adapt their social-affective expression styles to both the physical environment and astronauts’ in-the-moment states, improving human-robot and human-human communication. Thus, we summarize this design question on robot social cues as follows:

  • DQ7: How would different gravity conditions, e.g., microgravity, lunar and Martian gravity, impact astronauts’ social, cognitive and affective reactions to a variety of robot social cues, e.g., gaze sharing, body motion and speech, in both group and single-person HRI contexts? What social cues would astronauts prefer to use for human-robot communications in different gravity conditions?

Living in ICE conditions for a long duration exposes astronauts to long-term sensory restriction. Because social robots are themselves unaffected by ICE conditions, they have great potential to deliver high-quality, consistent multi-sensory interventions to astronauts. This design question is summarized as follows:

  • DQ8: How could different robot social cues and behaviors be designed to provide positive multi-sensory stimulation to the crew team as well as enhance multimodal social-affective communications among crew members, e.g., rapport, social touch, and social reciprocity?

3.3. Robot Social Adaptation Design

Human adaptability to the deep space environment is crucial for mission success. In addition to traditional coping and resource strategies for astronauts’ adaptation (Bartone et al., 2019), a space robot’s social adaptability to astronauts could also facilitate their adaptation to deep space, with the long-term goal of maintaining their health and productivity. Multiple social-affective signal modalities of astronauts could be leveraged to develop a robot social adaptation system, including physiological signals (e.g., blood pressure, sleep patterns), physical conditions (e.g., malnutrition), and psychological and cognitive states (e.g., depressive mood, anxiety, loneliness), alongside the robot’s audiovisual observation of naturalistic crew interactions. For example, if a robot can monitor each astronaut’s sleep quality, it can adapt its social role and social cues to minimize the negative effect of sleep deprivation on team decision making and other cognitive processes. Access to a more diverse set of astronauts’ social-affective signal modalities would enable more strategic and timely robot adaptation. However, overaggressive data-driven adaptation may exacerbate astronauts’ perceived lack of privacy in an ICE environment and weaken their trust in the robot and the team, potentially degrading overall crew dynamics and productivity. Investigating optimal robot adaptation in LDSE missions requires an astronaut-centered design approach. Therefore, we propose two astronaut-centered design questions as follows:

  • DQ9: What social-affective signal modalities of astronauts could a robot have access to for its long-term social adaptation?

  • DQ10: How should a robot’s social adaptation system be designed to maintain, and even foster, crew members’ perceived privacy and their trust in the robot and in other members during LDSE missions?
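The sleep-quality example in Section 3.3 can be sketched as a minimal adaptation rule. This is an illustrative sketch, not a proposed system: the robot fuses a few astronaut signal modalities and softens its social behavior when the crew appears sleep-deprived. The signal names, thresholds, and behavior fields are all assumptions made for illustration.

```python
# Hypothetical sketch: adapt a robot's social role and cues to crew sleep
# quality (modality names and the 0.6 threshold are illustrative assumptions).
def adapt_behavior(crew_signals, sleep_threshold=0.6):
    """crew_signals maps astronaut -> {"sleep_quality": 0..1, "stress": 0..1}."""
    avg_sleep = (sum(s["sleep_quality"] for s in crew_signals.values())
                 / len(crew_signals))
    if avg_sleep < sleep_threshold:
        # Crew is sleep-deprived: step back from critical decision support and
        # reduce cognitive load with fewer, gentler interventions.
        return {"role": "supporter", "speech_rate": "low", "defer_decisions": True}
    return {"role": "moderator", "speech_rate": "normal", "defer_decisions": False}

crew = {"cdr": {"sleep_quality": 0.4, "stress": 0.7},
        "eng": {"sleep_quality": 0.5, "stress": 0.6}}
print(adapt_behavior(crew))  # a sleep-deprived crew triggers the softer policy
```

Even this toy rule surfaces the tension in DQ9/DQ10: richer signal access (sleep, stress, mood) enables more timely adaptation, but each added modality raises the privacy stakes of the monitoring itself.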

4. Conclusion

This paper introduces novel opportunities for social human-robot interactions in deep space exploration. We believe that exploring the design questions listed here will help shed light on how social robotics could potentially be used to promote interpersonal communications and facilitate group interactions among crew members in long-duration space exploration missions.

References

  • P. T. Bartone, R. R. Roland, J. V. Bartone, G. P. Krueger, A. A. Sciarretta, and B. H. Johnsen (2019) Human adaptability for deep space missions: an exploratory study. Journal of Human Performance in Extreme Environments 15. External Links: Document Cited by: §3.3.
  • M. Chen, S. Nikolaidis, H. Soh, D. Hsu, and S. Srinivasa (2018) Planning with trust for human-robot collaboration. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’18, New York, NY, USA, pp. 307–315. External Links: ISBN 9781450349536, Link, Document Cited by: §2.3.
  • H. Claure, Y. Chen, J. Modi, M. Jung, and S. Nikolaidis (2020) Multi-armed bandits with fairness constraints for distributing resources to human teammates. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’20, New York, NY, USA, pp. 299–308. External Links: ISBN 9781450367462, Link, Document Cited by: §2.1.
  • F. Correia, S. Mascarenhas, R. Prada, F. S. Melo, and A. Paiva (2018) Group-based emotions in teams of humans and robots. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’18, New York, NY, USA, pp. 261–269. External Links: ISBN 9781450349536, Link, Document Cited by: §2.2.
  • D. DeSteno, C. Breazeal, R. H. Frank, D. Pizarro, J. Baumann, L. Dickens, and J. J. Lee (2012) Detecting the trustworthiness of novel partners in economic exchange. Psychological Science 23 (12), pp. 1549–1556. Note: PMID: 23129062 External Links: Document, Link, https://doi.org/10.1177/0956797612448793 Cited by: §2.3.
  • D. Druckman, L. Adrian, M. F. Damholdt, M. Filzmoser, S. T. Koszegi, J. Seibt, and C. Vestergaard (2020) Who is Best at Mediating a Social Conflict? Comparing Robots, Screens and Humans. Group Decision and Negotiation. External Links: Document, ISSN 1572-9907, Link Cited by: §2.1, §3.1.
  • J. Fink, S. Lemaignan, P. Dillenbourg, P. Rétornaz, F. Vaussard, A. Berthoud, F. Mondada, F. Wille, and K. Franinović (2014) Which robot behavior can motivate children to tidy up their toys? design and evaluation of ”ranger”. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’14, New York, NY, USA, pp. 439–446. External Links: ISBN 9781450326582, Link, Document Cited by: §2.2.
  • C. F. Flynn (2005) An operational approach to long-duration mission behavioral health and performance factors.. Aviation, space, and environmental medicine 76 (6 Suppl), pp. B42–51 (eng). External Links: ISSN 0095-6562 (Print) Cited by: §1.1.
  • M. R. Fraune, S. Šabanović, and T. Kanda (2019) Human Group Presence, Group Characteristics, and Group Norms Affect Human-Robot Interaction in Naturalistic Settings. Frontiers in Robotics and AI 6. External Links: Document, ISSN 2296-9144, Link Cited by: §2.1.
  • A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman (2007) Measurement of trust in human-robot collaboration. In 2007 International Symposium on Collaborative Technologies and Systems, Vol. , pp. 106–114. External Links: Document Cited by: §2.3.
  • L. Galarza and A. W. Holland (1999) Critical astronaut proficiencies required for long-duration space flight. In SAE Technical Paper, Vol. , pp. . External Links: Link, Document Cited by: §1.1.
  • Q. Gao, J. Liu, Z. Ju, Y. Li, T. Zhang, and L. Zhang (2017) Static hand gesture recognition with parallel cnns for space human-robot interaction. In Intelligent Robotics and Applications, Y. Huang, H. Wu, H. Liu, and Z. Yin (Eds.), Cham, pp. 462–473. External Links: ISBN 978-3-319-65289-4 Cited by: §1.2.
  • P. Hancock, D. Billings, K. Schaefer, J. Chen, E. de Visser, and R. Parasuraman (2011) A meta-analysis of factors affecting trust in human-robot interaction. Human factors 53, pp. 517–27. External Links: Document Cited by: §2.3.
  • G. Hoffman, O. Zuckerman, G. Hirschberger, M. Luria, and T. Shani Sherman (2015) Design and evaluation of a peripheral robotic conversation companion. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI ’15, New York, NY, USA, pp. 3–10. External Links: ISBN 9781450328838, Link, Document Cited by: §2.1.
  • T. Huntsberger and A. Stoica (2010) Envisioning cognitive robots for future space exploration. In Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2010, J. J. Braun (Ed.), Vol. 7710. External Links: Document, Link Cited by: §1.2.
  • W. Ju and L. Leifer (2008) The design of implicit interactions: making interactive systems less obnoxious. Design Issues 24, pp. 72–84. Cited by: §2.2.
  • M. F. Jung, N. Martelaro, and P. J. Hinds (2015) Using robots to moderate team conflict: the case of repairing violations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI ’15, New York, NY, USA, pp. 229–236. External Links: ISBN 9781450328838, Link, Document Cited by: §2.2.
  • J. Kennedy, P. Baxter, and T. Belpaeme (2015) The robot who tried too hard: social behaviour of a robot tutor can negatively affect child learning. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI ’15, New York, NY, USA, pp. 67–74. External Links: ISBN 9781450328838, Link, Document Cited by: §2.2.
  • H. Kidokoro, T. Kanda, D. Bršcic, and M. Shiomi (2013) Will i bother here? - a robot anticipating its influence on pedestrian walking comfort. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 259–266. External Links: Document Cited by: §2.2.
  • S. Langton and V. Bruce (2000) You must see the point: automatic processing of cues to the direction of social attention.. Journal of experimental psychology. Human perception and performance 26 2, pp. 747–57. Cited by: §2.2.
  • S. R. H. Langton (2000) The mutual influence of gaze and head orientation in the analysis of social attention direction. The Quarterly Journal of Experimental Psychology Section A 53 (3), pp. 825–845. Note: PMID: 10994231 External Links: Document, Link, https://doi.org/10.1080/713755908 Cited by: §2.2.
  • M. K. Lee, J. Forlizzi, S. Kiesler, P. E. Rybski, J. Antanitis, and S. Savetsila (2012) Personalization in hri: a longitudinal field experiment. 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 319–326. Cited by: §2.3.
  • I. Leite, A. Pereira, S. Mascarenhas, C. Martinho, R. Prada, and A. Paiva (2013) The influence of empathy in human-robot relations. Int. J. Hum. Comput. Stud. 71, pp. 250–260. Cited by: §2.2.
  • J. Liu, Q. Gao, Z. Liu, and Y. Li (2016) Attitude control for astronaut assisted robot in the space station. International Journal of Control, Automation and Systems 14 (4), pp. 1082–1095. External Links: Document, ISSN 2005-4092, Link Cited by: §1.2.
  • P. Liu, D. F. Glas, T. Kanda, H. Ishiguro, and N. Hagita (2013) It’s not polite to point generating socially-appropriate deictic behaviors towards people. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 267–274. External Links: Document Cited by: §2.2.
  • K. S. Lohan, K. Rohlfing, J. Saunders, C. Nehaniv, and B. Wrede (2012) Contingency scaffolds language learning. In 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), Vol. , pp. 1–6. External Links: Document Cited by: §2.2.
  • Y. Matsuyama, I. Akiba, S. Fujie, and T. Kobayashi (2015) Four-participant group conversation: A facilitation robot controlling engagement density as the fourth participant. Computer Speech and Language 33 (1), pp. 1–24. External Links: Document, ISSN 10958363 Cited by: §2.2.
  • C. Mavrogiannis, A. M. Hutchinson, J. Macdonald, P. Alves-Oliveira, and R. A. Knepper (2019) Effects of distinct robot navigation strategies on human behavior in a crowded environment. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 421–430. External Links: Document Cited by: §2.2.
  • R. C. Mayer, J. H. Davis, and F. D. Schoorman (1995) An integrative model of organizational trust. The Academy of Management Review 20 (3), pp. 709–734. External Links: ISSN 03637425, Link Cited by: §2.3.
  • D. J. McAllister (1995) Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. The Academy of Management Journal 38 (1), pp. 24–59. External Links: ISSN 00014273, Link Cited by: §2.3.
  • T. I. Murphy (2018) Hello, i am cimon!. External Links: Link Cited by: §1.2.
  • B. Mutlu, T. Shiwa, T. Kanda, H. Ishiguro, and N. Hagita (2009) Footing in human-robot conversations: how robots might shape participant roles using gaze cues. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, HRI ’09, New York, NY, USA, pp. 61–68. External Links: ISBN 9781605584041, Link, Document Cited by: §2.2.
  • NASA (n.d.) NASA’s journey to mars. External Links: Link Cited by: §1.1.
  • NASA (2020) Risk of adverse cognitive or behavioral conditions and psychiatric disorders. External Links: Link Cited by: §1.1.
  • L. Palinkas and D. Browner (1995) Effects of prolonged isolation in extreme environments on stress, coping, and depression. Journal of Applied Social Psychology 25, pp. 557–576. Cited by: §1.1.
  • D. Raybeck (1991) Proxemics and privacy: managing the problems of life in confined environments. In From Antarctica to Outer Space, A. A. Harrison, Y. A. Clearwater, and C. P. McKay (Eds.), New York, NY, pp. 317–330. External Links: ISBN 978-1-4612-3012-0 Cited by: §1.1.
  • P. Robinette, A. M. Howard, and A. R. Wagner (2017) Effect of robot performance on human-robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems 47 (4), pp. 425–436 (English (US)). External Links: Document, ISSN 2168-2291 Cited by: §2.3.
  • A. M. Sabelli and T. Kanda (2016) Robovie as a Mascot: A Qualitative Study for Long-Term Presence of Robots in a Shopping Mall. International Journal of Social Robotics 8 (2), pp. 211–221. External Links: Document, ISSN 1875-4805, Link Cited by: §2.2.
  • A. A. Scoglio, E. D. Reilly, J. A. Gorman, and C. E. Drebing (2019) Use of social robots in mental health and well-being research: systematic review. J Med Internet Res 21 (7), pp. e13322. External Links: ISSN 1438-8871, Document, Link Cited by: §1.2.
  • S. Sebo, L. L. Dong, N. Chang, M. Lewkowicz, M. Schutzman, and B. Scassellati (2020a) The influence of robot verbal support on human team members: encouraging outgroup contributions and suppressing ingroup supportive behavior. Frontiers in Psychology 11, pp. 3584. External Links: Link, Document, ISSN 1664-1078 Cited by: §2.2.
  • S. Sebo, B. Stoll, B. Scassellati, and M. F. Jung (2020b) Robots in Groups and Teams: A Literature Review. Proc. ACM Hum.-Comput. Interact., pp. 37. External Links: Document, Link Cited by: §1.2, §2.2.
  • S. S. Sebo, L. L. Dong, N. Chang, and B. Scassellati (2020) Strategies for the inclusion of human members within human-robot teams. In ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, pp. 309–317. External Links: Document, ISBN 9781450367462, ISSN 21672148, Link Cited by: §2.2.
  • S. Shen, P. Slovak, and M. F. Jung (2018) ”stop. i See a Conflict Happening.”: A Robot Mediator for Young Children’s Interpersonal Conflict Resolution. In ACM/IEEE International Conference on Human-Robot Interaction, pp. 69–77. External Links: Document, ISBN 9781450349536, ISSN 21672148 Cited by: §2.1.
  • M. Shiomi, T. Kanda, H. Ishiguro, and N. Hagita (2010) A larger audience, please! — encouraging people to listen to a guide robot. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 31–38. External Links: Document Cited by: §2.2.
  • E. Short and M. J. Mataric (2017) Robot moderation of a collaborative game: towards socially assistive robotics in group interactions. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Vol. , pp. 385–390. Cited by: §2.2.
  • E. S. Short, K. Sittig-Boyd, and M. Mataric (2016) Modeling moderation for multi-party socially assistive robotics. In RO-MAN 2016, Cited by: §2.1.
  • G. Skantze (2017) Predicting and regulating participation equality in human-robot conversations: effects of age and gender. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’17, New York, NY, USA, pp. 196–204. External Links: ISBN 9781450343367, Link, Document Cited by: §2.2.
  • S. Strohkorb, E. Fukuto, N. Warren, C. Taylor, B. Berry, and B. Scassellati (2016) Improving human-human collaboration between children with a social robot. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Vol. , pp. 551–556. Cited by: §2.1.
  • C. Tafforin and F. Giner Abati (2017) Cultural ethology as a new approach of interplanetary crew’s behavior. Acta Astronautica 139, pp. 102–110. External Links: ISSN 0094-5765, Document, Link Cited by: §1.1.
  • H. Tennent, S. Shen, and M. Jung (2019) Micbot: a peripheral robotic object to shape conversational dynamics and team performance. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 133–142. Cited by: §2.2.
  • M. L. Traeger, S. S. Sebo, M. Jung, B. Scassellati, and N. A. Christakis (2020) Vulnerable robots positively shape human conversational dynamics in a human-robot team. Proceedings of the National Academy of Sciences of the United States of America 117 (12), pp. 6370–6375. External Links: Document, ISSN 10916490, Link Cited by: §2.1, §2.2.
  • V. V. Unhelkar, S. Li, and J. A. Shah (2020) Decision-making for bidirectional communication in sequential human-robot collaborative tasks. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’20, New York, NY, USA, pp. 329–341. External Links: ISBN 9781450367462, Link, Document Cited by: §2.2.
  • D. Utami and T. Bickmore (2019) Collaborative user responses in multiparty interaction with a couples counselor robot. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. , pp. 294–303. Cited by: §2.1.
  • M. Vázquez, E. J. Carter, B. McDorman, J. Forlizzi, A. Steinfeld, and S. E. Hudson (2017) Towards robot autonomy in group conversations: understanding the effects of body orientation and gaze. In 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI, Vol. , pp. 42–52. Cited by: §2.2.
  • E. Yadollahi, W. Johal, A. Paiva, and P. Dillenbourg (2018) When deictic gestures in a robot can harm child-robot collaboration. In Proceedings of the 17th ACM Conference on Interaction Design and Children, IDC ’18, New York, NY, USA, pp. 195–206. External Links: ISBN 9781450351522, Link, Document Cited by: §2.2.
  • M. Żarkowski (2019) Multi-party Turn-Taking in Repeated Human–Robot Interactions: An Interdisciplinary Evaluation. International Journal of Social Robotics 11 (5), pp. 693–707. External Links: Document, ISSN 18754805, Link Cited by: §2.1.