Using Socially Expressive Mixed Reality Arms for Enhancing Low-Expressivity Robots

11/21/2019 ∙ by Thomas R. Groechel, et al. ∙ University of Southern California

Expressivity–the use of multiple modalities to convey internal state and intent of a robot–is critical for interaction. Yet, due to cost, safety, and other constraints, many robots lack high degrees of physical expressivity. This paper explores using mixed reality to enhance a robot with limited expressivity by adding virtual arms that extend the robot's expressiveness. The arms, capable of a range of non-physically-constrained gestures, were evaluated in a between-subject study (n=34) where participants engaged in a mixed reality mathematics task with a socially assistive robot. The study results indicate that the virtual arms added a higher degree of perceived emotion, helpfulness, and physical presence to the robot. Users who reported a higher perceived physical presence also found the robot to have a higher degree of social presence, ease of use, usefulness, and had a positive attitude toward using the robot with mixed reality. The results also demonstrate the users' ability to distinguish the virtual gestures' valence and intent.




I Introduction

Socially assistive robots (SAR) have been shown to have positive impacts in a variety of domains, from stroke rehabilitation [26] to tutoring [9]. Such robots, however, typically have low expressivity due to physical, cost, and safety constraints. Expressivity in Human-Robot Interaction (HRI) refers to a robot’s ability to use its modalities to non-verbally communicate its intentions or internal state [7]. Higher levels of expressiveness have been shown to increase trust, disclosure, and companionship with a robot [25]. Expressivity can be conveyed with dynamic actuators (e.g., motors) as well as static ones (e.g., screens, LEDs) [1]. HRI research into gesture has explored head and arm gestures, but many nonhumanoid robots partially or completely lack those features, resulting in low social expressivity [6].

Social expressivity refers to expressions related to the communication of affect or emotion. In social and socially assistive robotics, social expressivity has been used for interactions such as expressing the robot’s emotional state through human-like facial expressions [8, 28, 20], gestures [6], and physical robot poses [5]. In contrast, functional expressivity refers to the robot’s ability to communicate its functional capabilities (e.g., using turn signals to show driving direction). Research into robot expressiveness has explored insights from animation, design, and cognitive psychology [7].

Fig. 1: This work explores how mixed reality robot extensions can enhance low-expressivity robots by adding social gestures. Six mixed reality gestures were developed: (A) facepalm, (B) cheer, (C) shoulder shrug, (D) arm cross, (E) clap, and (F) wave dance.

The importance of expressivity and the mechanical, cost, and safety constraints of physical robots call for exploring new modalities of expression, such as augmented reality (AR) and mixed reality (MR). AR refers to projecting virtual objects onto the real world without adherence to physical reality, while MR refers to virtual objects projected onto the real world that respect and react to physical reality.

Using virtual modalities for robots has led to the emerging field of Augmented Reality Human-Robot Interaction (AR-HRI), which encompasses AR, MR, and virtual reality (VR) robotics. AR-HRI has already made advances in the functional expression of a robot [38, 40] but has not yet explored social expressiveness. Introducing such expressiveness into AR-HRI allows us to leverage the positive aspects of physical robots–embodiment and physical affordances [10]–as well as the positive aspects of mixed reality–overcoming cost, safety, and physical constraints. This work aims to synergize the combined benefits of the two fields by creating mixed reality, socially expressive arms for robots with low social expressivity (Fig. 1).

This paper describes the design, implementation, and validation of MR arms for a low-expressivity physical robot. We performed a user study where participants completed a mixed reality mathematics task with a robot. This new and exploratory work in AR-HRI did not test specific hypotheses; empirical data were collected and analyzed to inform future work. The results demonstrate a higher degree of perceived robot emotion, helpfulness, and physical presence by users who experienced the mixed reality arms on the robot compared to those who did not. Participants who reported a higher physical presence also reported higher measures of robot social presence, perceived ease of use, usefulness, and had a more positive attitude toward using the robot with mixed reality. The results from the study also demonstrate consistent ratings of gesture valence and identification of gesture intent.

II Background and Related Work

II-A Augmented Reality Human-Robot Interaction

AR and MR can be delivered through projectors [14], tablets [15], and head-mounted displays [38]. Projectors allow for hands-free communication, but are limited with respect to user input. In contrast, tablets allow for direct, intuitive, and consistent user input, but restrict users to a 2D screen, eliminating hands-free, kinesthetic interactions. An augmented reality head-mounted display (ARHMD) aims to remove those limitations by allowing consistent, direct input and hands-free interaction. ARHMDs, such as the Microsoft Hololens, allow for high quality, high fidelity, hands-free interaction.

The reality-virtuality continuum spans the range of technologies from physical reality, to differing forms of mixed reality, to full virtual reality [29]. Exploring that continuum for enhancing robot communication is a nascent area of research; Augmented Reality Human-Robot Interaction (AR-HRI) has been gaining attention [40].

Creating mixed reality experiences with robots is now possible with open-source tools [4]. Work to date has largely focused on signalling functional intent [38, 40] and teleoperation [43, 16, 23]. For example, the functional signalling used in Walker et al. [38] allowed nonexpert users to understand where a robot was going in the real world, which was especially useful for robots with limited expressive modalities.

The use of AR for HRI, however, is a very new area of research [41]. Specifically, research in AR-HRI to date has focused on functional expression, with little work on social expression. Survey analysis across the Milgram virtuality continuum has shown early work in social mixed reality for robots to be limited [18]. Examples include adding a virtual expressive face to a Roomba vacuum cleaning robot [42] and adding a virtual avatar to a TurtleBot mobile robot [11]. To the best of our knowledge, these virtual overlays have not been pursued further since the survey was conducted in 2009, leaving opportunities open for exploring socially expressive AR-HRI design.

II-B Social Expressivity and Presence in HRI

Research in HRI has explored robot expressiveness extensively, including simulating human facial expressions on robots [8, 28, 20], gestures [6], and physical social robot poses [5]. Increased social expressivity has been shown to build rapport and trust [25].

For socially interactive robots, social presence depends on the ability to communicate expected social behaviors through the robot’s available modalities [6, 31]. A high degree of social presence can be beneficial to user sympathy and intimacy [19]. This effect has been validated in various domains, including museum robots [30] and robot math tutors [9]. Since physical robots are limited by cost, physical safety, and mechanical constraints, socially interactive robots often explore additional communication channels, ranging from lights [2] to dialogue systems [3]. The work presented here also explores an additional communication channel, by taking advantage of the high fidelity of MR and the lack of physical constraints to evaluate the effectiveness of mixed reality gestures on increasing social expressiveness.

III Robot Gesture Design and Implementation

To study MR arms, we chose a mobile robot with very low expressiveness: the Mayfield Robotics Kuri (Fig. 2), formerly a commercial product. Kuri is 50 cm tall, has 8 DoF (3 in the base, 3 between the base and head, and 1 in each of the two eyelids), and is equipped with an array of 4 microphones, dual speakers, lidar, a chest light, and a camera behind the left eye. Although very well engineered, Kuri lacks arms, making it an ideal platform for exploring AR-HRI in general and MR gestures in particular.

Fig. 2: Keyframes for Kuri’s clapping animation.

III-A Implementation

We used the Microsoft Hololens ARHMD, which is equipped with a limited field of view (FOV), an IMU, a long-range depth sensor, an HD video camera, four microphones, and an ambient light sensor. We developed the mixed reality extension prototypes in Unity3D, a popular game engine. For communication between Kuri and the Hololens, we used the open source ROS# library [4]. The full arm and experiment implementation is open source and publicly available.

We developed a balanced set of positive gestures (a dancing cheer, a clap, and a wave-like dance) and negative gestures (a facepalm, a shoulder shrug, and crossing the arms), as shown in Fig. 1. We used Unity's built-in interpolated keyframe animator to develop each gesture animation and a simple inverse kinematic solver to speed up the development of each keyframe.
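Interpolated keyframe animation of the kind described above can be illustrated with a minimal sketch. This is our own simplified Python illustration, not the study's Unity code (Unity's animator interpolates with curves, not plain linear segments), and the clap keyframe values are invented for the example:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b at t in [0, 1]."""
    return a + (b - a) * t

def sample_pose(keyframes, t):
    """Sample a joint angle at time t from (time, angle) keyframes.

    keyframes: list of (time, angle) pairs sorted by time.
    Values before the first or after the last keyframe are clamped.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return lerp(a0, a1, (t - t0) / (t1 - t0))

# Hypothetical elbow-angle keyframes (seconds, degrees) for a clap-like motion:
clap_elbow = [(0.0, 0.0), (0.5, 45.0), (1.0, 0.0)]
print(sample_pose(clap_elbow, 0.25))  # halfway to the 45-degree peak -> 22.5
```

Each virtual arm joint would carry its own keyframe track, sampled every frame at the current animation time.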

III-B Gesture Design

In designing the virtual gestures for the robot, we took inspiration from social constructs of collaboration, such as pointing to indicate desire [37], and emblems and metaphoric gestures [27], such as clapping to show approval. The inclusion of such gestures goes beyond the current use of mostly audio and dance feedback in many socially assistive robot systems [22, 9, 33].

Work in HRI has explored Disney animation principles [36], typically either in simulation or with physically constrained robots [34, 13, 35]. In this work, we explored a subset of Disney principles–squash and stretch, exaggeration, and staging–in the context of MR arm gestures. Each principle was considered for its benefit over physical-world constraints. Squash and stretch lends flexibility and liveliness to animations, bringing life to otherwise rigid robots. Exaggeration has been shown to aid robot social communication [13]. Staging was considered for its role in referencing objects with the arms to support joint attention.

Informed by feedback from a pilot study we conducted, the animated gestures were accompanied by physical body expressions to make the arms appear integrated with Kuri. For positive gestures, Kuri performed a built-in happy gesture (named “gotit”) that involved the robot’s head moving up and emitting a happy, rising tone. For negative gestures, Kuri performed a built-in sad gesture (named “sad”) that involved the robot’s head moving down and being silent.

We also explored the use of deictic (i.e., pointing) gestures; these have been recently explored in AR-HRI but only through a video survey [40]. The gestures had both functional and social purposes, as discussed in Section IV-C.

IV Experiment Design

A single-session experiment consisting of two parts was conducted with the approval of our university IRB (UP-16-00603). The first part was a two-condition between-subjects experiment to test the mixed reality arms. All participants wore the ARHMD and interacted with both physical and virtual objects as part of a math puzzle game. The independent variable was whether participants had arms on their Kuri robot (Experiment condition) or not (Control condition). We collected subjective measures including perceived physical presence, social presence, ease of use, helpfulness, and usefulness from Heerink et al. [17], adapted for the mixed reality robot. Task efficiency was objectively measured using completion time, as is standard in AR-HRI [38]. After the first part of the experiment was completed, a survey of 7-point Likert scale questions and a semi-structured interview were administered.

The second part of the experiment involved all participants in a single condition. The participants were shown a video of each of the six MR arm gestures and asked to rate each gesture’s valence on a decimal scale from very negative (-1.00) to very positive (+1.00), as in prior work [24], and to describe verbally, in written form, what each gesture conveyed.

IV-A Part 1: Mixed Reality Mathematics Puzzles

Participants wore the Hololens and were seated across from Kuri (Fig. 3) with a set of 20 colored physical blocks on the table in front of them. The blocks were numbered 1-9. The block shapes were: cylinder, cube, cuboid, wide cuboid, and long cuboid. The block colors were: red, green, blue, and yellow. The participants’ view from the Hololens can be seen in Fig. 4, with labels for all objects pertinent to solving the mathematics puzzle. The view included cream-colored blocks in the same variety of shapes, labeled with either a plus (+) or minus (-) sign. Participants were asked to solve an addition/subtraction equation based on information provided on the physical and virtual blocks, and virtually input the numeric answer.

Fig. 3: Participant wearing the Hololens across from Kuri (left). Two sides of a single physical cuboid block (right).
Fig. 4: View when clicking a virtual block. Kuri is displaying red on its chest and pointing to the red sphere to indicate the virtual clicked block to the corresponding physical block color. From left to right, the blocks read: 9, 1, 8, 4, 5.

Participants were shown anywhere from 1 to 8 cream-colored virtual blocks (B) for each puzzle. To discover the hidden virtual block color, participants clicked on a virtual block (by moving the Hololens cursor over it and pressing a hand-held clicker); in response, Kuri’s chest LED (D) lit up in the hidden block color. In the Experiment condition, Kuri also used MR arms to point to the virtual color indicator (E) of that color.

Once the color was so indicated, the participants selected a physical block (A) with the same shape and color. The number displayed on the physical block was part of the math equation. The + or - on the virtual block indicated whether the number should be added or subtracted. Once all virtual-to-physical correspondences for the blocks were found, participants added or subtracted the numbers into a running sum (initialized at 0), calculated the final answer, and input it into the virtual answer input (C).

At the start of the session, participants were guided through the process in a scripted tutorial (Fig. 4). They were told to click on the virtual cylinder on the far left. Once clicked, Kuri lit up its chest LED in red and pointed at the red virtual ball-shaped color indicator. Participants then grabbed the red cylinder with the number 9 on it. This process was repeated for all the blocks. The resulting sum was: {(-9), (+1), (-8), (-4), (+5)} = -15; it was input into the virtual answer.
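The running-sum mechanic of the tutorial puzzle can be sketched in a few lines. This is our own illustration of the scoring logic described above, with the blocks represented as hypothetical (sign, number) pairs:

```python
# Hypothetical representation of the tutorial puzzle: each virtual block's
# +/- sign paired with the number read off the matching physical block.
tutorial_blocks = [("-", 9), ("+", 1), ("-", 8), ("-", 4), ("+", 5)]

def puzzle_answer(blocks):
    """Accumulate the running sum (initialized at 0) over signed blocks."""
    total = 0
    for sign, number in blocks:
        total += number if sign == "+" else -number
    return total

print(puzzle_answer(tutorial_blocks))  # -> -15, the tutorial answer
```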

Kuri used social gestures in response to participants’ answers in both conditions. For a correct answer, Kuri performed the positive physical gesture (“gotit”); for an incorrect answer, Kuri performed the negative physical gesture (“sad”).

In the Experiment condition, Kuri also used the positive and negative mixed reality arm gestures (Fig. 1) synchronized with the positive and negative physical gesture, respectively. We combined the physical and mixed reality gestures, as opposed to using mixed reality gestures only, based on feedback received from a pilot study. Participants in the pilot study indicated that gestures with both the body and mixed reality arms (as opposed to mixed reality arm gestures only) created a more integrated and natural robot appearance.

After the tutorial, participants attempted to solve a series of up to seven puzzles of increasing difficulty within a time limit of 10 minutes. When participants successfully input the correct answer to a puzzle, they advanced to the next puzzle. If the time limit was exceeded or all puzzles were completed, the system halted. Participants were then asked to do a survey and a semi-structured interview described in Section IV-D.

IV-B Ensuring Gesture Presentation Consistency

The puzzle task was designed to mitigate inconsistencies across participants. The first mitigation method addressed gesture randomness and diversity. The Experiment condition used gestures from the positive set {cheer, clap, wave dance} and the negative set {facepalm, shoulder shrug, arm cross}. To preserve balance, we first chose gestures randomly without replacement from each set, thereby guaranteeing that each gesture was shown (assuming at least 3 correct and 3 incorrect answers). Once all gestures from a set had been shown, gestures were chosen randomly with replacement. The Control condition did not require methods for ensuring gesture diversity, since Kuri used a single way of communicating correct answers and incorrect answers.
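The two-phase selection scheme (without replacement until the set is exhausted, then with replacement) can be sketched as follows. This is our own illustration, not the study's implementation; gesture names follow the sets above:

```python
import random

class GestureSampler:
    """Draw gestures without replacement until every gesture has been
    shown once, then fall back to uniform sampling with replacement."""

    def __init__(self, gestures, seed=None):
        self.gestures = list(gestures)
        self.rng = random.Random(seed)
        self.remaining = list(gestures)
        self.rng.shuffle(self.remaining)  # random first-pass order

    def next(self):
        if self.remaining:
            return self.remaining.pop()   # phase 1: without replacement
        return self.rng.choice(self.gestures)  # phase 2: with replacement

positive = GestureSampler(["cheer", "clap", "wave dance"], seed=0)
first_three = {positive.next() for _ in range(3)}
print(first_three == {"cheer", "clap", "wave dance"})  # True: each shown once
```

One sampler instance per valence set yields the guarantee described above: given at least three correct and three incorrect answers, every gesture appears.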

Steps were also taken to avoid showing only positive gestures to users who answered every puzzle correctly. First, all participants were shown an incorrect answer and gesture during the tutorial. Second, some puzzles had a single physical block with two numbers on it (Fig. 3). In those cases, participants were told that the puzzle could have two answers. If their first answer was incorrect, they were told to turn the block over and use the number on the other side. Puzzles 3-7 all had this feature. Regardless of the participant’s initial guess on these puzzles, they were told they were incorrect and then shown a negative gesture. If the initial guess was one of the two possible answers, it was removed from the set of possible answers. After the initial guess, guesses were marked correct if they were in the remaining set of correct answers. This consistency method ensured that each participant saw all of the negative gestures.
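The forced-first-incorrect grading rule for two-answer puzzles can be sketched as a small closure. This is our own illustration of the logic described above, with invented answer values:

```python
def make_judge(valid_answers):
    """Return a grading function for a two-answer puzzle: the first guess is
    always marked incorrect (and removed if it was a valid answer); later
    guesses are correct iff they remain in the valid set."""
    answers = set(valid_answers)
    state = {"first": True}

    def judge(guess):
        if state["first"]:
            state["first"] = False
            answers.discard(guess)  # remove the guess if it was valid
            return False            # force at least one negative gesture
        return guess in answers

    return judge

judge = make_judge({-15, 3})
print(judge(-15))  # False: first guess is always "incorrect"
print(judge(3))    # True: 3 is still in the remaining answer set
```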

IV-C Part 2: Gesture Annotation

All participants were shown a video of Kuri performing the arm gestures shown in Fig. 1. The video was recorded through the Hololens camera, giving the same view seen by participants in the Experiment condition of the math puzzles. After participants watched all gestures once, they were able to rewind and re-watch gestures as they responded to a survey. The gesture presentation order was randomly generated once and then presented in that same order to all participants. In total, the second part of the experiment took 5-10 minutes.

IV-D Measures and Analysis

We used a combination of objective and subjective measures to characterize the difference between the conditions.

Task Efficiency was defined as the total time taken to complete each puzzle. We also noted users who did not complete all puzzles within the 10-minute time limit. The post-study 7-point Likert scale questions used 4 subjective measures, adapted from Heerink et al. [17], to evaluate the use of the ARHMD with Kuri. The measures were: Total Social Presence, Attitude Towards Technology, Perceived Ease of Use, and Perceived Usefulness. Total Social Presence measured the extent to which the robot was perceived as a social being (10 items, Cronbach’s α). Attitude Towards Technology measured how good or interesting the idea of using mixed reality with the robot was (3 items, Cronbach’s α). Perceived Ease of Use measured how easy the robot with mixed reality was to use (5 items, Cronbach’s α). Perceived Usefulness measured how useful or helpful the robot with mixed reality seemed (3 items, Cronbach’s α).

Participants rated the robot’s presence as a teammate from physical (0.00) to virtual (1.00) to a granularity of two decimal points (e.g., 0.34) and were able to see and check the exact value they input. This measure was used to gauge where Kuri was perceived as a teammate on the Milgram virtuality continuum [29].

Qualitative coding was performed on the responses to the post-study semi-structured interviews, to assess how emotional and helpful Kuri seemed to the participants. Participants from the Experiment condition were also asked how “attached” the arms felt on Kuri; this question was coded for only those participants (Table I). To construct codes and counts, one research assistant coded for: “How emotional was Kuri?” and “How helpful was Kuri?” without looking at the data from the Experiment condition. Another assistant coded for: “Do the arms seem to be a part of Kuri?” for participants in the Experiment condition. Codes were constructed by reading through interview transcripts and finding ordinal themes. Example quotes for each code are shown in Table I.

For the gesture annotation, we used a similar approach to Marmpena et al. [24]: users annotated each robot gesture on a slider from very negative (-1.00) to very positive (+1.00), in order to measure valence. The slider granularity was to two decimal points (e.g. -0.73) and participants were able to see the precise decimal value they selected.

To test annotator repeatability and the ability to distinguish gestures, we conducted an inter-rater reliability test. We were interested in measuring the repeatability of choosing a single person from a generalized population to rate each gesture. To measure inter-rater reliability, we used an intraclass correlation with a 2-way random effects model for a single participant against all participants (referred to as “Raters”) among the six gestures (referred to as “Subjects”) to find a measure of absolute agreement among participants. We used Eq. 1,

ICC = (MS_R − MS_E) / (MS_R + (k − 1) MS_E + (k/n)(MS_C − MS_E)),   (1)

where k denotes the number of repeated samples, MS_R is the mean square for rows, MS_E is the mean square error, MS_C is the mean square for columns, and n is the number of items tested [21]. We used k = 1, as we were interested in the reliability of agreement when choosing a single rater to rate the gestures against all other raters. We used the icc function from the irr package in R (v3.6.0) with parameters “twoway”, “agreement”, and “single”. According to Koo et al. [21], values below 0.50 indicate poor reliability, 0.50-0.75 moderate, 0.75-0.90 good, and above 0.90 excellent reliability.
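The absolute-agreement, single-rater intraclass correlation can be computed from the two-way ANOVA mean squares. The sketch below is our own pure-Python illustration (equivalent in form to irr's icc with "twoway"/"agreement"/"single", where k is the number of raters per subject); the ratings matrix is invented toy data, not the study's:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data: n rows (subjects, e.g., gestures) x k columns (raters)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                       # mean square for rows
    ms_c = ss_cols / (k - 1)                       # mean square for columns
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # error
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Toy valence ratings: 4 gestures x 3 raters with strong agreement.
ratings = [[-0.90, -0.80, -0.85],
           [ 0.70,  0.80,  0.75],
           [-0.50, -0.42, -0.45],
           [ 0.60,  0.70,  0.65]]
print(icc2_1(ratings))  # close to 1: raters agree on each gesture
```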


Each gesture also had an open-ended text box where users were asked: “Please describe what you believe gesture X conveys to you” where ‘X’ referred to the gesture number. These textual data were later coded by a research assistant (Table II). Codes were constructed as the most common and relevant words for each gesture. Example quotes for each code are also included in Table II.

V Results

V-A Participants

A total of 34 participants were recruited and randomly assigned to one of two groups: Control (5F, 12M) and Experimental (8F, 9M). Participants were University of Southern California students with an age range of 18-28 ().

V-B Arms vs. No Arms Condition

For the math puzzles, we analyzed our performance metric but found no statistically significant effect between conditions. An independent-samples t-test was conducted to compare Task Efficiency between the two experiment conditions. There was no significant difference in scores between the arms and no arms conditions. An equal number of participants (6) in each group timed out at 10 minutes.

We saw no significant effect for any subjective measure, as seen in Fig. 5. Mann-Whitney tests indicated no significant differences between the arms and no arms conditions for Total Social Presence, Attitude Towards Technology, Perceived Ease of Use, or Perceived Usefulness. Qualitative coding for the interviews can be found in Table I; an explanation of the qualitative coding used for the interviews is found in Section IV-D.

Fig. 5: No statistical significance found for subjective measures. Boxes indicate the 25% (bottom), 50% (middle), and 75% (top) percentiles. Notches indicate the 95% confidence interval about the median, calculated by bootstrapping with 1,000 resamples [12]. Notches can thus extend over the percentiles, giving a “flipped” appearance (e.g., {Attitude, NoArms}).
Code No Arms Arms Quote
Not Emotional 7 5 “I didn’t feel any emotion from the robot”
Close to Emotional 9 7 “Like not so emotional because the task was not based on the emotion”
Emotional 1 4 “It can talk and tell different emotions when I answer questions differently”
Very Emotional 0 1 “When it went like *crosses arms* it was like ‘come on you’re not helping me here.’ And when her *acts out cheering*, yeah I would say very”
Not Helpful 6 2 “No”
Somewhat Helpful 2 3 “Sort of, yeah”
Helpful 9 12 “I like the way it had the visual feedback when I get right or wrong, and I just feel like it could reinforce it."
Are Arms a Part of Kuri? - Arms Count Quote
No - 2 “They seemed pretty detached”
Somewhat - 4 “When it was pointing things it did seem like it a little bit”
Mostly - 3 “I would say 60 percent”, “8/10”
Yes - 8 “What gave me the most information was her arms”
TABLE I: Qualitative Interview Coding

Most participants answered near the ends of the physical-to-virtual teammate scale, with very few near the middle (Fig. 6). Consequently, we divided the participants into two groups: “Physical Teammate” (ratings below 0.5) and “Virtual Teammate” (ratings above 0.5) (Fig. 8), and performed a Chi-Square test of independence. A significant interaction was found. Participants in the Experiment condition, who experienced the arms, were more likely to rate Kuri as a physical teammate than participants in the Control condition, who did not experience the arms. Next, we performed post-hoc analyses of the subjective measures with the physical and virtual teammate binned groups.
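The binning and independence test can be sketched as follows. This is our own illustration, and the contingency counts are invented for the example, not the study's actual data:

```python
def bin_rating(r):
    """Bin a physical(0.00)-to-virtual(1.00) teammate rating at the
    0.5 split point used for the two groups."""
    return "virtual" if r > 0.5 else "physical"

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (rows: condition, columns: teammate bin)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = [Control, Experiment],
# columns = [physical teammate, virtual teammate].
table = [[5, 12], [13, 4]]
print(chi_square_2x2(table))  # compare against the chi-square critical value
```

The statistic would then be compared against the chi-square distribution with one degree of freedom to obtain a p-value.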

Fig. 6: Stacked histogram with clustering to the left and right of 0.5 rating.
Fig. 7: Significant increases for the first 3 measures with a marginally significant increase for measure 4. See Fig. 5 for notch box-plot explanation.
Fig. 8: Participants in the Experiment condition were more likely to rate the mixed reality robot as physical.

V-C Physical vs. Virtual Teammate Bins

We analyzed our survey data with respect to the two bins and saw significant effects on several metrics (Fig. 7). Mann-Whitney tests indicated significant increases for the physical group over the virtual group in Total Social Presence, Attitude Towards Technology, and Perceived Ease of Use. We found only a marginally significant increase in Perceived Usefulness between the physical and virtual groups.

Gesture Code : Count Example Quote
G1: Facepalm Disappointment :10, Frustration: 4, Facepalm: 3 “Facepalm, the robot is frustrated/disappointed”
G2: Cheer Happy: 8, Celebration: 7, Cheer: 6 “That you got the answer correct and the robot is cheering you on”
G3: Shrug Don’t Know Answer: 11, Shrug: 5, Confuse: 4 “Shrugging, he doesn’t know what the person is doing or is disappointed in the false guess”
G4: Arm Cross Angry: 8, Disappointment: 7, Arm Cross: 4 “Crossing arms. ‘Really??’ mild exasperation or judgment.”
G5: Clap Happy: 13, Clapping: 7, Excited: 4 “It’s a very happy, innocent clap. I like the way its eyes squint, gives it a real feeling of joy.”
G6: Wave Dance Happy: 11, Celebrate: 4, Good Job: 4 “Celebration dance, good job!”
TABLE II: Gesture Description Qualitative Code Counts
       G1     G2    G3     G4     G5    G6
25th  -0.85  0.72  -0.59  -0.87  0.64  0.55
50th  -0.69  0.88  -0.30  -0.58  0.84  0.76
75th  -0.41  1.00  -0.09  -0.36  1.00  1.00
TABLE III: Valence Rating Percentiles (25th, 50th, 75th) by Gesture.
Fig. 9: Distribution on ability to differentiate gesture valence.

V-D Gesture Validation

We analyzed the data from the gesture annotation in order to validate participants’ ability to distinguish the valence of gestures and their consistency in interpreting gestures. As seen in Fig. 9 and Table III, participants could distinguish the valence (negativity to positivity) of the gestures. The two-way, absolute-agreement intraclass correlation for a single rater, described in Section IV-D, resulted in a score of 0.77 with a 95% confidence interval of 0.55-0.95, which constitutes moderate to good reliability. Qualitative data are summarized in Table II; an explanation of the coding of these data can be found in Section IV-D.

VI Discussion

The arms vs. no arms conditions did not show statistical significance for Task Efficiency nor for subjective measures. However, the two conditions were highly correlated with user perception of either a physical or virtual teammate as binned categories. We postulate that participants may have associated arms in general with more physical tasks, such as picking up objects or pointing. The Experiment condition also involved more overall movement, which may have conveyed Kuri as more of a physical teammate to participants who may have associated movement with physicality.

The binned subjective results suggest that a mixed reality robot perceived as physically present makes a “better” teammate. This is consistent with the evidence for the importance of embodiment for social presence [10]. Although having arms strongly correlated with the perception of a physical teammate, other factors may have influenced physical presence. Future mixed reality robot research may explore factors that increase the physical presence of the overall agent in order to ground mixed reality robot abilities, such as the increased social expressive range of gestures discussed in this work. Given the flexibility and lack of physical constraints of the MR arm interface, new gestures and actions could be added and adapted to other scenarios, as explored in some previous work in AR-HRI [38, 40].

Gestures were distinguishable on a valence scale with good agreement reliability among participants, as seen in the intervals reported in Table III. This suggests that gesture annotation is highly repeatable. Users had slightly more difficulty rating the negative gestures than the positive ones. Two participants also rated all gestures as positive, as indicated by the 6 erroneous marks for the 3 negative gestures in Fig. 9. These data were included in all reported statistics and may indicate confusion about the rating scale. The qualitative data (Table II) also support the distinguishability of the gestures as intended (e.g., identifying the “clapping” gesture).

VII Limitations and Future Work

The first-generation Hololens posed many issues. As reported by participants, the virtual arms were difficult to see due to the headset's limited field of view. Hand tracking could supplement direct, intuitive, and safe interaction with mixed reality extensions to robots; for example, the mixed reality arms in this study could be given the ability to share a high-five with a study participant. Eye tracking, which has been used for modeling engagement [32] and joint attention [39], could also provide real-time input for autonomous control, whether for prompting users or gesturing to points of interest.

The reported study consisted of a limited number of gestures and limited measures of those gestures. Although results showed that the gestures were distinguishable on a scale for valence and their intended meaning, we did not have subjects annotate for arousal. The two-factor scale of valence and arousal is commonly used as a metric of measuring affect and has been used previously to rate robot gestures [24]. Measuring arousal or a higher feature scale for gestures could provide further insight into their perceived meaning and into how they differ from physical gestures.

Many study participants reported in their post-study interviews that Kuri’s interactions were limited. Reports of Kuri being seen as a “referee” suggest the robot was perceived as closer to a judge than a peer or teammate. Adaptive robot characters could exploit the unique modalities, constraints, and data that ARHMDs provide to better meet user preferences, leveraging the socially expressive capabilities of the mixed reality arms.

VIII Conclusion

This work explored the use of mixed reality arms to increase the social expressivity of low-expressivity robots. Integrating the expressive modality of mixed reality has the potential to broaden a robot's expressive range as well as increase its perceived physical presence. Future work may expand that expressive range while leveraging real-time data (e.g., eye gaze) to estimate user engagement and generate appropriate mixed reality robot responses. This could lead toward more fluid, expressive, and effective human-robot interaction.

IX Acknowledgment

We would like to thank Stefanos Nikolaidis, Matthew Reuben, and Jessica Lupanow for all of their assistance.

References
  • [1] E. Balit, D. Vaufreydaz, and P. Reignier (2018) PEAR: prototyping expressive animated robots-a framework for social robot prototyping. In HUCAPP 2018-2nd International Conference on Human Computer Interaction Theory and Applications, pp. 1. Cited by: §I.
  • [2] K. Baraka, S. Rosenthal, and M. Veloso (2016) Enhancing human understanding of a mobile robot’s state and actions using expressive lights. In Robot and Human Interactive Communication (RO-MAN), 2016 25th IEEE International Symposium on, pp. 652–657. Cited by: §II-B.
  • [3] T. Belpaeme, P. Baxter, R. Read, R. Wood, H. Cuayáhuitl, B. Kiefer, S. Racioppa, I. Kruijff-Korbayová, G. Athanasopoulos, V. Enescu, et al. (2013) Multimodal child-robot interaction: building social bonds. Journal of Human-Robot Interaction 1 (2), pp. 33–53. Cited by: §II-B.
  • [4] M. Bischoff (2018) ROS#. GitHub. Cited by: §II-A, §III-A.
  • [5] M. Bretan, G. Hoffman, and G. Weinberg (2015) Emotionally expressive dynamic physical behaviors in robots. International Journal of Human-Computer Studies 78, pp. 1–16. Cited by: §I, §II-B.
  • [6] E. Cha, Y. Kim, T. Fong, M. J. Mataric, et al. (2018) A survey of nonverbal signaling methods for non-humanoid robots. Foundations and Trends® in Robotics 6 (4), pp. 211–323. Cited by: §I, §I, §II-B, §II-B.
  • [7] V. Charisi, S. Sabanovic, S. Thill, E. Gomez, K. Nakamura, and R. Gomez (2019) Expressivity for sustained human-robot interaction. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 675–676. Cited by: §I, §I.
  • [8] C. Chen, O. G. Garrod, J. Zhan, J. Beskow, P. G. Schyns, and R. E. Jack (2018) Reverse engineering psychologically valid facial expressions of emotion into social robots. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 448–452. Cited by: §I, §II-B.
  • [9] C. Clabaugh, K. Tsiakas, and M. Mataric (2017) Predicting preschool mathematics performance of children with a socially assistive robot tutor. In Proceedings of the Synergies between Learning and Interaction Workshop@ IROS, Vancouver, BC, Canada, pp. 24–28. Cited by: §I, §II-B, §III-B.
  • [10] E. Deng, B. Mutlu, M. J. Mataric, et al. (2019) Embodiment in socially interactive robots. Foundations and Trends® in Robotics 7 (4), pp. 251–356. Cited by: §I, §VI.
  • [11] M. Dragone, T. Holz, and G. M. O’Hare (2006) Mixing robotic realities. In Proceedings of the 11th international conference on Intelligent user interfaces, pp. 261–263. Cited by: §II-A.
  • [12] B. Efron and R. Tibshirani (1986) Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Statistical Science, pp. 54–75. Cited by: Fig. 5.
  • [13] (2012) Enhancing interaction through exaggerated motion synthesis. Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI ’12. Cited by: §III-B.
  • [14] M. Gillen, J. Loyall, K. Usbeck, K. Hanlon, A. Scally, J. Sterling, R. Newkirk, and R. Kohler (2012) Beyond line-of-sight information dissemination for force protection. In MILITARY COMMUNICATIONS CONFERENCE, 2012-MILCOM 2012, pp. 1–6. Cited by: §II-A.
  • [15] S. Hashimoto, A. Ishida, M. Inami, and T. Igarashi (2011) Touchme: an augmented reality based remote robot manipulation. In 21st Int. Conf. on Artificial Reality and Telexistence, Proc. of ICAT2011, Cited by: §II-A.
  • [16] H. Hedayati, M. Walker, and D. Szafir (2018) Improving collocated robot teleoperation with augmented reality. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 78–86. Cited by: §II-A.
  • [17] M. Heerink, B. Kröse, V. Evers, and B. Wielinga (2010) Assessing acceptance of assistive social agent technology by older adults: the almere model. International journal of social robotics 2 (4), pp. 361–375. Cited by: §IV-D, §IV.
  • [18] T. Holz, M. Dragone, and G. M. O’Hare (2009) Where robots and virtual agents meet. International Journal of Social Robotics 1 (1), pp. 83–93. Cited by: §II-A.
  • [19] F. Jimenez, T. Yoshikawa, T. Furuhashi, and M. Kanoh (2015) Learning effect of collaborative learning between human and robot having emotion expression model. In Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on, pp. 474–479. Cited by: §II-B.
  • [20] J. Kędzierski, R. Muszyński, C. Zoll, A. Oleksy, and M. Frontkiewicz (2013) EMYS—emotive head of a social robot. International Journal of Social Robotics 5 (2), pp. 237–249. Cited by: §I, §II-B.
  • [21] T. K. Koo and M. Y. Li (2016) A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of chiropractic medicine 15 (2), pp. 155–163. Cited by: §IV-D.
  • [22] D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati (2012) The physical presence of a robot tutor increases cognitive learning gains. In Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 34. Cited by: §III-B.
  • [23] J. I. Lipton, A. J. Fay, and D. Rus (2018) Baxter’s homunculus: virtual reality spaces for teleoperation in manufacturing. IEEE Robotics and Automation Letters 3 (1), pp. 179–186. Cited by: §II-A.
  • [24] M. Marmpena, A. Lim, and T. S. Dahl. How does the robot feel? Perception of valence and arousal in emotional body language. Paladyn, Journal of Behavioral Robotics 9 (1), pp. 168–182. Cited by: §IV-D, §IV, §VII.
  • [25] N. Martelaro, V. C. Nneji, W. Ju, and P. Hinds (2016) Tell me more: designing hri to encourage more trust, disclosure, and companionship. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction, pp. 181–188. Cited by: §I, §II-B.
  • [26] M. J. Matarić, J. Eriksson, D. J. Feil-Seifer, and C. J. Winstein (2007) Socially assistive robotics for post-stroke rehabilitation. Journal of NeuroEngineering and Rehabilitation 4 (1), pp. 5. Cited by: §I.
  • [27] D. McNeill (1992) Hand and mind: what gestures reveal about thought. University of Chicago press. Cited by: §III-B.
  • [28] A. Meghdari, M. Alemi, A. G. Pour, and A. Taheri (2016) Spontaneous human-robot emotional interaction through facial expressions. In International Conference on Social Robotics, pp. 351–361. Cited by: §I, §II-B.
  • [29] P. Milgram, A. Rastogi, and J. J. Grodski (1995) Telerobotic control using augmented reality. In Robot and Human Communication, 1995. RO-MAN’95 TOKYO, Proceedings., 4th IEEE International Workshop on, pp. 21–29. Cited by: §II-A, §IV-D.
  • [30] I. R. Nourbakhsh, J. Bobenage, S. Grange, R. Lutz, R. Meyer, and A. Soto (1999) An affective mobile robot educator with a full-time job. Artificial intelligence 114 (1-2), pp. 95–124. Cited by: §II-B.
  • [31] C. S. Oh, J. N. Bailenson, and G. F. Welch (2018) A systematic review of social presence: definition, antecedents, and implications. Front. Robot. AI 5: 114. doi: 10.3389/frobt. Cited by: §II-B.
  • [32] C. Rich, B. Ponsler, A. Holroyd, and C. L. Sidner (2010) Recognizing engagement in human-robot interaction. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 375–382. Cited by: §VII.
  • [33] B. Scassellati, L. Boccanfuso, C. Huang, M. Mademtzi, M. Qin, N. Salomons, P. Ventola, and F. Shic (2018) Improving social skills in children with asd using a long-term, in-home social robot. Science Robotics 3 (21), pp. eaat7544. Cited by: §III-B.
  • [34] L. Takayama, D. Dooley, and W. Ju (2011) Expressing thought. Proceedings of the 6th International Conference on Human-Robot Interaction - HRI ’11. Cited by: §III-B.
  • [35] (2012) The illusion of robotic life. Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI ’12. Cited by: §III-B.
  • [36] F. Thomas, O. Johnston, and F. Thomas (1995) The illusion of life: disney animation. Hyperion New York. Cited by: §III-B.
  • [37] M. Tomasello (2010) Origins of human communication. MIT press. Cited by: §III-B.
  • [38] M. Walker, H. Hedayati, J. Lee, and D. Szafir (2018) Communicating robot motion intent with augmented reality. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 316–324. Cited by: §I, §II-A, §II-A, §IV, §VI.
  • [39] Z. E. Warren, Z. Zheng, A. R. Swanson, E. Bekele, L. Zhang, J. A. Crittendon, A. F. Weitlauf, and N. Sarkar (2015) Can robotic interaction improve joint attention skills?. Journal of autism and developmental disorders 45 (11), pp. 3726–3734. Cited by: §VII.
  • [40] T. Williams, D. Szafir, T. Chakraborti, and E. Phillips (2019) Virtual, augmented, and mixed reality for human-robot interaction (vam-hri). In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 671–672. Cited by: §I, §II-A, §II-A, §III-B, §VI.
  • [41] T. Williams, N. Tran, J. Rands, and N. T. Dantam (2018) Augmented, mixed, and virtual reality enabling of robot deixis. In International Conference on Virtual, Augmented and Mixed Reality, pp. 257–275. Cited by: §II-A.
  • [42] J. E. Young, M. Xin, and E. Sharlin (2007) Robot expressionism through cartooning. In 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 309–316. Cited by: §II-A.
  • [43] T. Zhang, Z. McCarthy, O. Jowl, D. Lee, X. Chen, K. Goldberg, and P. Abbeel (2018) Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1–8. Cited by: §II-A.