Embodiment in Socially Interactive Robots

December 1, 2019 ∙ Eric Deng et al. ∙ University of Wisconsin-Madison and University of Southern California

Physical embodiment is a required component for robots that are structurally coupled with their real-world environments. However, most socially interactive robots do not need to physically interact with their environments in order to perform their tasks. When and why should embodied robots be used instead of simpler and cheaper virtual agents? This paper reviews the existing work that explores the role of physical embodiment in socially interactive robots. This class consists of robots that are not only capable of engaging in social interaction with humans, but primarily use their social capabilities to perform their desired functions. Socially interactive robots provide entertainment, information, and/or assistance; this last category is typically encompassed by socially assistive robotics. In all cases, such robots can achieve their primary functions without performing functional physical work. To comprehensively evaluate the existing body of work on embodiment, we first review work from established related fields including psychology, philosophy, and sociology. We then systematically review 65 studies evaluating aspects of embodiment published from 2003 to 2017 in major peer-reviewed robotics publication venues. We examine relevant aspects of the selected studies, focusing on the embodiments compared, tasks evaluated, social roles of robots, and measurements. We introduce three taxonomies for the types of robot embodiment, robot social roles, and human-robot tasks. These taxonomies are used to deconstruct the design and interaction spaces of socially interactive robots and to facilitate analysis and discussion of the reviewed studies. We use this newly-defined methodology to critically discuss existing works, revealing topics within embodiment research for social interaction, assistive robotics, and service robotics.


2.1 Embodiment in Philosophy and Ethics

Embodied, or situated, cognition is a concept derived from embodiment in philosophy and ethics, a well-studied area in the humanities that spans topics such as social interaction, social influence, and decision-making [268]. Wilson [306] and Anderson [3] discussed embodied cognition as an approach to examining the human experience being impacted by “aspects of the body beyond the brain.”

In philosophy, cognition is seen as being critically influenced by all aspects of an agent’s body, and the discussion of embodiment in that context is focused on the agent’s sensorimotor capabilities [257]. For example, Wilson and Foglia [307] attributed an agent’s “beyond-the-brain body” as playing a critical role in that agent’s cognitive processes.

Embodiment is closely related to the agent’s various expectations and limitations. All agents are in some way constrained by their embodiment; they are also highly dependent on affordances, “the fundamental properties of a device that determine its way of use”, which are themselves derived from embodiment [95]. The affordances, expectations, and limitations set by an agent’s embodiment are further discussed in Section 4.

The ethics of embodiment in social interaction relate to these affordances stemming from a robot’s design. Interactive robots are often designed with the goals of being engaging and assistive. The robot’s quality of being engaging aids interaction, but can also potentially lead to undesirable influence, unrealistic expectations, and perceived deception, disappointment, or emotional discomfort. Attachment toward the robot can develop, so that the removal of the robot may lead to grief and anxiety [235]. Misleading embodiment design can also engender inappropriate use that can potentially lead to emotional or physical injury [226].

Classical works in philosophy establish the foundation for embodiment in general, including robot embodiment, and work in ethics further warns of the negative consequences of certain design choices.

2.2 Embodiment in Psychology and Communication

Scholars in the fields of psychology and human communications have long pondered the question of how and to what extent different media can be used to represent the real world. A significant body of literature discusses virtual reality [202, 263], perceived reality [140, 131], pictorial realism [303], and other related topics. Increasingly, communications researchers are becoming interested in presence and its relationship to embodiment [28, 186, 177]. Effective design of an embodied robot serves to increase its social presence and desired affordances.

Recent embodiment research in human communications fields has focused on presence in virtual reality platforms [151] and telepresence [149, 74], building on the classical works [105, 104]. There have also been further explorations of physical embodiment in social agents [139, 172, 146]. In the next section, we discuss robotics embodiment studies whose results support the importance of social presence in both human-human and human-robot interaction.

2.3 Embodiment in Robotics and Design

Research in artificial software agents encompasses virtual agents, relatable agents, affective agents, and most recently chatbots, and has been focused on the development of communicative systems that are physically disembodied, such as text interfaces [180], animations [20], or high-fidelity virtual characters [70]. The value of physical embodiment of artificial agents comes from the improvements seen in the interactions held between such agents and their human interaction partners. Two basic questions arise: (1) do physically embodied agents interact more effectively than their non-physically embodied counterparts? and (2) if so, why?

Rosch et al. [257] discussed the influence that sensorimotor capabilities have on an agent’s relationship with its environment, providing a richer experience for the agent and allowing the agent to exist in a richer context that bridges biology, psychology, and culture. Brooks [40] drew parallels from this philosophical generalization of embodiment to the field of robotics. The sensorimotor capabilities of biological beings and robots, at a high level, affect the agent in very similar ways: the sensors and effectors define limitations to the ability of the agent to sense, manipulate, and navigate its environments. Biological agents have brains, muscles, and nerves that communicate on a network that enables the system to function. When discussing the embodiment, or “physical instantiation”, of robots, we focus on the bodily presence of those machines. This includes the internal and external mechanical structures, embedded sensors, and motors that allow them to interact with the world around them [40, 42]. All components of embodiment are inherently tied to the agent’s function, whether the agent is biological or artificial.

Traditionally, roboticists were largely focused on the functional properties of physical embodiment, such as locomotion [91, 41, 301], manipulation [188, 254], and haptics [166, 90, 206]. Human-robot interaction (HRI) is a relatively new and rapidly growing area of robotics that focuses on interaction with people in a broad variety of settings, and fundamentally changes how value is attributed to different components in robotic systems [165, 75, 278]. For instance, in HRI, the value of a gripper goes beyond its capabilities for manipulation to its role in communication: having independently-controlled fingers allows a robot hand to gesture in more complex ways and therefore opens doors to a broader realm of interactions. The value associated with socially interactive capabilities has stimulated new robot embodiments that are not capable of traditional functions (such as Pepper, Kiwi, and Cozmo, shown in Figure 2.1).

Figure 2.1: Embodied Socially Interactive Robot Platforms: (a) Softbank Pepper, (b) Spritebot Kiwi, (c) Anki Cozmo

Designing for interaction rather than physical function fundamentally changes the nature of robot design. In Section 2, we provide a characterization of this new design space. As a first step, we define embodiment in the context of socially interactive robots.

Defining Embodiment for Interactive Agents

Embodiment in the context of artificial social agents has been a topic of discussion since the late 1990s: Zlatev [313] explored situated embodiment, Sharkey and Ziemke [269] studied mechanistic and phenomenal embodiment, Ziemke [311] addressed natural embodiment, and Barsalou et al. [18] discussed the concept of social embodiment, among many others.

Ziemke [312] introduced six different “notions” of embodiment:

  1. Structural Coupling: The physical coupling between the agent and its environment, based on the work of Maturana and Varela [191, 190]. Quick et al. [247] provided a definition of embodiment related to structural coupling:

    System X is embodied in an environment E if perturbatory channels exist between the two. That means, X is embodied in E if for every time t at which both X and E exist, some subset of E’s possible states with respect to X have the capacity to perturb X’s state, and some subset of X’s possible states with respect to E have the capacity to perturb E’s state.

  2. Historical Embodiment: The inherent relationships of any agent’s embodiment with its history, especially in the context of adaptation, evolution, and growth [289, 253, 311].

  3. Physical Embodiment: The physical instantiation of an agent in its environment, adapted into the concept of “physical grounding” [42] which argues that “it is necessary to connect [intelligent systems] to the world via a set of sensors and actuators.”

  4. Organismoid Embodiment: The notion that cognition in an embodied artificial agent is, to some degree, dependent on its similarities to organismic counterparts.

  5. Organismic Embodiment: The concept that “cognition is not only limited to physical, organism-like bodies, but in fact to organisms, i.e., living bodies” [312].

  6. Social Embodiment: The idea that the embodiment of a socially interactive agent plays a significant role in social interactions. Barsalou et al. [18] described social embodiment as the finding that “states of the body, such as postures, arm movements, and facial expressions, arise during social interaction and play central roles in social information processing.” This is the notion of embodiment most relevant to the work in this paper, as it relates most closely to the physical embodiment of socially interactive robots.

Quick et al. [247] discussed embodiment in the context of structural coupling, addressing how embodiment is presented independent of any ontological context. This notion of embodiment, inspired by the interactions of Escherichia coli (E. coli) and its environment, is most concerned with the structural or physical relationships between the agent and its surrounding environment.
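This relational definition can be restated compactly. A minimal formalization of the quoted definition, using notation of our own choosing ($S(\cdot)$ for the set of possible states of a system, $T$ for the times at which both systems exist); Quick et al. [247] state the definition only in prose:

```latex
% Structural coupling, restating Quick et al. [247] in set notation.
% S(.) and T are our notation, not the original authors'.
\[
  \mathrm{embodied}(X, E) \iff
  \forall t \in T :\;
  \big(\exists\, E' \subseteq S(E) : E' \text{ can perturb the state of } X\big)
  \wedge
  \big(\exists\, X' \subseteq S(X) : X' \text{ can perturb the state of } E\big)
\]
```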

This work focuses on how the physical relationship between a socially interactive robot and its surrounding environment relates to the robot’s sociability and presence. We adhere to a definition of embodiment that combines the concepts of “social embodiment” and “situated structural coupling” from Ziemke [312] and Quick et al. [247], respectively.

In the following sections, we review the work related to embodiment in research areas outside of robotics, and discuss how they relate to the design and implementation of physical embodiment of socially interactive robots.

2.3.1 Virtual Artificial Agents

Virtual artificial agents generally fall into one of two categories: (1) immersed virtual reality or (2) on-screen virtual characters [118]. Virtual artificial agents and socially interactive robotics share several enabling technologies, including machine vision, speech, AI, and machine learning [102]. They also share related theoretical grounding, including psychology and sociology theories; Persson et al. [241] presented a user-centered viewpoint of socially interactive agents, research that aims not to simulate social intelligence but to give the impression of the agent being socially intelligent. Taylor [285] explored presence, social integration, and communication in virtual worlds with secondary characters, all critical aspects of embodiment.

The research in virtual agents has shown a significant need for embodiment in virtual social interactions [259]. Many studies supplement qualitative interviews and observer notes with quantitative data [29, 276, 65], such as measures of perceived ownership of sub-components of embodiment [148] and the POMS questionnaire administered before and after an activity [93]. Virtual agents have been shown in research experiments to be engaging to a variety of populations [50, 102], and have been developed for applications in education [295, 288], collaboration [240, 291, 251, 292], social skill training [53], and post-traumatic stress disorder [255] and depression therapy [70].

Work in virtual agents has repeatedly demonstrated the positive effects of embodied cues, such as gestures and expression [265], in maintaining user engagement in both short-term and long-term interactions [154, 27, 26]. Such embodied cues are shared aspects of virtual agents and socially interactive robots and have been shown to be transferable [229].

2.3.2 Collaborative Robots

Virtual agents can enable copresence, but physical robots enable colocation, which, in turn, can enable collaboration. As HRI expands, physical collaboration between people and machines is a major target of research and applications, ranging from manufacturing to the service sector.

Robots were originally envisioned for performing the three “Ds”: dirty, dull, and dangerous work [212]. One of the first uses of robots at scale was in manufacturing and automation. Because of the predictability and repeatability of tasks on the assembly line, robots were designed for and placed in environments not accessed by human workers, or were caged for safety. Recent research and technological advancements have enabled the development of robot systems and control algorithms for deployment in manufacturing settings where people and robots share the same environment and work together to accomplish common goals [220]. This development initially focused on intuitive interfaces and communication tools for master-slave relationships between operators and robots in teleoperational situations [279, 184], but such systems required trained professionals to operate them and had marginal impact on the efficiency and safety of industrial workplaces.

To allow for less trained users to effectively interact with and leverage industrial robot systems, HRI researchers have been working on various approaches to human-robot collaboration, such as using cross-training to improve task sharing between human and robot workers [220], planning shared work plans taking human ergonomics into consideration [236], and adapting robot actions to human motion, availability, adaptability, and intent [127, 160, 129, 221].

2.3.3 Service and Socially Interactive Robots

Concurrently with collaborative robotics, the broad area of service robotics has been growing rapidly, developing robots that can provide services in everyday life, such as vacuuming and cleaning floors [132, 86, 215], folding laundry [231, 185], delivering packages [274, 57], giving museum tours [227], driving autonomously [170], and providing aid to special needs populations in the context of socially assistive robotics [82, 23, 38], along with numerous other uses.

As robots move from cages and from behind closed doors into shared spaces with humans, it has become critical to integrate social capabilities and new design considerations into the embodiments of those systems. Some considerations are related to safety, such as hiding pinch points and adding physical compliance, and others to practical usability, such as height adjustment [106, 309, 189].

Similar to collaborative robots, socially interactive robots also need to be designed to be minimally intrusive, but their embodiments are used as tools for communication, acceptance, and engagement. These robots primarily interact through their social capabilities in order to achieve their goals. Accordingly, they must be able to both perceive [248, 145, 22, 265] and generate communicative signals [229, 126] that their human counterparts are able to intuitively understand, relate to, and accept. These requirements mean a fundamental change in the way robot embodiments are designed.

Social Performance and Social Presence in Embodied Robots

The combined ability of an artificial agent to generate and understand verbal and non-verbal communication can be organized into the following seven human social characteristics that can greatly improve a robot’s social acceptance [85]:

  1. Express emotion

  2. Communicate with high-level dialogue

  3. Learn/recognize models of other agents

  4. Establish/maintain social relationships

  5. Use natural cues (gaze, gestures, etc.)

  6. Exhibit distinctive personality and character

  7. Learn/develop social competencies

These components of social interaction tie into the concept that Lee et al. [165] referred to as social presence, a key component in the success of social interactions. Studies have shown that physically-embodied agents possess social presence to a greater extent than their virtual counterparts [272, 114].

There are a few different definitions of social presence across related research communities (HCI, communications, etc.); we adhere to the definition by Bainbridge et al. [15] that defines social presence as “the degree to which a person’s perceptions of an agent or robot shape social interaction with that robot”. This concept is then broken down into two classes of design: “embodiment” and “co-location”. Each class has attributes for creating rich, social interactions; in this paper we focus on exploring the physical embodiment of artificial agents.

The embodiment hypothesis in socially interactive robotics [296] argues that a robot’s physical presence augments its ability to generate rich communication. The physical embodiment of social agents provides them with more modes of communication that can be used to convey internal states and intentions in more intuitive, human-like ways [176]. Barsalou et al. [18] concisely outlined four significant ways in which physical embodiment directly affects the social capabilities of these interactive systems:

First, perceived social stimuli do not just produce cognitive states, they produce bodily states as well. Second, perceiving bodily states in others produces bodily mimicry in the self. Third, bodily states in the self produce affective states. Fourth, the compatibility of bodily states and cognitive states modulates performance effectiveness.

Types of Socially Interactive Robots

Socially interactive robots vary in many aspects of embodiment and social ability. They can be classified into seven categories according to Fong et al. [85], expanding on Breazeal [33]:

  1. Socially Evocative: Robots that evoke feelings stemming from the natural human tendency to nurture and care for anthropomorphized agents.

  2. Social Interface: Robots that use social cues and communication modalities familiar to human users. This requires embodiments to be capable of generating (and often also understanding) those social cues.

  3. Socially Receptive: Robots that are socially passive but benefit through interaction. They are limited in the social cues they are capable of learning by their respective embodiments.

  4. Sociable: Robots that proactively interact with humans to complete internal goals.

  5. Socially Situated: Robots in a social environment that they are capable of understanding and reacting to [66].

  6. Socially Embedded: Robots that are socially situated but also structurally coupled with their environment and have knowledge of human interactional structures.

  7. Socially Intelligent: Robots that have human-level social intellect. This is the most complex and technologically-capable class of socially interactive robots.

2.4 Summary

In this section we discussed the bodies of work surrounding the concept of embodiment that are relevant to the field of socially interactive robots. Embodiment has been studied by a wide variety of disciplines. Philosophers have examined embodiment as a lens to the human experience, studied its relationship to human cognition, and discussed how it serves as a source for both physical and cognitive human social expectations. Psychologists and communications theorists have long been fascinated by the notion of presence and how symbolic representation of agents can be appropriately designed. Embodiment is inherently contextual; consequently, the latest developments in communication technology and media, such as virtual reality and on-screen characters, have had a strong influence on recent studies. Since human-robot interaction is a relatively young area of robotics, the value of embodiment in social HRI is also a relatively new area of study. Traditionally, the physical embodiment of robots has been discussed in the context of functional value: perception, mobility, and manipulation. We discussed how the field has now advanced to include considerations of interactive value and design affordances. We then discussed the related fields of virtual agents, collaborative robots, and service robots, all of which have relevant aspects of embodiment. Finally, we introduced the embodiment hypothesis that is fundamental to human-robot interaction.

Researchers of embodiment have various opinions on its role in an artificial agent’s abilities: some see limited benefits from physical embodiment [121], while others claim that “intelligence cannot merely exist in the form of an abstract algorithm but requires a physical instantiation, a body” [242]. In the next section, we review a set of embodiment studies conducted in socially interactive robotics over the last decade and a half, in order to evaluate the state of the embodiment hypothesis today.

3.1 Contextual Factors

Figure 3.1: McGrath’s [194] Circumplex of Group Tasks, segmented into octants along the dimensions of generate-negotiate and execute-choose.

Interaction, whether between humans or between humans and robots, is always shaped by context. In the following subsections, we introduce a taxonomy for the two core contextual factors of interaction with socially interactive robots: the tasks in which robots are used and the roles that the robots play in those tasks.

3.1.1 Tasks

The first of the two dimensions of social context is the task at hand. Using the Circumplex Model for group tasks proposed by McGrath [194] (seen in Figure 3.1), we classify the reviewed studies into one of eight octants along two main dimensions of (1) generate-negotiate and (2) execute-choose. The task at hand acts as the underlying driver of these interactions and is the more general of the two contextual factors being considered. Below, we define these eight task categories and provide example studies of each from our review; a minimal encoding of this taxonomy is sketched after the list.

  • Planning (Generate-Execute): Planning tasks are those in which a series of steps is determined by the interacting agents in order to reach a goal for the group. For example, Vossen et al. [293] compared feedback about energy consumption delivered by an embodied robot versus a computer. Users were asked to operate a simulated washing machine interface to clean clothes while trying to minimize electricity consumption.

  • Performances/Action (Execute-Generate): Performance or action tasks are those in which some or all members of the interaction group execute a series of actions, typically following a set of predetermined instructions, to achieve a goal. The action(s) taken depend on the task context, but the tasks share the characteristic of having quantifiable performance metrics. For example, Bainbridge et al. [15] evaluated the use of a physically or virtually embodied artificial agent in instructing participants to perform tasks that ranged from moving stacks of books from one shelf to another to discarding stacks of books by placing them into a trash can.

  • Contests/Competition (Execute-Negotiate): Contest tasks involve conflicts of power between interaction agents in action-based, competitive tasks [243]. The competitive components involve negative-sum or zero-sum games [219] and thus associate negative cost (both social and functional) with task-related decisions. For example, Bartneck [19] asked participants to compete against robot agents in a negotiation task involving stamps that were assigned values prior to the start of the game. Both the robot and the participant were trying to maximize their individual scores and could negotiate and trade with one another throughout the activity.

  • Mixed-motive (Negotiate-Execute): Mixed motive tasks involve resolving conflicts of interest among interacting agents [243]. Such conflicts are structured as positive sum games, in which the net benefits received by an individual party do not necessarily detract from the benefits of another [219]. For example, Shinozawa et al. [272] explored the use of robotic agents in retail settings with relevant social goals such as conversing with participants about purchasing a set of kitchen knives.

  • Cognitive Conflict (Negotiate-Choose): Cognitive conflict tasks involve resolving conflicts of viewpoints among interacting agents [243]. For example, Pereira et al. [239] used the iCat robot to play chess with participants, starting from a predetermined mid-game position with the participant at a slight advantage.

  • Decision Making (Choose-Negotiate): Decision-making tasks are those in which interacting agents decide issues with no unique correct answer [243]. For example, Lee et al. [162] asked participants to rate the “genuineness” of smiles in artificial agents and robots by comparing Duchenne and non-Duchenne smiles.

  • Intellective (Choose-Generate): Intellective tasks are similar to decision making tasks but have correct answers. For example, Zlotowski [314] had participants solve math problems with the robot agent as a medium for feedback related to the task.

  • Creative (Generate-Choose): Creative tasks involve generating ideas. While painting, composing, and photography are examples of typical creative tasks, in the context of McGrath’s [194] task circumplex those tasks are classified as performance tasks, given that actions are being taken. In contrast, the study by Fischer et al. [84] featured a creative task, in which participants were asked to describe objects, selected by the experimenter, to the robot.
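The two contextual factors, task octant and social role (introduced in the next subsection), can be encoded directly. A minimal Python sketch, with names and the numeric role scale of our own invention; it is a convenience for organizing a review corpus, not tooling from the reviewed studies:

```python
from dataclasses import dataclass
from enum import Enum


class TaskOctant(Enum):
    """McGrath's [194] task circumplex octants, keyed by their two poles."""
    PLANNING = ("generate", "execute")
    PERFORMANCE = ("execute", "generate")
    CONTEST = ("execute", "negotiate")
    MIXED_MOTIVE = ("negotiate", "execute")
    COGNITIVE_CONFLICT = ("negotiate", "choose")
    DECISION_MAKING = ("choose", "negotiate")
    INTELLECTIVE = ("choose", "generate")
    CREATIVE = ("generate", "choose")


@dataclass
class StudyContext:
    """Contextual factors of one reviewed study.

    `role` places the robot on the subordinate-superior spectrum of
    Section 3.1.2: -1.0 is fully subordinate, +1.0 is fully superior.
    """
    task: TaskOctant
    role: float

    def __post_init__(self) -> None:
        if not -1.0 <= self.role <= 1.0:
            raise ValueError("role must lie in [-1.0, 1.0]")


# Example: a peer-like robot playing chess with a participant, as in the
# iCat study by Pereira et al. [239] (the role value is our rough placement).
icat_chess = StudyContext(task=TaskOctant.COGNITIVE_CONFLICT, role=0.0)
```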

3.1.2 Social Roles

The second dimension of context is the role the agent plays in the interaction. Roles are inherently tied to the agent’s abilities to achieve certain contextualized goals, both social and task-oriented. For example, agents in the role of a “superior” may be capable of delivering trustworthy information and gaining adherence because of their perceived reliability and competence [144]; those perceived as “peers” may facilitate interesting and engaging cognitive competition; and “subordinate” agents may improve user self-efficacy and encourage attachment formation through a balance of demonstrated ability and disclosed incompetence [19]. Understanding how people respond to agents of varying social roles is critical for designing socially interactive robots. To discuss these roles, we look to classical works in organizational theory [183], plotting roles of agents along the spectrum that spans from subordinate to superior (Figure 3.2).

Figure 3.2: Examples of robot applications in different social roles, including a subordinate mobile base following remote controls, a peer eating “buddy” for children, and a superior robot “instructor” that gives the user task directions.

3.2 Design Paradigms

The second defining feature of socially interactive agents is the design of their embodiments, or their industrial design. The form that the agent’s embodiment takes — physical, virtual, or disembodied — and the potential benefits of that form are key design considerations. Some researchers have characterized different forms of embodiment along the “weak” to “strong” axis [72]. Mutlu [214] argued that the choice of virtual or physical representation goes beyond a weak vs. strong sense of embodiment to elicit disparate frames of mind and result in vastly different user experiences. Virtual embodiments bring users into the agent’s environment, invite them to participate in a crafted narrative, provide proxemic relationships that are constrained and determined by physical arrangements and conventions, and offer a safe setting to experience emotions. Physical embodiments, on the other hand, are co-situated in the users’ environment, perceived as independent agents pursuing their own goals, and seen as real-world, self-relevant stimuli. Interactions with physical embodiments emerge through joint action and intention, and proxemic relationships with these agents are dynamic and co-managed to follow human norms [197, 198]. Despite these significant differences in the nature of interactions with virtual and physical embodiments, the body of work that we review here considers the form of embodiment to be a design choice and seeks to establish the differences in interaction outcomes through direct comparison.

Figure 3.3: A virtual NAO robot (left) and a physical NAO robot (right), representing the virtual (“weak”) and physical (“strong”) embodiments.

Comparing physically embodied robots to their virtual counterparts (Figure 3.3) to test the value of physical embodiment in artificial agents is a common theme in the reviewed research. However, research on embodiment in socially interactive robotics also attempts to learn about specific design features and methods that may be used to create more engaging and effective robot systems. The design space for the embodiment of robots and virtual agents is vast. Because there are so many features of a robot’s embodiment, the robots and virtual agents used in the reviewed studies vary greatly in their designs. To address the variability of embodiment design, we focus on two dimensions of every robot’s design: (1) design metaphor and (2) level of abstraction, characterized in Figure 3.4.

3.2.1 Design Metaphors

The notion of the design metaphor stems from traditional design fields and refers to the design inspiration of an artifact, or in our case, robot. The metaphor for a robot’s embodiment affords certain expectations for interaction partners and scaffolds social interactions. For instance, a humanoid robot with a mouth is more likely to be expected to speak compared to a bird-like robot with a beak. The design metaphors for socially interactive robots cover a wide range of possibilities, including cats, dogs, people, and cars. Since there is no simple linear relationship between these different metaphors (i.e., the metaphor of a cat is not obviously somewhere between the metaphor for a dog and a human), we define this subset of the robot design space as a discrete, nonlinear space. Because the design of embodiments can be inspired by multiple metaphors, we classify each embodiment by its primary design metaphor and discuss its level of abstraction relative to that singular design metaphor.

Figure 3.4: A characterization of the design of embodiments for artificial agents. Designs follow discrete metaphors but vary along a continuous axis of abstraction.

3.2.2 Abstraction and Stylization

The level of abstraction at which the design metaphor is manifested on the robot’s embodiment defines how known abilities and characteristics from the design metaphor elicit expectations about the robot’s capabilities. An example of differences in abstraction for the same design metaphor can be seen in Figure 3.5; all three robots are inspired by the human form and inherit varied subsets of human embodiment features such as arms, eyes, and mouth. The robot on the left, Kuri, looks much more abstract than the robot in the middle, Bandit, and the robot on the right, Mesmer, is much more human-realistic than the other two. Because of the differences in abstraction of their human-inspired forms, perceptions of these robots will differ and can affect the performance of each robot in different task scenarios.

Figure 3.5: Three example robot embodiments spanning the spectrum of abstraction for the anthropomorphic/human design metaphor: Kuri (left), Bandit (middle), and Engineered Art’s Mesmer (right).
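These two dimensions, a discrete metaphor and a continuous level of abstraction, can be captured in a small record type. A minimal sketch, with field names of our own choosing; the numeric abstraction values below are illustrative guesses at the placements in Figure 3.5, not measured quantities:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EmbodimentDesign:
    """The two design dimensions of Section 3.2 (field names are ours).

    `metaphor` is a point in a discrete, nonlinear space (e.g., "human",
    "cat", "car"); `abstraction` varies continuously from 0.0 (highly
    abstract) to 1.0 (highly realistic) relative to that metaphor.
    """
    metaphor: str
    abstraction: float


# Rough placements of the three robots in Figure 3.5 on the human metaphor.
kuri = EmbodimentDesign(metaphor="human", abstraction=0.2)
bandit = EmbodimentDesign(metaphor="human", abstraction=0.5)
mesmer = EmbodimentDesign(metaphor="human", abstraction=0.9)
```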

3.3 Behavior Design

Figure 3.6: Some of the behavior design variables for socially interactive robots (adapted from [213]).

Human-robot interaction is grounded in human-human interaction and multi-modal communication patterns. Embodied robots can leverage rich channels of communication that are unavailable to purely text- or speech-based interactive systems. Work in human communication has provided evidence that embodied interaction, when effectively executed, can elicit improved performance in various social, cognitive, and task outcomes [9, 168]. In embodied interaction, agents utilize behavioral mechanisms that encompass both the ability to perform specific behavioral elements and the timing with which these behaviors are used in the context of the interaction. We refer to these behavioral elements as embodied cues [213]. This section provides an overview of embodied cues used by agents in the reviewed studies or explored in human communication, focusing on cues that we believe will be important design variables for creating effective socially interactive robots.

3.3.1 Limb-Based Gestures

A large subset of embodied cues consists of hand, arm, and head movements, which, when used according to the norms of human communication, can communicate a wide range of ideas and create rich and salient interactions [156, 192, 68]. Those limb-based gestures fall into five primary categories: iconic, metaphoric, beat, cohesive, and deictic gestures [195].

Iconic gestures are used to communicate ideas directly related to the semantics of the associated speech, while metaphoric gestures are used to communicate more abstract concepts and ideas. Both use “pictorial representations” commonly expressed through hand and arm movements [156]. Iconic gestures range from sign languages, which explicitly and specifically convey assigned semantics of the communicator’s messages, to more general gestures that convey less specific meaning, such as “large.” Beat gestures are related to physical representations of prosody and pace of speech and are often used to emphasize specific segments in speech and to maintain timing and pace during the interaction. These gestures can involve a wide variety of motions, including repetitive hand, arm, head, or full-body movements. Cohesive gestures are used to associate thematically related segments of speech and improve coherence and clarity of speech. Using similar gestures at targeted points during a verbal presentation helps observers to construct relationships between ideas being presented. Deictic gestures, or pointing gestures, are used to provide references and direct attention toward objects in the shared environment. These gestures are performed with arm, hand, or head movements and serve as cues for establishing joint attention in situated interactions.
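The five gesture categories form a small taxonomy that can be written down directly. A minimal sketch, with enum names of our own choosing and values condensing the descriptions above:

```python
from enum import Enum


class GestureCategory(Enum):
    """Limb-based gesture categories, after McNeill [195]."""
    ICONIC = "pictorial; tied to the semantics of the accompanying speech"
    METAPHORIC = "pictorial; conveys abstract concepts and ideas"
    BEAT = "marks prosody and pace; emphasizes segments of speech"
    COHESIVE = "links thematically related segments of speech"
    DEICTIC = "points; directs attention to establish joint attention"
```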

Figure 3.7: Examples of various limb-based gestures of embodied robots from work by Huang and Mutlu [125].

Head movements commonly serve as deictic, cohesive, and beat gestures. Speakers use them in the form of pointing or directional cues; listeners use them in the form of nods and shakes [213] to signal understanding and agreement, as back-channel cues for attention and uncertainty, and as a means of pacing interactions [67].

3.3.2 Posture

The embodied cues discussed above are explicitly performed using different combinations of embodied features at specific times during an interaction [152, 286]. The overall poses of the agent’s body in its “resting state” are also important, as they provide cues about attitude and status relationships in the interaction [200]. By observing the overall orientation and the kinematic configuration of the agent, researchers have shown significant correlations between posture and speech, allowing prediction of upcoming speech from video [196]. Posture cues, such as the “arms-akimbo position” where a communicator places the hands on the hips and bows the elbows outwardly, convey information about the internal state of the communicator, shaping how others perceive the communicator [232, 213]. Researchers have studied these phenomena and developed systems that enable socially interactive robots to better interpret, and therefore generate, explicit posture cues [94]. These outcomes highlight the importance of posture in the embodiment design space.

Posture, like limb-based gestures, is particularly affected by differences in robot embodiments. Different robot hardware inherently constrains embodied expressive gestures in different ways; mapping semantic gestures across different forms of embodiment is an important open challenge for generalizable expressions for socially interactive robots [300, 287].

3.3.3 Gaze

The gaze cues of an individual, defined by the orientation—and shifts thereof—of the eyes, the head, and the body, convey rich information about the direction of attention and mental and emotional states of the individual [89]. These cues serve a range of social functions, including facilitating turn-taking [73, 218], helping to establish joint attention [80], and signaling the intent and mental states of others [47, 46, 128]. The wide range of functions that gaze serves is due largely to their highly contextualized nature. For example, the aversion of gaze during turn-taking can help speakers to more effectively manage conversational roles, while listeners can use gaze aversion to regulate intimacy and put the speaker at ease [5, 6]. Gaze cues also serve as a supplement to or a replacement for deictic gestures [264], providing speakers with the ability to direct attention toward objects in the environment [89], and to disambiguate what is being referred to in the environment [108, 124]. Through gaze cues, individuals can signal personality [7], mental states [47], and affect [187]. When used effectively, these cues can significantly enhance interaction outcomes, such as improved recall of information [216, 4], management of the conversational floor [5, 6], and efficiency in task collaboration [8]. Finally, how the eyes, the head, and the body are configured affects the perception and outcomes of gaze cues [117, 4, 237], highlighting the complexity of the role of gaze in social perception and the richness of the design space for gaze as an embodied cue in human-machine interaction.

3.3.4 Facial Expressions

Embodied agents have the opportunity to use a variety of facial features and expressions. Facial expressions can appear alongside other embodied cues or as isolated behaviors [97]; they strongly influence how an agent is perceived. While the complexity of faces makes them a rich and expressive channel of communication, it also makes the design of expressions challenging. Inappropriate expressions can result in strongly negative interaction outcomes, such as eliciting the Uncanny Valley phenomenon [207], in which highly realistic but imperfect features are perceived as “creepy”. Furthermore, facial expressions that are incongruent with speech can confuse interaction partners [210].

Figure 3.8: Different emotional expressions on the Spritebot platform inspired by expressions of human emotion.

Social smiles are particularly important in social interaction. They serve as salient back-channel cues that express understanding and agreement, improving conversational efficiency [43] and perceived social competence of the robot [10, 233, 213]. Ill-timed or inappropriate smiles, however, have a strongly negative impact on interaction and can invoke the Uncanny Valley phenomenon.

Facial expressions influence the internal states of both the agent and the observer [78, 261], and emotional expression and interpretation are associated with the activation of specific brain regions [17, 16]. This relationship provides an opportunity for informed design for interaction. Findings by Ekman and Friesen [78] provided abstractions, such as “happy” and “sad,” that serve as the most commonly used foundation for the design of facial expressions for anthropomorphic and zoomorphic robots. Such robots are often designed with faces that are more abstract than the rest of their bodies relative to their design metaphor in order to enable the effective use of facial expressions and to eliminate unnecessary complexity [79]. Zoomorphic robot designs can also utilize human “facial action units,” allowing designers to use human-like facial expressions on animal-like robots to express interpretable emotion [79, 141]. This technique effectively blends animal-like and human-like metaphors as the primary and secondary metaphors, respectively, as people are much less familiar with animal facial expressions. The design of the Spritebot platform is an example of this approach, blending feline and human metaphors in the design of the robot (Figure 3.8) [79].

3.3.5 Proxemics

The positioning of social agents in physical space relative to other interaction partners and objects also acts as a salient embodied cue in social interaction [199]. The distance and orientation of interaction agents provide strong bidirectional signals for perception, intent, and attitude that are especially relevant for the design and implementation of mobile socially interactive robots [211, 198]. Research in human communication has long studied human proxemics, offering a number of models to predict how spatial behaviors affect interaction outcomes [11, 113]. Work in human-robot interaction has provided experimental support for some of these models [211] and has highlighted the importance of proxemic cues in the design of interactive behaviors for physically co-present robots [282, 299]. The design of these cues can drastically change how people perceive robots, e.g., as disruptive and threatening [215] or as accepting and friendly [197], underlining the need for careful consideration of proxemic behavior design.

Figure 3.9: An illustration of proxemics zones suggested by Hall [107].
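Hall’s zones are commonly operationalized with approximate distance thresholds. A minimal classifier sketch, assuming the frequently cited thresholds for North American adults (the values are approximations, culturally variable, and not fixed by this survey):

```python
def hall_zone(distance_m: float) -> str:
    """Map an interpersonal distance (meters) to one of Hall's [107] zones.

    Thresholds are common approximations (~18 in, ~4 ft, ~12 ft) and
    should be treated as illustrative rather than normative.
    """
    if distance_m < 0.46:
        return "intimate"
    if distance_m < 1.22:
        return "personal"
    if distance_m < 3.66:
        return "social"
    return "public"


# Example: a robot approaching to 1.0 m enters the user's personal zone.
assert hall_zone(1.0) == "personal"
```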

3.3.6 Social Touch

Physical embodiment presents robots with the opportunity to physically interact with their environments, including their interaction partners. Social touch comprises non-functional touch-based interactions such as hand-holding or touches on the arm, shoulder, and face [133, 92]. In human-human interaction, social touch facilitates development, social connectivity, and emotional support, and helps communicators to establish and maintain engagement throughout interaction [310, 138, 92]. When used according to human social norms, social touch cues can serve as salient signals for dominance, intimacy, immediacy, and trust [201, 204, 44].

3.4 Summary

This section introduced a characterization of the design space for socially interactive robots. Since design spaces have served as transformational tools in many design fields, our goal was to provide designers and researchers such a tool for embodied interactive agents. As socially interactive robots are complex systems, we analyzed their design aspects within three sub-systems chosen to parallel industrial design, interaction design, and animation.

The embodied cues discussed in this section make up the primary elements in the design space of interactive behaviors for socially interactive robots. When designed carefully and used within established social norms, such behaviors can enable rich, engaging, and effective interactions. The next section introduces the metrics used to discuss different facets of interaction performance, reviews results from the surveyed studies, and discusses the implications of those results on the design of the behaviors, embodiments, and interaction strategies of future socially interactive robots.

4.1 Experimental Overview

Our review of prior studies on embodiment in socially interactive robots covers a wide range of applications, user populations, and methodologies. We begin by introducing the set of experiments that we evaluated and discuss the overall landscape of embodiment studies at the time of this review. Of the 65 experiments in our review that compared a physically embodied or strongly embodied agent to a comparable virtually embodied or weakly embodied agent, 50 experiments compared two types of embodiments, 11 experiments compared three types of embodiments, and 4 experiments compared more than three different types of agent embodiment. Of the 65 total experiments, 17 involved more than 60 participants, 24 involved between 30 and 60 participants, and 24 involved fewer than 30 participants (Table A.4). In Figure 4.1, the reviewed experiments are mapped on the task circumplex [194]. The social role of the robot is represented by the distance from the center: the closer to the center, the more subordinate and the further from the center, the more superior.

Figure 4.1: Studies included in our review, overlaid on McGrath’s [194] task circumplex. The distance from the center indicates social role: inward = subordinate, outward = superior.

4.2 Interaction Outcomes and Measures

As discussed in earlier sections, studies of embodiment in the humanities and social sciences predate research on the embodiment of robots and artificial agents. Some of the techniques used by researchers in those fields have been adopted into robotics-related research. Validated observational instruments, such as the POMS survey [93], the semantic-differential scale [277], and selected quantitative techniques from Mosteller et al. [208], have been implemented in a number of studies related to embodiment in robotics [98]. These measurement tools are especially valuable for evaluating the more subjective results of experiments involving socially interactive robots and can provide valuable insight into the state of the embodiment hypothesis. For instance, Lee [164] provided empirical evidence for the mediating role of presence in people’s social responses to synthesized voices; Experiment 1 described by Lee et al. [165] showed that people evaluated both the physically embodied agent and the interaction they had with it more positively and characterized physical embodiment as “an effective tool to increase the social presence of an object.”

In previous sections, we described the relevant design elements of socially interactive agents and discussed how they can affect the quality of user interaction with robot systems across different tasks in various social contexts. A remaining challenge is how interaction quality is defined and measured. Although prior research on embodiment captured a large number of dimensions of interaction quality, we classify the measures used to capture these dimensions into two categories: behavioral (or observed) measures and subjective (or self-reported) measures [98]. Figure 4.2 illustrates these categories. Most studies use a combination of the two; specific measures for each reviewed experiment can be found in Table A.5. In the following subsections, we discuss the different measures used in the reviewed studies to provide an overview of how embodiment studies assess interaction quality.

Figure 4.2: Measures of human experience with socially interactive robots.

4.2.1 Self-Reported Metrics

Self-reported measures are metrics of interaction quality collected from study participants in the form of responses to structured, semi-structured, and open-ended survey instruments. These measures give researchers the ability to capture interaction quality as perceived by participants and are especially helpful in differentiating the various facets of user experience with the robot, such as the participant’s perceptions of the robot’s intelligence, how much trust was established between the user and the robot, and how enjoyable participants found the interaction to be. Example self-reported measures used in prior work include open-ended interviews [137, 283] and questionnaires designed to capture various dimensions of interaction quality, including social attraction [193], perceived intelligence [158], and story appreciation [58]. Table 4.1 provides a full list of the self-report measures used in the reviewed studies.

Instrument | Measure(s) | Reference
User Acceptance of Information Technology (UTAUT) | Acceptance | [290]
Positive and Negative Affect Schedule (PANAS) | Affective State | [302]
Godspeed Questionnaire | Anthropomorphism, Animacy, Likeability, Perceived Intelligence, Perceived Safety | [21]
Animated Character and Interface Evaluation | Anxiety, Task Performance, Liking | [252]
Negative Attitudes towards Robots Scale (NARS) | Attitude, Perceived Presence | [225]
Questionnaire for Placement Committees | Cognitive Development | [88]
NASA Task Load Index Questionnaire | Cognitive Load | [109]
Cognitive Load Questionnaire | Cognitive Load | [281]
Self Assessment Manikin and Semantic Differential | Emotional State | [32]
Hoonhout Enjoyability Scale | Enjoyability | [123]
Adjective-Based Rating | Enjoyability | N/A
Likert-Scale Evaluations | General | [175]
UCLA Loneliness Scale | Loneliness | [260]
Standardized Mini-Mental State Examination | Mental State, Development | [60]
Kidd and Breazeal Questionnaire | Perceived Presence | [146]
Interactive Experiences Questionnaire | Perceived Presence | [178]
Eysenck Personality Questionnaire | Personality | [87]
Big Five Questionnaire | Personality | [49]
“I’m Sorry Dave” Questionnaire | Sociability | [282]
Children’s Social Behavior Questionnaire (CSBQ) | Social Behaviors, Empathetic Abilities | [110]
Networked Minds Questionnaire of Social Presence | Social Presence | [30]

Table 4.1: Measures and instruments used to capture participant perceptions of robots in the reviewed studies.

A key consideration in the use of self-reported measures is the type of data to be collected from participants. Semi-structured and open-ended interview methods provide rich, qualitative data, while questionnaire-based survey instruments, structured using rating scales such as the Likert scale [1], provide quantitative measurements of specific variables. For guidelines on designing interview questions and questionnaire-based measures, see Louise Barriball and While [182] and Hinkin [120], respectively.

4.2.2 Observed Metrics

Observed measures capture user task-related actions, physical behaviors, and physiological responses that can be observed by human experimenters or measured using sensing instruments. These measures can be evaluated in real time or post hoc and can be viewed from three perspectives: (1) individual behavior, (2) interaction, and (3) task performance. The following paragraphs describe each perspective.

Individual behavior involves observed measures of a user’s behavioral, task, or physiological state over a period or at specific points in the interaction. Measures of user behavior include body motion [223, 52], body pose [39], gaze behavior [217, 218], facial expressions [58], and linguistic verbosity [84]. Table 4.2 lists measures of individual behavior used in the reviewed studies, the methods with which they were labeled or analyzed, and studies that included these measures.

Measure | Analysis Tool | Example Experiment(s)
Attachment Level of Speech | Automated System | [37]
Response Time | Automated Systems, Human Annotation | [15]
Directed Gaze | Human Annotation | [71, 137, 146, 153, 305]
Facial Affect | SHORE [58], FACS Coding [78] Tools | [58, 267, 305, 79]
Face Tracking | FaceAPI, Microsoft Kinect, OpenFace [2] | [58]
Micro Behaviors | Human Annotation | [71]
Linguistic Verbosity/Breadth of Disclosure | Human Annotation | [84]
Conversational Expressiveness | Human Annotation | [114]
Body Pose/Joint Positions | Automated Systems, Microsoft Kinect, RGBD Cameras, Vicon Motion Capture, Human Annotation | [136, 39]

Table 4.2: Measures of individual behavior used in the reviewed studies and methods for their capture and computation.

Interaction measures capture interactive phenomena that emerge through interaction among interaction partners. For example, while directed gaze toward an object of interest in the environment serves as a measure of individual behavior, mutual gaze emerges from two parties establishing and maintaining eye contact and serves as an interaction measure. Table 4.3 lists the interaction measures we observed in reviewed experiments along with the analysis or labeling methods used and some sample experiments in which these measures were used.

Measure | Analysis Tool | Example Experiment(s)
Directed Gaze Movement | Human Annotation, Automated Systems | [176]
Mutual Gaze | Human Annotation, Automated Systems | [176, 256]
Embodied Nonverbal Gestures | Human Annotation | [58, 305, 79]
Eye Contact | Human Annotation | [88]
Interactivity | Human Annotation | [146]
Perceived Preference | Human Annotation | [154]
Engagement | Human Annotation | [245]
Self-Disclosure | Human Annotation | [245]
Attention-Directing Behaviors | Human Annotation | [181]
Advice-Seeking Behaviors | Human Annotation | [234]
Social Touch | Human Annotation | [256]

Table 4.3: Interaction measures used in the reviewed studies and methods for their capture.

Measurements of task performance capture the effectiveness of the user or the group in performing the primary task of the interaction, such as effective learning in an educational interaction [125] or speed of assembly by a human-robot manufacturing team [236]. Most applications of socially interactive robots aim to support at least one quantifiable, task-oriented measure focused on the tasks in the given interaction context. In the majority of the reviewed experiments, researchers were observing interactions with defined task goals, such as performance in games [88], negotiation [19], or imitation [256]. These goals include explicit metrics of performance that can be used as a grounded measure of user behavior. Because task performance is inherently a contextual measure that is commonly specific to individual experiments, these metrics are highly varied. Designing task performance measures for a given study is best informed by previous work (Appendix A.1) in similar task categories.

As many existing studies of socially interactive robots explore new domains, applications, and interaction scenarios, the research literature still lacks established and validated self-reported and observed measures. Additionally, the majority of studies to date involve short-term interactions, and systems lack the ability to capture measurements over long periods. As speech recognition, language understanding, affect recognition, activity understanding, and other relevant technologies improve and systems become increasingly robust, automated methods for behavior measurement over long periods will become the norm.

4.3 Effects of Embodiment on Interaction Outcomes

Figure 4.3: Combined results for all reviewed studies.

The reviewed studies all seek to understand the effects of embodiment on human interaction with socially interactive robots in order to develop design guidelines for future computer and robot systems. In this section, we summarize the results of the reviewed studies with respect to this central research question.

Based on the observed and self-reported measures taken, the reviewed experiment results can be grouped into two types: differences in the perception of the agent and differences in task performance (Table A.3). By combining these two measures, all experiment results can be classified into five categories relative to the embodiment hypothesis: (1) solely positive (63.1%), (2) mixed positive (15.4%), (3) neutral (15.4%), (4) mixed negative (1.5%), and (5) solely negative (4.6%). Over all reviewed experiments, the results are strongly positive in support of physical embodiment, with 63.1% of combined results showing improvements in interaction and performance and 6.1% showing negative results (Figure 4.3).
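The reported percentages are consistent with raw counts over the 65 experiments. A quick arithmetic check, assuming counts of 41, 10, 10, 1, and 3; these counts are our reconstruction by inverting the percentages, not figures stated in the survey:

```python
# Counts inferred from the reported percentages over 65 experiments.
counts = {
    "solely positive": 41,   # 41/65 = 63.1%
    "mixed positive": 10,    # 10/65 = 15.4%
    "neutral": 10,           # 10/65 = 15.4%
    "mixed negative": 1,     #  1/65 =  1.5%
    "solely negative": 3,    #  3/65 =  4.6%
}
assert sum(counts.values()) == 65
for label, n in counts.items():
    print(f"{label}: {100 * n / 65:.1f}%")
```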

The two measures, task performance and agent perception, are not fully separable, so analyzing both categories of measures provides a fuller and more nuanced understanding of interaction outcomes. For instance, Segura et al. [266] studied participants’ preferences when given the option to interact with a physically-embodied robot companion or with a virtual representation of that robot. They reported that, although participants found the robot “less annoying” and explicitly chose to interact with it more than with the virtual agent, their ratings of the different embodiments did not reflect these observed functional preferences. The authors of that work deduced that the choice between an embodied and a simulated agent is highly task-specific. For tasks that involve a significant amount of information transmission but relatively little social rapport (e.g., information kiosks), and for tasks that require users to reveal personal information, disembodied agents should suffice. However, for tasks that are relationship-oriented (e.g., a home companion), social engagement is important for maintaining rapport, and physical embodiment is beneficial for increasing social presence and, in turn, engagement and rapport.

In the following subsections, the results of the reviewed studies are examined across the agent performance and perception categories, and analyzed relative to the current state of the embodiment hypothesis in socially interactive robots.

4.3.1 Differences in the Perception of Agent

The primary method for evaluating the social performance of artificial agents measures users’ perceptions of those agents, and changes in those perceptions as a result of interactions. The affordances gained by the design of a robot, the behaviors of that robot, and demonstrated competence are all key components of the resulting user perceptions. Researchers have aimed to study specific features such as attachment, comfort, loneliness, and general attitudes towards robots [21], in isolation from other factors such as novelty effects and prior task experience.

In our review, 57 of the 65 studies measured differences in the perception of the artificial agent using a variety of measurement instruments. The majority of the experiments used a combination of observed and self-reported measures. The agent-perception results are shown in Figure 4.4 on the task circumplex. Each point represents a single experiment, and its color represents the finding; green represents physical agents outperforming their non-physical counterparts; yellow represents a neutral result; and red represents virtual or non-embodied agents outperforming the physical agents.

Figure 4.4: Survey results for interaction performance differences between physically embodied and otherwise-embodied agents.

Of the 57 experiments, 43 (75.4%) showed that a physically embodied agent is superior for improving user perceptions of the agent. Every task category had a majority of results favoring physical embodiment, and four of the eight task categories had only positive or neutral results. Based on the current findings in the field, our review thus provides support for the embodiment hypothesis in the context of agent perception.

While 43 experiments had results supporting the embodiment hypothesis, 7 experiments presented negative results, and another 7 presented neutral results. Most of the neutral results (shown in yellow in Figure 4.4) have task and role classifications comparable to experiments with supportive results (shown in green), suggesting that these neutral results may stem from other facets of experimental design rather than from the impact of embodiment. Because of the subjective nature of measuring agent perceptions, we postulate that the lack of statistical significance in these experiments could be due to high data variance.

The two experiments with neutral results that are not completely surrounded by positive results are both in the creative task category and used the most “superior” agents in that category. This pattern, along with the three negative results for the most “peer-like” agents in the same category, is a strong indicator of the importance of social roles in certain types of tasks. We speculate that creative tasks may be particularly sensitive to social roles, as the creative process involves socially complex interactions.

The seven negative results form clusters within their respective task categories. Because Figure 4.4 plots the results of studies as a function of task type and social role, this clustering along the social-role dimension within each type of task provides further evidence for the importance of appropriately designing social roles in the perception of artificial agents for different types of tasks.

Perception of the robot agent is affected by three factors: (1) the design of the robot (design metaphor and abstraction level), (2) the behaviors of the robot, and (3) the perceived social role of the robot. Depending on the type of interaction or task and the duration of the interaction, the relative importance of these factors can vary. The design of the robot scaffolds interactions by setting expectations about the robot, including signaling its physical and cognitive capabilities and its ability to follow norms of human social behavior. For example, a humanoid robot with a dynamic mouth is more likely to be expected to have conversational capabilities than a cat-like robot with a molded mouth. A robot with realistic-appearing arms is expected to be able to perform both gesture and manipulation tasks while a robot with stylized arms may only be expected to perform simple or high-level gestures.

People are primed by the perceived social role of the robot. For example, users may hear out a subordinate robot but not comply with it [298]. Failure by a superior agent in generative tasks, or tasks that involve collaboration between the robot and person to co-create ideas or narratives, can be interpreted as incompetence, while failure in negotiation tasks can be interpreted as potentially manipulative [48]. When suggestions from subordinate agents fail, people may be more likely to take the blame for the failure of the suggestion—it was not the agent’s incompetence that caused the failure, because the person, as the superior agent, should have known not to take that advice.

The perceived social roles and expected behaviors of robots are not static; through demonstration of their functional and social abilities, robots can show their human interaction partners what they are capable of. The length of the interaction is a particularly important factor that shapes the effects of robot design and behavior on agent perception. For example, in short-term interactions, the affordances gained by the “first impressions” of the robot, typically stemming from the design of its embodiment, play particularly important roles in user perceptions of that robot. In longer interactions, users are given more time to observe the behavior and demonstrated capabilities of the robot and can adjust their first impressions accordingly. For instance, if a robot is perceived to have manipulation capabilities based on its embodiment, and it fails at manipulating objects that are too heavy or too large, the perception of the robot’s physical capabilities will be impacted negatively, and the expectations of the robot will become more realistic. As people’s expectations are calibrated by the robot’s demonstrated behavior, the affordances and impressions first gained from the embodiment become less important [266].

The robot’s task is especially important in the context of perceived social roles. In the reviewed experiments, there were some discrepancies between intended and perceived social roles and, in some cases, the social roles were themselves the experimental variables [79]. The tasks provided contexts in which social roles could be evaluated.

Overall, the experiments we reviewed provide strong support for the value of physical embodiment in perceived social competence, as measured by people’s perceptions of the artificial agents. Furthermore, our interpretations of the few negative results highlight the need for proper embodiment design. By measuring human perceptions of artificial agents, researchers aim to study the social capabilities that serve as fundamental context for task performance, as discussed in the next section.

4.3.2 Differences in Task Performance

Agent-agent interactions typically aim to accomplish a set of shared goals. Those goals can be abstract, such as “have a discussion” [245], or explicit, such as “move all books from current locations to goal locations” [15]. Within the shared goals, each individual has their own goals that may represent conflict (bottom half of the task circumplex) or cooperation (top half of the task circumplex). Compared to measurement tools used for agent perception and social performance, metrics of task performance involve observation-based, objective measures, such as measuring response time [15], the number of moves in a puzzle [121], or the compliance rate [153]. Pairing task performance and agent-perception measures can provide a more complete understanding of interaction outcomes. Because socially interactive robots are usually designed to accomplish or support a task, even if it is as general as engaging a user for a set amount of time, task performance measures can be seen as the “end result” of the robot’s performance.

Of the 65 studies we reviewed, 52 used defined metrics for task performance; a majority showed significant improvements in user performance when collaborating with physically embodied robots. These task performance results, plotted over the task categories and social roles, can be seen in Figure 4.5. Of these 52 experiments, 37 presented positive results (71.2%) for having a physically embodied robot over a virtual or disembodied agent, leading to the conclusion that physical embodiment is beneficial for improving the social performance of artificial agents as well as the task performance of human users interacting with those agents. For instance, Bartneck [19] found that participants scored higher on a negotiation task when interacting with the eMuu robot than when interacting with a virtual rendering of the robot. Bainbridge et al. [15] presented results for a book-moving task in which the artificial agents requested unusual behaviors, such as throwing books into the trash; the results showed a higher compliance rate when requests came from a physical robot than when they came from a virtual agent. In learning tasks, Jost et al. [134] also showed children to be significantly more motivated by a physical robot when playing a cognitive-stimulation game.

Figure 4.5: Survey results for task performance differences between physically embodied and otherwise-embodied agents.

Although the majority of the reviewed experiments demonstrated task performance improvements, 3 did not, and 12 had neutral results. In comparing non-positive results in agent perception and task performance, we note that their overall totals are similar (14 and 15, respectively), but there are almost twice as many neutral results in task-performance measures as in agent-perception measures. The neutral results fall into four of the eight task categories and are clustered by social role within those categories. For instance, in the performances/actions category, experiments show improved task performance when the embodied robot plays either a superior or a subordinate role, while all studies in which the agent played a neutral, peer-like role have neutral results. Fasola and Mataric [81] used a humanoid robot, Bandit, to lead “chair aerobics” exercises with older adults in the US, and Nomura and Sasa [224] used a robot to give directions to adults in Japan for sorting objects into boxes. The robots played similar roles and, due to the nature of these two tasks (i.e., “generation” and “execution”), the peer-like role of the robots may not have inspired as much confidence in the robots’ intellectual contributions as a superior robot would, nor did it cause users to feel the need to support the robot as they would a subordinate agent.

The three negative results are all in different task categories and, within those categories, are located near the largest cluster of neutral results. This clustering may be an indicator of social roles that are less effective than others within a given type of task, making designing a robot for that application in that role more difficult.

Overall, the results of the reviewed experiments provide strong support for physical embodiment in task performance with socially interactive robots. Compared to results for agent perception, the results for task performance had fewer negative results and more neutral results. We speculate that the effects of embodiment on user task performance, which can be independent of the robot, are not large enough to reveal a statistically discernible difference, while its effects on agent perceptions, which involve evaluations directed toward the robot, may be stronger. This interpretation presents future challenges for embodiment design, e.g., designing more salient embodiments, and highlights the nuanced and complex effects of embodiment on human interaction with socially interactive robots. The next section draws on the insights from our findings and presents design recommendations and considerations for future studies.

5.1 Research Paradigms

Research studies of embodiment in socially interactive robots to date have primarily involved controlled laboratory studies. As socially interactive robots become more pervasive, future studies will need to consider methodological fit [77] and draw on a richer set of methodological choices spanning both laboratory and in situ studies.

Laboratory studies allow for a higher level of control over the variables in the phenomena being studied, at the cost of ecological validity, the extent to which findings in the laboratory can be generalized to real-world situations. In situ studies identify representative settings, i.e., the “field,” of the target environment for the design of the system, introduce the system to these settings, and enable the study of human interaction with the system using comparative or naturalistic research paradigms. For example, an in situ study of an educational robot designed to improve student attention to instruction may be conducted in a real-world classroom and seek to confirm that the attentional benefits of the design can be obtained in the complex and dynamic setting of a classroom.

In situ studies are carried out in the natural setting in which a system is deployed or the target setting for which a system is designed. Naturalistic studies involve no experimental control and follow ethnographic and other field methods to capture the natural and emergent ways in which humans interact with robots. For example, Mutlu and Forlizzi [215] conducted a study of how workers at a hospital interacted with a delivery robot, utilizing ethnographic observations and interviews to capture behavior as well as subjective perceptions of the robot.

Comparative in situ studies, or “field experiments,” in contrast, involve introducing a robot into a target setting, manipulating aspects of the robot’s design, and using qualitative and quantitative methods to understand how these manipulations affect human interaction with the system. For example, a field study conducted by Hayashi et al. [112] introduced two socially interactive robots into a train station, varied how active and social the robots acted, and studied commuters’ interactions with and perceptions of the robots. Such studies involve some control of the system’s behavior or the environment (e.g., where or how the system is introduced to the setting) while allowing all other variables to vary naturally.

5.2 Study Designs

The design of a study is determined by how much control is desired, whether independent variables can be manipulated, and how many variables are considered. Studies of social human-robot interaction employ four key study designs: true experiments, quasi experiments, system-level evaluations, and naturalistic studies.

Both quasi and true experiments seek to establish causal relationships between design variables and outcomes of interactions with robots, although they differ in whether or not experimental conditions are randomly assigned to the population of interest and thus in the conclusiveness of the causal relationships identified by the study. Additionally, true experiments are most commonly used in laboratory studies, while quasi-experimental designs are most common in in situ studies.

In true experiments, participants sampled from the population are randomly assigned to study conditions that correspond to different levels of an independent variable. For example, a study on the effects of physical proximity between the robot and its user may establish “close” and “far” distances at which the robot will interact with its user, randomly assign members of its study population to these levels, and use inferential statistics to determine whether the amount of distance had a significant effect on participant behaviors or perceptions of the robot.
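As a sketch of the mechanics, the random assignment step for the hypothetical proximity study above might look as follows (participant IDs and group sizes are illustrative):

```python
import random

# Randomly assign a sampled participant pool to the two levels of the
# independent variable ("close" vs. "far" interaction distance).
random.seed(42)  # fixed seed only to make the illustration reproducible
participants = [f"P{i:02d}" for i in range(1, 25)]
random.shuffle(participants)

half = len(participants) // 2
conditions = {"close": participants[:half], "far": participants[half:]}
for condition, group in conditions.items():
    print(condition, sorted(group))

# The two groups' behavioral or perceptual measures would then be compared
# with inferential statistics, e.g., an independent-samples t-test.
```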

Quasi experiments, on the other hand, are used in situations where random assignment is not possible; such studies compare matched groups or make pre-/post-introduction comparisons. Examples include a study that compares the use of a robot across two senior living facilities, or one that compares social interaction among members of a facility before and after the introduction of a robot. While quasi-experimental designs can allow the exploration of settings or interactions that are otherwise impossible to study and can offer valuable insights, their findings are less conclusive than those obtained in true experiments.

Both true experimental and quasi-experimental study designs have inherent limitations when used in the context of research into socially interactive robotics. First, they offer insight into relationships between a small number of design variables in isolation and lack the ability to conveniently study the large design spaces that robotic systems involve. Second, they usually show that manipulations of variables significantly affect interaction outcomes but do not provide an understanding of the extent of these effects, limiting the ability to make fine-grained design decisions to meet the specific demands of a robot product. System-level study designs seek to address these limitations by simultaneously modeling the predictive relationships between a large number of design variables and interaction outcomes. Research in socially interactive robotics has utilized two variations of this approach. The first variation involves asking users to interact with a socially interactive robot in the way it is intended and then modeling, ad hoc, the predictive relationships between design variables and interaction outcomes. For example, Peltason et al. [238] asked participants to perform an object-learning task with a robot and used multivariate regression techniques to model the predictive relationships between design variables, such as how many utterances the robot spoke per minute as naturally utilized in the interaction, and interaction outcomes, such as the perceived ease of use of the robot. The second variation utilizes the same statistical modeling tools but, instead of modeling naturally occurring variable-outcome relationships, explicitly manipulates multiple design variables simultaneously within their possible ranges. Huang and Mutlu [126] demonstrated the use of this method in a study on the design space of arm gestures for a socially interactive robot; they manipulated the frequency of each type of arm gesture and modeled how well the use of each gesture type predicted interaction outcomes. They found, for example, that the robot’s use of pointing gestures significantly predicted information recall in participants. Furthermore, they found that each standard-deviation increase in the use of this type of gesture increased information recall by 0.123 and 0.623 standard deviations for females and males, respectively. This example illustrates the power of system-level study designs in gaining a fine-grained understanding of variable-outcome relationships, which can enable fine-grained decisions in the design of the robot system.
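A minimal sketch of this second, system-level approach follows, with hypothetical per-participant data (gesture frequencies and a recall score; the column names are invented). Z-scoring all variables makes each coefficient read as the change in recall, in standard deviations, per standard-deviation increase in a gesture’s frequency, mirroring how such results are reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: how often the robot used each gesture type during a
# session, and the participant's information-recall score.
df = pd.DataFrame({
    "pointing": rng.poisson(5, size=60),
    "beat": rng.poisson(8, size=60),
    "recall": rng.normal(70, 10, size=60),
})

# Standardize predictors and outcome so coefficients are standardized betas.
z = (df - df.mean()) / df.std()

X = sm.add_constant(z[["pointing", "beat"]])
fit = sm.OLS(z["recall"], X).fit()
print(fit.params)   # standardized effect of each gesture type on recall
print(fit.pvalues)
```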

Finally, research into embodiment in socially interactive robotics also benefits from naturalistic studies that involve minimal levels of control. These studies utilize methods from ethnography, the systematic study of people, settings, organizations, and cultures based on observational data, and other forms of qualitative empirical research, including fly-on-the-wall observations, participant observation, interviews, and system-log data. Data obtained using these methods are analyzed rigorously using qualitative methods. Such studies seek to utilize the rich data obtained from the setting coupled with rigorous analysis to arrive at a deeper understanding of the use of the robot system in the study setting.

5.3 Independent Variables

A common characteristic of all studies of embodiment in socially interactive robotics is inquiry into the effects of system-level properties of embodiment on the interaction, including the high- and low-level variables that make up the design of the robot. Laboratory and in situ studies that involve experimental control determine these properties and variables a priori. Naturalistic studies, on the other hand, explore those effects in an unstructured fashion, although they can also seek to describe studied phenomena without drawing any conclusions about them. In experimental design, those properties are called independent variables and refer to factors in the study that are explicitly manipulated, such as the height of a robot, or measured, such as the age of a participant. The interaction outcomes, such as how approachable participants find the robot, are referred to as dependent variables. The goal of the study is to understand how the independent variables affect the dependent ones.

Embodiment studies consider system-level independent variables that include whether the system is physically embodied, virtually embodied, or disembodied (such as a speech-based interface). Other independent variables include high-level properties of the design of the robot system, such as the metaphor followed in the design of the system. For instance, Hinds et al. [119] compared robots designed to follow human and machine metaphors to understand the effect of human-likeness of the robot on the attributions that human collaborators made to the robot. Finally, independent variables also include low-level design variables, such as the distance a robot maintains between itself and its user, the amount of eye contact the robot establishes with its user, and the overall height of the robot system. These low-level properties can be variables that vary on a continuous scale, such as height, distance, or frequency, or among a discrete set of options, such as the color of the robot’s lips, as manipulated by Powers and Kiesler [244] and found to affect participant perceptions of the robot.

5.4 Measurements

Understanding variable-outcome relationships in studies of embodiment requires the appropriate definition of dependent variables that are expected to be affected by independent variables of interest and the appropriate measurement of those variables. These measurements, as we previously discussed, can be categorized into observed and self-reported.

Observed variables include physiological reactions, behaviors, interactions, and task actions that can be reliably recognized, described, and quantified by third-party human coders, sensors, and recognition algorithms. Specifically, observed task actions of participants can be translated into standardized task-performance measures, also called “objective” measurements, that can be used in quantitative analyses. For example, in a task in which participants collaborate with a socially interactive robot to sort toy blocks, the number of blocks sorted by the participant or the team within a period of time can be calculated from observed task actions and can serve as a measurement of task performance. Similarly, observed participant behaviors, such as gaze shifts, gestures, and speech, can be coded into behavioral variables that can serve as indicators of high-level cognitive processes. For example, the targets and timings of participants’ gaze shifts can be translated into measurements of gaze fixation toward particular types of targets, which can signal the amount of attention paid to those targets. Measurements from observed behaviors can be automatically extracted using technology, including sensors and recognition algorithms, such as eye-tracking systems that automatically translate gaze behaviors into gaze-fixation measurements. The use of physiological reactions such as body temperature as measurements, however, requires such technology. Finally, the interdependent behaviors of multiple agents, humans and/or robots, as they unfold over time can be translated into measurements that indicate the fluency of the interaction. Examples of interaction measurements extracted from observed behaviors include the rate of turn-taking in conversation, the amount of mutual gaze between a robot and its user, and the physical distance maintained between parties in an interaction.
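As a brief illustration of translating coded behavior into a measurement, the sketch below computes per-target gaze-fixation proportions from a hypothetical log of gaze shifts (timestamps and targets are invented for the example):

```python
from collections import defaultdict

# Hypothetical coded gaze shifts: (time_in_seconds, new_target) events
# produced by human coders or an eye tracker.
gaze_shifts = [(0.0, "robot"), (2.4, "task"), (5.1, "robot"),
               (7.8, "elsewhere"), (9.0, "robot")]
session_end = 12.0

# Accumulate fixation time per target between consecutive gaze shifts.
fixation = defaultdict(float)
for (t, target), (t_next, _) in zip(gaze_shifts,
                                    gaze_shifts[1:] + [(session_end, None)]):
    fixation[target] += t_next - t

total = sum(fixation.values())
for target, seconds in fixation.items():
    print(f"{target}: {seconds:.1f} s ({seconds / total:.0%} of session)")
```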

While physiological responses, behaviors, and task actions can be observed and then reliably translated into quantitative measures, participant attitudes, perceptions, and experiences are only accessible through self-report measurements. Common methods to obtain self-report measures include validated survey instruments, such as multi-item questionnaires, and the translation of interview transcriptions into quantitative metrics. The development of validated survey instruments appropriate for socially interactive robotics research is still in its infancy, and thus research to date has adapted validated scales from other fields and domains, including social psychology, interpersonal communication, and human factors, to study user attitudes toward and perceptions and experiences with socially interactive robots, or to gauge the cognitive, affective, and attentional states of participants. For example, the NASA Task Load Index (TLX) [109] is commonly used to measure user task load when interacting or collaborating with a robot. Researchers have also coded interview transcripts to extract subjective measures such as the frequency of words with positive or negative valence.
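For instance, the unweighted (“Raw TLX”) variant of the NASA TLX averages six subscale ratings into a single workload score; the sketch below shows that computation with invented ratings (the full TLX additionally weights subscales by pairwise comparisons):

```python
# Six NASA TLX subscales, each rated on a 0-100 scale (ratings invented
# for illustration).
tlx_ratings = {
    "mental_demand": 55,
    "physical_demand": 20,
    "temporal_demand": 60,
    "performance": 35,  # rated from "perfect" (0) to "failure" (100)
    "effort": 50,
    "frustration": 40,
}

# Raw TLX: the unweighted mean of the six subscales.
raw_tlx = sum(tlx_ratings.values()) / len(tlx_ratings)
print(f"Raw TLX workload: {raw_tlx:.1f} / 100")
```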

Measurements in naturalistic studies primarily use qualitative data; other study designs can supplement quantitative measurements with qualitative data. Qualitative data most commonly take the form of rich narrative descriptions of studied settings and transcriptions of interviews conducted with study participants. While technology such as audio- and video-recording can be used to conveniently capture observations and interviews, commonly used methods for qualitative data analysis, such as content analysis [157] and Grounded Theory [280], require textual data.

The choice of measurement can be guided by specific design goals or hypotheses. For example, a study investigating the extent to which an instructional robot improves student learning may use observed task-performance measures that indicate learning effects, such as recall of instructional material or ability to correctly apply it to a given problem. However, the scenarios and settings are usually complex, requiring researchers to use triangulation—the use of two or more measures, methods, or approaches to assess a single relationship [258]—in order to improve the validity, confidence, and conclusiveness of findings. For example, the study on the learning benefits of the instructional robot may find significant learning effects, although this benefit may come at the expense of positive student experience. Triangulation by simultaneously measuring cognitive and affective learning would enable the researcher to gain a more comprehensive understanding of interaction outcomes and reveal potential tradeoffs.

5.5 Hypotheses

Rigorous application of many of the research paradigms described above requires the development and testing of a set of hypotheses on the relationships between design variables and interaction outcomes. Naturalistic studies are generally incompatible with hypothesis testing. Controlled laboratory or in situ studies, on the other hand, involve a priori consideration of independent and dependent variables and thus provide the necessary elements to construct testable hypotheses.

A key consideration in hypothesis development is the basis of the prediction made by the researcher. In the context of socially interactive robotics research, hypotheses are constructed by drawing from three key sources: prior research, pilot data, and design goals. Prior research may suggest understudied but plausible variable-outcome relationships or offer preliminary findings that require more conclusive evidence. Such preliminary evidence can also be obtained from pilot studies. In socially interactive robotics, design goals can also inform the development of hypotheses, as the justifications for the design choices for a system can provide sufficient bases for an expected outcome.

5.6 Limitations and Open Issues

Current paradigms and practices in studies of human interaction with socially interactive robots have a number of limitations for future theoretical and methodological work to address. A fundamental limitation is the integrated nature of robotic systems, which allows specific design variables to be isolated only within the holistic design of the system. Studying such systems is further complicated by the practical challenges of robotics, including uncertainties and limitations of perception and action, difficulties of achieving repeatable and robust behavior, and the hardware costs of running large studies. The integrated nature of these systems introduces two problems. First, findings from studies that isolate low-level features of a robot system may not be generalizable, because these manipulations may not be representative of the abstract design element. For example, studies of gaze behavior often interchange eye gaze, head orientation, and their combined behavior, depending on the fidelity with which gaze mechanisms are implemented in a robot system or on the stylized representation of gaze chosen for the design. Whether findings obtained from a study that manipulates head orientation to understand the effects of gaze on user attention would generalize to other forms of gaze is unknown. Second, system-level studies of embodiment often involve comparisons across ontologically different systems that may afford different design variables, making comparisons at the design-variable level infeasible. For example, a study on the effects of touch cannot compare a physical and a virtual embodiment, as the latter does not afford physical touch, and techniques to simulate touch may not effectively represent the natural human experience.

Another limitation that is common in the studies reviewed here is the relatively small sample sizes employed and the underpowered findings that may result. Several reasons underlie this limitation. First, the nascent state of robotics in general and socially interactive robotics in particular limits the ability to utilize well-established practices of empirical research, such as power analysis, due to a lack of prior research that would aid in estimating expected effect sizes. Second, conducting studies with complex, potentially unreliable, and often prototype systems imposes a high cost on conducting large numbers of trials. Third, the complex, interdependent, and often fluid (due to technological advancements) design spaces of these systems require a large number of system-level and variable-level studies and motivate the use of small, rapid, and iterative experimentation. Finally, the domain-driven nature of robot-system development often requires sampling from special populations, such as individuals with social deficits due to developmental disorders or trauma, who may show high variability in behavior or characteristics, may have clinical demands such as the presence of a therapist, or may be difficult to recruit. While single-subject and qualitative studies are appropriate for studying robot systems with these populations, these research paradigms are not yet widely adopted by the research community.
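When an expected effect size can be estimated from prior or pilot work, an a priori power analysis determines the sample size needed for an adequately powered study. A minimal sketch using statsmodels follows, with an assumed medium effect size; in practice the effect size would come from prior studies or pilot data.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs: Cohen's d = 0.5 (medium effect), alpha = 0.05, and a
# target power of 0.8 for a two-sample (between-subjects) comparison.
n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05, power=0.8)
print(f"Participants needed per condition: {n_per_group:.1f}")  # ~63.8
```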

Studies that seek an understanding of human attitudes toward, perceptions of, and experiences with robot systems still lack appropriate and reliable survey instruments for measurement. While some scales have been developed by the community, the validity and reliability of these instruments have not been established. Adaptations of instruments from other fields, including social psychology, interpersonal communication, and human factors, do not always result in appropriate or valid measures. For example, a two-item scale of “mutual liking” developed for studying dyads, which asks each member of the dyad how much they like their partner and how much they think their partner likes them, may reliably produce highly correlated results in human dyads yet be inappropriate in the context of studying human-robot interaction due to the ontologically asymmetrical nature of the interaction.

Finally, the use of qualitative research paradigms, methods, and measures is still rare in research in socially interactive robotics despite their potential for deeper understanding of human interaction and experience with robot systems, particularly in naturalistic settings and when quantitative methods are inappropriate. The human-computer interaction (HCI) research community has adopted and uses a wide range of qualitative paradigms and methods with success and can serve as a model for research in socially interactive robotics. Studies that are naturalistic in their entirety or those that utilize qualitative data for triangulation are essential for exploring the effects of robot systems that are integrated in human environments.

6.1 Selecting Social Roles

We have discussed socially interactive robots and their applications in the context of (1) tasks, (2) social roles, and (3) embodiment. Following Figure 6.1, we first consider the robot’s task and, based on that task, select a social role for the robot. Properly selecting the role is critical because it is closely tied to a robot’s ability and approach to achieving its goals; for instance, a superior robot may be a more effective teacher or coach based on heightened perceived authority [15]; a peer-like robot may be more engaging for a competitive task [134]; and a subordinate robot may help improve self-efficacy [84] or encourage empathy [79]. While selecting tasks and roles is still a process that requires intuition, data from the ever-growing body of past work can be used to inform it.

Based on the reviewed studies, we can inform decisions on the social role that a robot may most effectively play in the context of a given task. Because the social role of a robot is a design parameter assigned by the designer, the distribution of social roles used across different task categories is a representation of the general intuition of researchers, resulting in an uneven distribution of experiments across task categories, as seen in Figure 6.2. The performance of the robots playing these roles can then be used to predict the potential performance of other robots in similar tasks.

Figure 6.2: Visualization of artificial agents used for different task categories in the reviewed studies. Performance is combined performance as defined in Section 4.3.

6.2 Designing Robot Embodiment

After selecting a social role to be implemented for a socially interactive robot, the designer must develop two components: the robot’s embodiment and its behaviors. Both are critical for successfully implementing a robot’s desired social role; in the context of this paper, we focus on the embodiment.

Using our previously discussed representation of robot embodiment, consisting of design metaphor and level of abstraction, we can use existing studies to advise the design or selection of robots to be used in future work [69]. In our analysis of the 65 studies, we classified each robot system by its design metaphor and level of abstraction (a numerical value from 1 to 10, with smaller values mapping to more abstract designs), as seen in Table 6.1. Figure 6.3 shows the implemented social roles of robots in the reviewed studies, for each design metaphor and by level of abstraction.

Because of the differing mental models of the various design metaphors, the same social role can be implemented with different design metaphors by changing the level of abstraction for each metaphor. For example, if a superior social role is desired, we can reference Figure 6.3 to find that a metaphoric human form, a slightly literal bird form, or a literal car form may all effectively achieve that goal. Because of this flexibility, if a robot designer is constrained by either the design metaphor or the level of abstraction, they can reference existing data to guide the selection of the unconstrained design dimension and more effectively explore and iterate through the space.
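As a sketch of this lookup, consider a small design database in the spirit of Table 6.1 (entries abbreviated from the table; the query helper is hypothetical):

```python
# (design metaphor, level of abstraction, platform); abstraction runs from
# 1 (most abstract) to 10 (most literal), as in Table 6.1.
robots = [
    ("Human", 6, "Aldebaran NAO"),
    ("Human", 1, "eMuu"),
    ("Cat",   6, "iCat"),
    ("Chick", 2, "Keepon"),
    ("Car",   8, "Pioneer 2DX"),
    ("Dog",   4, "Sony Aibo"),
]

def candidates(metaphor=None, min_abstraction=1, max_abstraction=10):
    """List platforms matching a fixed metaphor and/or abstraction range."""
    return [name for (m, a, name) in robots
            if (metaphor is None or m == metaphor)
            and min_abstraction <= a <= max_abstraction]

# Constrained to the human metaphor, free in abstraction:
print(candidates(metaphor="Human"))     # ['Aldebaran NAO', 'eMuu']
# Constrained to fairly literal embodiments (abstraction >= 6):
print(candidates(min_abstraction=6))    # ['Aldebaran NAO', 'iCat', 'Pioneer 2DX']
```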

Figure 6.3: Visualization of the level of abstraction and social role by design metaphor for the systems in the reviewed studies.
Robot Design Metaphor Level of Abstraction Studies
Keio U Robotphone Bear 4 [173]
Pioneer 2DX Car 8 [71, 296, 297]
Pioneer P3AT Car 8 [266]
iCat Cat 6 [20, 114, 169, 180, 267, 239, 172]
Keepon Chick 2 [172]
Sony Aibo Dog 4 [139, 165]
Aesop Robot Human 8 [58]
Aldebaran NAO Human 6 [88, 37, 136, 144, 158, 174, 181]
Bandit Human 5 [81, 284]
Darwin-OP Human 6 [39]
eMuu Human 1 [19]
Honda ASIMO Human 4 [283]
iCub Human 7 [176, 84]
KASPAR Human 4 [154]
Kondo Kagaku Human 3 [111]
MIT Robot Head Human 5 [146]
MIT AIDA Human 2 [305]
Nico Robot Human 3 [15]
Nursebot Human 4 [147, 245]
PaPeRo Human 2 [153]
Robota Human 6 [256]
Robothespian Human 7 [234]
Robotis Bioloid Human 5 [134]
Robovie-X Human 7 [224]
Samsung April Human 6 [139, 165]
Stanford Kiosk Robot Kiosk 10 [137]
Robulab Penguin 7 [308]
Nabaztag Rabbit 3 [121, 314]
NTT Lab Robot Rabbit 5 [272, 271]
Table 6.1: Robots used in the reviewed studies, labeled with their assigned design metaphor and level of abstraction.

6.2.1 End-to-End Design of Socially Interactive Robots

Using our characterization of the design space for socially interactive robots and the meta-results of the reviewed studies, we believe that the approach we present above can be effective in advising the design of new robots or selection of existing platforms for desired applications. The reviewed studies demonstrated that, for a given task, there are likely multiple social roles that robots can take. Similarly, the reviewed experiments demonstrated that the same social roles can be effectively implemented with different combinations of design metaphors and levels of abstraction.

Because there is no precise, quantitative notion of “optimality” in socially interactive robot design, the discussed approach can be applied to design or select robot embodiments not only for singular tasks but also for sets of tasks: first finding interaction strategies that have been shown to be effective for each individual task, then mapping those strategies to social roles, and finally selecting the social role that is most effective for most (if not all) of the desired tasks [141, 69]. That social role can then be implemented and evaluated with a variety of design metaphors and levels of abstraction within the set of selected tasks.
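A minimal sketch of this selection step, using hypothetical per-task effectiveness scores for candidate social roles (on the 1 = subordinate to 9 = superior scale used in Appendix A), might look as follows:

```python
# Hypothetical effectiveness of candidate social roles per task, e.g.,
# aggregated from the combined-performance results of prior studies.
role_scores = {
    "decision-making": {3: 0.4, 5: 0.7, 7: 0.9},
    "intellective":    {3: 0.5, 5: 0.8, 7: 0.8},
}

roles = {role for scores in role_scores.values() for role in scores}

# Choose the role with the best worst-case effectiveness across the task
# set, so the selected role works for most (if not all) desired tasks.
best_role = max(roles, key=lambda r: min(s.get(r, 0.0)
                                         for s in role_scores.values()))
print(best_role)  # 7 (a peer-superior role)
```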

Figure 6.4: A characterization of the process for designing or selecting socially interactive robots for multiple tasks.

6.2.2 Robot Embodiment Design in Practice

Our formalization of the design and task spaces for socially interactive robots allows us to discuss and explore how robots are designed and used. We proposed a design process that defines the order in which design features should be decided and iterated on. Finally, using data from past research studies, we proposed two ways of visualizing and leveraging experimental data to drive future design decisions, specifically considering (1) the relationship between the social roles of artificial agents used for different types of tasks and (2) the mapping of differing levels of abstraction to social roles within each design metaphor. To demonstrate how these steps come together in the design of a new robot or the selection of an existing robot for a new task, we provide an example. While in the example we do not iterate on any of the design decisions, as that requires evaluation and testing, Figure 6.1 shows where such iteration would take place on the different design dimensions.

Example: Grocery Store Robot

Consider the example problem of designing a robot that helps people decide what to purchase in a grocery store. The robot needs to help people weigh the benefits of different food options against the costs of those items. There is no objectively “best” combination of foods, because features of foods can be more or less important to different people, and some people may be more sensitive to price than others. Given this problem statement, the problem falls under the decision-making task category.

Figure 6.5: A visualization of the results from experiments with decision-making tasks plotted over the social role taken by the artificial agents and the combined performance within those experiments.

Taking into account the results from the surveyed experiments, we aim to make an educated guess as to the social role(s) to explore for the grocery store robot. Figure 6.5 shows the distribution of the decision-making task experiments plotted over performance and social role (with density of experiments shown by the overlaid gradient). The experimental results show, with relatively high confidence, that an agent with a social role somewhere between peer and superior seems to be the best choice. Given that, we then proceed to implement that social role into a robot embodiment, per Figure 6.1.

Figure 6.6: A visualization of how the bird design metaphor has been used in the reviewed studies, plotted over levels of abstraction and social roles.

Because a social role can be implemented with different combinations of design metaphor and level of abstraction, we consider past effective design choices. The gradient in Figure 6.3 shows that birds, humans, cats, and cars have been most frequently used for the peer-superior role. For this thought experiment, we arbitrarily select the bird design metaphor as a starting point, although in real applications it is best to iterate over design metaphors and select one based on external constraints, such as physical requirements for the robot.

The bird metaphor, like all other design metaphors, can be used to implement any social role with varying levels of efficacy. Based on the reviewed robots and their usage, we visualize how the bird metaphor has previously been used (Figure 6.6) and find that previous experiments have used a literal instantiation of the bird design metaphor to implement a peer-superior role. We thus select an existing robot fitting that metaphor-abstraction combination (from a database such as Table 6.1) or design a new robot to fit that combination. The behaviors of the robot should then map to the appropriate level of abstraction, varying in how much they differ from the behaviors of the organic version of the metaphor.

Because the current dataset of embodiment-related experiments in socially interactive robotics is limited, we use all the data from the studies covered in this review. The design goal can also be more specific than a task: there may be target user populations (e.g., children or elderly adults) or target demographic populations (e.g., different countries or cultures). Filtering results by those additional qualifiers (Table A.4) allows further tailoring of the data to the specific problem and context.

The characterization of the design space for socially interactive robots introduced in this work aims to facilitate a concrete discussion of past work, inform the selection and development of new robots for different applications, advise future experimental design, inspire novel applications, and help to improve the iterative design process of future robots, as the field of robotics continues to expand into new avenues of use.

Appendix A Reviewed Studies

Author (Year) Task Category Social Role Reference
Bainbridge et al. (2011), Exp. 1 Performances/Actions Superior/Peer (9) [15]
Bainbridge et al. (2011), Exp. 2 Performances/Actions Superior/Peer (9) [15]
Bainbridge et al. (2011), Exp. 3 Performances/Actions Superior/Peer (9) [15]
Bartneck (2003) Contests/Competition Peer (5) [19]
Bartneck et al. (2004) Intellective Subordinate/Peer (3) [20]
Bremner and Leonards (2015) Decision-Making Subordinate/Peer (3) [37]
Brooks et al. (2012) Performances/Actions Superior/Peer (8) [39]
Costa (2016), Exp. 1 Creative Superior/Peer (8) [58]
Costa (2016), Exp. 2 Creative Superior/Peer (8) [58]
Donahue and Scheutz (2015) Performances/Actions Subordinate (1) [71]
Fasola and Mataric (2013) Performances/Actions Superior/Peer (8) [81]
Fischer et al. (2012), Exp. 1 Intellective Subordinate/Peer (2) [84]
Fischer et al. (2012), Exp. 2 Intellective Subordinate/Peer (2) [84]
Fischer et al. (2012), Exp. 3 Intellective Subordinate/Peer (2) [84]
Fridin and Belokopytov (2014) Intellective Superior/Peer (7) [88]
Hasegawa et al. (2010) Intellective Superior/Peer (8) [111]
Heerink et al. (2010) Performances/Actions Peer/Subordinate (2) [114]
Hoffmann and Krämer (2013), Exp. 1 Creative Peer (5) [121]
Hoffmann and Krämer (2013), Exp. 2 Intellective Peer/Superior (6) [121]
Jost et al. (2014) Intellective Peer/Superior (6) [136]
Jost et al. (2012), Exp. 1 Contests/Competition Peer (5) [134]
Jost et al. (2012), Exp. 2 Contests/Competition Peer (5) [134]
Ju and Sirkin (2010), Exp. 1 Performances/Actions Subordinate (1) [137]
Ju and Sirkin (2010), Exp. 2 Performances/Actions Subordinate (1) [137]
Jung and Lee (2004), Exp. 1 Creative Peer/Subordinate (2) [139]
Jung and Lee (2004), Exp. 2 Creative Peer/Subordinate (2) [139]
Table A.1: Reviewed studies labeled with the task category and social role of the artificial agents used. Papers with multiple experiments are labeled Exp. 1, 2, etc.; social roles are labeled on a numeric scale from 1 (subordinate) to 9 (superior).
Author (Year) Task Category Social Role Reference
Kennedy et al. (2015) Intellective Superior/Peer (8) [144]
Kidd and Breazeal (2004), Exp. 1 Performances/Actions Superior/Peer (8) [146]
Kidd and Breazeal (2004), Exp. 2 Creative Superior/Peer (7) [146]
Kiesler et al. (2008) Decision Making Peer (5) [147]
Komatsu et al. (2010), Exp. 1 Intellective Peer/Subordinate (3) [153]
Komatsu et al. (2010), Exp. 2 Intellective Peer/Subordinate (3) [153]
Kose et al. (2009) Performances/Actions Peer (5) [154]
Krogsager et al. (2014) Creative Peer (5) [158]
Lee et al. (2006), Exp. 1 Creative Peer (5) [165]
Lee et al. (2006), Exp. 2 Creative Peer (5) [165]
Lee et al. (2015) Decision Making Peer/Subordinate (4) [162]
Leite et al. (2008) Cognitive Conflict Peer (5) [169]
Levy-Tzedek et al. (2017) Performances/Actions Superior/Peer (7) [171]
Leyzberg et al. (2012) Contests/Competition Peer/Superior (7) [172]
Li and Chignell (2011) Decision Making Subordinate (1) [173]
Ligthart and Truong (2015) Cognitive Conflict Peer/Superior (6) [174]
Lohan et al. (2010) Intellective Peer/Subordinate (4) [176]
Looije et al. (2010) Cognitive Conflict Peer/Superior (7) [180]
Looije et al. (2012) Contests/Competition Peer/Superior (7) [181]
Nomura (2009) Performances/Actions Superior/Peer (8) [224]
Pan and Steed (2016) Cognitive Conflict Superior/Peer (8) [234]
Pereira et al. (2008) Cognitive Conflict Peer (5) [239]
Powers et al. (2007) Mixed Motive Peer/Superior (7) [245]
Robins et al. (2006) Mixed Motive Peer/Subordinate (4) [256]
Segura et al. (2012) Performances/Actions Superior/Peer (8) [266]
Shahid et al. (2014) Cognitive Conflict Peer (5) [267]
Shinozawa and Reeves (2003), Exp. 1 Mixed Motive Peer/Subordinate (4) [272]
Shinozawa and Reeves (2003), Exp. 2 Planning Superior/Peer (6) [272]
Shinozawa and Reeves (2003), Exp. 3 Intellective Peer (5) [272]
Shinozawa et al. (2007) Decision Making Peer/Superior (7) [271]
Short et al. (2017) Creative Subordinate/Peer (4) [79]
Takeuchi et al. (2006) Intellective Superior/Peer (8) [283]
Tapus et al. (2009) Mixed Motive Peer/Superior (7) [284]
Vossen et al. (2009) Mixed Motive Peer/Superior (7) [284]
Wainer et al. (2006) Intellective Superior (9) [296]
Wainer et al. (2007) Intellective Superior (9) [297]
Williams et al. (2013) Planning Superior (9) [305]
Wrobel et al. (2013) Contests/Competition Peer (5) [308]
Zlotowski (2010) Intellective Peer/Superior (7) [314]
Table A.1: Continued
Author (Year) Physical Agent Virtual Agent
Bainbridge et al. (2011), Exp. 1 Nico Nico
Bainbridge et al. (2011), Exp. 2 Nico Live Video of Nico
Bainbridge et al. (2011), Exp. 3 Nico Live Video of Nico
Bartneck (2003) eMuu Virtual eMuu
Bartneck et al. (2004) iCat Virtual iCat
Bremner and Leonards (2015) NAO Live Video of Human
Brooks et al. (2012) Darwin-OP Manoi Animation
Costa (2014), Exp. 1 Aesop Robot Greta Animation
Costa (2014), Exp. 2 Aesop Robot Greta Animation
Donahue and Scheutz (2015) Pioneer 2DX Virtual Pioneer 2DX
Fasola & Mataric (2013) Bandit Virtual Bandit
Fischer et al. (2012), Exp. 1 iCub II Akachan
Fischer et al. (2012), Exp. 2 iCub II Akachan
Fischer et al. (2012), Exp. 3 iCub II Akachan
Fridin and Belokopytov (2014) NAO Virtual NAO
Hasegawa et al. (2010) Kondo Kagaku KHR2-HV NUMACK
Heerink et al. (2009) iCat IIE Annie
Hoffmann & Krämer (2013), Exp. 1 Nabaztag Virtual Nabaztag
Hoffmann & Krämer (2013), Exp. 2 Nabaztag Virtual Nabaztag
Jost et al. (2014) NAO
Jost et al. (2012), Exp. 1 Robotis Bioloid Telecom GRETA
Jost et al. (2012), Exp. 2 Robotis Bioloid Telecom GRETA
Ju and Sirkin (2010), Exp. 1 Kiosk Robot with Arm Kiosk Robot with Projected Arm
Ju and Sirkin (2010), Exp. 2 Kiosk Robot with Arm Kiosk Arm with on-screen Arm
Jung and Lee (2004), Exp. 1 Sony Aibo Virtual Sony Aibo
Jung and Lee (2004), Exp. 2 Samsung April Virtual Samsung April
Kennedy et al. (2015) NAO Virtual NAO
Kidd & Breazeal (2004), Exp. 1 Robot Eyes Virtual Eyes
Kidd & Breazeal (2004), Exp. 2 MIT Robot Head Virtual MIT Robot Head
Kiesler et al. (2008) Nursebot Virtual Nursebot
Komatsu et al. (2010), Exp. 1 PaPeRo RobotStudio
Komatsu et al. (2010), Exp. 2 PaPeRo RobotStudio
Kose et al. (2009) KASPAR Virtual KASPAR
Krogsager et al. (2014) Aldebaran NAO Telepresent NAO
Lee et al. (2006), Exp. 1 Sony Aibo Virtual Aibo
Lee et al. (2006), Exp. 2 April Virtual April
Lee et al. (2015) Humanoid Robot Virtual Humanoid Robot
Leite et al. (2008) iCat Virtual iCat
Levy-Tzedek et al. (2017) Kinova Arm Video of Kinova Arm
Leyzberg et al. (2012) Keepon Video of Keepon
Li and Chignell (2011) Keio U Robotphone Virtual Keio U Robotphone
Ligthart and Truong (2015) NAO Virtual NAO
Lohan et al. (2010) iCub Akachan
Looije et al. (2012) NAO Virtual NAO
Looije, Neerincx, & Cnossen (2010) iCat Virtual iCat
Nomura (2009) Robovie-X Virtual Robovie-X
Pan and Steed (2016) Robothespian Virtual Human Character
Pereira et al. (2008) iCat robot Virtual iCat
Powers et al. (2007) Nursebot Projected Virtual Agent
Robins et al. (2006) Robata Passive Robata Doll
Segura et al. (2012) Pioneer P3AT Virtual P3AT Head
Table A.2: The physical robot platforms and their virtual counterparts used in the reviewed studies.
Author (Year) Physical Agent Virtual Agent
Shahid et al. (2014) iCat Human
Shinozawa and Reeves (2002), Exp. 1 NTT Lab Robot Video of Lab Robot
Shinozawa and Reeves (2002), Exp. 2 NTT Lab Robot Video of Lab Robot
Shinozawa and Reeves (2002), Exp. 3 NTT Lab Robot Video of Lab Robot
Shinozawa et al. (2007) NTT Lab Robot Video of Lab Robot
Short et al. (2017) Bandit on Pioneer Pioneer with Bubble machine
Takeuchi et al. (2006) Honda ASIMO Microsoft Peedy
Tapus, Tapus & Mataric (2009) Bandit Virtual Bandit
Vossen et al. (2009) iCat Voice only
Wainer et al. (2006) Pioneer 2DX Virtual Pioneer 2DX
Wainer et al. (2007) Pioneer 2DX Virtual Pioneer 2DX
Williams et al. (2013) MIT AIDA AIDA on-screen App
Wrobel et al. (2013) Robulab Virtual Greta
Zlotowski (2010) Nabaztag Virtual Nabaztag
Table A.2: Continued
Author (Year) Task Performance Interaction Performance
Bainbridge et al. (2011), Exp. 1 PE >VE PE >VE
Bainbridge et al. (2011), Exp. 2 PE >VE PE >VE
Bainbridge et al. (2011), Exp. 3 PE >VE PE >VE
Bartneck (2003) PE >VE PE = VE
Bartneck et al. (2004) PE = VE PE = VE
Bremner and Leonards (2015) PE = VE N/A
Brooks et al. (2012) PE >VE N/A
Costa (2014), Exp. 1 PE >VE PE = VE
Costa (2014), Exp. 2 PE >VE PE = VE
Donahue and Scheutz (2015) PE >VE N/A
Fasola & Mataric (2013) PE = VE PE >VE
Fischer et al. (2012), Exp. 1 PE >VE N/A
Fischer et al. (2012), Exp. 2 PE >VE N/A
Fischer et al. (2012), Exp. 3 PE >VE N/A
Fridin and Belokopytov (2014) PE = VE PE >VE
Hasegawa et al. (2010) PE = VE PE >VE
Heerink et al. (2009) N/A PE >VE
Hoffmann & Krämer (2013), Exp. 1 PE = VE PE <VE
Hoffmann & Krämer (2013), Exp. 2 PE = VE PE >VE
Jost et al. (2014) PE >VE N/A
Jost et al. (2012), Exp. 1 PE >VE PE >VE
Jost et al. (2012), Exp. 2 PE >VE PE >VE
Ju and Sirkin (2010), Exp. 1 PE >VE PE <VE
Ju and Sirkin (2010), Exp. 2 PE >VE PE <VE
Jung and Lee (2004), Exp. 1 N/A PE >VE
Jung and Lee (2004), Exp. 2 N/A PE >VE
Kennedy et al. (2015) PE >VE PE >VE
Kidd & Breazeal (2004), Exp. 1 N/A PE >VE
Kidd & Breazeal (2004), Exp. 2 N/A PE >VE
Kiesler et al. (2008) PE >VE PE >VE
Komatsu et al. (2010), Exp. 1 PE = VE PE >VE
Komatsu et al. (2010), Exp. 2 PE = VE PE >VE
Kose et al. (2009) PE >VE PE = VE
Krogsager et al. (2014) PE <VE PE <VE
Lee et al. (2006), Exp. 1 N/A PE >VE
Lee et al. (2006), Exp. 2 N/A PE <VE
Lee et al. (2015) N/A PE = VE
Leite et al. (2008) N/A PE >VE
Levy-Tzedek et al. (2017) PE >VE PE >VE
Leyzberg et al. (2012) PE >VE PE >VE
Li and Chignell (2011) PE = VE N/A
Ligthart and Truong (2015) N/A PE = VE
Lohan et al. (2010) N/A PE >VE
Looije et al. (2012) N/A PE >VE
Looije, Neerincx, & Cnossen (2010) PE >VE PE <VE
Nomura (2009) PE = VE PE = VE
Pan and Steed (2016) PE >VE PE >VE
Pereira et al. (2008) N/A PE >VE
Powers et al. (2007) PE <VE PE >VE
Robins et al. (2006) PE >VE PE >VE
Segura et al. (2012) PE >VE PE >VE
Table A.3: The results of the reviewed studies broken down by differences in task performance and interaction performance (PE: physically embodied agent; VE: virtual or otherwise-embodied agent).
Author (Year) Task Performance Interaction Performance
Shahid et al. (2014) PE >VE PE >VE
Shinozawa and Reeves (2002), Exp. 1 PE >VE PE >VE
Shinozawa and Reeves (2002), Exp. 2 PE >VE PE >VE
Shinozawa and Reeves (2002), Exp. 3 PE >VE PE >VE
Shinozawa et al. (2007) PE >VE PE >VE
Short et al. (2017) PE >VE PE >VE
Takeuchi et al. (2006) PE >VE PE >VE
Tapus, Tapus & Mataric (2009) PE >VE PE >VE
Vossen et al. (2009) PE >VE PE >VE
Wainer et al. (2006) PE >VE PE >VE
Wainer et al. (2007) PE >VE PE >VE
Williams et al. (2013) PE >VE PE >VE
Wrobel et al. (2013) PE >VE PE >VE
Zlotowski (2010) PE <VE PE <VE
Table A.3: Continued
Author (Year) n Age Group Country
Bainbridge et al. (2011), Exp. 1 59 Adults US
Bainbridge et al. (2011), Exp. 2 59 Adults US
Bainbridge et al. (2011), Exp. 3 59 Adults US
Bartneck (2003) 53 Adults Netherlands
Bartneck et al. (2004) 56 Adults Netherlands
Bremner and Leonards (2015) 22 Adults UK
Brooks et al. (2012) 11 Adults US
Costa (2014), Exp. 1 20 Adults United Arab Emirates
Costa (2014), Exp. 2 40 Adults United Arab Emirates
Donahue and Scheutz (2015) 55 Adults US
Fasola & Mataric (2013) 33 Elderly Adults US
Fischer et al. (2012), Exp. 1 38 Adults Germany
Fischer et al. (2012), Exp. 2 14 Adults Germany
Fischer et al. (2012), Exp. 3 36 Adults Germany
Fridin and Belokopytov (2014) 13 Children Israel
Hasegawa et al. (2010) 75 Children Japan
Heerink et al. (2009) 40 Elderly Adults Netherlands
Hoffmann & Krämer (2013), Exp. 1 83 Adults Germany
Hoffmann & Krämer (2013), Exp. 2 83 Adults Germany
Jost et al. (2014) 67 Children and Adults France
Jost et al. (2012), Exp. 1 51 Children France
Jost et al. (2012), Exp. 2 52 Children France
Ju and Sirkin (2010), Exp. 1 179 Adults US
Ju and Sirkin (2010), Exp. 2 457 Adults US
Jung and Lee (2004), Exp. 1 36 Adults US
Jung and Lee (2004), Exp. 2 32 Adults US
Kennedy et al. (2015) 28 Children EU
Kidd & Breazeal (2004), Exp. 1 32 Adults US
Kidd & Breazeal (2004), Exp. 2 82 Adults US
Kiesler et al. (2008) 113 Adults US
Komatsu et al. (2010), Exp. 1 20 Children Japan
Komatsu et al. (2010), Exp. 2 40 Children Japan
Kose et al. (2009) 66 Children UK
Krogsager et al. (2014) 9 Children Denmark
Lee et al. (2006), Exp. 1 32 Adults US
Lee et al. (2006), Exp. 2 32 Adults US
Lee et al. (2015) 24 Adults US
Leite et al. (2008) 9 Children and Adults Portugal
Levy-Tzedek et al. (2017) 22 Adults Israel
Leyzberg et al. (2012) 100 Adults US
Li and Chignell (2011) 16 Adults Japan
Ligthart and Truong (2015) 40 Adults Netherlands
Lohan et al. (2010) 28 Adults Germany
Looije et al. (2012) 11 Children Netherlands
Looije, Neerincx, & Cnossen (2010) 24 Adults Netherlands
Nomura (2009) 37 Adults Japan
Pan and Steed (2016) 24 Adults UK
Pereira et al. (2008) 18 Children Portugal
Powers et al. (2007) 113 Adults US
Robins et al. (2006) 4 Children UK
Segura et al. (2012) 42 Adults UK
Shahid et al. (2014) 112 Children Netherlands and Pakistan
Shinozawa and Reeves (2002), Exp. 1 72 Adults Japan and US
Shinozawa and Reeves (2002), Exp. 2 72 Adults Japan and US
Shinozawa and Reeves (2002), Exp. 3 72 Adults Japan and US
Shinozawa et al. (2007) 178 Adults Japan
Short et al. (2017) 6 Children US
Takeuchi et al. (2006) 31 Adults Japan
Tapus, Tapus, and Mataric (2009) 3 Elderly Adults US
Vossen et al. (2009) 76 Adults Netherlands
Wainer et al. (2006) 11 Adults US
Wainer et al. (2007) 21 Adults US
Williams et al. (2013) 44 Adults US
Wrobel et al. (2013) 19 Adults France
Zlotowski (2010) 16 Adults Finland
Table A.4: Demographic characteristics of the participant pools in the reviewed studies.
Author (Year) Observed Measures Self-Reported Measures
Bainbridge et al. (2011), Exp. 1 Task Performance, Interaction Performance Yes
Bainbridge et al. (2011), Exp. 2 Task Performance, Interaction Performance Yes
Bainbridge et al. (2011), Exp. 3 Task Performance, Interaction Performance Yes
Bartneck (2003) Task Performance Yes
Bartneck et al. (2004) Task Performance Yes
Bremner and Leonards (2015) Individual Behavior No
Brooks et al. (2012) Task Performance, Interaction Performance No
Costa (2014), Exp. 1 Individual Behavior, Interaction Performance Yes
Costa (2014), Exp. 2 Individual Behavior, Interaction Performance Yes
Donahue and Scheutz (2015) Individual Behavior No
Fasola and Mataric (2013) Task Performance Yes
Fischer et al. (2012), Exp. 1 Task Performance, Interaction Performance No
Fischer et al. (2012), Exp. 2 Task Performance, Interaction Performance No
Fischer et al. (2012), Exp. 3 Task Performance, Interaction Performance No
Fridin and Belokopytov (2014) Task Performance, Interaction Performance Yes
Hasegawa et al. (2010) Task Performance Yes
Heerink et al. (2009) Individual Behavior Yes
Hoffmann and Krämer (2013), Exp. 1 Task Performance Yes
Hoffmann and Krämer (2013), Exp. 2 Task Performance Yes
Jost et al. (2014) Individual Behavior, Interaction Performance No
Jost et al. (2012), Exp. 1 Task Performance, Interaction Performance Yes
Jost et al. (2012), Exp. 2 N/A Yes
Ju and Sirkin (2010), Exp. 1 Task Performance Yes
Ju and Sirkin (2010), Exp. 2 Task Performance Yes
Jung and Lee (2004), Exp. 1 N/A Yes
Jung and Lee (2004), Exp. 2 N/A Yes
Kennedy et al. (2015) Task Performance Yes
Kidd and Breazeal (2004), Exp. 1 Individual Behavior, Interaction Performance Yes
Kidd and Breazeal (2004), Exp. 2 N/A Yes
Kiesler et al. (2008) Task Performance Yes
Komatsu et al. (2010), Exp. 1 Task Performance, Interaction Performance No
Komatsu et al. (2010), Exp. 2 Task Performance, Interaction Performance No
Kose et al. (2009) Task Performance, Interaction Performance Yes
Krogsager et al. (2014) Task Performance Yes
Lee et al. (2006), Exp. 1 N/A Yes
Lee et al. (2006), Exp. 2 N/A Yes
Lee et al. (2015) N/A Yes
Leite et al. (2008) N/A Yes
Levy-Tzedek et al. (2017) Task Performance Yes
Leyzberg et al. (2012) Task Performance Yes
Li and Chignell (2011) Task Performance Yes
Ligthart and Truong (2015) N/A Yes
Lohan et al. (2010) Task Performance, Interaction Performance No
Looije et al. (2012) Task Performance, Interaction Performance Yes
Looije, Neerincx, and Cnossen (2010) N/A Yes
Nomura (2009) Task Performance Yes
Pan and Steed (2016) Task Performance, Interaction Performance Yes
Pereira et al. (2008) N/A Yes
Powers et al. (2007) Task Performance, Interaction Performance Yes
Robins et al. (2006) Task Performance, Interaction Performance Yes
Segura et al. (2012) Task Performance, Interaction Performance Yes
Shahid et al. (2014) Task Performance, Interaction Performance Yes
Shinozawa and Reeves (2002), Exp. 1 Task Performance Yes
Shinozawa and Reeves (2002), Exp. 2 Task Performance Yes
Shinozawa and Reeves (2002), Exp. 3 Task Performance Yes
Shinozawa et al. (2007) Task Performance Yes
Short et al. (2017) Task Performance, Interaction Performance Yes
Takeuchi et al. (2006) N/A Yes
Tapus, Tapus, and Mataric (2009) Task Performance, Interaction Performance Yes
Vossen et al. (2009) Task Performance Yes
Wainer et al. (2006) Task Performance Yes
Wainer et al. (2007) Task Performance Yes
Williams et al. (2013) Task Performance, Interaction Performance Yes
Wrobel et al. (2013) N/A Yes
Zlotowski (2010) Task Performance Yes
Table A.5: Reviewed studies labeled with observed measures used (task performance, interaction performance, individual behavior) and whether or not self-reported measures were implemented.

References

  • Allen and Seaman [2007] I Elaine Allen and Christopher A Seaman. Likert scales and data analyses. Quality progress, 40(7):64, 2007.
  • Amos et al. [2016] Brandon Amos, Bartosz Ludwiczuk, and Mahadev Satyanarayanan. OpenFace: A general-purpose face recognition library with mobile applications. Technical report, CMU-CS-16-118, CMU School of Computer Science, 2016.
  • Anderson [2003] Michael L Anderson. Embodied cognition: A field guide. Artificial intelligence, 149(1):91–130, 2003.
  • Andrist et al. [2012] Sean Andrist, Tomislav Pejsa, Bilge Mutlu, and Michael Gleicher. Designing effective gaze mechanisms for virtual agents. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 705–714. ACM, 2012.
  • Andrist et al. [2013] Sean Andrist, Bilge Mutlu, and Michael Gleicher. Conversational gaze aversion for virtual agents. In International Workshop on Intelligent Virtual Agents, pages 249–262. Springer, 2013.
  • Andrist et al. [2014] Sean Andrist, Xiang Zhi Tan, Michael Gleicher, and Bilge Mutlu. Conversational gaze aversion for humanlike robots. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction, pages 25–32. ACM, 2014.
  • Andrist et al. [2015] Sean Andrist, Bilge Mutlu, and Adriana Tapus. Look like me: matching robot personality via gaze to increase motivation. In Proceedings of the 33rd annual ACM conference on human factors in computing systems, pages 3603–3612. ACM, 2015.
  • Andrist et al. [2017] Sean Andrist, Michael Gleicher, and Bilge Mutlu. Looking coordinated: Bidirectional gaze mechanisms for collaborative interaction with virtual characters. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 2571–2582. ACM, 2017.
  • Antle [2009] Alissa N Antle. Lifelong interactions: Embodied child computer interaction: why embodiment matters. interactions, 16(2):27–30, 2009.
  • Argyle [1988] Michael Argyle. Bodily communication. 2nd edition. London: Methuen, 1988.
  • Argyle and Dean [1965] Michael Argyle and Janet Dean. Eye-contact, distance and affiliation. Sociometry, pages 289–304, 1965.
  • Asfahl [1992] C Asfahl. Robots and manufacturing automation. John Wiley & Sons, Inc., 1992.
  • Bailenson and Yee [2006] Jeremy N Bailenson and Nick Yee. A longitudinal study of task performance, head movements, subjective report, simulator sickness, and transformed social interaction in collaborative virtual environments. Presence: Teleoperators and Virtual Environments, 15(6):699–716, 2006.
  • Bailenson et al. [2001] Jeremy N Bailenson, Jim Blascovich, Andrew C Beall, and Jack M Loomis. Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence, 10(6):583–598, 2001.
  • Bainbridge et al. [2011] Wilma A Bainbridge, Justin W Hart, Elizabeth S Kim, and Brian Scassellati. The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3(1):41–52, 2011.
  • Barrett [2006a] Lisa Feldman Barrett. Are emotions natural kinds? Perspectives on psychological science, 1(1):28–58, 2006a.
  • Barrett [2006b] Lisa Feldman Barrett. Solving the emotion paradox: Categorization and the experience of emotion. Personality and social psychology review, 10(1):20–46, 2006b.
  • Barsalou et al. [2003] Lawrence W Barsalou, Paula M Niedenthal, Aron K Barbey, and Jennifer A Ruppert. Social embodiment. Psychology of learning and motivation, 43:43–92, 2003.
  • Bartneck [2003] Christoph Bartneck. Interacting with an embodied emotional character. In Proceedings of the 2003 international conference on Designing pleasurable products and interfaces, pages 55–60. ACM, 2003.
  • Bartneck et al. [2004] Christoph Bartneck, Juliane Reichenbach, and A van Breemen. In your face, robot! the influence of a character’s embodiment on how users perceive its emotional expressions. In Proceedings of the Design and Emotion, pages 32–51, 2004.
  • Bartneck et al. [2009] Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International journal of social robotics, 1(1):71–81, 2009.
  • Bauer et al. [2008] Andrea Bauer, Dirk Wollherr, and Martin Buss. Human–robot collaboration: a survey. International Journal of Humanoid Robotics, 5(01):47–66, 2008.
  • Bemelmans et al. [2012] Roger Bemelmans, Gert Jan Gelderblom, Pieter Jonker, and Luc De Witte. Socially assistive robots in elderly care: A systematic review into effects and effectiveness. Journal of the American Medical Directors Association, 13(2):114–120, 2012.
  • Benner [1994] Patricia Benner. Interpretive phenomenology: Embodiment, caring, and ethics in health and illness. Sage publications, 1994.
  • Bickmore and Cassell [2001] T. Bickmore and J. Cassell. Relational agents: a model and implementation of building user trust. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 396–403. ACM, 2001.
  • Bickmore and Cassell [2005] Timothy Bickmore and Justine Cassell. Social dialogue with embodied conversational agents. In Advances in natural multimodal dialogue systems, pages 23–54. Springer, 2005.
  • Bickmore and Picard [2005] Timothy W Bickmore and Rosalind W Picard. Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction (TOCHI), 12(2):293–327, 2005.
  • Biocca [1997] Frank Biocca. The cyborg’s dilemma: Progressive embodiment in virtual environments. Journal of Computer-Mediated Communication, 3(2), 1997.
  • Biocca and Nowak [2001] Frank Biocca and Kristine Nowak. Plugging your body into the telecommunication system: Mediated embodiment, media interfaces, and social virtual environments. Communication technology and society, pages 407–447, 2001.
  • Biocca et al. [2003] Frank Biocca, Chad Harms, and Judee K Burgoon. Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators & virtual environments, 12(5):456–480, 2003.
  • Blum and Langley [1997] Avrim L Blum and Pat Langley. Selection of relevant features and examples in machine learning. Artificial intelligence, 97(1):245–271, 1997.
  • Bradley and Lang [1994] Margaret M Bradley and Peter J Lang. Measuring emotion: the self-assessment manikin and the semantic differential. Journal of behavior therapy and experimental psychiatry, 25(1):49–59, 1994.
  • Breazeal [2003] Cynthia Breazeal. Toward sociable robots. Robotics and autonomous systems, 42(3):167–175, 2003.
  • Breazeal and Velasquez [1999] Cynthia Breazeal and J Velasquez. Robot in society: friend or appliance. In Proceedings of the 1999 Autonomous Agents Workshop on Emotion-Based Agent Architectures, pages 18–26, 1999.
  • Breazeal et al. [2005] Cynthia Breazeal, Cory D Kidd, Andrea Lockerd Thomaz, Guy Hoffman, and Matt Berlin. Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 708–713. IEEE, 2005.
  • Breazeal [2004] Cynthia L Breazeal. Designing sociable robots. MIT press, 2004.
  • Bremner and Leonards [2015] Paul Bremner and Ute Leonards. Speech and gesture emphasis effects for robotic and human communicators: a direct comparison. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 255–262. ACM, 2015.
  • Broekens et al. [2009] Joost Broekens, Marcel Heerink, Henk Rosendal, et al. Assistive social robots in elderly care: a review. Gerontechnology, 8(2):94–103, 2009.
  • Brooks et al. [2012] Douglas Brooks, Yu-ping Chen, and Ayanna M Howard. Simulation versus embodied agents: Does either induce better human adherence to physical therapy exercise? In 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pages 1715–1720. IEEE, 2012.
  • Brooks [2002] Rodney Brooks. Flesh and machines: How robots will change us. Vintage, 2002.
  • Brooks [1989] Rodney A Brooks. A robot that walks; emergent behaviors from a carefully evolved network. Neural computation, 1(2):253–262, 1989.
  • Brooks [1990] Rodney A Brooks. Elephants don’t play chess. Robotics and autonomous systems, 6(1):3–15, 1990.
  • Brunner [1979] Lawrence J Brunner. Smiles can be back channels. Journal of Personality and Social Psychology, 37(5):728, 1979.
  • Burgoon [1991] Judee K Burgoon. Relational message interpretations of touch, conversational distance, and posture. Journal of Nonverbal behavior, 15(4):233–259, 1991.
  • Burgoon et al. [2000] Judee K Burgoon, Joseph A Bonito, Bjorn Bengtsson, Carl Cederberg, Magnus Lundeberg, and L Allspach. Interactivity in human–computer interaction: A study of credibility, understanding, and influence. Computers in human behavior, 16(6):553–574, 2000.
  • Byom and Mutlu [2013] Lindsey Jacquelyn Byom and Bilge Mutlu. Theory of mind: Mechanisms, methods, and new directions. Frontiers in human neuroscience, 7:413, 2013.
  • Calder et al. [2002] Andrew J Calder, Andrew D Lawrence, Jill Keane, Sophie K Scott, Adrian M Owen, Ingrid Christoffels, and Andrew W Young. Reading the mind from eye gaze. Neuropsychologia, 40(8):1129–1138, 2002.
  • Caldwell and O’Reilly III [1982] David F Caldwell and Charles A O’Reilly III. Responses to failure: The effects of choice and responsibility on impression management. Academy of management journal, 25(1):121–136, 1982.
  • Caprara et al. [1993] Gian Vittorio Caprara, Claudio Barbaranelli, Laura Borgogni, and Marco Perugini. The ‘big five questionnaire’: A new questionnaire to assess the five factor model. Personality and individual Differences, 15(3):281–288, 1993.
  • Cassell [2001] Justine Cassell. Embodied conversational agents: representation and intelligence in user interfaces. AI magazine, 22(4):67, 2001.
  • Cassell et al. [1999] Justine Cassell, Timothy Bickmore, Mark Billinghurst, Lee Campbell, Kenny Chang, Hannes Vilhjálmsson, and Hao Yan. Embodiment in conversational interfaces: Rea. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 520–527. ACM, 1999.
  • Chang et al. [2005] Jyh-Jong Chang, Tung-I Wu, Wen-Lan Wu, and Fong-Chin Su. Kinematical measure for spastic reaching in children with cerebral palsy. Clinical Biomechanics, 20(4):381–388, 2005.
  • Chollet et al. [2015] Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, and Stefan Scherer. Public speaking training with a multimodal interactive virtual audience framework. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pages 367–368. ACM, 2015.
  • Clabaugh et al. [2015] Caitlyn Clabaugh, Gisele Ragusa, Fei Sha, and Maja Matarić. Designing a socially assistive robot for personalized number concepts learning in preschool children. In 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pages 314–319. IEEE, 2015.
  • Clark [2007] Andy Clark. Re-inventing ourselves: The plasticity of embodiment, sensing, and mind. Journal of Medicine and Philosophy, 32(3):263–282, 2007.
  • Clark [2008] Andy Clark. Supersizing the mind: Embodiment, action, and cognitive extension. OUP USA, 2008.
  • Coltin and Veloso [2014] Brian Coltin and Manuela Veloso. Online pickup and delivery planning with transfers for mobile robots. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 5786–5791. IEEE, 2014.
  • Costa et al. [2016] Sandra Costa, Alberto Brunete, Byung-Chull Bae, and Nikolaos Mavridis. Emotional storytelling using virtual and robotic agents. arXiv preprint arXiv:1607.05327, 2016.
  • Cozby and Bates [2017] Paul C Cozby and Scott Bates. Methods in behavioral research. McGraw-Hill Education, 13th edition, 2017.
  • Crum et al. [1993] Rosa M Crum, James C Anthony, Susan S Bassett, and Marshal F Folstein. Population-based norms for the mini-mental state examination by age and educational level. Jama, 269(18):2386–2391, 1993.
  • Csordas [1990] Thomas J Csordas. Embodiment as a paradigm for anthropology. Ethos, 18(1):5–47, 1990.
  • Csordas [1994] Thomas J Csordas. Embodiment and experience: The existential ground of culture and self, volume 2. Cambridge University Press, 1994.
  • Damasio [1999] Antonio R Damasio. The feeling of what happens: Body and emotion in the making of consciousness. Houghton Mifflin Harcourt, 1999.
  • Dautenhahn [1997] Kerstin Dautenhahn. I could be you: The phenomenological dimension of social understanding. Cybernetics & Systems, 28(5):417–453, 1997.
  • Dautenhahn [2001] Kerstin Dautenhahn. Editorial-socially intelligent agents-the human in the loop. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 31(5):345–348, 2001.
  • Dautenhahn [2002] Kerstin Dautenhahn. Design spaces and niche spaces of believable social robots. In Robot and Human Interactive Communication, 2002. Proceedings. 11th IEEE International Workshop on, pages 192–197. IEEE, 2002.
  • Deng and Matarić [2017] Eric C. Deng and Maja J. Matarić. Mime-inspired behaviors in minimal social robots. In 2017 ACM CHI Conference on Human Factors in Computing Systems Workshop on What Can Actors Teach Robots, May 2017.
  • Deng and Matarić [2018] Eric C. Deng and Maja J. Matarić. Object-based generative methods for embodied gestures in socially interactive robots. In AAAI Spring Symposium on Designing the User Experience of Artificial Intelligence, Mar 2018.
  • Deng et al. [2018] Eric C Deng, Bilge Mutlu, and Maja J Matarić. Formalizing the design space and product development cycle for socially interactive robots. Workshop on Social Robots in the Wild at the 2018 ACM Conference on Human-Robot Interaction (HRI), 2018.
  • DeVault et al. [2014] David DeVault, Ron Artstein, Grace Benn, Teresa Dey, Ed Fast, Alesia Gainer, Kallirroi Georgila, Jon Gratch, Arno Hartholt, Margaux Lhommet, et al. Simsensei kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, pages 1061–1068. International Foundation for Autonomous Agents and Multiagent Systems, 2014.
  • Donahue and Scheutz [2015] Thomas J Donahue and Matthias Scheutz. Investigating the effects of robot affect and embodiment on attention and natural language of human teammates. In Cognitive Infocommunications (CogInfoCom), 2015 6th IEEE International Conference on, pages 397–402. IEEE, 2015.
  • Duffy and Joue [2000] Brian R Duffy and Gina Joue. Intelligent robots: The question of embodiment. In Proc. of the Brain-Machine Workshop, 2000.
  • Duncan [1972] Starkey Duncan. Some signals and rules for taking speaking turns in conversations. Journal of personality and social psychology, 23(2):283, 1972.
  • Durlach and Slater [2000] Nat Durlach and Mel Slater. Presence in shared virtual environments and virtual togetherness. Presence: Teleoperators and Virtual Environments, 9(2):214–217, 2000.
  • Dym et al. [2005] Clive L Dym, Alice M Agogino, Ozgur Eris, Daniel D Frey, and Larry J Leifer. Engineering design thinking, teaching, and learning. Journal of Engineering Education, 94(1):103–120, 2005.
  • Edelman [2004] Gerald M Edelman. Wider than the sky: The phenomenal gift of consciousness. Yale University Press, 2004.
  • Edmondson and McManus [2007] Amy C Edmondson and Stacy E McManus. Methodological fit in management field research. Academy of management review, 32(4):1246–1264, 2007.
  • Ekman and Friesen [1975] Paul Ekman and Wallace V Friesen. Pictures of facial affect. Consulting Psychologists Press, 1975.
  • Short et al. [2017] Elaine S. Short, Eric C. Deng, David J. Feil-Seifer, and Maja J. Matarić. Understanding agency in interactions between children with autism and socially assistive robots. Transactions on Human-Robot Interaction, Dec 2017.
  • Emery [2000] Nathan J Emery. The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews, 24(6):581–604, 2000.
  • Fasola and Mataric [2013] Juan Fasola and Maja Mataric. A socially assistive robot exercise coach for the elderly. Journal of Human-Robot Interaction, 2(2):3–32, 2013.
  • Feil-Seifer and Mataric [2005] David Feil-Seifer and Maja J Mataric. Defining socially assistive robotics. In 9th International Conference on Rehabilitation Robotics, 2005. ICORR 2005., pages 465–468. IEEE, 2005.
  • Fieser and Dowden [2011] James Fieser and Bradley Dowden. Internet encyclopedia of philosophy. 2011.
  • Fischer et al. [2012] Kerstin Fischer, Katrin Lohan, and Kilian Foth. Levels of embodiment: Linguistic analyses of factors influencing hri. In Human-Robot Interaction (HRI), 2012 7th ACM/IEEE International Conference on, pages 463–470. IEEE, 2012.
  • Fong et al. [2003] Terrence Fong, Illah Nourbakhsh, and Kerstin Dautenhahn. A survey of socially interactive robots. Robotics and autonomous systems, 42(3):143–166, 2003.
  • Forlizzi and DiSalvo [2006] Jodi Forlizzi and Carl DiSalvo. Service robots in the domestic environment: a study of the roomba vacuum in the home. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pages 258–265. ACM, 2006.
  • Francis et al. [1992] Leslie J Francis, Laurence B Brown, and Ronald Philipchalk. The development of an abbreviated form of the revised eysenck personality questionnaire (epqr-a): Its use among students in england, canada, the usa and australia. Personality and individual differences, 13(4):443–449, 1992.
  • Fridin and Belokopytov [2014] Marina Fridin and Mark Belokopytov. Embodied robot versus virtual agent: Involvement of preschool children in motor task performance. International Journal of Human-Computer Interaction, 30(6):459–469, 2014.
  • Frischen et al. [2007] Alexandra Frischen, Andrew P Bayliss, and Steven P Tipper. Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychological bulletin, 133(4):694, 2007.
  • Fu et al. [1987] King Sun Fu, Ralph Gonzalez, and CS George Lee. Robotics: Control, Sensing, Vision, and Intelligence. Tata McGraw-Hill Education, 1987.
  • Furusho and Masubuchi [1986] J Furusho and M Masubuchi. Control of a dynamical biped locomotion system for steady walking. Journal of Dynamic Systems, Measurement, and Control, 108(2):111–118, 1986.
  • Gallace and Spence [2010] Alberto Gallace and Charles Spence. The science of interpersonal touch: an overview. Neuroscience & Biobehavioral Reviews, 34(2):246–259, 2010.
  • Garau et al. [2005] Maia Garau, Mel Slater, David-Paul Pertaub, and Sharif Razzaque. The responses of people to virtual humans in an immersive virtual environment. Presence: Teleoperators and Virtual Environments, 14(1):104–116, 2005.
  • Gaschler et al. [2012] Andre Gaschler, Sören Jentzsch, Manuel Giuliani, Kerstin Huth, Jan de Ruiter, and Alois Knoll. Social behavior recognition using body posture and head pose for human-robot interaction. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 2128–2133. IEEE, 2012.
  • Gibson [1982] Eleanor J Gibson. The concept of affordances in development: The renascence of functionalism. In The concept of development: The Minnesota symposia on child psychology, volume 15, pages 55–81. Lawrence Erlbaum Hillsdale, NJ, 1982.
  • Goetz et al. [2003] Jennifer Goetz, Sara Kiesler, and Aaron Powers. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003. The 12th IEEE International Workshop on, pages 55–60. IEEE, 2003.
  • Goffman [1959] Erving Goffman. The presentation of self in everyday life. 1959.
  • Goodrich and Schultz [2007] Michael A Goodrich and Alan C Schultz. Human-robot interaction: a survey. Foundations and trends in human-computer interaction, 1(3):203–275, 2007.
  • Goodwin [2000] Charles Goodwin. Action and embodiment within situated human interaction. Journal of pragmatics, 32(10):1489–1522, 2000.
  • Gordon et al. [2015] Goren Gordon, Cynthia Breazeal, and Susan Engel. Can children catch curiosity from a social robot? In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 91–98. ACM, 2015.
  • Gover [1996] Mark R Gover. The embodied mind: Cognitive science and human experience (book). Mind, Culture, and Activity, 3(4):295–299, 1996.
  • Gratch et al. [2015] Jonathan Gratch, Susan Hill, Louis-Philippe Morency, David Pynadath, and David Traum. Exploring the implications of virtual human research for human-robot teams. In International Conference on Virtual, Augmented and Mixed Reality, pages 186–196. Springer, 2015.
  • Greczek et al. [2014] Jillian Greczek, Elaine Short, Caitlyn E Clabaugh, Katelyn Swift-Spong, and Maja Mataric. Socially assistive robotics for personalized education for children. In AAAI Fall Symposium on Artificial Intelligence and Human-Robot Interaction (AI-HRI), 2014.
  • Gunawardena [1995] Charlotte N Gunawardena. Social presence theory and implications for interaction and collaborative learning in computer conferences. International journal of educational telecommunications, 1(2):147–166, 1995.
  • Gunawardena and Zittle [1997] Charlotte N Gunawardena and Frank J Zittle. Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. American journal of distance education, 11(3):8–26, 1997.
  • Haddadin et al. [2009] Sami Haddadin, Alin Albu-Schäffer, and Gerd Hirzinger. Requirements for safe robots: Measurements, analysis and new insights. The International Journal of Robotics Research, 28(11-12):1507–1527, 2009.
  • Hall [1963] Edward T Hall. A system for the notation of proxemic behavior. American anthropologist, 65(5):1003–1026, 1963.
  • Hanna and Brennan [2007] Joy E Hanna and Susan E Brennan. Speakers’ eye gaze disambiguates referring expressions early during face-to-face conversation. Journal of Memory and Language, 57(4):596–615, 2007.
  • Hart [2006] Sandra G Hart. NASA-Task Load Index (NASA-TLX); 20 years later. In Proceedings of the human factors and ergonomics society annual meeting, volume 50, pages 904–908. Sage Publications Sage CA: Los Angeles, CA, 2006.
  • Hartman et al. [2006] Catharina A Hartman, Ellen Luteijn, Marike Serra, and Ruud Minderaa. Refinement of the children’s social behavior questionnaire (csbq): an instrument that describes the diverse problems seen in milder forms of pdd. Journal of Autism and Developmental Disorders, 36(3):325–342, 2006.
  • Hasegawa et al. [2010] Dai Hasegawa, Justine Cassell, and Kenji Araki. The role of embodiment and perspective in direction-giving systems. In AAAI Fall Symposium: Dialog with Robots, 2010.
  • Hayashi et al. [2007] Kotaro Hayashi, Daisuke Sakamoto, Takayuki Kanda, Masahiro Shiomi, Satoshi Koizumi, Hiroshi Ishiguro, Tsukasa Ogasawara, and Norihiro Hagita. Humanoid robots as a passive-social medium: a field experiment at a train station. In Human-Robot Interaction (HRI), 2007 2nd ACM/IEEE International Conference on, pages 137–144. IEEE, 2007.
  • Hayduk and Mainprize [1980] Leslie A Hayduk and Steven Mainprize. Personal space of the blind. Social Psychology Quarterly, pages 216–223, 1980.
  • Heerink et al. [2010] Marcel Heerink, Ben Kröse, Vanessa Evers, and Bob Wielinga. Assessing acceptance of assistive social agent technology by older adults: the almere model. International journal of social robotics, 2(4):361–375, 2010.
  • Heidegger [1973] Martin Heidegger. Art and space. Man and World, 6(1):3–8, 1973.
  • Hendriks-Jansen [1996] Horst Hendriks-Jansen. Catching ourselves in the act: Situated activity, interactive emergence, evolution, and human thought. MIT Press, 1996.
  • Hietanen [1999] Jari K Hietanen. Does your gaze direction and head orientation shift my visual attention? Neuroreport, 10(16):3443–3447, 1999.
  • Hillis [1999] Ken Hillis. Digital sensations: Space, identity, and embodiment in virtual reality. U of Minnesota Press, 1999.
  • Hinds et al. [2004] Pamela J Hinds, Teresa L Roberts, and Hank Jones. Whose job is it anyway? a study of human-robot interaction in a collaborative task. Human-Computer Interaction, 19(1):151–181, 2004.
  • Hinkin [1998] Timothy R Hinkin. A brief tutorial on the development of measures for use in survey questionnaires. Organizational research methods, 1(1):104–121, 1998.
  • Hoffmann and Krämer [2013] Laura Hoffmann and Nicole C Krämer. Investigating the effects of physical and virtual embodiment in task-oriented and conversational contexts. International Journal of Human-Computer Studies, 71(7):763–774, 2013.
  • Holz et al. [2009] Thomas Holz, Mauro Dragone, and Gregory MP O’Hare. Where robots and virtual agents meet. International Journal of Social Robotics, 1(1):83–93, 2009.
  • Hoonhout [2002] J Hoonhout. Development of a rating scale to determine the enjoyability of user interactions with consumer devices. Technical report, Philips Research, 2002.
  • Huang and Mutlu [2012] Chien-Ming Huang and Bilge Mutlu. Robot behavior toolkit: generating effective social behaviors for robots. In Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, pages 25–32. ACM, 2012.
  • Huang and Mutlu [2013] Chien-Ming Huang and Bilge Mutlu. Modeling and evaluating narrative gestures for humanlike robots. In Robotics: Science and Systems, pages 57–64, 2013.
  • Huang and Mutlu [2014] Chien-Ming Huang and Bilge Mutlu. Learning-based modeling of multimodal behaviors for humanlike robots. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction, pages 57–64. ACM, 2014.
  • Huang and Mutlu [2016] Chien-Ming Huang and Bilge Mutlu. Anticipatory robot control for efficient human-robot collaboration. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction, pages 83–90. IEEE Press, 2016.
  • Huang et al. [2015a] Chien-Ming Huang, Sean Andrist, Allison Sauppé, and Bilge Mutlu. Using gaze patterns to predict task intent in collaboration. Frontiers in psychology, 6:1049, 2015a.
  • Huang et al. [2015b] Chien-Ming Huang, Maya Cakmak, and Bilge Mutlu. Adaptive coordination strategies for human-robot handovers. In Robotics: Science and Systems, 2015b.
  • Inoue et al. [2008] Kaoru Inoue, Kazuyoshi Wada, and Yuko Ito. Effective application of paro: Seal type robots for disabled people in according to ideas of occupational therapists. In International Conference on Computers for Handicapped Persons, pages 1321–1324. Springer, 2008.
  • Jackson and Schuler [1985] Susan E Jackson and Randall S Schuler. A meta-analysis and conceptual critique of research on role ambiguity and role conflict in work settings. Organizational behavior and human decision processes, 36(1):16–78, 1985.
  • Jones [2006] Joseph L Jones. Robots at the tipping point: the road to irobot roomba. IEEE Robotics & Automation Magazine, 13(1):76–78, 2006.
  • Jones and Yarbrough [1985] Stanley E Jones and A Elaine Yarbrough. A naturalistic study of the meanings of touch. Communications Monographs, 52(1):19–56, 1985.
  • Jost et al. [2012a] Céline Jost, Vanessa André, Brigitte Le Pévédic, Alban Lemasson, Martine Hausberger, and Dominique Duhaut. Ethological evaluation of human-robot interaction: are children more efficient and motivated with computer, virtual agent or robots? In Robotics and Biomimetics (ROBIO), 2012 IEEE International Conference on, pages 1368–1373. IEEE, 2012a.
  • Jost et al. [2012b] Céline Jost, Brigitte Le Pévédic, and Dominique Duhaut. Robot is best to play with human! In RO-MAN 2012-21st IEEE International Symposium on Robot and Human Interactive Communication, 2012b.
  • Jost et al. [2014] Céline Jost, Marine Grandgeorge, Brigitte Le Pévédic, and Dominique Duhaut. Robot or tablet: Users’ behaviors on a memory game. In Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, pages 1050–1055. IEEE, 2014.
  • Ju and Sirkin [2010] Wendy Ju and David Sirkin. Animate objects: How physical motion encourages public interaction. In International Conference on Persuasive Technology, pages 40–51. Springer, 2010.
  • Jung et al. [2017] Merel M Jung, Mannes Poel, Dennis Reidsma, and Dirk KJ Heylen. A first step toward the automatic understanding of social touch for naturalistic human–robot interaction. Frontiers in ICT, 4:3, 2017.
  • Jung and Lee [2004] Younbo Jung and Kwan Min Lee. Effects of physical embodiment on social presence of social robots. Proceedings of PRESENCE, pages 80–87, 2004.
  • Jussim [1991] Lee Jussim. Social perception and social reality: A reflection-construction model. Psychological review, 98(1):54, 1991.
  • Kalegina et al. [2018] Alisa Kalegina, Grace Schroeder, Aidan Allchin, Keara Berlin, and Maya Cakmak. Characterizing the design space of rendered robot faces. In Proceedings of the 2018 ACM/IEEE international conference on Human-robot interaction, 2018.
  • Kanda et al. [2004] Takayuki Kanda, Takayuki Hirano, Daniel Eaton, and Hiroshi Ishiguro. Interactive robots as social partners and peer tutors for children: A field trial. Human-computer interaction, 19(1):61–84, 2004.
  • Kant and Jaki [1981] Immanuel Kant and Stanley L Jaki. Universal natural history and theory of the heavens. Edinburgh: Scottish Academic Press, 1981.
  • Kennedy et al. [2015] James Kennedy, Paul Baxter, and Tony Belpaeme. The robot who tried too hard: Social behaviour of a robot tutor can negatively affect child learning. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 67–74. ACM, 2015.
  • Kennedy et al. [2007] William G Kennedy, Magdalena D Bugajska, Matthew Marge, William Adams, Benjamin R Fransen, Dennis Perzanowski, Alan C Schultz, and J Gregory Trafton. Spatial representation and reasoning for human-robot collaboration. In AAAI, volume 7, pages 1554–1559, 2007.
  • Kidd and Breazeal [2004] Cory D Kidd and Cynthia Breazeal. Effect of a robot on user perceptions. In Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on, volume 4, pages 3559–3564. IEEE, 2004.
  • Kiesler et al. [2008] Sara Kiesler, Aaron Powers, Susan R Fussell, and Cristen Torrey. Anthropomorphic interactions with a robot and robot-like agent. Social Cognition, 26(2):169, 2008.
  • Kilteni et al. [2012] Konstantina Kilteni, Raphaela Groten, and Mel Slater. The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments, 21(4):373–387, 2012.
  • Kim and Biocca [1997] Taeyong Kim and Frank Biocca. Telepresence via television: Two dimensions of telepresence may have different connections to memory and persuasion. Journal of Computer-Mediated Communication, 3(2), 1997.
  • Klamer et al. [2010] Tineke Klamer, Somaya Ben Allouch, and Dirk Heylen. ‘adventures of harvey’–use, acceptance of and relationship building with a social robot in a domestic environment. In International Conference on Human-Robot Personal Relationship, pages 74–82. Springer, 2010.
  • Klein [2003] Lisa R Klein. Creating virtual product experiences: The role of telepresence. Journal of interactive Marketing, 17(1):41–55, 2003.
  • Knapp et al. [2013] Mark L Knapp, Judith A Hall, and Terrence G Horgan. Nonverbal communication in human interaction. Cengage Learning, 2013.
  • Komatsu [2010] Takanori Komatsu. Comparison an On-screen Agent with a Robotic Agent in an Everyday Interaction Style: How to Make Users React Toward an On-screen Agent as if They are Reacting Toward a Robotic Agent. INTECH Open Access Publisher, 2010.
  • Kose-Bagci et al. [2009] Hatice Kose-Bagci, Ester Ferrari, Kerstin Dautenhahn, Dag Sverre Syrdal, and Chrystopher L Nehaniv. Effects of embodiment and gestures on social interaction in drumming games with a humanoid robot. Advanced Robotics, 23(14):1951–1996, 2009.
  • Krämer [2005] Nicole C Krämer. Social communicative effects of a virtual program guide. In International Workshop on Intelligent Virtual Agents, pages 442–453. Springer, 2005.
  • Krauss [1998] Robert M Krauss. Why do we gesture when we speak? Current Directions in Psychological Science, 7(2):54–54, 1998.
  • Krippendorff [2004] Klaus Krippendorff. Reliability in content analysis. Human communication research, 30(3):411–433, 2004.
  • Krogsager et al. [2014] Anders Krogsager, Nicolaj Segato, and Matthias Rehm. Backchannel head nods in danish first meeting encounters with a humanoid robot: The role of physical embodiment. In International Conference on Human-Computer Interaction, pages 651–662. Springer, 2014.
  • Langen et al. [1994] Pauline A Langen, Jeffrey S Katz, Gayle Dempsey, and James Pompano. Remote monitoring of high-risk patients using artificial intelligence, October 18 1994. US Patent 5,357,427.
  • Lasota and Shah [2015] Przemyslaw A Lasota and Julie A Shah. Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration. Human factors, 57(1):21–33, 2015.
  • Lazar et al. [2017] Jonathan Lazar, Jinjuan Heidi Feng, and Harry Hochheiser. Research methods in human-computer interaction. Morgan Kaufmann, 2017.
  • Lee et al. [2015] Jee Yoon Lee, Jung Ju Choi, and Sonya S Kwak. The impact of user control design types on people’s perception of a robot. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, pages 19–20. ACM, 2015.
  • Lee et al. [2004] KM Lee, N Park, and H Song. Can a robot be perceived as a developing creature?: Effects of artificial developments on social presence and social responses toward robots in human-robot interaction. In International Communication Association conference, 2004.
  • Lee [2004] Kwan Min Lee. Presence, explicated. Communication theory, 14(1):27–50, 2004.
  • Lee et al. [2006] Kwan Min Lee, Younbo Jung, Jaywoo Kim, and Sang Ryong Kim. Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people’s loneliness in human–robot interaction. International Journal of Human-Computer Studies, 64(10):962–973, 2006.
  • Lee and Nicholls [1999] Mark H Lee and Howard R Nicholls. Tactile sensing for mechatronics: a state of the art survey. Mechatronics, 9(1):1–31, 1999.
  • Lee et al. [2005] Sau-lai Lee, Ivy Yee-man Lau, Sara Kiesler, and Chi-Yue Chiu. Human mental models of humanoid robots. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on, pages 2767–2772. IEEE, 2005.
  • Lee et al. [2012] Wan-Ju Lee, Chi-Wen Huang, Chia-Jung Wu, Shing-Tsaan Huang, and Gwo-Dong Chen. The effects of using embodied interactions to improve learning performance. In Advanced learning technologies (icalt), 2012 ieee 12th international conference on, pages 557–559. IEEE, 2012.
  • Leite et al. [2008] Iolanda Leite, André Pereira, Carlos Martinho, and Ana Paiva. Are emotional robots more fun to play with? In Robot and human interactive communication, 2008. RO-MAN 2008. The 17th IEEE international symposium on, pages 77–82. IEEE, 2008.
  • Levinson et al. [2011] Jesse Levinson, Jake Askeland, Jan Becker, Jennifer Dolson, David Held, Soeren Kammel, J Zico Kolter, Dirk Langer, Oliver Pink, Vaughan Pratt, et al. Towards fully autonomous driving: Systems and algorithms. In Intelligent Vehicles Symposium (IV), 2011 IEEE, pages 163–168. IEEE, 2011.
  • Levy-Tzedek et al. [2017] Shelly Levy-Tzedek, Sigal Berman, Yehuda Stiefel, Ehud Sharlin, James Young, and Daniel Rea. Robotic mirror game for movement rehabilitation. In Virtual Rehabilitation (ICVR), 2017 International Conference on, pages 1–2. IEEE, 2017.
  • Leyzberg et al. [2012] Daniel Leyzberg, Samuel Spaulding, Mariya Toneva, and Brian Scassellati. The physical presence of a robot tutor increases cognitive learning gains. In Proceedings of the Annual Meeting of the Cognitive Science Society, 2012.
  • Li and Chignell [2011] Jamy Li and Mark Chignell. Communication of emotion in social robots through simple head and arm movements. International Journal of Social Robotics, 3(2):125–142, 2011.
  • Ligthart and Truong [2015] Mike Ligthart and Khiet P Truong. Selecting the right robot: Influence of user attitude, robot sociability and embodiment on user preferences. In Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, pages 682–687. IEEE, 2015.
  • Likert [1932] Rensis Likert. A technique for the measurement of attitudes. Archives of psychology, 1932.
  • Lohan et al. [2010] Katrin Solveig Lohan, Sebastian Gieselmann, Anna-Lisa Vollmer, Katharina Rohlfing, and Britta Wrede. Does embodiment affect tutoring behavior. In IEEE International Conference on Development and Learning (ICDL), 2010.
  • Lombard and Ditton [1997] Matthew Lombard and Theresa Ditton. At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2), 1997.
  • Lombard et al. [2000] Matthew Lombard, Theresa B Ditton, Daliza Crane, Bill Davis, Gisela Gil-Egui, Karl Horvath, Jessica Rossman, and S Park. Measuring presence: A literature-based approach to the development of a standardized paper-and-pencil instrument. In Third international workshop on presence, delft, the netherlands, volume 240, pages 2–4, 2000.
  • Longo et al. [2008] Matthew R Longo, Friederike Schüür, Marjolein PM Kammers, Manos Tsakiris, and Patrick Haggard. What is embodiment? a psychometric approach. Cognition, 107(3):978–998, 2008.
  • Looije et al. [2010] Rosemarijn Looije, Mark A Neerincx, and Fokie Cnossen. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. International Journal of Human-Computer Studies, 68(6):386–397, 2010.
  • Looije et al. [2012] Rosemarijn Looije, Anna van der Zalm, Mark A Neerincx, and Robbert-Jan Beun. Help, I need some body: the effect of embodiment on playful learning. IEEE, 2012.
  • Louise Barriball and While [1994] K Louise Barriball and Alison While. Collecting data using a semi-structured interview: a discussion paper. Journal of advanced nursing, 19(2):328–335, 1994.
  • Magee and Galinsky [2008] Joe C Magee and Adam D Galinsky. Social hierarchy: The self-reinforcing nature of power and status. Academy of Management Annals, 2(1):351–398, 2008.
  • Mainprice and Berenson [2013] Jim Mainprice and Dmitry Berenson. Human-robot collaborative manipulation planning using early prediction of human motion. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 299–306. IEEE, 2013.
  • Maitin-Shepard et al. [2010] Jeremy Maitin-Shepard, Marco Cusumano-Towner, Jinna Lei, and Pieter Abbeel. Cloth grasp point detection based on multiple-view geometric cues with application to robotic towel folding. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages 2308–2315. IEEE, 2010.
  • Mantovani and Riva [1999] Giuseppe Mantovani and Giuseppe Riva. “Real” presence: how different ontologies generate different criteria for presence, telepresence, and virtual presence. Presence: Teleoperators and Virtual Environments, 8(5):540–550, 1999.
  • Mason et al. [2005] Malia F Mason, Elizabeth P Tatkow, and C Neil Macrae. The look of love: Gaze shifts and person perception. Psychological Science, 16(3):236–239, 2005.
  • Mason and Salisbury Jr [1985] Matthew T Mason and J Kenneth Salisbury Jr. Robot hands and the mechanics of manipulation. MIT Press, 1985.
  • Matthias et al. [2011] Bjoern Matthias, Soenke Kock, Henrik Jerregard, Mats Källman, and Ivan Lundberg. Safety of collaborative industrial robots: Certification possibilities for a collaborative assembly robot concept. In Assembly and Manufacturing (ISAM), 2011 IEEE International Symposium on, pages 1–6. IEEE, 2011.
  • Maturana and Varela [1987] Humberto R Maturana and Francisco J Varela. The tree of knowledge: The biological roots of human understanding. New Science Library/Shambhala Publications, 1987.
  • Maturana and Varela [1991] Humberto R Maturana and Francisco J Varela. Autopoiesis and cognition: The realization of the living, volume 42. Springer Science & Business Media, 1991.
  • McClave [2000] Evelyn Z McClave. Linguistic functions of head movements in the context of speech. Journal of pragmatics, 32(7):855–878, 2000.
  • McCroskey and McCain [1974] James C McCroskey and Thomas A McCain. The measurement of interpersonal attraction. 1974.
  • McGrath [1995] Joseph E McGrath. Methodology matters: Doing research in the behavioral and social sciences. In Readings in Human-Computer Interaction: Toward the Year 2000, 2nd edition, 1995.
  • McNeill [2008] David McNeill. Gesture and thought. University of Chicago press, 2008.
  • McQuown and Bateson [1971] N.A. McQuown and G. Bateson. The Natural History of an Interview. Microfilm collection of manuscripts on cultural anthropology, ser 15, no. 95-98. University of Chicago Library, 1971.
  • Mead and Matarić [2016] Ross Mead and Maja J Matarić. Perceptual models of human-robot proxemics. In Experimental Robotics, pages 261–276. Springer, 2016.
  • Mead and Matarić [2017] Ross Mead and Maja J Matarić. Autonomous human–robot proxemics: socially aware navigation based on interaction potential. Autonomous Robots, 41(5):1189–1201, 2017.
  • Mead et al. [2013] Ross Mead, Amin Atrash, and Maja J Matarić. Automated proxemic feature extraction and behavior recognition: Applications in human-robot interaction. International Journal of Social Robotics, 5(3):367–378, 2013.
  • Mehrabian [1969] Albert Mehrabian. Significance of posture and position in the communication of attitude and status relationships. Psychological Bulletin, 71(5):359, 1969.
  • Mehrabian [1972] Albert Mehrabian. Nonverbal communication. Transaction Publishers, 1972.
  • Merchant et al. [2014] Zahira Merchant, Ernest T Goetz, Lauren Cifuentes, Wendy Keeney-Kennicutt, and Trina J Davis. Effectiveness of virtual reality-based instruction on students’ learning outcomes in k-12 and higher education: A meta-analysis. Computers & Education, 70:29–40, 2014.
  • Merleau-Ponty et al. [2004] Maurice Merleau-Ponty, Oliver Davis, and Thomas Baldwin. The world of perception. Cambridge Univ Press, 2004.
  • Montagu and Matson [1979] Ashley Montagu and Floyd W Matson. The human connection. McGraw-Hill, 1979.
  • Moravec [1988a] Hans Moravec. Mind children, volume 375. Cambridge Univ Press, 1988a.
  • Moravec [1988b] Hans P Moravec. Sensor fusion in certainty grids for mobile robots. AI magazine, 9(2):61, 1988b.
  • Mori [1970] Masahiro Mori. The uncanny valley. Energy, 7(4):33–35, 1970.
  • Mosteller et al. [1954] Frederick Mosteller, Robert Ray Bush, and Bert Franklin Green. Selected quantitative techniques. Addison-Wesley, 1954.
  • Moundridou and Virvou [2002] Maria Moundridou and Maria Virvou. Evaluating the persona effect of an interface agent in a tutoring system. Journal of computer assisted learning, 18(3):253–261, 2002.
  • Mower et al. [2009] Emily Mower, Maja J Mataric, and Shrikanth Narayanan. Human perception of audio-visual synthetic character emotion expression in the presence of ambiguous and conflicting information. IEEE Transactions on Multimedia, 11(5):843–855, 2009.
  • Mumm and Mutlu [2011] Jonathan Mumm and Bilge Mutlu. Human-robot proxemics: physical and psychological distancing in human-robot interaction. In Proceedings of the 6th international conference on Human-robot interaction, pages 331–338. ACM, 2011.
  • Murphy [2000] Robin Murphy. Introduction to AI robotics. MIT press, 2000.
  • Mutlu [2011] Bilge Mutlu. Designing embodied cues for dialog with robots. AI Magazine, 32(4):17–30, 2011.
  • Mutlu [2017] Bilge Mutlu. Virtual and physical: Two frames of mind, 2017.
  • Mutlu and Forlizzi [2008] Bilge Mutlu and Jodi Forlizzi. Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction. In Human-Robot Interaction (HRI), 2008 3rd ACM/IEEE International Conference on, pages 287–294. IEEE, 2008.
  • Mutlu et al. [2006] Bilge Mutlu, Jodi Forlizzi, and Jessica Hodgins. A storytelling robot: Modeling and evaluation of human-like gaze behavior. In Humanoid robots, 2006 6th IEEE-RAS international conference on, pages 518–523. IEEE, 2006.
  • Mutlu et al. [2009] Bilge Mutlu, Toshiyuki Shiwa, Takayuki Kanda, Hiroshi Ishiguro, and Norihiro Hagita. Footing in human-robot conversations: how robots might shape participant roles using gaze cues. In Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, pages 61–68. ACM, 2009.
  • Mutlu et al. [2012] Bilge Mutlu, Takayuki Kanda, Jodi Forlizzi, Jessica Hodgins, and Hiroshi Ishiguro. Conversational gaze mechanisms for humanlike robots. ACM Transactions on Interactive Intelligent Systems (TiiS), 1(2):12, 2012.
  • Nash [1951] John Nash. Non-cooperative games. Annals of mathematics, pages 286–295, 1951.
  • Nikolaidis and Shah [2013] Stefanos Nikolaidis and Julie Shah. Human-robot cross-training: computational formulation, modeling and evaluation of a human team training strategy. In Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction, pages 33–40. IEEE Press, 2013.
  • Nikolaidis et al. [2017] Stefanos Nikolaidis, Swaprava Nath, Ariel D Procaccia, and Siddhartha Srinivasa. Game-theoretic modeling of human adaptation in human-robot collaboration. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pages 323–331. ACM, 2017.
  • Nikolopoulos et al. [2011] Christos Nikolopoulos, Deitra Kuester, Mark Sheehan, Shashwati Ramteke, Aniket Karmarkar, Supriya Thota, Joseph Kearney, Curtis Boirum, Sunnihith Bojedla, and Angela Lee. Robotic agents used to help teach social skills to children with autism: the third generation. In 2011 RO-MAN, pages 253–258. IEEE, 2011.
  • Noldus [1991] L PJJ Noldus. The observer: a software system for collection and analysis of observational data. Behavior Research Methods, 23(3):415–429, 1991.
  • Nomura and Sasa [2009] Tatsuya Nomura and Miyuki Sasa. Investigation of differences on impressions of and behaviors toward real and virtual robots between elder people and university students. In Rehabilitation Robotics, 2009. ICORR 2009. IEEE International Conference on, pages 934–939. IEEE, 2009.
  • Nomura et al. [2006] Tatsuya Nomura, Tomohiro Suzuki, Takayuki Kanda, and Kensuke Kato. Altered attitudes of people toward robots: Investigation through the negative attitudes toward robots scale. In Proc. AAAI-06 Workshop on Human Implications of Human-Robot Interaction, volume 2006, pages 29–35, 2006.
  • Norman [1999] Donald A Norman. Affordance, conventions, and design. interactions, 6(3):38–43, 1999.
  • Nourbakhsh et al. [1999] Illah R Nourbakhsh, Judith Bobenage, Sebastien Grange, Ron Lutz, Roland Meyer, and Alvaro Soto. An affective mobile robot educator with a full-time job. Artificial Intelligence, 114(1-2):95–124, 1999.
  • Olson and Kellogg [2014] Judith S Olson and Wendy A Kellogg. Ways of Knowing in HCI, volume 2. Springer, 2014.
  • Ono et al. [2001] Tetsuo Ono, Michita Imai, and Hiroshi Ishiguro. A model of embodied communications with gestures between humans and robots. In Proceedings of 23rd annual meeting of the cognitive science society, pages 732–737. Citeseer, 2001.
  • Ortega y Gasset [2010] José Ortega y Gasset. Vitalidad, alma, espíritu. Cuerpo vivido, pages 15–52, 2010.
  • Osawa et al. [2006] Fumiaki Osawa, Hiroaki Seki, and Yoshitsugu Kamiya. Clothes folding task by tool-using robot. Journal of Robotics and Mechatronics, 18(5):618, 2006.
  • Osborn [1996] Don R Osborn. Beauty is as beauty does?: Makeup and posture effects on physical attractiveness judgments. Journal of Applied Social Psychology, 26(1):31–51, 1996.
  • Otta et al. [1994] Emma Otta, Beatriz Barcellos Pereira Lira, Nadia Maria Delevati, Otávio Pimentel Cesar, and Carla Salati Guirello Pires. The effect of smiling and of head tilting on person perception. The Journal of psychology, 128(3):323–331, 1994.
  • Pan and Steed [2016] Ye Pan and Anthony Steed. A comparison of avatar-, video-, and robot-mediated interaction on users’ trust in expertise. Frontiers in Robotics and AI, 3:12, 2016.
  • Passman and Weisberg [1975] Richard H Passman and Paul Weisberg. Mothers and blankets as agents for promoting play and exploration by young children in a novel environment: The effects of social and nonsocial attachment objects. Developmental Psychology, 11(2):170, 1975.
  • Pearce et al. [2018] Margaret Pearce, Bilge Mutlu, Julie Shah, and Robert Radwin. Optimizing makespan and ergonomics in integrating collaborative robots into manufacturing processes. IEEE Transactions on Automation Science and Engineering, 2018.
  • Pejsa et al. [2015] Tomislav Pejsa, Sean Andrist, Michael Gleicher, and Bilge Mutlu. Gaze and attention management for embodied conversational agents. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(1):3, 2015.
  • Peltason et al. [2012] Julia Peltason, Nina Riether, Britta Wrede, and Ingo Lütkebohle. Talking with robots about objects: a system-level evaluation in hri. In Human-Robot Interaction (HRI), 2012 7th ACM/IEEE International Conference on, pages 479–486. IEEE, 2012.
  • Pereira et al. [2008] André Pereira, Carlos Martinho, Iolanda Leite, and Ana Paiva. icat, the chess player: the influence of embodiment in the enjoyment of a game. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems-Volume 3, pages 1253–1256. International Foundation for Autonomous Agents and Multiagent Systems, 2008.
  • Perlin and Goldberg [1996] Ken Perlin and Athomas Goldberg. Improv: A system for scripting interactive actors in virtual worlds. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 205–216. ACM, 1996.
  • Persson et al. [2001] Per Persson, Jarmo Laaksolahti, and P Lonnqvist. Understanding socially intelligent agents-a multilayered phenomenon. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 31(5):349–360, 2001.
  • Pfeifer and Scheier [2001] Rolf Pfeifer and Christian Scheier. Understanding intelligence. MIT press, 2001.
  • Posner et al. [2005] Jonathan Posner, James A Russell, and Bradley S Peterson. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and psychopathology, 17(3):715–734, 2005.
  • Powers and Kiesler [2006] Aaron Powers and Sara Kiesler. The advisor robot: tracing people’s mental model from a robot’s physical attributes. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pages 218–225. ACM, 2006.
  • Powers et al. [2007] Aaron Powers, Sara Kiesler, Susan Fussell, and Cristen Torrey. Comparing a computer agent with a humanoid robot. In Human-Robot Interaction (HRI), 2007 2nd ACM/IEEE International Conference on, pages 145–152. IEEE, 2007.
  • Price et al. [2017] Paul C. Price, Rajiv S. Jhangiani, I-Chant A. Chiang, Dana C. Leighton, and Carrie Cuttler. Research Methods in Psychology. Open Textbook Library, 3rd American edition, 2017.
  • Quick et al. [1999] Tom Quick, Kerstin Dautenhahn, Chrystopher L Nehaniv, and Graham Roberts. On bots and bacteria: Ontology independent embodiment. In European Conference on Artificial Life, pages 339–343. Springer, 1999.
  • Rani et al. [2004] Pramila Rani, Nilanjan Sarkar, Craig A Smith, and Leslie D Kirby. Anxiety detecting robotic system–towards implicit human-robot collaboration. Robotica, 22(01):85–95, 2004.
  • Rehnmark et al. [2005] Fredrik Rehnmark, William Bluethmann, Joshua Mehling, Robert O Ambrose, Myron Diftler, Mars Chu, and Ryan Necessary. Robonaut: the ‘short list’ of technology hurdles. Computer, 38(1):28–37, 2005.
  • Reilly [1996] W Scott Reilly. Believable social and emotional agents. Technical report, DTIC Document, 1996.
  • Richard et al. [2001] Nadine Richard, Philippe Codognet, and Alain Grumbach. The inviwo toolkit: Describing autonomous virtual agents and avatars. In International Workshop on Intelligent Virtual Agents, pages 195–209. Springer, 2001.
  • Rickenberg and Reeves [2000] Raoul Rickenberg and Byron Reeves. The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 49–56. ACM, 2000.
  • Riegler [2002] Alexander Riegler. When is a cognitive system embodied? Cognitive Systems Research, 3(3):339–348, 2002.
  • Rivin [1987] Eugene I Rivin. Mechanical design of robots. McGraw-Hill, Inc., 1987.
  • Rizzo et al. [2010] Albert Rizzo, JoAnn Difede, Barbara O Rothbaum, Greg Reger, Josh Spitalnick, Judith Cukor, Rob McLay, et al. Development and early evaluation of the Virtual Iraq/Afghanistan exposure therapy system for combat-related PTSD. Annals of the New York Academy of Sciences, 1208(1):114–125, 2010.
  • Robins et al. [2006] Ben Robins, Kerstin Dautenhahn, and Janek Dubowski. Does appearance matter in the interaction of children with autism with a humanoid robot? Interaction Studies, 7(3):509–542, 2006.
  • Rosch et al. [1991] Eleanor Rosch, Francisco Varela, and Evan Thompson. The embodied mind: Cognitive science and human experience. MIT Press, 1991.
  • Rothbauer et al. [2008] Ulrich Rothbauer, Kourosh Zolghadr, Serge Muyldermans, Aloys Schepers, M Cristina Cardoso, and Heinrich Leonhardt. A versatile nanotrap for biochemical and functional studies with fluorescent fusion proteins. Molecular & Cellular Proteomics, 7(2):282–289, 2008.
  • Ruhland et al. [2015] Kerstin Ruhland, Christopher E Peters, Sean Andrist, Jeremy B Badler, Norman I Badler, Michael Gleicher, Bilge Mutlu, and Rachel McDonnell. A review of eye gaze in virtual agents, social robotics and HCI: Behaviour generation, user interaction and perception. In Computer Graphics Forum, volume 34, pages 299–326. Wiley Online Library, 2015.
  • Russell [1996] Daniel W Russell. UCLA Loneliness Scale (Version 3): Reliability, validity, and factor structure. Journal of personality assessment, 66(1):20–40, 1996.
  • Russell et al. [2003] James A Russell, Jo-Anne Bachorowski, and José-Miguel Fernández-Dols. Facial and vocal expressions of emotion. Annual review of psychology, 54(1):329–349, 2003.
  • Saerbeck et al. [2010] Martin Saerbeck, Tom Schut, Christoph Bartneck, and Maddy D Janse. Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1613–1622. ACM, 2010.
  • Saposnik et al. [2011] Gustavo Saposnik, Mindy Levin, Stroke Outcome Research Canada (SORCan) Working Group, et al. Virtual reality in stroke rehabilitation: a meta-analysis and implications for clinicians. Stroke, 42(5):1380–1386, 2011.
  • Sato et al. [2009] Wataru Sato, Takanori Kochiyama, Shota Uono, and Sakiko Yoshikawa. Commonalities in the neural mechanisms underlying automatic attentional shifts by gaze, gestures, and symbols. Neuroimage, 45(3):984–992, 2009.
  • Scherer et al. [2012] Stefan Scherer, Stacy Marsella, Giota Stratou, Yuyu Xu, Fabrizio Morbini, Alesia Egan, Albert Rizzo, and Louis-Philippe Morency. Perception markup language: Towards a standardized representation of perceived nonverbal behaviors. In Intelligent virtual agents, pages 455–463. Springer, 2012.
  • Segura et al. [2012] Elena Márquez Segura, Michael Kriegel, Ruth Aylett, Amol Deshmukh, and Henriette Cramer. How do you like me in this: User embodiment preferences for companion agents. In International Conference on Intelligent Virtual Agents, pages 112–125. Springer, 2012.
  • Shahid et al. [2014] Suleman Shahid, Emiel Krahmer, and Marc Swerts. Child–robot interaction across cultures: How does playing a game with a social robot compare to playing a game alone or with a friend? Computers in Human Behavior, 40:86–100, 2014.
  • Shapiro [2010] Lawrence Shapiro. Embodied cognition. Routledge, 2010.
  • Sharkey and Ziemke [2001] Noel E Sharkey and Tom Ziemke. Mechanistic versus phenomenal embodiment: Can robot embodiment lead to strong AI? Cognitive Systems Research, 2(4):251–262, 2001.
  • Shinozaki et al. [2008] Kuniya Shinozaki, Akitsugu Iwatani, and Ryohei Nakatsu. Construction and evaluation of a robot dance system. In New Frontiers for Entertainment Computing, pages 83–94. Springer, 2008.
  • Shinozawa and Yamato [2007] Kazuhiko Shinozawa and Junji Yamato. Effect of robot and screen agent recommendations on human decision-making. Citeseer, 2007.
  • Shinozawa et al. [2003] Kazuhiko Shinozawa, Byron Reeves, Kevin Wise, Sohye Lim, Heidy Maldonado, and Futoshi Naya. Robots as new media: A cross-cultural examination of social and cognitive responses to robotic and on-screen agents. In Proceedings of the Annual Conference of the International Communication Association, pages 998–1002, 2003.
  • Sidner et al. [2005] Candace L Sidner, Christopher Lee, Cory D Kidd, Neal Lesh, and Charles Rich. Explorations in engagement for humans and robots. Artificial Intelligence, 166(1):140–164, 2005.
  • Simmons et al. [1997] Reid Simmons, Richard Goodwin, Karen Zita Haigh, Sven Koenig, and Joseph O’Sullivan. A layered architecture for office delivery robots. In Proceedings of the first international conference on Autonomous agents, pages 245–252. ACM, 1997.
  • Simmons et al. [2001] Reid Simmons, Sanjiv Singh, David Hershberger, Josue Ramos, and Trey Smith. First results in the coordination of heterogeneous robots for large-scale assembly. In Experimental Robotics VII, pages 323–332. Springer, 2001.
  • Smith and Harrison [2001] Shamus P Smith and Michael D Harrison. Editorial: User centred design and implementation of virtual environments. International journal of human-computer studies, 55(2):109–114, 2001.
  • Snider and Osgood [1969] James G Snider and Charles Egerton Osgood. Semantic differential technique: A sourcebook. Aldine Pub. Co., 1969.
  • Stickdorn et al. [2011] Marc Stickdorn, Jakob Schneider, Kate Andrews, and Adam Lawrence. This is service design thinking: Basics, tools, cases. Wiley, Hoboken, NJ, 2011.
  • Strabala et al. [2013] Kyle Wayne Strabala, Min Kyung Lee, Anca Diana Dragan, Jodi Lee Forlizzi, Siddhartha Srinivasa, Maya Cakmak, and Vincenzo Micelli. Towards seamless human-robot handovers. Journal of Human-Robot Interaction, 2(1):112–132, 2013.
  • Strauss and Corbin [1997] Anselm Strauss and Juliet M Corbin. Grounded theory in practice. Sage, 1997.
  • Sweller [1988] John Sweller. Cognitive load during problem solving: Effects on learning. Cognitive science, 12(2):257–285, 1988.
  • Takayama and Pantofaru [2009] Leila Takayama and Caroline Pantofaru. Influences on proxemic behaviors in human-robot interaction. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5495–5502. IEEE, 2009.
  • Takeuchi et al. [2006] Johane Takeuchi, Kazutaka Kushida, Yoshitaka Nishimura, Hiroshi Dohi, Mitsuru Ishizuka, Mikio Nakano, and Hiroshi Tsujino. Comparison of a humanoid robot and an on-screen agent as presenters to audiences. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3964–3969. IEEE, 2006.
  • Tapus et al. [2009] Adriana Tapus, Cristian Tapus, and Maja Matarić. The role of physical embodiment of a therapist robot for individuals with cognitive impairments. In RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication, pages 103–107. IEEE, 2009.
  • Taylor [2002] Tina L Taylor. Living digitally: Embodiment in virtual worlds. In The social life of avatars, pages 40–62. Springer, 2002.
  • Tomasello [2010] Michael Tomasello. Origins of human communication. MIT Press, 2010.
  • Tosun et al. [2014] Tarik Tosun, Ross Mead, and Robert Stengel. A general method for kinematic retargeting: Adapting poses between humans and robots. In ASME 2014 International Mechanical Engineering Congress and Exposition, page V04AT04A027. American Society of Mechanical Engineers, 2014.
  • Traum et al. [2008] David Traum, Stacy C Marsella, Jonathan Gratch, Jina Lee, and Arno Hartholt. Multi-party, multi-issue, multi-strategy negotiation for multi-modal virtual agents. In International Workshop on Intelligent Virtual Agents, pages 117–130. Springer, 2008.
  • Varela et al. [1991] Francisco J Varela, Evan Thompson, and Eleanor Rosch. The embodied mind: Cognitive science and human experience. MIT Press, 1991.
  • Venkatesh et al. [2003] Viswanath Venkatesh, Michael G Morris, Gordon B Davis, and Fred D Davis. User acceptance of information technology: Toward a unified view. MIS quarterly, pages 425–478, 2003.
  • Vinayagamoorthy et al. [2004] Vinoba Vinayagamoorthy, Maia Garau, Anthony Steed, and Mel Slater. An eye gaze model for dyadic interaction in an immersive virtual environment: Practice and experience. In Computer Graphics Forum, volume 23, pages 1–11. Wiley Online Library, 2004.
  • Vosinakis and Panayiotopoulos [2001] Spyros Vosinakis and Themis Panayiotopoulos. SimHuman: A platform for real-time virtual agents with planning capabilities. In International Workshop on Intelligent Virtual Agents, pages 210–223. Springer, 2001.
  • Vossen et al. [2009] Suzanne Vossen, Jaap Ham, and Cees Midden. Social influence of a persuasive agent: the role of agent embodiment and evaluative feedback. In Proceedings of the 4th International Conference on Persuasive Technology, page 46. ACM, 2009.
  • Wada and Shibata [2006] Kazuyoshi Wada and Takanori Shibata. Robot therapy in a care house: Its sociopsychological and physiological effects on the residents. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), pages 3966–3971. IEEE, 2006.
  • Wagner et al. [2006] Daniel Wagner, Mark Billinghurst, and Dieter Schmalstieg. How real should virtual characters be? In Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology, page 57. ACM, 2006.
  • Wainer et al. [2006] Joshua Wainer, David J Feil-Seifer, Dylan A Shell, and Maja J Matarić. The role of physical embodiment in human-robot interaction. In RO-MAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication, pages 117–122. IEEE, 2006.
  • Wainer et al. [2007] Joshua Wainer, David J Feil-Seifer, Dylan A Shell, and Maja J Matarić. Embodiment and human-robot interaction: A task-based perspective. In RO-MAN 2007 - The 16th IEEE International Symposium on Robot and Human Interactive Communication, pages 872–877. IEEE, 2007.
  • Waldron [1991] Vincent R Waldron. Achieving communication goals in superior-subordinate relationships: The multi-functionality of upward maintenance tactics. Communication Monographs, 58(3):289–306, 1991.
  • Walters et al. [2005] Michael L Walters, Kerstin Dautenhahn, René Te Boekhorst, Kheng Lee Koay, Christina Kaouri, Sarah Woods, Chrystopher Nehaniv, David Lee, and Iain Werry. The influence of subjects’ personality traits on personal spatial zones in a human-robot interaction experiment. In RO-MAN 2005 - IEEE International Workshop on Robot and Human Interactive Communication, pages 347–352. IEEE, 2005.
  • Wang et al. [2006] A Ting Wang, Susan S Lee, Marian Sigman, and Mirella Dapretto. Developmental changes in the neural basis of interpreting communicative intent. Social cognitive and affective neuroscience, 1(2):107–121, 2006.
  • Wang and Thorpe [2002] Chieh-Chih Wang and Chuck Thorpe. Simultaneous localization and mapping with detection and tracking of moving objects. In Robotics and Automation, 2002. Proceedings. ICRA’02. IEEE International Conference on, volume 3, pages 2918–2924. IEEE, 2002.
  • Watson et al. [1988] David Watson, Lee Anna Clark, and Auke Tellegen. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of personality and social psychology, 54(6):1063–1070, 1988.
  • Welch et al. [1996] Robert B Welch, Theodore T Blackmon, Andrew Liu, Barbara A Mellers, and Lawrence W Stark. The effects of pictorial realism, delay of visual feedback, and observer interactivity on the subjective sense of presence. Presence: Teleoperators & Virtual Environments, 5(3):263–273, 1996.
  • Werry et al. [2001] Iain Werry, Kerstin Dautenhahn, Bernard Ogden, and William Harwin. Can social interaction skills be taught by a social agent? The role of a robotic mediator in autism therapy. In Cognitive technology: instruments of mind, pages 57–74. Springer, 2001.
  • Williams and Breazeal [2013] Kenton Williams and Cynthia Breazeal. Reducing driver task load and promoting sociability through an affective intelligent driving agent (AIDA). In IFIP Conference on Human-Computer Interaction, pages 619–626. Springer, 2013.
  • Wilson [2002] Margaret Wilson. Six views of embodied cognition. Psychonomic bulletin & review, 9(4):625–636, 2002.
  • Wilson and Foglia [2011] Robert A Wilson and Lucia Foglia. Embodied cognition. The Stanford Encyclopedia of Philosophy, 2011.
  • Wrobel et al. [2013] Jérémy Wrobel, Ya-Huei Wu, Hélène Kerhervé, Laila Kamali, Anne-Sophie Rigaud, Céline Jost, Brigitte Le Pévédic, and Dominique Duhaut. Effect of agent embodiment on the elder user enjoyment of a game. In ACHI 2013-The Sixth International Conference on Advances in Computer-Human Interactions, 2013.
  • Wyrobek et al. [2008] Keenan A Wyrobek, Eric H Berger, HF Machiel Van der Loos, and J Kenneth Salisbury. Towards a personal robotics development platform: Rationale and design of an intrinsically safe personal robot. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pages 2165–2170. IEEE, 2008.
  • Yohanan and MacLean [2012] Steve Yohanan and Karon E MacLean. The role of affective touch in human-robot interaction: Human intent and expectations in touching the haptic creature. International Journal of Social Robotics, 4(2):163–180, 2012.
  • Ziemke [1999] Tom Ziemke. Rethinking grounding. In Understanding representation in the cognitive sciences, pages 177–190. Springer, 1999.
  • Ziemke [2003] Tom Ziemke. What’s that thing called embodiment? In Proceedings of the 25th Annual Meeting of the Cognitive Science Society, pages 1305–1310. Citeseer, 2003.
  • Zlatev [1997] Jordan Zlatev. Situated embodiment: Studies in the emergence of spatial meaning. PhD thesis, Stockholm University, 1997.
  • Zlotowski [2010] Jakub Zlotowski. Comparison of robots’ and embodied conversational agents’ impact on users’ performance. 2010.