A Robot by Any Other Frame: Framing and Behaviour Influence Mind Perception in Virtual but not Real-World Environments

04/16/2020, by Sebastian Wallkotter, et al., Uppsala universitet

Mind perception in robots has been an understudied construct in human-robot interaction (HRI) compared to similar concepts such as anthropomorphism and the intentional stance. In a series of three experiments, we identify two factors that could potentially influence mind perception and moral concern in robots: how the robot is introduced (framing), and how the robot acts (social behaviour). In the first two online experiments, we show that both framing and behaviour independently influence participants' mind perception. However, when we combined both variables in the following real-world experiment, these effects failed to replicate. We hence identify a third factor post-hoc: the online versus real-world nature of the interactions. After analysing potential confounds, we tentatively suggest that mind perception is harder to influence in real-world experiments, as manipulations are harder to isolate compared to virtual experiments, which only provide a slice of the interaction.


1. Introduction

Figure 1. A NAO robot collaborating with a user to solve the Tower of Hanoi.

Within the human-robot interaction (HRI) community there is growing acceptance of the idea that humans treat robots as social agents (Graaf, 2019), apply social norms to robots (Vanman and Kappas, 2019), and, in some circumstances, treat robots as moral agents (Malle et al., 2016). All of these are related to the concept of mind perception, or how much agency a robot is seen to have (Gray et al., 2007; Abubshait and Wiese, 2017). However, whilst the attribution of a mind to a robot may at times be desirable (Wiese et al., 2017), a mismatch between a robot’s perceived mind and actual mind, i.e., its true capabilities, could prove detrimental for a successful interaction (Wiese et al., 2017; Koda et al., 2016; Fink, 2012).

Investigating which factors actually influence mind perception in robots is, therefore, at the core of our research. We study two possible contributing factors: framing (how the robot is introduced), and social behaviour (e.g., speech and non-verbal cues provided by the robot). Work on anthropomorphism and mind perception in robots has largely focused on embodiment and appearance, with less emphasis on social behaviours (Ghazali et al., 2019; Bartneck et al., 2009a; Fink, 2012). Similarly, although the effect of framing on mind perception has been studied with virtual agents (Caruana et al., 2017; Wiese et al., 2012; Wykowska et al., 2014), this has yet to be replicated with embodied robots. As such, the primary goal of this research is to investigate the interaction between framing and social behaviours on mind perception of a social robot.

Further, given the recent surge of interest in ethical robotics (see, for example, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai) and explainable AI (Anjomshoae et al., 2019), it is also timely to ask to what extent mind perception contributes to this debate. Considering that mind perception has been linked to moral concern in robots (Nomura et al., 2019b), we go beyond just looking at mind perception, and investigate if the degree of moral concern attributed to robots (Nomura et al., 2019a) changes according to framing and behaviour.

We first conducted two experiments on Amazon Mechanical Turk (AMT) validating a framing manipulation (experiment 1), using an image of the robot and descriptive text, and a social behaviour manipulation (experiment 2), using a video of the robot’s behaviour. Then, we conducted an experimental study in the real world to investigate the interaction between these factors. We hypothesise that robots framed as having a mind and displaying social behaviours will elicit higher mind perception and be afforded more moral standing than robots with a low-mind frame and non-social behaviour. We also hypothesise that when the frame and behaviour are in conflict (e.g., high-mind frame, non-social behaviour) the behaviour of the robot will be more influential in determining mind perception and moral standing. Additionally, we predict a significant correlation between mind perception, anthropomorphism, and moral concern.

The results revealed a surprising lack of replication between the first two experiments and the third. We hence discuss potential confounds and, as our second contribution, identify the nature of the interaction (real-world versus virtual) as the primary source of the lack of replication.

2. Related Work

The HRI community has used various terminologies and constructs to refer to the attribution of human-like properties to robotic agents. Anthropomorphism is one of the most widely used terms, and refers to the general psychological phenomenon of attributing perceived human-like properties to non-human agents (Epley et al., 2007; Złotowski et al., 2015; Fink, 2012). A related concept is the intentional stance, which focuses on explaining others’ behaviour with reference to their mental states (Schellen and Wykowska, 2019; Marchesi et al., 2019). Finally, mind perception, as the focus of this paper, is related to the intentional stance (Wiese et al., 2017; Schellen and Wykowska, 2019) and refers to the extent to which agents are seen as possessing a mind and agency (Gray et al., 2007).

There is a general consensus that the more human-like the robot appears to be, the greater the degree of anthropomorphism (Eyssel et al., 2012; Bartneck et al., 2009a; Broadbent et al., 2013). Several measures have been developed which target anthropomorphism, with one of the most popular being the Godspeed questionnaire (Bartneck et al., 2009b). As such, we find it also worthwhile to investigate how anthropomorphism relates to mind perception.

The idea of mind perception was first introduced by Gray et al. (2007). In their seminal study, they compared different agents (including a robot) on a variety of capabilities, such as pain, desire, or the ability to have goals. They found that mind perception could be divided into two dimensions: agency (the perceived ability to do things), and experience (the perceived ability to feel things). However, whilst a robot (Kismet, http://www.ai.mit.edu/projects/sociable/baby-bits.html) was one of the agents used in their initial study, many different kinds of robots exist (Phillips et al., 2018), and there are still open questions as to how different properties of these robots affect ratings of mind perception.

Social behaviour is one such factor that could potentially influence mind perception. In the context of HRI, robots may be equipped to display a number of human-like behaviours, such as gaze following, joint attention, back-channelling, personalisation, feedback, and verbal and non-verbal cues (Mutlu et al., 2006; Jung et al., 2013; Wills et al., 2016; Corrigan et al., 2013; Ahmad et al., 2019; Breazeal et al., 2005; Wigdor et al., 2016). Robot behaviour can influence perceptions of machine- or human-likeness (Park et al., 2011; Fink, 2012). Abubshait and Wiese (2017) also investigated the effect of behaviour on mind perception in virtual agents; however, they focused on the reliability of the agent as the social behaviour. More complex social behaviours such as cheating (Short et al., 2010) or making mistakes (Salem et al., 2013; Mirnig et al., 2017) have otherwise been studied in the context of perceived human-likeness, with mixed findings. As such, given the complex interplay between robot errors, human performance, and mind perception, a baseline understanding of how social behaviours in robots influence mind perception is still needed. As there can be more variance in the behaviour of a given autonomous social robot than in its appearance, behaviour may be of equal or greater importance than appearance (Wiese et al., 2017). Therefore, we aim to address this gap, and investigate if and how behaviour can affect the perception of a mind in a social robot.

A second factor which could influence mind perception in social robots is how the robot is framed. Framing refers to the prior information a person has about the robot, such as prior expectations, beliefs, or knowledge (Kwon et al., 2016). Within HRI, there is already some research investigating how framing a robot prior to an interaction influences participants’ subsequent judgements and behaviour (Westlund et al., 2016; Stenzel et al., 2012; Groom et al., 2011; Rea and Young, 2018, 2019; Thellman and Ziemke, 2017). For virtual agents, several studies have investigated how framing an agent as being controlled by a human rather than a computer leads to greater attributions of mind (Caruana et al., 2017; Wiese et al., 2012; Wykowska et al., 2014). However, in addition to most of this research being conducted with virtual agents rather than robots, these studies focus primarily on neuropsychological measures of mind perception, without validating participants’ subjective experiences (but see Caruana et al. (2017) for an exception). As such, no research has yet directly investigated the effect of framing on mind perception in robots. Hence, we investigate how two different frames, high and low mind, affect participants’ mind perception in HRI.

A secondary element of the Gray et al. (2007) study involves the relation of mind perception to moral concern. They showed a link between the sub-dimensions of mind perception and perceiving an entity as a moral agent or a moral patient, respectively (Gray et al., 2007). Additionally, robots with differing embodiment and behaviour may be afforded different levels of moral concern (Nomura et al., 2019b; Malle et al., 2016). Further, framing can influence the expansion or reduction of people’s moral inclusiveness (Laham, 2009); however, this link has yet to be explored in HRI. In light of this discussion, we chose to also include the recently proposed moral concern for robots scale (Nomura et al., 2019a) to investigate the relationship between mind perception and moral concern for robots.

3. Technical Setup and Scenario

Figure 2. A graphic visualisation of the pipeline NAO uses.

For this research we programmed a Softbank Robotics NAOv5 robot (naoqi v2.1.4.13) to autonomously play the Tower of Hanoi with users. We modified the original Tower of Hanoi puzzle into a two-player game by having the human and robot take turns in making a move. We chose to construct the interaction around the Tower of Hanoi, as it has been used previously to study aspects of HRI such as embodiment or (gaze) behaviour (Tsiakas et al., 2017; Hoffmann and Krämer, 2011; Andrist et al., 2015). The interaction is long enough to expose participants to the full range of robot behaviours, solving it requires direct interaction with the robot, and, at the same time, it is not so cognitively demanding as to distract from the robot.

Following the taxonomy outlined by Beer et al. (2014), we classified the robot as fully autonomous, drawing the action boundary at giving the user specific instructions on what to do. While it was technically impossible for NAO to pick up the disks (they are too large) and modify the game by itself, NAO instructed participants to carry out its moves.

The technical setup is summarized in figure 2. NAO used a video stream from its head camera, which we manually configured, to identify the game state. To ensure consistent recognition of the disks, we fixed the tower to the table and added a custom posture to NAO's posture library, to which it returned whenever it assessed the game state. We also added a light source behind NAO to ensure sufficient exposure and increase robustness to changes in ambient light. To detect the presence of a disk on one of the poles, NAO used colour thresholding of the disks' colours in three regions of interest, one for each pole (source code available at https://github.com/usr-lab/towerofhanoi).
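
As a rough illustration, a colour-thresholding step of this kind could look like the sketch below, assuming an OpenCV/NumPy pipeline; the HSV ranges and regions of interest are placeholder values, not the ones used in the released source code.

```python
import cv2
import numpy as np

# Placeholder regions of interest (x, y, w, h), one per pole.
ROIS = {"left": (40, 120, 80, 200), "middle": (200, 120, 80, 200), "right": (360, 120, 80, 200)}

# Placeholder HSV ranges for the three disk colours.
DISK_COLOURS = {
    "red": ((0, 120, 70), (10, 255, 255)),
    "green": ((40, 70, 70), (80, 255, 255)),
    "blue": ((100, 120, 70), (130, 255, 255)),
}

def detect_disks(frame_bgr, min_pixels=150):
    """Return, for each pole, the set of disk colours whose pixel count
    inside that pole's region of interest exceeds a threshold."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    state = {}
    for pole, (x, y, w, h) in ROIS.items():
        roi = hsv[y:y + h, x:x + w]
        present = set()
        for colour, (lower, upper) in DISK_COLOURS.items():
            mask = cv2.inRange(roi, np.array(lower), np.array(upper))
            if cv2.countNonZero(mask) > min_pixels:
                present.add(colour)
        state[pole] = present
    return state
```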

From here, NAO used breadth-first tree search with pruning of previously visited states to compute the optimal sequence of moves, solving the game in the fewest number of turns. This method is run for both the robot's and the user's turns, allowing NAO to give feedback on the user's move and display appropriate behaviours.
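
To make the search concrete, the sketch below shows a breadth-first search over Tower of Hanoi states with pruning of visited states; the state encoding and function names are our own illustration, not taken from the released code.

```python
from collections import deque

def neighbours(state):
    """Generate all legal successor states. A state is a tuple of three
    tuples, each listing disk sizes on a pole from bottom to top,
    e.g. ((3, 2, 1), (), ())."""
    for src in range(3):
        if not state[src]:
            continue
        disk = state[src][-1]
        for dst in range(3):
            if dst == src:
                continue
            if state[dst] and state[dst][-1] < disk:
                continue  # cannot place a larger disk on a smaller one
            new = [list(p) for p in state]
            new[src].pop()
            new[dst].append(disk)
            yield tuple(tuple(p) for p in new)

def optimal_moves(start, goal):
    """Breadth-first search with pruning of visited states; returns the
    shortest sequence of states from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Example: three disks, all starting on the left pole.
start = ((3, 2, 1), (), ())
goal = ((), (), (3, 2, 1))
print(len(optimal_moves(start, goal)) - 1)  # 7 moves, the known optimum
```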

Behaviour          | Social                                                           | Non-Social
Personalization    | Uses 'we'                                                        | Uses third person
Feedback           | Gives positive or negative feedback                              | States if the action was optimal
Verbal Phrases     | Variety of phrases (3 or more variations)                        | Only one phrase per trigger
Gestures           | Pointing, Waving, Nodding, Thinking                              | Pointing
Gaze               | Switches gaze between participant and game                       | Only looks at game
Memory             | Says that it really enjoys this game                             | States that the user is the 27th person to play the game
Feedback Frequency | Randomly with 33% on optimal moves, always on sub-optimal moves  | For every sub-optimal move, and every 5th move

Table 1. Differences in the Robot's Social Behaviours

To create the social behaviours, we reviewed the HRI literature and identified commonly used behaviours. Some of the most frequent examples include gaze cues, turn-taking, non-verbal gestures, personalisation, feedback, and memory (Mutlu et al., 2006; Kose-Bagci et al., 2008; Breazeal et al., 2005; Corrigan et al., 2013; Ahmad et al., 2019; Fischer et al., 2013; Irfan et al., 2019). Based on these, we developed one version of the robot's behaviour depicting a 'social' robot, and one depicting a 'non-social' robot; see table 1. NAO launches these behaviours at specified trigger points based on comparisons between the observed and expected game state. We created a set of custom animations: wave, think (scratching its head), nod, point left, point middle, and point right.
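
As an illustration of the 'Verbal Phrases' difference in table 1, a trigger point could select its utterance as in the sketch below; the phrases themselves are hypothetical and not the ones used in the study.

```python
import random

# Hypothetical phrase sets for one trigger ("user made an optimal move").
# The social condition draws from several variations, while the non-social
# condition always uses the same sentence (cf. table 1).
SOCIAL_PHRASES = [
    "Great move, we are getting closer!",
    "Nice, that is exactly what I would have done!",
    "Well done, we make a good team!",
]
NON_SOCIAL_PHRASE = "That action was optimal."

def feedback_phrase(condition):
    """Select the feedback utterance for the current trigger point."""
    if condition == "social":
        return random.choice(SOCIAL_PHRASES)
    return NON_SOCIAL_PHRASE

print(feedback_phrase("social"))
print(feedback_phrase("non-social"))
```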

We then created a custom module for NAO that runs the game and added it to the naoqi framework via a custom broker. The module ran on a separate computer accessible over the network, where all of the processing (computer vision and game logic) was done.
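
A minimal sketch of such a remote module, based on the standard naoqi Python SDK classes (ALBroker, ALModule, ALProxy), is shown below; the robot address, module name, and utterance are placeholders rather than details of our implementation.

```python
from naoqi import ALBroker, ALModule, ALProxy

ROBOT_IP, ROBOT_PORT = "192.168.1.10", 9559  # placeholder robot address

class TowerOfHanoiModule(ALModule):
    """Remote module holding the game logic; it runs on a separate computer
    and talks to the robot over the network through proxies."""

    def __init__(self, name):
        ALModule.__init__(self, name)
        self.tts = ALProxy("ALTextToSpeech")

    def announce_move(self, src, dst):
        # A single placeholder phrase; in the social condition the phrasing
        # would be drawn from several variations (cf. table 1).
        self.tts.say("Please move the top disk from pole %d to pole %d." % (src, dst))

if __name__ == "__main__":
    # The broker registers this process with the robot's naoqi instance.
    broker = ALBroker("towerBroker", "0.0.0.0", 0, ROBOT_IP, ROBOT_PORT)
    # By naoqi convention the instance is stored under its module name.
    TowerOfHanoi = TowerOfHanoiModule("TowerOfHanoi")
    TowerOfHanoi.announce_move(1, 3)
    broker.shutdown()
```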

4. Experiment 1 (Framing Validation)

4.1. Hypothesis

There will be a significant effect of framing, such that participants who read the high mind frame will have higher attributions of mind perception than those who read the low mind frame.

4.2. Participants

We recruited participants from the online platform Amazon Mechanical Turk (AMT). To ensure sufficient English proficiency, we only recruited participants from countries with English as the official language (US, Canada, UK, Australia, New Zealand). Further, to ensure high response quality, we limited participation to workers with a high approval rating and introduced attention checks prior to and during the experiment.

Out of the participants, were discarded due to failing attention checks. The remaining participants (, ) were randomly assigned into two conditions: high-mind (, ), and low-mind (, ). There were no significant differences in age (, ) or gender (, ) between the groups.

The survey took approximately 5 minutes to complete, and participants were compensated USD for their time.

4.3. Material

To control for participants' conceptions of robots when filling out the questionnaire, we showed them a picture of a NAO robot (https://www.softbankrobotics.com/emea/en/nao). This image was then combined with a text description of the robot's capabilities.

To measure mind perception, we used the dimensions of mind perception questionnaire introduced by Gray et al. (2007). However, we modified the scale by replacing the original 7-point Likert scale, which used semantic anchors, with a 5-point Likert scale ranging from not capable at all to extremely capable. This allowed us to investigate mind perception for individual robots, rather than having to compare two robots.

The image, description, and questionnaire were shown on a single page and provided via the participant’s browser using a survey constructed with UniPark survey software.

4.4. Design and Procedure

We employed a 1-way independent groups design. Ethics approval was obtained from the Bremen International Graduate School of Social Sciences (BIGSSS). Participants were first shown an information sheet which detailed the procedure and informed them of their right to withdraw at any time, after which they indicated their consent to participate. Next, participants were shown a picture of a NAO robot and asked four attention check questions about the image. If any of these questions were answered incorrectly, the survey would end without presenting the manipulation or measurement to the participant. If participants answered all attention check questions correctly, they were randomly assigned to one of the two levels of framing.

Participants were then presented with the same picture of the robot, but now accompanied by one of the two frames (high/low mind) manipulated as the independent variable (the exact manipulations are available in the supplementary material). We designed both descriptions to convey the same factual information. The dependent variable was the modified version of the mind perception questionnaire (Gray et al., 2007). Participants were instructed to look at the picture, read the text below, and then fill out the mind perception questionnaire. The order of scale items was randomized for each participant.

Next, we added a final attention check, asking about the role of the robot as described by the frame (both frames described the robot in the same role, as a teacher). This was done to ensure that participants had thoroughly read the description. Afterwards, participants were asked to provide demographics.

Finally, participants were taken to a debriefing statement informing them about the two conditions of the study, the aim, and the contact details of the experimenters should they have any questions about the study.

4.5. Results and Discussion

After collecting the data, reliability of the overall scale was computed. We excluded participants that were missing more than 20% of their data. The reliability of the overall item scale, mind perception, was ( valid cases), indicating the questionnaire is highly reliable.
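
The paper does not name the reliability coefficient, but questionnaire reliability of this kind is typically Cronbach's alpha; a minimal sketch of its computation over a participants-by-items score matrix, using made-up ratings, is given below.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants x n_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy example: 4 respondents answering 3 items on a 5-point scale.
scores = [[1, 2, 1], [3, 3, 2], [4, 5, 4], [2, 2, 3]]
print(round(cronbach_alpha(scores), 2))
```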

Condition | Mean | SD   | N  | M  | F
low-mind  | 2.14 | 0.80 | 39 | 24 | 14
high-mind | 2.79 | 0.91 | 39 | 24 | 15
total     | 2.46 | 0.91 | 78 | 48 | 29

Table 2. Means and SDs (mind perception), Experiment 1

We then conducted an independent samples t-test to compare mind perception in the high and low mind framing conditions. The means and standard deviations for each group are reported in table 2. There was a significant difference in mind perception between participants who viewed the high-mind frame and those who viewed the low-mind frame (see figure 3(a)). This indicates that the framing manipulation was successful in influencing the mind perception participants attributed to the robot, with a moderate to high effect size.
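
For reference, such a comparison can be run with SciPy as in the sketch below; the score arrays are random placeholders generated from the group means and SDs in table 2, not the original data.

```python
import numpy as np
from scipy import stats

# Placeholder per-participant mind perception means, generated from the
# group means and SDs reported in table 2 (not the original data).
rng = np.random.default_rng(0)
low_mind = rng.normal(2.14, 0.80, 39)
high_mind = rng.normal(2.79, 0.91, 39)

# Independent samples t-test comparing the two framing conditions.
result = stats.ttest_ind(high_mind, low_mind)

# Cohen's d from the pooled standard deviation as a simple effect size.
pooled_sd = np.sqrt((low_mind.var(ddof=1) + high_mind.var(ddof=1)) / 2)
cohens_d = (high_mind.mean() - low_mind.mean()) / pooled_sd
print(result.statistic, result.pvalue, cohens_d)
```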

5. Experiment 2 (Behaviour Validation)

5.1. Hypothesis

There will be a significant effect of behaviour, such that participants who see the video of the social robot will have higher attributions of mind perception than those who see the video of the non-social robot.

5.2. Participants

Participants were again recruited from AMT, with the same participation criteria as in Experiment 1 (see Section 4.2). participants completed the survey, of which failed one of the attention checks, leaving (, ) eligible participants. These participants were then randomly assigned to one of the two conditions; viewed the social robot videos (), and viewed the non-social videos (). There was no difference between gender () or age () across conditions.

The survey took approximately minutes to complete, and participants were compensated USD for their time.

5.3. Material

For this experiment, we used videos showing NAO introducing the Tower of Hanoi and then playing it with the experimenter. Each condition (social/non-social) had two videos. The first video of the set showed NAO introducing itself and explaining the rules of the game. The second video showed three scenes from a game played between NAO and the experimenter (videos are available in the supplementary material). The videos were filmed from the perspective a participant would have during the real-world experiment, and the scenes were designed to show all the differences present between the two conditions. The experimenter was not seen, aside from their hand.

Mind perception was again measured using the modified version of the mind perception questionnaire by Gray et al. (2007). We used the Limesurvey survey tool to host the survey.

5.4. Design and Procedure

Ethics approval for this study was obtained from the Jacobs University Ethics Committee. We used a similar design as in our first experiment, but replaced the initial image participants saw when answering the attention check questions with the first video from the set of their assigned condition (duration 40s). We also replaced the following image and description of the robot with the second video of the set (duration ).

Upon entering the survey, participants were randomly assigned to either the social or non-social behaviour condition as the independent variable. From then on, the procedure was the same as in Experiment 1 (see section 4.4).

To ensure that we did not induce any false expectations about the robot during the experiment, the debriefing statement contained an additional paragraph indicating that the robot's agency may appear different from its true capabilities. We also included a link to the Softbank Robotics website, should participants wish to obtain more information.

5.5. Results and Discussion

Reliability of the mind perception score was again high. As in Experiment 1, we excluded participants if more than 20% of their responses were missing.

(a) framing
(b) behaviour
Figure 3. The distribution of mind perception scores in experiment 1 and experiment 2. Mind perception differs significantly () between levels for both framing (left) and behaviour (right).

We performed an independent samples t-test on mind perception, comparing the social and non-social behaviours; the results are shown in figure 3(b). The difference was significant, suggesting the social behaviours were successful in increasing participants' attributions of mind perception, with a large effect size.

Behaviour  | Mean | SD   | N  | M  | F
non-social | 1.79 | 0.68 | 28 | 14 | 14
social     | 2.39 | 0.76 | 34 | 14 | 19
total      | 2.12 | 0.78 | 62 | 28 | 33

Table 3. Results (mind perception), Experiment 2

6. Experiment 3 (Interaction)

6.1. Hypotheses

(H1) There will be a significant effect of framing, such that participants who read the high mind frame will have higher attributions of mind perception and moral concern than those who read the low mind frame. (H2) There will be a significant effect of behaviour, such that participants who interact with the social robot will have higher attributions of mind perception and moral concern than those who interact with the non-social robot. (H3) There will be a significant framing by social behaviours interaction, where behaviour will have a stronger effect than framing on mind perception and moral concern. That is, the difference between social and non-social behaviours for the high mind frame will be lower than the difference between social and non-social behaviours for the low mind frame.

6.2. Participants

We recruited students from Uppsala University via advertisements on notice boards across the entire university, with particular focus on the buildings for the humanities, medicine, and psychology. This was done to attract more people with a non-technical background. We also advertised the experiment in lectures, and recruited participants directly by approaching them and inviting them to participate.

Participants completed the experiment and were randomly assigned to one of four conditions: (1) low-mind, non-social; (2) low-mind, social; (3) high-mind, non-social; (4) high-mind, social. One participant had to be excluded due to technical difficulties with the robot (faulty CV pipeline), leaving 99 eligible participants. Each condition had 25 participants, with the exception of the high-mind, non-social condition, which had 24.

The experiment took approximately minutes to complete, and participants were compensated with a voucher worth approximately USD for their time.

6.3. Design

Ethics approval was obtained from the Jacobs University Ethics Committee. We employed a 2-way full factorial, independent groups design. Our independent variables were framing (high/low mind, see section 4) and robot behaviour (social/non-social, see section 5). The number of participants needed to detect an effect with 95% power, as recommended by Cohen (1992), was calculated a priori using G*Power software (Faul et al., 2007). Although moderate to large effect sizes were found in Experiments 1 and 2, we chose to be conservative and assumed a moderate effect size (0.15; Cohen, 1992). Following these parameters, the number of participants required to detect a medium effect size is N = 73. Given our sample size of 99, our analyses should be sufficiently powered to detect a medium to large effect.

Our dependent variables were mind perception (Gray et al., 2007), moral concern (Nomura et al., 2019a), as well as the Godspeed questionnaire (Bartneck et al., 2009b). Participants also answered 3 questions regarding their familiarity with robots in general, their familiarity with the NAO robot specifically, and their openness to new technologies.

6.4. Material

For this experiment, participants interacted with the NAO robot directly, using the Tower of Hanoi as an interaction task. NAO was programmed as described in section 3. Additionally, we provided a laptop that participants used to fill in the questionnaires before and after the interaction. If additional consent was given, the entire experiment was video recorded from two angles, a front angle, and a side angle.

To measure mind perception, we again used the adapted mind perception questionnaire by Gray et al. (2007). To measure the amount of moral concern participants felt towards NAO, we used a modified version of the Measurement of Moral Concern for Robots scale (Nomura et al., 2019a). We reformulated each item to start with the phrase ”I would”, and reverse coded the item if the original started with ”I wouldn't”. The final scale had 30 items, out of which 12 were reverse coded.
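
Reverse-coded items are typically rescored by mirroring responses within the scale range; the short sketch below illustrates this with hypothetical scale endpoints and item indices, not those of the actual scale.

```python
def reverse_code(score, scale_min=1, scale_max=7):
    """Mirror a Likert response within the scale range,
    e.g. 1 -> 7, 2 -> 6, ... on a 7-point scale."""
    return scale_min + scale_max - score

# Hypothetical responses to a 5-item scale where items 2 and 4 are reverse coded.
responses = [6, 2, 5, 1, 7]
reverse_items = {1, 3}  # zero-based indices of reverse-coded items
recoded = [reverse_code(r) if i in reverse_items else r
           for i, r in enumerate(responses)]
print(recoded)  # [6, 6, 5, 7, 7]
```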

Finally, all dimensions of the Godspeed questionnaire (Bartneck et al., 2009b) were given to participants to measure anthropomorphism of the robot.

6.5. Procedure

After entering the lab, participants were asked to sit in front of the Tower of Hanoi opposite of NAO. Then, participants were asked to read and sign the prepared information sheet and consent form. This included the optional decision to have the interaction video recorded for later analysis. Afterwards, participants were directed to a laptop, next to the tower, where they filled in a demographic questionnaire and answered questions about their familiarity with robots, NAO, and openness to new technology. Following this, they were presented with the framed description of the robot (high/low mind). This description was shown automatically after the demographics questionnaire was completed. After reading the description, participants were prompted to stop filling out the questionnaire and asked to inform the experimenter, so that they could start the game. Participants then played the game with NAO showing one of the two sets of behaviour (non-social/social). Once the game was completed, participants were asked to continue filling out the questionnaire, which presented the three measures in random order. Questions within each measure were also randomized. After completing the survey, participants were shown a debriefing statement, thanked for their participation in the study, and given the voucher as compensation for their time.

6.6. Results and Discussion

(a) Mind Perception
(b) Moral Concern
(c) Godspeed
Figure 4. The main results of experiment 3. None of the measures differ significantly between levels of either framing or behaviour. While there is no significant interaction, a trend towards a framing difference in the non-social condition is visible for mind perception.

Before analyzing the effect of our conditions, we verified the reliability of the measures. Mind Perception (, ), Moral Concern (, ) and Godspeed (, ) were all found to be highly reliable. The reliability of the perceived safety scale of the Godspeed questionnaire was low (, ). We therefore excluded this sub-scale from our final analyses.

We then proceeded to check for an effect of three potential covariates: age, gender, and technical background. The latter was measured using the three questions outlined in the procedure’s pre-test, which we tested independently. First, we checked for correlation between the potential covariates and the outcome measures. The only significant correlation was between gender and moral concern (, ), indicating that women showed greater moral concern for the robot than men.

During this analysis we also noted significant correlations between the dependent variables. There was a significant correlation between mind perception and moral concern, between mind perception and Godspeed, and between Godspeed and moral concern. These findings support our hypothesis that these three constructs are related, and further suggest using a MANOVA, rather than separate ANOVAs, to account for dependencies between these variables.

Afterwards, we tested for an effect of gender between conditions using a -test; we found no significant difference (, ), which is unsurprising given our random assignment of participants to conditions. As correlation is a necessary condition for determining confounds, mediation, or moderation, we concluded that none of the measured potential confounds affected our experiment.

Measure         | Behaviour  | Framing   | Mean | SD   | N  | M  | F
Mind Perception | non-social | low-mind  | 2.15 | 0.89 | 25 | 17 | 8
Mind Perception | non-social | high-mind | 2.48 | 0.64 | 24 | 16 | 8
Mind Perception | non-social | total     | 2.31 | 0.79 | 49 | 33 | 16
Mind Perception | social     | low-mind  | 2.47 | 0.86 | 25 | 14 | 11
Mind Perception | social     | high-mind | 2.54 | 0.73 | 25 | 14 | 11
Mind Perception | social     | total     | 2.50 | 0.79 | 50 | 28 | 22
Mind Perception | total      | low-mind  | 2.31 | 0.88 | 50 | 31 | 19
Mind Perception | total      | high-mind | 2.50 | 0.68 | 49 | 30 | 19
Mind Perception | total      | total     | 2.40 | 0.79 | 99 | 61 | 38
Moral Concern   | non-social | low-mind  | 5.17 | 0.84 | 25 | 17 | 8
Moral Concern   | non-social | high-mind | 5.18 | 0.79 | 24 | 16 | 8
Moral Concern   | non-social | total     | 5.17 | 0.81 | 49 | 33 | 16
Moral Concern   | social     | low-mind  | 5.07 | 0.84 | 25 | 14 | 11
Moral Concern   | social     | high-mind | 5.09 | 0.56 | 25 | 14 | 11
Moral Concern   | social     | total     | 5.08 | 0.60 | 50 | 28 | 22
Moral Concern   | total      | low-mind  | 5.12 | 0.74 | 49 | 31 | 19
Moral Concern   | total      | high-mind | 5.13 | 0.68 | 50 | 30 | 19
Moral Concern   | total      | total     | 5.13 | 0.71 | 99 | 61 | 38
Godspeed        | non-social | low-mind  | 3.53 | 0.85 | 25 | 17 | 8
Godspeed        | non-social | high-mind | 3.80 | 0.57 | 24 | 16 | 8
Godspeed        | non-social | total     | 3.67 | 0.73 | 49 | 33 | 16
Godspeed        | social     | low-mind  | 3.63 | 0.61 | 25 | 14 | 11
Godspeed        | social     | high-mind | 3.60 | 0.56 | 25 | 14 | 11
Godspeed        | social     | total     | 3.61 | 0.58 | 50 | 28 | 22
Godspeed        | total      | low-mind  | 3.58 | 0.73 | 49 | 31 | 19
Godspeed        | total      | high-mind | 3.70 | 0.57 | 50 | 30 | 19
Godspeed        | total      | total     | 3.64 | 0.66 | 99 | 61 | 38

Table 4. Results Experiment 3

Following this, we ran a MANOVA using framing and behaviour as independent variables, and our three measures (Mind Perception, Moral Concern, Godspeed) as dependent variables. Contrary to our hypotheses, we found no significant effect of (H1) framing, no significant effect of (H2) behaviour, and no significant (H3) interaction between framing and behaviour (all Wilks' Λ tests non-significant). The group means are reported in table 4, and visualisations of the distributions for mind perception and moral concern can be found in figure 4.
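
A minimal sketch of such a MANOVA using statsmodels is given below; the data frame, factor coding, and score values are placeholders and not part of our released material.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: 100 participants in a 2x2 (framing x behaviour) design
# with three questionnaire scores each; the values are random placeholders.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "framing": np.repeat(["low", "high"], n // 2),
    "behaviour": np.tile(["non_social", "social"], n // 2),
    "mind": rng.normal(2.4, 0.8, n),      # mind perception score
    "moral": rng.normal(5.1, 0.7, n),     # moral concern score
    "godspeed": rng.normal(3.6, 0.7, n),  # Godspeed score
})

# 2x2 MANOVA with an interaction term, mirroring the design of Experiment 3.
manova = MANOVA.from_formula("mind + moral + godspeed ~ framing * behaviour", data=df)
print(manova.mv_test())  # reports Wilks' lambda (among other statistics) per effect
```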

7. General Discussion

The most unexpected finding of our research is that the manipulations tested in the online experiments did not replicate in the real world. The non-significant findings on all three scales (mind perception, moral concern, and anthropomorphism) also mean that we were unable to further analyse the relationship between these constructs. The lack of replication also suggests that there is at least one other factor at play that is confounding the manipulations. There are two main potential sources of difference in this experiment: population and experimental setting. As the population factors age, gender, and technical background were assessed as potential confounds during the experiment, we can rule these out as causes; this makes the experimental setting (online vs. real-world) the more likely cause.

The first main source of difference between the three experiments is the population tested. The population in the first two experiments was recruited from AMT; the population in the third experiment consisted of European university students. As stated above, we can rule out age, gender and technological background, due to non-significant correlation with either the independent or dependent variables. The exception here is moral concern, which differs significantly by gender, but doesn’t differ significantly between conditions. Another possible confound between the two could be cultural background; however, this seems unlikely due to the large diversity of both the AMT population and the university population. In addition, the original mind perception sub-scales have been replicated cross culturally in a Japanese sample, suggesting that mind perception is a culturally generalizable phenomenon (Ishii and Watanabe, 2019). As such, we think that it is unlikely - though not impossible - that differences in population are causing the difference in effect size between experiments.

The second source of difference between the experiments comes from the environment. For the first two experiments the robot was embodied virtually (picture and video), whereas the final experiment involved physical embodiment. Additionally, the role of participants differed between experiments. Whilst the first two experiments were online, and therefore involved the participants only as observers, in the final experiment they were able to interact with the robot directly. The duration of the experiment was also extended from around 5 minutes in the first two experiments to 25 minutes in the third. The third experiment was performed in a research lab, whereas the first two were carried out at a location of the participant’s choosing.

One consequence of moving from a virtual embodiment with participants as observers to a real-world interaction with a physically embodied robot could be a change in participants' attention. While the focus of experiment 1 and experiment 2 was clearly on the robot, participants in experiment 3 had to split their attention between the robot and the game. Potentially, the focus on the task may have distracted from the robot's behaviours, blurring the differences between conditions due to lower engagement with the robot. However, in this case we would still expect a significant effect of framing, as frames were presented before the game could become a distractor, and we would expect participants who engage more with the robot to show a larger effect on the measures. We can use the time spent playing the game as a proxy measure for engagement. However, we do not find any significant correlation between completion time and participants' ratings of mind perception, making a lack of focus on the robot an unlikely explanation.

A second possibility is that real-world interaction is much richer than merely seeing a picture or video. Participants may have more sources to draw on when assessing the extent of the robot's mind in the real-world interaction, making effects harder to isolate. This would also explain the lack of replication for the framing manipulation, as other factors present during the interaction could outweigh the effect of the frame. In addition, factors like in-group affiliation, which are not present in virtual scenarios because the participant is only an observer, may be present in real-world interactions. More factors at play would mean that the effect size could be lower in the real world. If we add the type of experiment (virtual / real-world) as another variable, we can test the assumption that real-world interaction leads to a lower effect size. This decrease would be visible as an interaction between type of experiment and manipulation. That is, we would expect the effect of framing and behaviour on mind perception to be significantly smaller in the real-world experiment than in the two pilots, where only an image and video were used. Hence, we combined our data into one large dataset, which was possible due to the non-significant interaction in experiment 3, and performed two 2-way ANOVAs comparing both framing and behaviour across the type of experiment (online with picture/video versus real-world interaction).

(a) framing
(b) behaviour
Figure 5. A contrast between the virtual experiment and the real-world interaction on the mean score of mind perception split by manipulations. We can see a strong indication for an interaction between experiment type and manipulation.

First, we conducted an ANOVA testing the interaction between the type of experiment (virtual / physical) and framing (high/low mind); see figure 5. We found a non-significant interaction between the experimental conditions and framing, a significant main effect of framing, likely driven by the effect of Experiment 1, and a non-significant main effect of the experimental condition. However, if the effect size in the real-world interaction is indeed small, more power may be required, making this (post-hoc) analysis difficult to interpret. Consequently, we recommend a follow-up study with higher power, specifically targeted at comparing the virtual and real-world settings in the context of framing.

Next, we conducted a second ANOVA comparing the type of experiment (virtual / physical) and behaviour (social / non-social); see figure 5. The ANOVA showed a non-significant interaction between experimental conditions and behaviour, a significant main effect of behaviour, again probably driven by Experiment 2, and a significant main effect of the experimental condition. The main effect of the type of experiment could suggest that ratings of mind perception are higher for physically present robots than for robots depicted in videos; again, however, a more rigorous follow-up experiment would be needed to determine this with certainty.
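
A minimal sketch of one of these post-hoc 2-way ANOVAs using statsmodels is shown below; the combined data frame and its column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical combined dataset: mind perception scores from the online
# framing experiment and the real-world experiment (values are placeholders).
rng = np.random.default_rng(1)
n = 177  # e.g. 78 online + 99 real-world participants
df = pd.DataFrame({
    "setting": rng.choice(["virtual", "real_world"], n),
    "framing": rng.choice(["low", "high"], n),
    "mind": rng.normal(2.4, 0.8, n),
})

# 2-way ANOVA: framing x experiment type, with the interaction term testing
# whether the framing effect is smaller in the real-world setting.
model = ols("mind ~ C(framing) * C(setting)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```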

Hence, a tentative explanation for what caused the reduction in effect size is that there may be more factors at play (such as intergroup dynamics) in the real-world setting than in either online experiment. This explanation also aligns with previous work comparing virtual and physical embodiment. While some studies suggest that physically embodied robots lead to higher social presence (Jung et al., 2013), elicit higher ratings of empathy (Seo et al., 2015), and lead to greater engagement and enjoyment of the interaction (Deng et al., 2019), other studies contradict these findings (Schneider and Kummert, 2017; Ligthart and Truong, 2015). In our experiment, we can equally see that physical embodiment leads to a consistent, high attribution of mind across conditions; however, the effect size of framing and behaviour is larger in the virtual pilots.

An appealing explanation for this is that humans' perception of other minds is the result of interference between different sources of truth. In a physical interaction these factors are, as mentioned above, harder to isolate. As we can only measure the resulting, inferred mean, the effect size of an individual factor will be reduced. In the virtual setting, however, the manipulation is more isolated, meaning fewer factors contribute overall, and variance due to the manipulation is easier to detect.

Should interference from other, non-measured factors be the main cause of our findings, this could also explain some of the other contradictory findings in research on physical and virtual embodiment. However, and this is a clear limitation of our work, we can only tentatively suggest this explanation, as we set out to test a different hypothesis and derived this explanation post-hoc. Hence, we highly encourage more research testing both which factors contribute to outcomes like mind perception and what differences exist between virtual and physical interactions. Ideally, specific factors which could potentially contribute to mind perception in vivo would be identified and manipulated individually. Additionally, we think that a meta-analysis on the effect of robot behaviours could help investigate whether our theory can explain some of the conflicting findings between previous studies.

8. Conclusion

In summary, this paper shows evidence that mind perception is harder to manipulate in physical experiments than in virtual ones. Both experiment 1 and experiment 2 showed significant effects of framing and social behaviour on mind perception, respectively. However, these effects failed to replicate in a real-world setting. We tentatively suggest that this is caused by virtual interactions being more isolated, i.e., only providing a slice of the real interaction. We hypothesize that this could explain some of the contradictory findings in experiments between virtual and physical embodiment, although further research is needed to claim this with certainty.

Acknowledgements.
Special thanks to Tatsuya Nomura for providing a translated version of the Measurement of Moral Concern for Robots scale, and to the ANIMATAS project’s independent ethics advisor Dr. Agnès Roby-Brami, for providing additional thoughts on the experimental design. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 765955.

References

  • Abubshait and Wiese (2017) Abdulaziz Abubshait and Eva Wiese. 2017. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction. Frontiers in psychology 8, August (2017), 1–12. https://doi.org/10.3389/fpsyg.2017.01393
  • Ahmad et al. (2019) Muneeb Imtiaz Ahmad, Omar Mubin, Suleman Shahid, and Joanne Orlando. 2019. Robot’s adaptive emotional feedback sustains children’s social engagement and promotes their vocabulary learning: a long-term child-robot interaction study. Adaptive Behavior 27, 4 (2019), 243–266. https://doi.org/10.1177/1059712319844182
  • Andrist et al. (2015) Sean Andrist, Bilge Mutlu, and Adriana Tapus. 2015. Look like me: Matching robot personality via gaze to increase motivation. In Conference on Human Factors in Computing Systems - Proceedings, Vol. 2015-April. Association for Computing Machinery, 3603–3612. https://doi.org/10.1145/2702123.2702592
  • Anjomshoae et al. (2019) Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. 2019. Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 1078–1088.
  • Bartneck et al. (2009a) Christoph Bartneck, Takayuki Kanda, Omar Mubin, and Abdullah Al Mahmud. 2009a. Does the Design of a Robot Influence Its Animacy and Perceived Intelligence? International Journal of Social Robotics 1, 2 (apr 2009), 195–204. https://doi.org/10.1007/s12369-009-0013-7
  • Bartneck et al. (2009b) Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009b. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1, 1 (2009), 71–81. https://doi.org/10.1007/s12369-008-0001-3
  • Beer et al. (2014) Jenay M Beer, Arthur D Fisk, and Wendy A Rogers. 2014. Toward a Framework for Levels of Robot Autonomy in Human-Robot Interaction. Journal of Human-Robot Interaction 3, 2 (jun 2014), 74. https://doi.org/10.5898/jhri.3.2.beer
  • Breazeal et al. (2005) Cynthia Breazeal, Cory D. Kidd, Andrea L. Thomaz, Guy Hoffman, and Matt Berlin. 2005. Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS. 383–388. https://doi.org/10.1109/IROS.2005.1545011
  • Broadbent et al. (2013) Elizabeth Broadbent, Vinayak Kumar, Xingyan Li, John Sollers, Rebecca Q. Stafford, Bruce A. MacDonald, and Daniel M. Wegner. 2013. Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived To Have More Mind and a Better Personality. PLoS ONE 8, 8 (aug 2013), e72589. https://doi.org/10.1371/journal.pone.0072589
  • Caruana et al. (2017) Nathan Caruana, Dean Spirou, and Jon Brock. 2017. Human agency beliefs influence behaviour during virtual social interactions. In PeerJ.
  • Cohen (1992) Jacob Cohen. 1992. A power primer. Psychological Bulletin 112, 1 (1992), 155–159.
  • Corrigan et al. (2013) Lee J. Corrigan, Christopher Peters, and Ginevra Castellano. 2013. Identifying task engagement: Towards personalised interactions with educational robots. In Proceedings - 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013. IEEE, 655–658. https://doi.org/10.1109/ACII.2013.114
  • Deng et al. (2019) Eric Deng, Bilge Mutlu, and Maja J. Mataric. 2019. Embodiment in Socially Interactive Robots. Foundations and Trends in Robotics 7, 4 (2019), 251–356. https://doi.org/10.1561/2300000056
  • Epley et al. (2007) Nicholas Epley, Adam Waytz, and John T. Cacioppo. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review 114, 4 (2007), 864–886. https://doi.org/10.1037/0033-295X.114.4.864 arXiv:epley2007
  • Eyssel et al. (2012) Friederike A. Eyssel, Dieta Kuchenbrandt, Simon Bobinger, Laura de Ruiter, and Frank Hegel. 2012. ’If you sound like me, you must be more human’. In Proceedings of the seventh annual ACM/IEEE International Conference on Human-Robot Interaction - HRI ’12. ACM Press, New York, New York, USA, 125. https://doi.org/10.1145/2157689.2157717
  • Faul et al. (2007) Franz Faul, Edgar Erdfelder, Albert G. Lang, and Axel Buchner. 2007. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39, 2 (2007), 175–191. https://doi.org/10.3758/BF03193146
  • Fink (2012) Julia Fink. 2012. Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction. In International Conference on Social Robotics. 199–208.
  • Fischer et al. (2013) Kerstin Fischer, Katrin Lohan, Joe Saunders, Chrystopher Nehaniv, Britta Wrede, and Katharina Rohlfing. 2013. The impact of the contingency of robot feedback on HRI. In Proceedings of the 2013 International Conference on Collaboration Technologies and Systems, CTS 2013. IEEE, 210–217. https://doi.org/10.1109/CTS.2013.6567231
  • Ghazali et al. (2019) Aimi S. Ghazali, Jaap Ham, Panos Markopoulos, and Emilia Barakova. 2019. Investigating the Effect of Social Cues on Social Agency Judgement. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 586–587. https://doi.org/10.1109/HRI.2019.8673266
  • Graaf (2019) Maartje M. A. De Graaf. 2019. People’s Explanations of Robot Behavior Subtly Reveal Mental State Inferences. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 239–248.
  • Gray et al. (2007) Heather M. Gray, Kurt Gray, and Daniel M. Wegner. 2007. Dimensions of Mind Perception. Science 315, 5812 (feb 2007), 619–619. https://doi.org/10.1126/science.1134475
  • Groom et al. (2011) Victoria Groom, Vasant Srinivasan, Cindy L. Bethel, Robin Murphy, Lorin Dole, and Clifford Nass. 2011. Responses to robot social roles and social role framing. In 2011 International Conference on Collaboration Technologies and Systems (CTS). IEEE, 194–203. https://doi.org/10.1109/CTS.2011.5928687
  • Hoffmann and Krämer (2011) Laura Hoffmann and Nicole C. Krämer. 2011. How should an artificial entity be embodied? Comparing the effects of a physically present robot and its virtual representation. HRI 2011 workshop on social robotic telepresence (2011), 14–20.
  • Irfan et al. (2019) Bahar Irfan, Aditi Ramachandran, Samuel Spaulding, Dylan F. Glas, Iolanda Leite, and Kheng L. Koay. 2019. Personalization in Long-Term Human-Robot Interaction. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2019-March. IEEE, 685–686. https://doi.org/10.1109/HRI.2019.8673076
  • Ishii and Watanabe (2019) Tatsunori Ishii and Katsumi Watanabe. 2019. How People Attribute Minds to Non-Living Entities. In 2019 11th International Conference on Knowledge and Smart Technology, KST 2019. Institute of Electrical and Electronics Engineers Inc., 213–217. https://doi.org/10.1109/KST.2019.8687324
  • Jung et al. (2013) Malte F. Jung, Jin J. Lee, Nick Depalma, Sigurdur O. Adalgeirsson, Pamela J. Hinds, and Cynthia Breazeal. 2013. Engaging robots: Easing complex human-robot teamwork using backchanneling. Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 1555–1566. https://doi.org/10.1145/2441776.2441954
  • Koda et al. (2016) Tomoko Koda, Yuta Nishimura, and Tomofumi Nishijima. 2016. How robot’s animacy affects human tolerance for their malfunctions?. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2016-April. IEEE, 455–456. https://doi.org/10.1109/HRI.2016.7451803
  • Kose-Bagci et al. (2008) Hatice Kose-Bagci, Kerstin Dautenhahn, and Chrystopher L. Nehaniv. 2008. Emergent dynamics of turn-taking interaction in drumming games with a humanoid robot. In RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 346–353. https://doi.org/10.1109/ROMAN.2008.4600690
  • Kwon et al. (2016) Minae Kwon, Malte F. Jung, and Ross A. Knepper. 2016. Human expectations of social robots. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vol. 2016-April. IEEE, 463–464. https://doi.org/10.1109/HRI.2016.7451807
  • Laham (2009) Simon M. Laham. 2009. Expanding the moral circle: Inclusion and exclusion mindsets and the circle of moral regard. Journal of Experimental Social Psychology 45, 1 (2009), 250–253. https://doi.org/10.1016/j.jesp.2008.08.012
  • Ligthart and Truong (2015) Mike Ligthart and Khiet P. Truong. 2015. Selecting the right robot: Influence of user attitude, robot sociability and embodiment on user preferences. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication 2015-Novem (2015), 682–687. https://doi.org/10.1109/ROMAN.2015.7333598
  • Malle et al. (2016) Bertram F. Malle, Matthias Scheutz, Jodi Forlizzi, and John Voiklis. 2016. Which robot am I thinking about? The impact of action and appearance on people’s evaluations of a moral robot. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2016-April. IEEE Computer Society, 125–132. https://doi.org/10.1109/HRI.2016.7451743
  • Marchesi et al. (2019) Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Jairo Perez-Osorio, Ebru Baykara, and Agnieszka Wykowska. 2019. Do We Adopt the Intentional Stance Toward Humanoid Robots? Frontiers in Psychology 10 (2019), 450. https://doi.org/10.3389/fpsyg.2019.00450
  • Mirnig et al. (2017) Nicole Mirnig, Gerald Stollnberger, Markus Miksch, Susanne Stadler, Manuel Giuliani, and Manfred Tscheligi. 2017. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot. Frontiers in Robotics and AI 4 (may 2017), 1–15. https://doi.org/10.3389/frobt.2017.00021
  • Mutlu et al. (2006) Bilge Mutlu, Jodi Forlizzi, and Jessica Hodgins. 2006. A storytelling robot: Modeling and evaluation of human-like gaze behavior. In Proceedings of the 2006 6th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS. 518–523. https://doi.org/10.1109/ICHR.2006.321322
  • Nomura et al. (2019a) Tatsuya Nomura, Takayuki Kanda, and Sachie Yamada. 2019a. Measurement of Moral Concern for Robots. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 540–541. https://doi.org/10.1109/HRI.2019.8673095
  • Nomura et al. (2019b) Tatsuya Nomura, Kazuki Otsubo, and Takayuki Kanda. 2019b. Preliminary Investigation of Moral Expansiveness for Robots. Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO 2018-September (2019), 91–96. https://doi.org/10.1109/ARSO.2018.8625717
  • Park et al. (2011) Eunil Park, Hwayeon Kong, Hyeong T. Lim, Jongsik Lee, Sangseok You, and Angel P. Del Pobil. 2011. The effect of robot’s behavior vs. appearance on communication with humans. In HRI 2011 - Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction. IEEE, 219–220. https://doi.org/10.1145/1957656.1957740
  • Phillips et al. (2018) Elizabeth Phillips, Xuan Zhao, Daniel Ullman, and Bertram F. Malle. 2018. What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database. In ACM/IEEE International Conference on Human-Robot Interaction. ACM Press, New York, New York, USA, 105–113. https://doi.org/10.1145/3171221.3171268
  • Rea and Young (2018) Daniel J. Rea and James E. Young. 2018. It’s All in Your Head: Using Priming to Shape an Operator’s Perceptions and Behavior during Teleoperation. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI ’18. 32–40. https://doi.org/10.1145/3171221.3171259
  • Rea and Young (2019) Daniel J. Rea and James E. Young. 2019. Methods and Effects of Priming a Teloperator’s Perception of Robot Capabilities. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2019-March. IEEE, 739–741. https://doi.org/10.1109/HRI.2019.8673186
  • Salem et al. (2013) Maha Salem, Friederike A. Eyssel, Katharina Rohlfing, Stefan Kopp, and Frank Joublin. 2013. To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability. International Journal of Social Robotics 5, 3 (aug 2013), 313–323. https://doi.org/10.1007/s12369-013-0196-9
  • Schellen and Wykowska (2019) Elef Schellen and Agnieszka Wykowska. 2019. Intentional mindset toward robots-open questions and methodological challenges. Frontiers in Robotics AI 6, JAN (jan 2019). https://doi.org/10.3389/frobt.2018.00139
  • Schneider and Kummert (2017) Sebastian Schneider and Franz Kummert. 2017. Does the user’s evaluation of a socially assistive robot change based on presence and companionship type?. In ACM/IEEE International Conference on Human-Robot Interaction. 277–278. https://doi.org/10.1145/3029798.3038418
  • Seo et al. (2015) Stela H. Seo, Denise Geiskkovitch, Masayuki Nakane, Corey King, and James E. Young. 2015. Poor Thing! Would You Feel Sorry for a Simulated Robot?: A comparison of empathy toward a physical and a simulated robot. ACM/IEEE International Conference on Human-Robot Interaction 2015-March, 125–132. https://doi.org/10.1145/2696454.2696471
  • Short et al. (2010) Elaine Short, Justin Hart, Michelle Vu, and Brian Scassellati. 2010. No fair!! An interaction with a cheating robot. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 219–226. https://doi.org/10.1109/HRI.2010.5453193
  • Stenzel et al. (2012) Anna Stenzel, Eris Chinellato, Maria A. T. Bou, Ángel P. del Pobil, Markus Lappe, and Roman Liepelt. 2012. When humanoid robots become human-like interaction partners: Corepresentation of robotic actions. Journal of Experimental Psychology: Human Perception and Performance 38, 5 (oct 2012), 1073–1077. https://doi.org/10.1037/a0029493
  • Thellman and Ziemke (2017) Sam Thellman and Tom Ziemke. 2017. Social Attitudes Toward Robots are Easily Manipulated. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI ’17. ACM Press, New York, New York, USA, 299–300. https://doi.org/10.1145/3029798.3038336
  • Tsiakas et al. (2017) Konstantinos Tsiakas, Maher Abujelala, Alexandros Lioulemes, and Fillia Makedon. 2017. An intelligent Interactive Learning and Adaptation framework for robot-based vocational training. 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016 October (2017). https://doi.org/10.1109/SSCI.2016.7850066
  • Vanman and Kappas (2019) Eric J. Vanman and Arvid Kappas. 2019. ”Danger, Will Robinson!” The challenges of social robots for intergroup relations. Social and Personality Psychology Compass 13, 8 (aug 2019), 1–13. https://doi.org/10.1111/spc3.12489
  • Westlund et al. (2016) Jacqueline M. K. Westlund, Marayna Martinez, Maryam Archie, Madhurima Das, and Cynthia Breazeal. 2016. A study to measure the effect of framing a robot as a social agent or as a machine on children’s social behavior. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2016-April. IEEE Computer Society, 459–460. https://doi.org/10.1109/HRI.2016.7451805
  • Wiese et al. (2017) Eva Wiese, Giorgio Metta, and Agnieszka Wykowska. 2017. Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Frontiers in psychology 8, OCT (oct 2017). https://doi.org/10.3389/fpsyg.2017.01663
  • Wiese et al. (2012) Eva Wiese, Agnieszka Wykowska, Jan Zwickel, and Hermann J. Mueller. 2012. I See What You Mean: How Attentional Selection Is Shaped by Ascribing Intentions to Others. In PloS one.
  • Wigdor et al. (2016) Noel Wigdor, Joachim de Greeff, Rosemarijn Looije, and Mark A. Neerincx. 2016. How to improve human-robot interaction with Conversational Fillers. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE, 219–224. https://doi.org/10.1109/ROMAN.2016.7745134
  • Wills et al. (2016) Paul Wills, Paul Baxter, James Kennedy, Emmanuel Senft, and Tony Belpaeme. 2016. Socially contingent humanoid robot head behaviour results in increased charity donations. In ACM/IEEE International Conference on Human-Robot Interaction, Vol. 2016-April. IEEE, 533–534. https://doi.org/10.1109/HRI.2016.7451842
  • Wykowska et al. (2014) Agnieszka Wykowska, Eva Wiese, Aaron Prosser, and Hermann J. Mueller. 2014. Beliefs about the Minds of Others Influence How We Process Sensory Information. In PloS one.
  • Złotowski et al. (2015) Jakub Złotowski, Diane Proudfoot, Kumar Yogeeswaran, and Christoph Bartneck. 2015. Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction. International Journal of Social Robotics 7, 3 (jun 2015), 347–360. https://doi.org/10.1007/s12369-014-0267-6