Verbal Disinhibition towards Robots is Associated with General Antisociality

The emergence of agentic technologies (e.g., robots) in increasingly public realms (e.g., social media) has revealed surprising antisocial tendencies in human-agent interactions. In particular, there is growing indication of people's propensity to act aggressively towards such systems - without provocation and unabashedly so. Towards understanding whether this aggressive behavior is anomalous or whether it is associated with general antisocial tendencies in people's broader interactions, we examined people's verbal disinhibition towards two artificial agents. Using Twitter as a corpus of free-form, unsupervised interactions, we identified 40 independent Twitter users who tweeted abusively or non-abusively at one of two high-profile robots with Twitter accounts (TMI's Bina48 and Hanson Robotics' Sophia). Analysis of 50 of each user's tweets most proximate to their tweet at the respective robot (N=2,000) shows people's aggression towards the robots to be associated with more frequent abuse in their general tweeting. The findings thus suggest that disinhibition towards robots is not necessarily a pervasive tendency, but rather one driven by individual differences in antisociality. Nevertheless, such unprovoked abuse highlights a need for attention to the reception of agentic technologies in society, as well as the necessity of corresponding capacities to recognize and respond to antisocial dynamics.

1. Introduction

Agentic technologies – from disembodied AIs, to virtual agents, to highly humanlike robots – are increasingly pervading public domains. Disembodied AI assistants, such as Apple’s Siri and Amazon’s Alexa, have been available in consumer markets for nearly a decade. Consumer- and enterprise-oriented robotic platforms, particularly those geared towards social engagement (e.g., Anki’s Cozmo, Ugobe’s Pleo, and Softbank’s Pepper), have already achieved moderate commercial success (see, for example: goo.gl/icsfYh) and are beginning to emerge in public spaces around the globe (for example: goo.gl/SwJYBY, goo.gl/UoX7oB, and goo.gl/x3v42f). Furthermore, although virtual agents remain largely within academic settings, they show substantial potential for widespread public deployment across numerous industries, including education (e.g., (Hew and Cheung, 2010; Monahan et al., 2018)), medicine (e.g., doctor-patient communication (Bickmore et al., 2007) and patient assistance (Bickmore et al., 2009)), and therapy (e.g., evaluation (Lucas et al., 2015) and counseling (Lisetti et al., 2013)).

Figure 1. David Smith’s and Frauke Zeller’s hitchhiking robot, “hitchBOT”. Shown is hitchBOT’s original embodiment (left) and hitchBOT’s decapitated and literally dis-armed remains after being vandalized in Philadelphia (right).

1.1. Aggression in Human-Agent Interactions

A natural result of their increased presence is that artificial agents are increasingly available for free-form, unsupervised interactions with the general public. Observation of interactions in these more naturalistic settings have, in turn, brought to light people’s apparent aggression toward agentic technologies. For example:

  • Ugobe’s Pleo: In 2007, DVICE released a video (https://youtu.be/pQUCd4SbgM0) of a couple of staff members subjecting a Pleo robot to a series of abusive tests (including hitting the robot, smashing it against a table, and strangling it), which ultimately resulted in the robot’s “death”. Though the tests were well-reasoned (intended to reveal how Pleo acts in and responds to certain situations), the staffers’ laughter throughout the abuse reflects a lack of empathy – despite the robot exhibiting several responses designed to elicit empathetic responding. Moreover, the video’s metadata, which shows it to have garnered several hundred explicit likes, suggests that the staffers were not alone in their amusement.

  • Smith and Zeller’s hitchBOT: In 2014, researchers David Smith and Frauke Zeller launched a social experiment with their “hitchBOT” – a robot designed to navigate and travel substantial distances by hitchhiking. After two initial deployments (traveling across Canada, from Halifax to Victoria; as well as around Germany), hitchBOT was decapitated just two weeks into its deployment in the United States (see Figure 1).

  • Microsoft’s Tay: In 2016, Microsoft launched a similarly ill-fated social experiment – deploying a chatbot (“Tay”) it had developed via Twitter. Within 16 hours of its release, Tay, which was designed to learn from its interactions, morphed from its initial “cheery teenage girl” persona into a sexist, genocidal racist – a direct result of the deluge of abuse that people directed at the bot (https://goo.gl/nFEfS1).

Similar observations have appeared in academic discourse as well. Verbal abuse comprises a substantial portion of people’s commentary toward artificial agents, with observed frequencies ranging from 10% (e.g., (De Angeli and Brahnam, 2008)) to over 40% (e.g., (Strait et al., 2017)). In interactions with physically embodied agents, verbal abuse readily escalates to physical violence, including kicking, punching, and slapping (Brscić et al., 2015; Salvini et al., 2010). Furthermore, the aggression occurs with or without supervision. For example, in a supervised deployment of a virtual agent in educational settings, nearly 40% of students were abusive toward the agent, employing, in particular, hypersexualizing and overtly dehumanizing commentary (e.g., “shut up u hore”, “want to give me a blow job”, “are you a lesbian?”; (Veletsianos et al., 2008)).

This deviation from socially normative behavior is not particularly surprising when considered alongside broader experimental research, which reflects a gap between the degree to which people empathize with artificial agents and the degree to which they empathize with other people (e.g., (Bartneck et al., 2005; Rosenthal-von der Pütten et al., 2013)). Specifically, while there is ample evidence that people treat agentic technologies like they do people (the “media equation” (Reeves and Nass, 1996)), the treatment is not equivalent. For example, people’s empathy towards a robot monotonically decreases from androids (highly humanlike robots) to robots of more mechanomorphic appearances (Riek et al., 2009). That is, the less human an agent seems, the less people empathize. People also exhibit less empathy when observing a robot’s (versus a person’s) abuse (Rosenthal-von der Pütten et al., 2013), more readily engage in the abuse of a robot (versus of a person; (Bartneck et al., 2005)), and are generally unmoved by a robot’s pleas for sympathy (e.g., (Briggs et al., 2015; Brscić et al., 2015; Jung et al., 2015; Tan et al., 2018)).

1.2. Implications & Considerations

These antisocial tendencies (aggression toward, and limited empathy for, agentic technologies) are especially problematic for two reasons in particular. First, while aggression may not necessarily pose harm to a nonhuman target, aggression in the context of multi-party interactions negatively impacts bystanders who are witness to the abuse (Zapf et al., 2011). Second, aggression toward humanlike robots – which embody identity characteristics (e.g., gender) – may facilitate subsequent aggression toward people who share identity characteristics with the abused robots. For example, stereotypic abuse of a female-gendered robot (e.g., via sexualization) may reinforce stereotypes the aggressor has of women, resulting in greater expression of bias in subsequent interactions with women.

It is thus critical for artificial agents to be able to respond to manifestations of aggression if and when it arises. To respond, however, requires that the agent has the capacity to recognize aggression. And to accurately and reliably recognize aggression requires, first, identification of the relevant information channels and cues that communicate aggression. To that end, recent work has identified a range of associated factors (e.g., the agent’s gendering (Brahnam and De Angeli, 2012), racialization (Strait et al., 2018), and size (Lucas et al., 2016)).

Not all people, however, exhibit aggressive tendencies toward robots. For example, deployment of a delivery robot in medical settings showed that while some staff treated the robot poorly and locked it away when they could, others treated the robot relatively well, using it to make their daily routines more efficient (Mutlu and Forlizzi, 2008). Indeed, the majority of people neither condone (Tan et al., 2018) nor exhibit aggression themselves, with prevalence estimates ranging from the comparatively low rates observed for explicit physical abuse (Brscić et al., 2015) to the higher, but still minority, rates observed for verbal aggression (Strait et al., 2017).

1.3. Present Work

Thus, towards better understanding individual differences in engagement in the aggressive treatment of agentic technologies, we examined the relationship between people’s verbal abuse towards two robots – TMI’s Bina48 and Hanson Robotics’ Sophia (see Figure 2) – and verbally aggressive tendencies in their interactions with other people. Specifically, we sought to determine whether people’s aggression toward the given robots is spontaneous or whether it is consistent with a broader pattern of aggression. That is, is aggression towards robots a general phenomenon, or does it align with individual differences in prosociality (or rather, the lack thereof)?

To acquire data representative of more naturalistic (free-form, unsupervised) interactions with robots than what is available in controlled laboratory settings, we elected to scrape Twitter for commentary directed towards two robots with active accounts on Twitter. For greater comparability to recent literature (e.g., (Sánchez Ramos et al., 2018; Strait et al., 2017, 2018)), we utilized robot targets (versus other categories of agentic systems). Towards mitigating associations stemming from any particular embodiment, we utilized two targets.

From people’s tweets at the two robots, we identified 40 distinct Twitter users (20 per robot) who tweeted abusively or non-abusively at the given robot. We thereby effected a quasi-manipulation of user type (two levels: abusive versus non-abusive towards robots) via identification of 10 abusive and 10 non-abusive users per robot. We then scraped the 50 of each user’s tweets closest (in time) to their originating tweet at the robot in order to evaluate the association between aggression towards the robots and the prevalence of abuse in users’ broader tweeting.

Figure 2. The two robots involved – TMI’s Bina48 (left) and Hanson Robotics’ Sophia (right).

2. Method

We conducted an online, quasi-experimental evaluation of the association between aggression in human-robot interactions (HRI) and aggression in human-human social dynamics.

2.1. Design

Given indications from existing literature that abusive human-agent interactions more frequently manifest in free-form, unsupervised contexts, we utilized Twitter (which hosts accounts for several publicized robot platforms such as Hanson Robotics’ Sophia) as a source of similar interaction data. Specifically, given greater disinhibition in online spaces (Suler, 2004), we expected to better capture abusive interactions that may not arise in more controlled contexts. Furthermore, the interaction modality enables more naturalistic human-robot interactions than traditional laboratory settings (Sabanovic et al., 2006), which may better capture the public’s perceptions of emergent platforms.

Here we defined “abusive” as any content that is dehumanizing in nature. Specifically, a tweet was coded as abusive if it contained content that was objectifying (including overt sexualization (Moradi and Huang, 2008) and ambivalent sexism, encompassing hostile and benevolent sexism (Glick and Fiske, 1996)), racist (e.g., evocative of race-based stereotypes (Allport, 1954)), generally offensive (e.g., calling the robot stupid (Brscić et al., 2015)), and/or violent (verbally hostile or threatening physical violence) towards the given agent.
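To make the rubric concrete, the following is a minimal sketch (in Python; not the authors’ annotation tool) of how a single human coder’s judgment under this scheme could be represented, with the overall binary code derived as the logical OR of the four dehumanization categories. The class and field names are illustrative assumptions.

```python
# Minimal sketch of the coding rubric; category judgments are made by a human
# coder, and the binary "abusive" code is the OR of the four categories.
from dataclasses import dataclass

@dataclass
class AbuseCoding:
    objectifying: bool = False   # incl. overt sexualization, hostile/benevolent sexism
    racist: bool = False         # evocative of race-based stereotypes
    offensive: bool = False      # e.g., calling the robot stupid
    violent: bool = False        # verbally hostile or threatening physical violence

    @property
    def abusive(self) -> int:
        """Binary code: 1 if any dehumanizing category applies, else 0."""
        return int(self.objectifying or self.racist or self.offensive or self.violent)

# Example: a tweet judged offensive (but not otherwise abusive) is coded 1.
print(AbuseCoding(offensive=True).abusive)  # -> 1
```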

2.2. Manipulation

We effected a quasi-manipulation of user type (abusive versus non-abusive) via selection of the users (20 per robot, with 10 abusive and 10 non-abusive each). To identify the users, we scraped all available Twitter mentions of Bina48 and Sophia (on March 22, 2018). A total of 9,497 tweets were returned (648 at @iBina48 and 8,849 at @RealSophiaRobot) – a subset of which (see Table 1) were then coded by a research assistant, blind to the research questions, on a single, binary dimension: whether a given tweet contains abusive content (1) or not (0).

Source             Mentions    Coded    Retained
@iBina48                648      648
@RealSophiaRobot      8,849

Table 1. Source information from which the quasi-manipulation of user type (abusive versus non-abusive) was effected. “Mentions”, “Coded”, and “Retained” refer to the number of tweets scraped, analyzed, and retained for selection of the 40 users.

A threshold of 1,000 tweets for coding was set a priori based on existing literature (using the lowest frequency of abuse reported in online contexts – 10% of commentary (De Angeli and Brahnam, 2008)). Although the expected proportion of abusive commentary (100 tweets) exceeds the number of abusive users needed (10), we set a higher threshold in anticipation of a lower frequency of abusive commentary (e.g., due to content moderation by the account managers) and loss of data (e.g., discarding of repeat tweets from the same user). The criteria for retention were as follows:

  • Independence: We aimed to identify independent users; thus, where a user had tweeted at the robot multiple times, all but one randomly selected tweet were excluded. In addition, tweets that were replies to other users were excluded.

  • Decipherability: Any tweets that were indecipherable (e.g., due to lack of context) were excluded. For example, the tweet – “iBina48: Cyber space” #pii2013 – was excluded.

From the tweets remaining post-coding, we randomly selected 20 users (10 with an abusive and 10 with a non-abusive tweet at the given robot) for each robot.
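For illustration, the following Python sketch shows how the retention criteria and the random selection of users could be implemented over a table of coded mentions. It is a hypothetical reconstruction, not the authors’ pipeline; the column names (user, is_reply, decipherable, abusive, robot) are assumptions.

```python
# Illustrative sketch of the user-selection procedure described above.
import pandas as pd

def select_users(coded: pd.DataFrame, per_group: int = 10, seed: int = 0) -> pd.DataFrame:
    # Decipherability and independence criteria: drop indecipherable tweets and
    # replies to other users, then keep one randomly chosen tweet per user.
    pool = coded[coded["decipherable"] & ~coded["is_reply"]]
    pool = pool.sample(frac=1, random_state=seed).drop_duplicates(subset="user")
    # Quasi-manipulation of user type: sample abusive and non-abusive users
    # separately for each robot (10 of each per robot in the present study).
    selected = (
        pool.groupby(["robot", "abusive"], group_keys=False)
            .apply(lambda g: g.sample(n=min(per_group, len(g)), random_state=seed))
    )
    return selected.reset_index(drop=True)
```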

2.3. Data Acquisition & Annotation

For each of the 40 users selected (to effect the quasi-manipulation of user type), we scraped the user’s 50 tweets most proximate to and centered around (i.e., 25 pre- and 25 post-) the user’s originating tweet at one of the robots. This scraping was completed between February 22 and March 02, 2018 and yielded a total of 2,000 tweets for analysis. Each of the tweets was coded on a binary dimension (0 or 1) for the presence of abusive content, which was then used to compute an overall frequency of abuse for each of the 40 users. As verification of the coding reliability, a second coder independently coded 10% of the tweets. Calculation of Cohen’s κ confirmed high inter-rater reliability.
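As a rough sketch of the two computations involved (each user’s abuse frequency over their 50 surrounding tweets, and inter-rater reliability on the double-coded subset), the following assumes the coded tweets are available in flat files with hypothetical column names; it is not the authors’ analysis code.

```python
# Per-user abuse frequency and Cohen's kappa on the double-coded 10% subset.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

tweets = pd.read_csv("coded_tweets.csv")             # assumed columns: user, code (0/1)
abuse_freq = tweets.groupby("user")["code"].mean()   # proportion of abusive tweets per user

overlap = pd.read_csv("double_coded.csv")            # assumed columns: coder1, coder2 (0/1)
kappa = cohen_kappa_score(overlap["coder1"], overlap["coder2"])
print(f"Cohen's kappa = {kappa:.2f}")
```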

3. Results

Similar to rates reported in the literature on verbal disinhibition towards chatbots (e.g., (De Angeli and Brahnam, 2008)), dehumanizing content comprised approximately 10% of users’ Twitter-based interactions overall.

To evaluate the association between user type (abusive versus non-abusive towards robots) and the frequency of abusive content in a user’s general tweeting, we conducted an analysis of variance (ANOVA) with significance evaluated at a standard α-level of .05. Due to the different racializations of the two robots (Bina48 is racialized as Black, Sophia is racialized as White), we included robot racialization as a covariate in the statistical model.

The results of the ANOVA showed a significant main effect of user type (abusive versus non-abusive towards robots) on the frequency of dehumanizing content in users’ broader Twitter communications. Specifically, the users identified in the coding process as abusive were much more frequently abusive in their general tweeting than were the non-abusive users. We additionally confirmed, via a post-hoc power analysis using the observed effect size, that the study was adequately powered to capture the given differences.
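A hedged sketch of how such an analysis could be run (an ANOVA-style model with robot racialization as a covariate, Cohen’s d for the group difference, and a post-hoc power analysis) is given below. Column names and file paths are assumptions, and the conversion f ≈ d/2 applies to the two-group case; this is not the authors’ code.

```python
# Sketch of the reported analysis: ANOVA with a covariate, effect size, power.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.power import FTestAnovaPower

users = pd.read_csv("user_abuse_freq.csv")  # assumed columns: user_type, robot, abuse_freq

# ANOVA of abuse frequency by user type, with robot racialization as a covariate.
model = smf.ols("abuse_freq ~ C(user_type) + C(robot)", data=users).fit()
print(sm.stats.anova_lm(model, typ=2))

# Cohen's d (pooled-SD formulation) for abusive vs. non-abusive users.
a = users.loc[users.user_type == "abusive", "abuse_freq"]
b = users.loc[users.user_type == "non-abusive", "abuse_freq"]
pooled_sd = (((len(a) - 1) * a.var() + (len(b) - 1) * b.var()) / (len(a) + len(b) - 2)) ** 0.5
d = (a.mean() - b.mean()) / pooled_sd

# Post-hoc power for the observed effect size (Cohen's f = d / 2 for two groups).
power = FTestAnovaPower().power(effect_size=d / 2, nobs=len(users), alpha=0.05, k_groups=2)
print(f"d = {d:.2f}, power = {power:.2f}")
```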

4. Discussion

4.1. Summary of Findings

The present study served as a preliminary investigation into individual differences in aggression towards agentic technologies. Via an analysis of public tweet data, we found a significant association between people’s antisociality and their abuse of two robots.

Given the methods used (wherein we evaluated the prevalence of aggression in each user’s 50 surrounding tweets), there are two possible interpretations of this association: (1) a person’s aggression towards the robots is associated with an antisocial personality (i.e., a relatively unchanging demeanor); or (2) a person’s aggression towards the robots arose during a period of general negative affect (i.e., temporally constrained aggression).

4.2. Implications

Assuming the first interpretation (aggression towards robots is associated with an antisocial personality), manifestations of aggression might be predicted by tracking indicative personality characteristics and averted by proactively avoiding interlocutors identified as generally antisocial. Assuming the second interpretation (the aggression resulted from negative affect), tracking interlocutors’ general affect (e.g., positive, neutral, or negative) may facilitate prediction of potential aggression. In this case, manifestations of aggression might be mitigated via targeted intervention to regulate the aggressor’s emotional state.

Assuming either interpretation, the findings indicate, in particular, that in addition to linguistic content analysis, construction and maintenance of models of interlocutors encountered may be important to the prediction and recognition of aggression in HAIs. For example, if an interlocutor shows general aggressive tendencies, this may be a valuable heuristic toward deciding, subsequently, whether given data (e.g., linguistic utterance) is likely aggressive or not. Or, if an interlocutor exhibits emotional agitation, recognition by the agent could cue an intervention such as an exercise in emotion regulation.
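As an illustration of the first heuristic, the following sketch (an assumption of this discussion, not a system described in the paper) combines an utterance-level classifier score with a per-user prior derived from the interlocutor’s past abuse rate via Bayes’ rule.

```python
# Combining an interlocutor model (prior abuse rate) with an utterance-level
# aggression score; a minimal sketch under the stated assumptions.
def p_aggressive(utterance_score: float, user_abuse_rate: float) -> float:
    """
    utterance_score: classifier's P(utterance is aggressive) from content alone,
                     implicitly computed under a neutral base rate of 0.5.
    user_abuse_rate: proportion of the user's prior messages coded as abusive,
                     used as a prior on this user producing aggression.
    """
    prior = min(max(user_abuse_rate, 0.01), 0.99)  # clamp to avoid degenerate priors
    odds = (utterance_score / (1 - utterance_score)) * (prior / (1 - prior))
    return odds / (1 + odds)

# The same borderline utterance is read differently for a generally abusive
# user (prior 0.40) than for a generally non-abusive one (prior 0.05).
print(p_aggressive(0.6, 0.40))  # ~0.50
print(p_aggressive(0.6, 0.05))  # ~0.07
```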

More generally, the findings underscore a need for respective agent capacities (to recognize and respond to aggression). This is especially relevant in multi-party contexts, wherein aggressive treatment of an artificial agent may have broader impacts both on immediate bystanders and on subsequent interactions involving the aggressor (e.g., facilitation of the dehumanization of people sharing identity characteristics with the targeted agent). However, responding to aggression requires, first, that the agent can reliably detect it when it manifests. And the present findings indicate that detection may be significantly facilitated by modeling individual interlocutors in addition to explicit conversational content.

4.3. Limitations & Avenues for Future Research

There are a number of limitations to the present study, which serve to highlight avenues for future research. In particular, we conducted an online evaluation of the association between people’s general degree of aggression and their aggression towards two robots. However, more representative interaction settings (i.e., greater ecological validity), as well as broader sampling across platforms (i.e., more than two robots) and agent types (i.e., chatbots, virtual agents, and robots), are needed to understand how the findings extend to actual human-robot interactions and, more generally, to human-agent interactions. In addition, given the two possible interpretations of the present findings (association with personality and/or affect), further research is needed to determine which interpretation – and, correspondingly, which approach to responding – is appropriate (if not both).

5. Conclusions

Towards understanding individual differences in aggressive tendencies in human-agent interactions, we examined people’s verbal disinhibition in their tweeting at two robots and in their broader interactions. Using Twitter as a corpus of free-form, unsupervised interactions, we identified 40 independent Twitter users who tweeted abusively or non-abusively at one of two robots with Twitter accounts (TMI’s Bina48 and Hanson Robotics’ Sophia). Analysis of each user’s 50 tweets most proximate to their tweet at the robot shows that people’s abuse of the robots aligns with more frequent abuse in their general tweeting. The findings thus suggest that disinhibition towards robots is not necessarily a pervasive tendency, as it is significantly associated with general antisocial behavior. While interpretation of the findings is constrained by methodological limitations, such unprovoked abuse nevertheless highlights a need for particular attention to the social capacities of agentic systems and suggests that maintenance of a user-specific model may facilitate prediction and interpretation of aggression in HAIs.

References

  • Allport (1954) Gordon W Allport. 1954. The nature of prejudice. (1954).
  • Bartneck et al. (2005) Christoph Bartneck, Chioke Rosalia, Rutger Menges, and Inèz Deckers. 2005. Robot abuse – a limitation of the media equation. In Proc. Interact 2005: Workshop on Agent Abuse.
  • Bickmore et al. (2009) Timothy W Bickmore, Laura M Pfeifer, and Brian W Jack. 2009. Taking the time to care: empowering low health literacy hospital patients with virtual nurse agents. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM, 1265–1274.
  • Bickmore et al. (2007) Timothy W Bickmore, Laura M Pfeifer, and Michael K Paasche-Orlow. 2007. Health document explanation by virtual agents. In International Workshop on Intelligent Virtual Agents. Springer, 183–196.
  • Brahnam and De Angeli (2012) Sheryl Brahnam and Antonella De Angeli. 2012. Gender affordances of conversational agents. Interacting with Computers (2012).
  • Briggs et al. (2015) Gordon Briggs, Ian McConnell, and Matthias Scheutz. 2015. When robots object: Evidence for the utility of verbal, but not necessarily spoken protest. In International Conference on Social Robotics. Springer, 83–92.
  • Brscić et al. (2015) Drazen Brscić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda. 2015. Escaping from children’s abuse of social robots. In Proc. HRI.
  • De Angeli and Brahnam (2008) Antonella De Angeli and Sheryl Brahnam. 2008. I hate you! Disinhibition with virtual partners. Interacting with Computers (2008).
  • Glick and Fiske (1996) Peter Glick and Susan T Fiske. 1996. The ambivalent sexism inventory: Differentiating hostile and benevolent sexism. Journal of Personality and Social Psychology (1996).
  • Hew and Cheung (2010) Khe Foon Hew and Wing Sum Cheung. 2010. Use of three-dimensional (3-D) immersive virtual worlds in K-12 and higher education settings: A review of the research. British journal of educational technology 41, 1 (2010), 33–55.
  • Jung et al. (2015) Malte F Jung, Nikolas Martelaro, and Pamela J Hinds. 2015. Using robots to moderate team conflict: the case of repairing violations. In Proc. HRI.
  • Lisetti et al. (2013) Christine Lisetti, Reza Amini, Ugan Yasavur, and Naphtali Rishe. 2013. I can help you change! an empathic virtual agent delivers behavior change health interventions. ACM Transactions on Management Information Systems (TMIS) 4, 4 (2013), 19.
  • Lucas et al. (2015) Gale M Lucas, Jonathan Gratch, Stefan Scherer, Jill Boberg, and Giota Stratou. 2015. Towards an affective interface for assessment of psychological distress. In Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on. IEEE, 539–545.
  • Lucas et al. (2016) Houston Lucas, Jamie Poston, Nathan Yocum, Zachary Carlson, and David Feil-Seifer. 2016. Too big to be mistreated? Examining the role of robot size on perceptions of mistreatment. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 1071–1076.
  • Monahan et al. (2018) Shannon Monahan, Emmanuel Johnson, Gale Lucas, James Finch, and Jonathan Gratch. 2018. Autonomous Agent that Provides Automated Feedback Improves Negotiation Skills. In International Conference on Artificial Intelligence in Education. Springer, 225–229.
  • Moradi and Huang (2008) Bonnie Moradi and Yu-Ping Huang. 2008. Objectification theory and psychology of women: A decade of advances and future directions. Psychology of Women Quarterly (2008).
  • Mutlu and Forlizzi (2008) Bilge Mutlu and Jodi Forlizzi. 2008. Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction. In Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction. ACM, 287–294. https://doi.org/10.1145/1349822.1349860
  • Reeves and Nass (1996) Byron Reeves and Clifford Nass. 1996. The media equation: How people treat computers, television, and new media like real people and places.
  • Riek et al. (2009) Laurel D Riek, Tal-Chen Rabinowitch, Bhismadev Chakrabarti, and Peter Robinson. 2009. Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In Proc. ACII.
  • Rosenthal-von der Pütten et al. (2013) Astrid M Rosenthal-von der Pütten, Nicole C Krämer, Laura Hoffmann, Sabrina Sobieraj, and Sabrina C Eimler. 2013. An experimental study on emotional reactions towards a robot. IJSR (2013).
  • Sabanovic et al. (2006) Selma Sabanovic, Marek P Michalowski, and Reid Simmons. 2006. Robots in the wild: Observing human-robot social interaction outside the lab. In IEEE International Workshop on Advanced Motion Control.
  • Salvini et al. (2010) Pericle Salvini, Gaetano Ciaravella, Wonpil Yu, Gabriele Ferri, Alessandro Manzi, Barbara Mazzolai, Cecilia Laschi, Sang-Rok Oh, and Paolo Dario. 2010. How safe are service robots in urban environments? Bullying a Robot. In Proc. RO-MAN.
  • Sánchez Ramos et al. (2018) Ana C Sánchez Ramos, Virginia Contreras, Alejandra Santos, Cynthia Aguillon, Noemi Garcia, Jesus D Rodriguez, Ivan Amaya Vazquez, and Megan K Strait. 2018. A Preliminary Study of the Effects of Racialization and Humanness on the Verbal Abuse of Female-Gendered Robots. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 227–228.
  • Strait et al. (2017) Megan Strait, Cynthia Aguillon, Virginia Contreras, and Noemi Garcia. 2017. Online Social Commentary Reflects an Appearance-Based Uncanny Valley, a General Fear of a “Technology Takeover”, and the Unabashed Sexualization of Female-Gendered Robots. In Proc. RO-MAN.
  • Strait et al. (2018) Megan Strait, Ana Sánchez Ramos, and Virginia Contreras. 2018. Robots Racialized in the Likeness of Marginalized Social Identities are Subject to Greater Dehumanization than those racialized as White. In Proceedings of the 27th IEEE International Conference on Robot and Human Interactive Communication. IEEE.
  • Suler (2004) John Suler. 2004. The online disinhibition effect. Cyberpsychology & behavior (2004).
  • Tan et al. (2018) Xiang Zhi Tan, Marynel Vázquez, Elizabeth J Carter, Cecilia G Morales, and Aaron Steinfeld. 2018. Inducing Bystander Interventions During Robot Abuse with Social Mechanisms. In Proc. HRI.
  • Veletsianos et al. (2008) George Veletsianos, Cassandra Scharber, and Aaron Doering. 2008. When sex, drugs, and violence enter the classroom: Conversations between adolescents and a female pedagogical agent. Interacting with Computers (2008).
  • Zapf et al. (2011) Dieter Zapf, Jordi Escartín, Ståle Einarsen, Helge Hoel, and Maarit Vartia. 2011. Empirical findings on prevalence and risk groups of bullying in the workplace. Bullying and harassment in the workplace: Developments in theory, research, and practice 2 (2011), 75–106.