The use of voice and chat-based conversational agents is on the rise. Facebook recently announced that the number of bots on the Messenger Platform exceeded 300,000 [boiteux2018messenger]. Gartner forecasts that by 2022 chatbots will be involved in 85 % of all customer service interactions [bharaj2017gartner]. And a recent survey by PwC reported that over 700 of 1,000 US participants use intelligent voice assistants, such as Apple Siri or Amazon Alexa, to facilitate their everyday tasks [pwc2018]. Hundreds of thousands of conversational agents are rapidly emerging in a diverse range of application domains. There are mainly two types of such agents [gao2019convai]: those that accomplish specific tasks for users (task-oriented) and those that offer users opportunities to talk about diverse topics or be entertained (chitchat). While agents of the former type are currently the most widespread, building truly natural interaction with the latter type is among the most challenging tasks due to its open-domain nature [Grudin2019chatbots]. While we do not specifically use the term “open-domain chitchat” in our discussions with users, we are primarily focused on the qualities of the second type.
Some researchers have investigated the perceptions, expectations, and concerns surrounding the use of conversational agents. Many previous studies pointed out the importance of naturalness, i.e. human-like qualities, in this technology [Luger2016badpa, brandtzaeg2017people, Zamora2017sorry, thies2017how, Jain2018chatbots, neururer2018perceptions]. The most frequently mentioned components of a conversational agent’s naturalness are: responding coherently with the preceding context [Luger2016badpa, thies2017how, Jain2018chatbots, neururer2018perceptions, Shum2018], anticipating user needs and questions [Luger2016badpa, thies2017how, Shum2018], understanding culture- or language-specific terms [Zamora2017sorry, neururer2018perceptions], facilitating input and response diversity [Luger2016badpa, thies2017how, Jain2018chatbots, Muresan2019chats, Shum2018], and developing a consistent personality [thies2017how, Jain2018chatbots, neururer2018perceptions, Shum2018]. Although researchers emphasized the importance of integrating emotional intelligence [brandtzaeg2017people, thies2017how, Zamora2017sorry, Shum2018], no prior work has focused specifically on user expectations of the emotional qualities of chatbots. To address this gap, we conducted a qualitative user study to deepen our understanding of user expectations, especially concerning the emotional skills users would expect from artificial conversational agents.
We present the results of 18 semi-structured interviews with potential technology users. In the following, we first review background work in affect research and survey related qualitative studies of conversational agents. We then describe our study design, methodology, and main findings based on 400 affinity notes. Finally, we discuss the implications of our work by identifying five design guidelines that can be useful for the development of emotionally aware chatbots.
2 Related Work
2.1 What is Emotion?
Emotion is a complex concept that lacks an established definition even among scholars [cabanac2002emotion]. We usually understand emotion as “a feeling deriving from one’s circumstances, mood, or relationships with others” [emotiondef]. According to Darwin, emotional expressions formed through the evolutionary process [darwin1998expression]. Originally serving as a protective or action-provoking mechanism, they later adapted to take on an important communicative function. People express emotions through a variety of means, including facial expressions, body language, and linguistic cues. This helps us understand how to react in a particular situation and how to treat the people around us. Emotions play a crucial role in social communication, allowing us to establish bonds with one another and build more meaningful relationships. Maslow listed social belonging and emotional connection among fundamental human needs [maslow1943theory]. One might think that when it comes to human-computer interaction, emotion is no longer that important, as we tend to treat this interaction as practical and operational. Reeves and Nass [reeves1996media] refute this conviction, first stating that by nature our response to any subject of interaction begins with an evaluation of whether it is good or bad. This evaluation is a fundamental part of our reactions to the content we face, helping us protect ourselves from harm by avoiding negative experiences and approaching subjects that promise pleasant ones. Human interaction with media and computers is no exception, which is why we refrain from watching horror movies late at night or seek entertaining video games when feeling bored [reeves1996media]. This premise generates increased scientific interest in understanding how these concepts translate to our interaction with artificial conversational agents.
The urgency of addressing this question from the user perspective also stems from the fact that previous qualitative works specifically emphasize emotional awareness as an aspect users currently desire from chatbots [brandtzaeg2017people, thies2017how, Zamora2017sorry].
2.2 Qualitative Studies of Chatbots
Given the growing popularity of conversational agents, a number of qualitative studies about this technology emerged. They mainly fall into two groups. Some researchers investigate user impressions of their current interactions with available chatbots, while others deliberately focus on future interaction possibilities and elicit user expectations and concerns in those contexts.
The first type of work mostly centers on users’ reasons for interacting with existing agents and their experience with them. Luger and Sellen [Luger2016badpa] and Brandtzaeg and Folstad [brandtzaeg2017people] investigated the factors motivating and impeding user adoption of chatbots. Zamora [Zamora2017sorry] explored these questions more specifically, focusing on user preferences for input modality and domains of use. Bentley et al. [Bentley2018] and Porcheron et al. [Porcheron2018] reported longitudinal studies of user experience with popular voice agents. Muresan and Pohl [Muresan2019chats] conducted a case study with first-time and long-term users of the Replika chatbot, focusing on how human cues affect user engagement with the agent. Jain et al. [Jain2018chatbots] and Cowan et al. [cowan_what_2017] analyzed interaction experience with chatbots specifically among first-time and infrequent users. Finally, Clark et al. [clark_what_2019] investigated what characteristics are important for human conversation and how they apply to conversations with artificial agents. According to these works, transactional purposes, efficiency, and productivity, such as the possibility of obtaining information faster than via other methods, are likely the major drivers of chatbot usage [Luger2016badpa, brandtzaeg2017people, Jain2018chatbots, Zamora2017sorry, clark_what_2019]. At the same time, lack of trust, reliability, and transparency constitute the main user concerns. People questioned the mechanisms employed by the agent to fulfill different tasks and worried that system failure might result in social embarrassment, for example by calling or messaging the wrong person [Luger2016badpa, cowan_what_2017]. They also felt reluctant to discuss sensitive topics such as finances and social media content with chatbots [Zamora2017sorry, cowan_what_2017].
Based on these conclusions, several of the aforementioned works questioned the appropriateness of the human-like metaphor for conversational agents [cowan_what_2017, Porcheron2018, clark_what_2019]. The supporting arguments for this view hold that the participants of these studies perceived the social aspects of conversational interaction as an immaterial part of a chatbot’s performance [clark_what_2019], saw little need for human-like behavior in order for an agent to address user tasks [cowan_what_2017], and did not treat the agent as a conversationalist [Porcheron2018]. However, as the authors themselves noted, participant views in these studies were grounded in the types of interaction facilitated by existing conversational agents. The way people perceive the technology in its current state and the way they would prefer it to operate may differ considerably. Focusing exclusively on the current user experience is thus limited: these studies did not consider users’ future goals and desires, which may give designers the false impression that users do not want such capabilities.
Contrary to this practice, Neururer et al. [neururer2018perceptions] interviewed and surveyed researchers from several relevant fields to determine the characteristics of authenticity in chatbots. Thies et al. [thies2017how] ran an exploratory Wizard-of-Oz experiment to understand what chatbot personality traits would be preferred by their target users. Both works point out the importance of strong social conversational skills and emotional awareness for future conversational agents.
2.3 Development of Emotionally Aware Chatbots
Affective computing, initiated by Picard [picard2000affective] in the mid-1990s, is an essential aspect of human-computer interaction research. For example, early work showed that computer-initiated emotional support, such as demonstrating elements of active listening, empathy, and sympathy, can help users overcome frustration and manage negative emotional states [klein2002computer]. Another study by Bickmore and Picard [Bickmore2005] established that even after a long course of interaction, users found a relational agent with deliberate social-emotional skills more respectful, appealing, and trustworthy than an equivalent task-oriented agent. Building on the previous experience of the affective computing community, emotionally aware conversational agents are likewise believed to bring higher efficiency and engagement to human-computer interaction [McDuff2018designing]. Recent progress in neural language modeling for response generation [vinyals2015neural]
has inspired a growing number of papers on introducing emotional awareness into neural network-based chatbots [asghar2018affective, zhou2018emotional, zhou2018mojitalk, Hu2018touch, Huber2018emotional, Zhong2019affect, song2019generating, Xie2020]. Several studies designed emotion-coping approaches by adjusting the neural network structure and the training objective function to make the model produce responses following a predefined strategy [asghar2018affective, Zhong2019affect, Xie2020]. Other works employed explicit indicators, such as an emoji, image, or emotion category, to inform the model how to regulate the emotional response [zhou2018emotional, zhou2018mojitalk, Hu2018touch, Huber2018emotional, song2019generating]. These papers mostly discuss technical approaches to incorporating emotional intelligence into chatbots; they do not explore what kind of emotional interaction the eventual technology users expect.
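To make the explicit-indicator idea concrete, the toy sketch below shows only the conditioning step: a discrete emotion category is embedded and concatenated to every token embedding of the decoder input, so that generation can be steered toward the requested emotion. All names, dimensions, and random embeddings here are our own illustrative assumptions, not taken from any of the cited models.

```python
import numpy as np

# Toy sketch of explicit emotion conditioning for a neural response
# generator: embed a discrete emotion category and concatenate it to each
# token embedding. Dimensions and vocabularies are illustrative only.
EMB_DIM, EMO_DIM = 8, 4
rng = np.random.default_rng(0)

word_emb = {w: rng.normal(size=EMB_DIM) for w in ["i", "am", "fine"]}
emo_emb = {e: rng.normal(size=EMO_DIM) for e in ["happy", "sad", "angry"]}

def conditioned_inputs(tokens, emotion):
    """Build emotion-conditioned decoder inputs: one (EMB_DIM + EMO_DIM)
    vector per token; the emotion part is identical across positions."""
    e = emo_emb[emotion]
    return np.stack([np.concatenate([word_emb[t], e]) for t in tokens])

x_happy = conditioned_inputs(["i", "am", "fine"], "happy")
x_sad = conditioned_inputs(["i", "am", "fine"], "sad")
```

In a full model these vectors would feed a trained decoder; here they only illustrate how the same utterance yields different network inputs under different target emotions.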
So far it remains unclear what kind of emotional behavior chatbots should establish to comply with user expectations. In this work, we aim to inform the neural conversation modeling field by eliciting insights from potential users of emotionally aware agents in a qualitative study.
3.1 Study Design
Understanding the purpose of emotional intelligence in chatbots will help designers and developers create conversational agents that provide an improved interaction experience. To explore user expectations and concerns about such agents, we conducted 18 semi-structured interviews with potential users. For our study, we recruited participants who were active smartphone and computer users without substantial prejudice against chatbots. Additionally, our recruitment strategy aimed for a diverse demographic profile of interview participants to capture a broad range of response patterns. All interviewees provided their consent for their data to be reported anonymously.
We recruited the participants through the snowball sampling method [Biernacki1981snowball]. In total, 18 fluent or native English speakers (10 female, 8 male) from various backgrounds took part in our study. Over half of the participants belonged to the teenage (10–19 years old, 17 %) and young adult (20–29 years old, 45 %) age groups, with the remaining participants almost equally distributed across four older age groups from 30–39 to 60–69 years old (38 % in total). Most of the participants (67 %) were nationals of European countries, while the others represented Asian, North American, and South American countries in roughly equal ratios. In the invitation email, we provided a brief description of our research and interview procedure and informed the recipients about the incentives. Each participant was offered a small gift as a token of appreciation right after the interview, and two of them received smart speakers after a draw among all participants.
Once they had agreed to take part in our study, the participants were asked to complete a basic demographic survey about their age group, nationality, and occupation. The semi-structured interviews that followed were organized either in person (11 cases) or via Skype video conference (7 cases). All of them took place between 2 and 23 October 2019, with each interview lasting about 40 minutes. In each session, the interviewer gradually developed the discussion, first probing the participant’s demographic and technology usage background. The first 15 minutes of each interview were devoted to having the participants reflect on their recent experience of social conversations and of interaction with chatbot technology, respectively. This part helped the participants draw parallels between their human-human and human-machine communication experience. In the following core part of the sessions, the interviewees were encouraged to reflect on what they expect from natural conversations with emotionally aware chatbots. This part of the discussion lasted 25 minutes on average. All interviews but one were audio-recorded with the participants’ consent, and all were accompanied by hand-written notes taken either by the interviewer or by the interviewer’s colleague.
3.4 Data Analysis
We used affinity diagramming [scupin1997kj] to analyze the interview content. First, the first author enriched the hand-written interview notes with missing comments and observations from the audio recordings. Meaningful quotations from the participants, as well as the researcher’s remarks based on the material, were prepared as affinity notes. During the preliminary analysis, the first author clustered all resulting affinity notes according to emerging themes and validated the result with the second author. Three large themes describing participants’ expectations of chatbots arose: naturalness, concerns, and application domains (see Fig. 1 (a)). The concept of emotion comprised a substantial part of naturalness and was also present in the other two themes. Overall, 400 affinity notes related to the concept of emotional awareness in chatbots, accounting for over half of all affinity notes in the initial diagram. We then examined these notes more closely. Specifically, the emotion-related notes were distributed into sub-clusters, each summarized with one representative sentence. The sub-clusters, in turn, were grouped under top-level categories. The resulting affinity diagram concerning emotional awareness of chatbots was reviewed together with the second author and refined to reach its final version (see Fig. 1 (b)), which is presented in the findings below.
4.1 Expectations of Emotional Intelligence in Chatbots
All participants of our study agreed that enabling more human-like behavior in conversational agents could facilitate the interaction. Sixteen of the 18 interviewees expressed varying degrees of interest in chatbots with enhanced emotional capabilities: seven participants felt highly enthusiastic about such agents, and the remaining nine showed moderate excitement. Their expectations largely complied with the established notion of emotional intelligence, which comprises self-awareness, self-regulation, motivation, empathy, and social skills [goleman1996emotional, goleman2009working]. As self-awareness and motivation rather refer to subjects endowed with consciousness, people attributed the other three qualities – empathy, social skills, and self-regulation – to their desired artificial conversational agents.
4.1.1 Empathy
Empathy is our ability to sense the feelings and emotions of others, take their perspective, and understand their needs and concerns [goleman2009working]. When describing their expectations of a chatbot’s emotional behavior, the participants highlighted two main components: recognition of the speaker’s emotional state and expression of emotion in accordance with the context. The principal desire was to feel understood by the chat agent and receive appropriate responses. As noted by U04: “It needs to sound as if it has emotions, not only one emotion for all times. For example, it could be sad or happy or something like that: maybe, happy when you’re happy and understanding when you’re sad.”
In addition to a straightforward way of treating the speaker’s emotion by explicitly referencing the feeling (e.g. “I see that you are frustrated.”), a number of more subtle approaches were discussed during the interviews. Several participants mentioned interjections, “phases that people have in a usual talk, like "am", "ah", "seems to be", "you know…"” (U09), as a way to express reactions, emotional states, and thought processes. Emojis and emoticons were also referenced as a notable way of revealing emotion in chat. For example, U17 commented: “I use them sometimes to convey the atmosphere of "smiling conversation".”
4.1.2 Social skills
Social skills concern the way we manage relationships with others. They include a broad range of competencies, from communicating smoothly and managing conflicts to cooperating and bonding with people [goleman2009working]. Speculating about their potential interaction, younger participants (below 30 years old) tended to be more open-minded about the social aspect of chatbots in everyday life. They enjoyed the idea of a conversational agent that could convey emotions during the dialog and presumed they would treat it as a friend. Interviewees felt excited about the possibility of engaging with chatbots and sharing their feelings, especially when feeling bored, lonely, or lacking motivation, as exemplified by the quote from U11: “Some people have only one person they are close to, so they might need another one. So for them, the [emotionally intelligent chatbot] would be very useful: not to feel alone and to actually feel like they are talking to someone and sharing something.”
Meanwhile, both younger and older participants expressed interest in social skills for task-oriented chatbots. From their perspective, such skills could improve their current experience in several domains by ensuring more appropriate responses and alleviating the embarrassment of talking to a new person.
4.1.3 Self-regulation
In relation to chatbots, the most frequently mentioned principles of self-regulation included trustworthiness and adaptability. A recurrent topic reflecting the anticipated development of interaction with the chatbot concerned its “familiarity level” (U06) with the user. Several participants commented that receiving overly positive replies from someone barely known would seem odd and awkward. Similar to relationship development with a newly met person, participants expected the chatbot to respect personal boundaries and gradually adjust to their style, motives, and language. Participant U06 voiced her concern about appropriate conversational style and the importance of social chitchat for her: “Maybe it’s different for my generation, but when I write an email or a message on WhatsApp, I always say ‘Bonjour …’ and some greetings. I think this is quite important.” Participant U08 further supported the idea with another example from her personal experience: “I really like that some software, it tries to learn my language…it will predict what I would like to say in a way I personally say. So, it adapts to my style.”
Depending on their needs and attitude towards natural language agents, some participants preferred the interaction to follow a more formal style, while others expected it to develop informally, similar to the way they communicate with their friends. For example, U17 welcomed the idea of developing a closer relationship with the chatbot: “For me, it would be an amazing idea to have a kind of an online personal friend. So, you always share some thoughts with your friend, but this one can be both your diary and at the same time a psychologist who can always listen to you.” By comparison, U14 preferred more formal communication: “…sometimes I find the service may be too cold. But, for example, when I was in the US for a bit, it was extremely warm and welcoming, to the point that I found it intrusive. So, yeah, I’d say it should be polite and understanding the problem I’m facing.”
4.2 Chatbots in the Role of a Friend
In our study, 10 out of 18 participants discussed the possibility of developing a friendship with a conversational agent, provided it could demonstrate sufficient qualities of emotional awareness. They agreed that the chatbot should adjust to the user’s emotional state, taking its prior knowledge of the user into consideration where possible. While this suggests a personalized approach, the participants concurrently described a number of emotional interaction patterns expected from the agent. The patterns mainly reflected the desired chatbot responses to basic human emotions [robinson2008brain], such as happiness, sadness, or anger, as well as several more complex interactions. We summarize these expected patterns in Table 1 and consider them in greater detail below.
During the analysis, we observed that male participants tended to comment more on the playful and entertaining aspects of the interaction, while female interviewees mostly emphasized the chatbot’s supportive abilities. Overall, the participants expected it to share their joyful moments, “ask what happened” (U17), and “be happy with them” (U02). In times of trouble, when feeling lonely or sad, the participants would anticipate understanding and compassion from the chatbot. U02 summarized these expectations as follows: “I guess, if you’re adding some excitement or frustration, then she [emotionally intelligent chatbot] should either be happy with you or try to make the voice more comforting.” Importantly, our participants would like chatbots to “provide feedback, but not just generic” (U16).
In some cases, potential users would like the conversational agent to express coaching and motivational qualities. According to them, chatbots should encourage users “to keep going” (U07) both literally, promoting more physical activity and helping to establish a healthy lifestyle, and figuratively, supporting them when dealing with everyday problems. U05 would appreciate it if a chatbot could assist him with behavior change: “It would be good if it acts as a coach who helps you avoid a bad habit or encourage you to exercise.” Several other participants would like chatbots to “educate users to manage their anger” (U01): “Maybe for me, a bot should calm you down when you’re angry. [It should] say, "Stop, I cannot talk with you like that. If you don’t calm down, I will turn off."” (U03). Turning to chatbots for inspiration and reassurance was another recurrently discussed topic: “…if you have to spend long hours there, alone, doing some experiments, then it can make a conversation with you, cheer you up, look at your problems, maybe give some advice. It’s a kind of a colleague that you might not have” (U02).
Aligned with previous findings [Zamora2017sorry, brandtzaeg2017people], our participants expressed eagerness to share their frustration and negative thoughts with the chatbot due to the non-judgmental nature of such interaction. They found it appealing to have someone always available to validate their anxiety and stress without condemning them. As noted by U18: “If it’s very natural, it can also be in the consulting domain…Consulting – sometimes emotionally, sometimes professionally, like therapy.” Curiously, just having an empathetic listener to vent to was not sufficient. From the participants’ perspective, the crucial part of this interaction scenario was to receive non-generic feedback from the chatbot, either advising the user how to overcome the problem or helping them take their mind off it by “starting another topic [for conversation]” (U04).
Table 1. Expected emotional interaction patterns: the chatbot’s response emotion for each user input emotion.
4.3 Emotionally Aware Chatbots in Targeted Domains
According to previous works on task-oriented chatbots, users try to engage in social chitchat with them, even though these agents were originally designed to operate in a limited target domain [Kopp2005max, Liao2018allwork, Yan2017]. For example, Yan et al. [Yan2017] reported that nearly 80 % of user utterances to an online shopping bot were chitchat queries. Since almost all of our study participants had previous experience interacting with this type of agent, they soon delved into discussing the emotional awareness of these chatbots. Many interviewees took a positive attitude towards emotionally aware chatbots in the customer service, health care, and educational domains. They expected that chatbots could potentially eliminate issues caused by human factors: computer agents are not subject to stress and tiredness and could always offer comforting advice to the client. In the case of customer service, this could ensure “more natural and pleasant” responses, so that “people would actually want to call customer service instead of googling their problem” (U11). For medical advice, several participants anticipated responses from the chatbot to be more attentive than those from “an over-worked, over-stressed doctor” (U15).
In the area of educational and professional training, several participants pointed out that conversational agents could make the services more accessible while expressing greater involvement and interest in the tutoring sessions. For both the health care and educational domains, some interviewees mentioned that clients might feel less embarrassed to share their questions with a chatbot than with an unknown person. For example, U14 mentioned: “I guess, for some medical issues people may be shy to actually talk to a real doctor…So, for this case it [emotionally intelligent chatbot] could be quite good.”
4.4 Three Pillars of User Concerns
In line with previous studies [Luger2016badpa, Zamora2017sorry, cowan_what_2017], the main factors causing user worries about conversational agents were uncertainty about the trustworthiness and reliability of the system, as well as the risk of private information exposure. A chatbot’s ability to treat emotions raised several additional topics that disturbed our interview participants. During the analysis, we identified three major categories describing user concerns about chatbots: monetary harm, social harm, and psychological harm.
4.4.1 Monetary harm
Predictably, financial damage primarily involved risks to the participants’ immediate personal means, such as bank accounts or social security numbers. People also felt apprehensive about the threat to employment opportunities should the technology reach sufficiently natural conversational abilities. The potential emotional awareness of chatbots further increased these concerns, as people feared that for intruders “it would be easier to influence you with emotion” (U05).
4.4.2 Social harm
Concerns about the consequences for users’ social status developed around the risks of misuse of sensitive information by chatbot operators. People questioned how the information they share with the agents would be stored and whether third parties could use it. They worried that, in case of disclosure, some pieces of data might be used against them and cause social embarrassment. Participant U11 questioned: “What if it remembers something you shouldn’t have said?” Participant U14 echoed her worry: “If there’s anything linked to some kind of psychology, I would be very scared of what is being kept [by the chatbot], because in the future you can be considered unbalanced, or whatever.”
Several participants also felt wary of the potentially addictive effect of highly human-like conversational agents. Similar to the way excessive smartphone usage negatively affects our social relations [Genc2018], they were concerned that users might get too obsessed with flawless “virtual friends” (U10) and isolate themselves from real human society. Participant U02 found this especially alarming for children: “I wouldn’t want children to use this technology, for them not to get used to talking to a computer all the time instead of real people.”
4.4.3 Psychological harm
Sometimes people develop an emotional attachment to objects and may experience anxiety and other negative emotions when facing the risk of losing these items [yap2019unpacking]. Our participants mentioned that people would very likely establish an affective connection with emotionally aware chatbots. In this case, a technical glitch or the agent’s discontinuation could cause strong user distress: “If some system or electricity failure happens, and the system gets reset, a person might not understand why it cannot remember anything anymore and feel very upset” (U02).
Another thought-provoking point arose from people’s experience with existing media resources. Several participants noted that some media adapt to the personal interests of their users and focus all the suggested content around them, possibly suppressing alternative views or unintentionally hiding “the best option” (U14) from the user. This may deceive users, leading them to get trapped in “their bubble” (U16), believing that everyone around adheres to the same beliefs. Some of our participants were concerned that, given their anticipated personalization features, artificial conversational agents may further exacerbate this problem and cause psychological discomfort for users. Participant U07 exemplified it with a personal anecdote: “I am also very worried …about the control the media has to shape my thinking, especially on Facebook. …It shows me posts that have the same point of view as other posts that I’ve read. I might read posts of some political area and then it will show me lots of similar posts. So, I might gradually start thinking that that’s the only point of view.”
This study has investigated user expectations and concerns about interaction with emotionally aware chat agents and revealed a number of insights to consider in chatbot development. Below, we list five essential design implications resulting from our findings, along with further research opportunities.
5.1 Design Implications
5.1.1 Endow chatbots with emotionality to enhance likability
Emotions form an essential part of human conversation. We use them in our daily chats with friends, family members, colleagues, retail assistants, and others. Emotional cues help us communicate our ideas more clearly, share experiences, and form relationships. Our findings demonstrate that users equally desire emotional awareness from chatbots to make the interaction more natural. Endowing conversational agents with emotional intelligence can help designers ensure a more pleasant user experience. This complies with the Media Equation theory [reeves1996media], which suggests that people apply the rules and conventions of social human interaction to computers. According to Reeves and Nass, users appear considerably more positive about a computer system and their interaction experience with it if the computer exhibits human-like qualities, for example being polite, cooperative, or showing personality traits. People perceive such technology as friendlier and more supportive and feel more comfortable with it.
5.1.2 Mirror positive, but carefully treat negative emotions
According to observations from social psychology, people tend to mimic each other’s emotional states during communication [Stevanovic2015]. Users expected chatbots to follow the same rule when handling positive emotions, for example, to share and reinforce user happiness. In contrast, when users experience negative feelings, they would prefer the agent to act more intelligently than by simply mirroring the input. Designers should equip agents with the ability to demonstrate attention and meaningful support to help users overcome negative sentiments. People would engage more eagerly with chatbots that demonstrate empathy and provide non-generic feedback.
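This asymmetric treatment of positive and negative emotions can be sketched as a simple response policy. The sketch below is purely illustrative and not part of our study materials: the sentiment labels and reply templates are invented, and a real agent would rely on a trained sentiment classifier and a generative response model.

```python
# Illustrative sketch (hypothetical labels and templates): mirror positive
# emotions, but respond to negative ones with acknowledgment and support
# rather than mirroring them back.

def respond(user_message: str, sentiment: str) -> str:
    """Choose a response strategy from a detected sentiment label."""
    if sentiment == "positive":
        # Mirror and amplify the user's positive state.
        return "That's wonderful to hear! Tell me more."
    if sentiment == "negative":
        # Do not mirror negativity; acknowledge it and offer support.
        return "I'm sorry you're going through that. Do you want to talk about it?"
    # Neutral fallback: stay attentive and keep the conversation going.
    return "I see. What happened next?"

print(respond("I got the job!", "positive"))
```

The key design choice is that the branch taken depends on the valence of the user's emotion, not on a single mirroring rule applied uniformly.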
5.1.3 Use implicit language markers to react to and express emotions
Chatbot designers should consider a variety of ways for chatbots to establish empathetic behavior when expressing and responding to emotions. Participants of our study brought up two alternatives to expressing emotions through words: interjections and emojis. This agrees with previous studies showing that conversational agents employing interjections and filler words were perceived as more natural and engaging by users [marge2010towards, cohn2019large]. Emotive interjections [Goddard2014] could enable chatbots to validate user emotions in a subtle and realistic manner, for example by saying Wow! to express positive surprise or Yikes! to acknowledge something bad and unexpected. Emojis and emoticons can likewise be employed by chatbots to express emotion and regulate the interaction, similarly to how people use them in computer-mediated communication with each other [derks2008emoticons].
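One lightweight way to realize this guideline is to decorate an already-generated reply with an interjection and an emoji matched to the detected emotion. The mapping tables below are invented examples, not a validated lexicon:

```python
# Illustrative sketch: augment a base reply with emotive markers.
# The emotion keys, interjections, and emoji choices are hypothetical.

INTERJECTIONS = {"joy": "Wow!", "surprise": "Oh!", "sadness": "Oh no..."}
EMOJIS = {"joy": "😄", "surprise": "😮", "sadness": "😔"}

def add_emotional_markers(reply: str, emotion: str) -> str:
    """Prepend an interjection and append an emoji for the given emotion."""
    interjection = INTERJECTIONS.get(emotion, "")
    emoji = EMOJIS.get(emotion, "")
    # Join only the non-empty parts, so unknown emotions leave the reply unchanged.
    return " ".join(part for part in (interjection, reply, emoji) if part)

print(add_emotional_markers("That sounds amazing.", "joy"))
# → "Wow! That sounds amazing. 😄"
```

In practice, such markers would be applied sparingly and tuned per user, since overuse can make the agent feel artificial rather than empathetic.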
5.1.4 Align with user style and language
In human dialogs, people tend to converge in linguistic behavior and word choices to achieve successful and favorable communication [Branigan2010, Pickering2004]. Judging by user expectations for chatbots’ self-regulation, the same principle applies to this type of human-computer interaction. Chatbot designers should ensure that agents adapt to their users by learning their profile and adopting the user’s preferred vocabulary and conversational style. This is in line with several earlier findings [Branigan2010, Thomas2018] suggesting that linguistic alignment of computers with users, both in word choices, i.e., what things are said, and response style, i.e., how these things are said, can promote positive user feelings towards the computer and make communication more engaging.
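Lexical alignment of this kind can be sketched as a toy substitution step: the agent tracks the user's word frequencies and, when its default reply contains a word with known synonyms, swaps in the variant the user actually uses. The synonym set and class below are invented for illustration; real alignment systems also adapt syntax and register, not just single words.

```python
# Illustrative sketch of lexical alignment (hypothetical synonym sets).
from collections import Counter

SYNONYMS = {"film": {"movie", "film", "flick"}}

class StyleAligner:
    def __init__(self) -> None:
        self.user_counts: Counter = Counter()

    def observe(self, user_message: str) -> None:
        """Record which words the user tends to use."""
        self.user_counts.update(user_message.lower().split())

    def align(self, reply: str) -> str:
        """Replace alignable words with the user's most frequent variant."""
        words = []
        for word in reply.split():
            variants = SYNONYMS.get(word.lower())
            if variants:
                best = max(variants, key=lambda v: self.user_counts[v])
                if self.user_counts[best]:
                    word = best  # only substitute if the user has used it
            words.append(word)
        return " ".join(words)

aligner = StyleAligner()
aligner.observe("i loved that flick last night")
print(aligner.align("Which film would you like to watch next?"))
# → "Which flick would you like to watch next?"
```

The guard `if self.user_counts[best]` keeps the agent's default wording until the user has actually exhibited a preference, so alignment follows the user rather than guessing.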
5.1.5 Maintain curiosity to sustain engagement
In a social conversation, curiosity provokes active listening and responding behavior, which is a premise for pleasant interactions [KASHDAN2006140, davis1982determinants]. Our study illustrated that people would value interaction with chatbots that demonstrate interest and involvement in users’ concerns. By being curious about the user and proactively asking for clarifications, chatbots can prove their attentiveness and better understand the issues the user faces, ensuring more appropriate responses. At the same time, people themselves have a natural passion for learning and gaining new knowledge and understanding [loewenstein1994psychology]. As reflected in our findings, the inability to understand how chatbots operate and handle the information shared by users caused considerable concern. Similarly, people worried that artificial agents would limit the informational content that might interest the users or let them learn about alternatives. These issues could arise if chatbots fail to satisfy user curiosity. By providing convincing answers to users’ why? and how? questions, conversational agents can explain themselves better and enhance user trust in the technology.
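The proactive-clarification behavior described above can be sketched as a simple decision rule: when the agent's understanding of the user's intent is uncertain, it asks a follow-up question instead of guessing. The confidence threshold and intent labels are invented parameters for illustration only:

```python
# Illustrative sketch: ask a clarifying question when intent confidence is
# low, instead of committing to an uncertain interpretation.
# The 0.6 threshold and intent names are hypothetical.

def next_turn(intent: str, confidence: float, threshold: float = 0.6) -> str:
    """Decide between acting on an intent and asking for clarification."""
    if confidence < threshold:
        # Show curiosity rather than guessing wrong.
        return "I'd like to understand better - could you tell me more about that?"
    return f"[handle intent: {intent}]"

print(next_turn("book_flight", 0.4))
```

Beyond better responses, the same mechanism signals attentiveness to the user, which participants in our study associated with pleasant interaction.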
5.2 Limitations and Future Work
Given actively progressing research and development efforts to enhance the emotional capabilities of conversational agents, this work aimed to understand how people envision their interaction with this technology, helping designers and developers create an improved user experience. We observed several gender- and age-related trends, but given the limited population sample, we cannot claim any strong dependencies. Future work could investigate this subject more closely by surveying a larger number of participants. In addition, our findings revealed the broadly expected emotional interaction patterns that stood out most prominently in the interview analysis. In this respect, follow-up research could extend the established principles both by considering more fine-grained emotional categories and by providing insights on how the emotional flow should develop over several conversational turns. Another direction for further research could consider how emotionality in chatbots can promote users’ social and emotional well-being.
In our study, we took the first step towards understanding user expectations and concerns about emotionally intelligent chat agents based on semi-structured interviews with 18 participants of diverse backgrounds. The findings revealed that most participants expect chatbots with integrated emotional and social skills to be more likable, attentive, and pleasant to interact with. We described how users want emotional interaction with these chatbots to develop to fulfill their emotional needs and which application domains they foresee as the most beneficial for this technology. We also identified the major factors causing user concerns about emotional agents. The insights originating from the user interviews further yielded five design guidelines for the development of emotionally intelligent conversational agents and several promising directions for future research.
This project has received funding from the Swiss National Science Foundation (Grant No. 200021_184602). The authors also express their gratitude to all the participants for sharing their ideas, and to Anuradha Welivita Kalpani, Kavous Salehzadeh Niksirat, and Yubo Xie for assisting with note-taking during the interviews.