Animo: Sharing Biosignals on a Smartwatch for Lightweight Social Connection

04/12/2019 ∙ by Fannie Liu, et al. ∙ Carnegie Mellon University, Snap Inc.

We present Animo, a smartwatch app that enables people to share and view each other's biosignals. We designed and engineered Animo to explore new ground for smartwatch-based biosignals social computing systems: identifying opportunities where these systems can support lightweight and mood-centric interactions. In our work we develop, explore, and evaluate several innovative features designed for dyadic communication of heart rate. We discuss the results of a two-week study (N=34), including new communication patterns participants engaged in, and outline the design landscape for communicating with biosignals on smartwatches.




1. Introduction

Figure 1. User experience of hypothetical dyad: Staci (left) and Mario (right) interacting via Animo.

New forms of technology enable us to share a variety of personal data with each other, including our own heart rate. By wearing watches equipped with optical sensors, people can record and share their heart rate on their phones or computers while going for a run or streaming their gameplay (Robinson et al., 2016; Curmi et al., 2013). Recent research shows that sharing biosignals like our heart rate can provide social cues about our emotions or activities. By using these expressive biosignals to convey our internal experiences, we can potentially become more aware and understanding of each other (Liu et al., 2017b; Howell et al., 2016; Slovák et al., 2012; Hassib et al., 2017).

While research suggests that expressive biosignals can facilitate interpersonal communication, integrating biosignals seamlessly into communication remains a challenge. Given their novelty as a cue, expressive biosignals face issues in interpretation, cognitive load, and privacy (Liu et al., 2017a, b). To address these issues and further explore the design space of expressive biosignals, we propose a new way to share biosignals: sharing them directly on smartwatches themselves.

Smartwatches could provide an unobtrusive and unique platform for sharing biosignals directly to another person. Many smartwatches already have built-in sensors that enable continuous monitoring of biosignals like heart rate (e.g., Apple Watch, Fitbit Versa, Mio SLICE, etc.), and thus do not require additional equipment to record the data. The form factor of a smartwatch could also afford intimate and vivid interpersonal communication. Being physically on the body, the device would be noticeable, easily accessible, and tangible, qualities that can promote social presence and connectedness (Van Erp and Toet, 2015; IJsselsteijn et al., 2003). Moreover, exploring expressive biosignals on smartwatches would advance research on smartwatch communication, which suggests that beyond simply extending text or call notifications (Jeong et al., 2017), the smartwatch itself could offer a lightweight yet rich communication channel (Kim et al., 2016).

Our research is motivated by two questions. First, how does sharing biosignals on a smartwatch impact communication? Specifically, what kinds of communication does this afford and what patterns emerge? Second, how do people make sense of their own biosignals and develop that understanding with each other? Given that biosignals are often ambiguous (Howell et al., 2016; Liu et al., 2017a) and the smartwatch screen has limited space, we aim to pinpoint the information necessary for meaningful communication.

We examine these questions through the design and deployment of a smartwatch app, and reflect on the design landscape that this work uncovers.

The main contributions of this work are:

  • The design and implementation of Animo, a smartwatch app where people interact using biosignals.

  • A two-week user study of how 17 dyads used Animo and, consequently, interacted in novel ways.

  • An articulation of the design space for mood-centric social computing systems on smartwatches.

2. Related work

2.1. Communication on Smartwatches

With the rising popularity of smartwatches, more researchers are exploring their capabilities. Given their ubiquitous nature and built-in sensors (e.g., biosensors, accelerometer, GPS, etc.), many works use smartwatches to sense and understand human behavior, especially for health monitoring (Amiri et al., 2017; Vaizman et al., 2017; Capodieci et al., 2018). In a similar vein, research regarding the smartwatch user experience shows that personal monitoring activities, such as fitness and activity tracking, are some of the most popular features of the watch (Statista, 2018; Pizza et al., 2016; Jeong et al., 2017).

Fewer studies have explored the social applications of smartwatches. Mobile communication typically occurs via phone calls and text messaging, which are difficult on a small screen on the wrist. Accordingly, many researchers are exploring better text entry for small-screen devices (Wong et al., 2018; Gong et al., 2018; Dunlop et al., 2014; Oney et al., 2013; Xu et al., 2015). However, we argue that smartwatches enable opportunities for communication less focused on text. In fact, much of online communication involves non-textual cues. Emojis are the most common, and are used for emotional information or for expressive and playful interactions (Derks et al., 2008; Zhou et al., 2017; Wiseman and Gould, 2018). People also share non-textual cues for their context, such as their location (Tang et al., 2010; Smith et al., 2005; Massa et al., 2015) or activity (Epstein et al., 2015; Rooksby et al., 2014). Even “one-click communication,” such as “liking” online social media posts, can provide diverse cues (e.g., support, agreement) (Scissors et al., 2016).

Smartwatches have the capability of communicating in a non-textual way. Kim and colleagues provide one example: using the Yo app as inspiration, they suggest conveying affect or location through the smartwatch. Using simple yet expressive imagery (e.g., kinetic typography) or built-in sensors (e.g., GPS), they describe the potential for rich single touch messaging on the smartwatch (Kim et al., 2016). In the present work, we build on this research by exploring the non-textual communicative abilities of the smartwatch, using biosignals.

Biosignals present an opportunity to explore novel and expressive communication cues afforded by the smartwatch. Most existing non-textual cues are easily accessible on platforms other than the smartwatch, such as smartphones or desktop computers. Biosignals, on the other hand, are more easily and unobtrusively accessed on the smartwatch than on other platforms. For example, smartphone apps that record heart rate require users to place their finger on the phone’s camera for measurement, while smartwatches with heart rate sensors can record heart rate passively and continuously, only requiring users to wear the watch. Moreover, while emojis are the most common non-textual cue, there are hundreds of emojis with continuously evolving definitions (Lu et al., 2016; Miller et al., 2016; Wiseman and Gould, 2018), which increases the cost of communication when selecting one on a watch. Biosignals represent our body’s immediate response to situations, and thus could provide a more authentic and expressive cue by capturing the body’s underlying state at specific moments. In the following section, we describe research investigating the potential for biosignals to act as a social cue.

2.2. Sharing Biosignals

As with smartwatches, most prior work on biosignals targets individuals, such as providing users with personal feedback on fitness, well-being, or social skills (Frey et al., 2018; Rubin et al., 2015; Pancardo et al., 2016; Griffiths et al., 2014; Anderson et al., 2013; Konstantinidis et al., 2009). However, some research has explored the potential for biosignals to support interpersonal communication, focusing on how they reflect our emotions (Kreibig, 2010; Posner et al., 2005). For instance, researchers have built chat systems that utilize biosignals like brain activity to detect affect, and convey how a user is feeling through avatars (Kuber and Wright, 2013) or kinetic typography (Wang et al., 2004). More recent work has explored expressing feelings through displays of biosignals themselves. EmpaTalk and HeartChat are systems that include visual indicators of heart rate changes (e.g., graphs, colors, raw beats per minute) to help online chat partners communicate their emotions (Lee et al., 2014; Hassib et al., 2017). Tan and colleagues developed a system that displayed changes in heart rate, skin conductance, and respiration in order to reduce stress in worker/instructor-based collaboration (Tan et al., 2014). Researchers have labeled these displays as expressive biosignals, in which biosignals are used as social cues about someone’s emotional and/or cognitive states (Liu et al., 2017b, a).

A few studies have explored how expressive biosignals can affect social interactions. Janssen and colleagues, for instance, demonstrate that the sound of a heartbeat can increase feelings of intimacy and closeness (Janssen et al., 2010). Sharing heart rate can increase social connectedness, where it can become an emotional expression, a means to gain awareness about another person’s context, or simply a playful way to interact and feel present with another person (Liu et al., 2017b; Hassib et al., 2017; Slovák et al., 2012).

Most expressive biosignals systems are built as smartphone or desktop apps. Only a few exist on wearables, such as a helmet (Walmink et al., 2014) or a shirt (Howell et al., 2016, 2018b), and these systems focus on broadcasting the data to all viewers. Smartwatches, in contrast, have a smaller form factor that could enable direct communication, and many ship with built-in biosensors. For example, Apple Watch’s Digital Touch app allows users to share vibrations of their heart beats directly with another person. However, Digital Touch does not monitor heart rate continuously, and thus requires users to spontaneously think about sharing with someone else, which is unlikely to happen given the unconventional nature of sharing heart beats. Thus, to understand and explore the design of expressive biosignals smartwatch systems that encourage sharing, we created our own app: Animo.

3. Animo system

3.1. Design

Animo is a smartwatch app where two people can send mood representations, or “animos” (lower case), to each other. We referenced existing frameworks for augmented mobile messaging systems (Buschek et al., 2018; Hassib et al., 2017) to inform the design of Animo. Though this prior work focused on augmenting text messaging for the phone, many of the same concepts can apply to non-textual smartwatch communication.

3.1.1. Content from Sender

Buschek and colleagues’ design space for augmented mobile messaging includes the “Sender Context” dimension, describing context as “information and cues beyond text” (Buschek et al., 2018) to be included with a text message. Animo does not use text; therefore, we describe its communication content. Since our focus is biosignals, heart rate is the content type. Heart rate sensors are a common feature of many smartwatches, and people already associate heart rate with different psychological states, such as mood (Slovák et al., 2012).

Previous studies demonstrate that simply showing heart rate as a raw value can limit expressiveness and appeal, and can be difficult to understand (Liu et al., 2017b; Buschek et al., 2018). Thus, inspired by the popularity of mood rings, we chose to represent heart rate as “mood,” to guide engagement and understanding.

The system is the content provider, where Animo shows a mood representation (animo) to a user; a user cannot choose the animo. We base mood loosely on the valence-arousal circumplex, which separates emotion into two dimensions: valence (positive/negative) and physiological arousal (high/low) (Posner et al., 2005). Given the constraints of information available on smartwatches, we focused on mood related to physiological arousal determined by users’ heart rate (e.g., excitement as a high-arousal mood vs. calmness as a low-arousal mood), and left valence open to users’ interpretations.
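To make the mapping concrete, the arousal dimension described above can be sketched as a simple thresholding of heart rate against per-user baselines. The exact rule and function names below are our assumptions for illustration; the paper does not specify the implementation.

```python
# Hedged sketch: classify a heart-rate reading into the arousal
# dimension of the circumplex, relative to per-user baselines.
# Thresholding at the two baselines is an illustrative assumption.

def arousal_level(bpm, low_baseline, high_baseline):
    """Classify a heart-rate reading as low, neutral, or high arousal."""
    if bpm <= low_baseline:
        return "low"      # e.g., calmness (low-arousal mood)
    if bpm >= high_baseline:
        return "high"     # e.g., excitement (high-arousal mood)
    return "neutral"      # between the two baselines

print(arousal_level(58, 65, 95))   # low
print(arousal_level(102, 65, 95))  # high
```

Valence is deliberately not computed here, matching the design decision to leave it open to users' interpretations.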

3.1.2. Sharing

Users can view their animo on their smartwatch, and tap on it to send it to their partner. Their partner, who must also be running Animo, will feel a subtle vibration on their watch when they receive the animo. The animo then “peeks” into the side of their watch screen. Tapping on it will play its animation. Given privacy concerns around sharing biosignals (Liu et al., 2017b, a), users can explicitly and sporadically share their animos as they please. To encourage sharing, the watch occasionally vibrates when their animo state changes.

3.1.3. Presentation Abstraction

We chose a high abstraction representation for mood derived from heart rate. To enhance expressiveness and playfulness, animos are animated shapes.

We designed different animos to cover a variety of moods. For some designs, we drew from elements of kinetic typography that were tested in Kim and colleagues’ work on Yo (Kim et al., 2016). The animos varied in three ways:

  • Shape: In a dyad, one person has circle animos while the other has diamond animos. Shapes are assigned during onboarding and do not change.

  • Motion: We designed animos with different levels of energy to represent arousal. High energy motions (e.g., bouncing) represent higher heart rate; low energy motions (e.g., swaying) represent lower heart rate.

  • Color: As with mood rings, some degree of mystery can encourage playful discovery. Therefore, we designed animos to change colors semi-randomly in order to encourage users (who would not be aware of the randomness) to question and interpret their animos. High energy animos are randomly yellow or red, whereas low energy animos are randomly blue or green. Animos in between high and low energy levels are white. We chose these colors according to their existing associations with emotions (Kaya and Epps, 2004), loosely basing them on their relation to the valence dimension of mood but still leaving them up to interpretation.
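The semi-random color rule above can be sketched as follows. This is assumed logic for illustration, not the authors' code:

```python
import random

# Sketch of the semi-random color rule: high-energy animos draw from
# warm colors, low-energy animos from cool colors, and animos between
# the two energy levels are white.

def animo_color(energy, rng=random):
    """Pick a color for an animo given its energy level."""
    if energy == "high":
        return rng.choice(["yellow", "red"])
    if energy == "low":
        return rng.choice(["blue", "green"])
    return "white"

print(animo_color("high") in ("yellow", "red"))  # True
```

The randomness within each energy band is what keeps the exact color unpredictable to users, supporting the playful-discovery goal.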

We pretested the animos we designed on the crowdsourcing platform Amazon Mechanical Turk to ensure general agreement that they represented expected moods and their associated arousal levels. We tested a total of 26 different animos across three rounds of surveys. (We ran multiple rounds in order to collect a diverse set of animos covering the quadrants of the valence-arousal circumplex; between rounds, we created new animos when we did not have enough that performed well in certain quadrants, e.g., designing for the positive/low-arousal quadrant was particularly challenging.) In each round, 20 participants viewed a subset of animos presented in a randomized, counter-balanced order and rated them on their “mood” and “energy” (Lee et al., 2006) (see supplemental materials for the survey questions). We selected the best performing animos per round to include in the Animo app, for a total of 18 animos (see supplemental materials for the pretest analysis and results, and a video of the selected animos). Four sets of three animos were chosen to cover different quadrants of the valence-arousal circumplex, where they differed significantly in mood and energy ratings. We also included two animos that differed significantly in only energy ratings, in order to introduce some ambiguity that could spark different interpretations. Finally, we included four “neutral” animos that were not significantly high or low in mood or energy ratings.

3.1.4. Presentation Granularity

Animo is person-based, meaning each user has their own animo. Users can view their own animo on their smartwatch, and their partner’s if they send it to them. Users can have one, and only one, partner. We made this decision to focus on the simplest communication on a smartwatch—one-to-one. Animo is also message-based: each sent animo is based on a user’s current state.

3.1.5. Presentation Persistence

Sent animos are ephemeral: they disappear in 10 seconds if the receiver ignores them by not tapping on them. This emphasizes animos’ weightlessness, and aims to avoid having yet another feed to check. We chose 10 seconds based on early pilot tests we conducted to ensure that receivers would have enough time to notice and tap the animo if desired.
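The ephemerality rule above can be sketched as a simple visibility check. The function and parameter names are illustrative assumptions:

```python
# Sketch of the 10-second ephemerality rule: a received, unread animo
# disappears once its lifetime elapses, unless the receiver taps it.

PEEK_LIFETIME_S = 10  # chosen via the pilot tests described above

def animo_visible(received_at, now, tapped=False):
    """A peeking animo stays on screen only within its lifetime or once tapped."""
    return tapped or (now - received_at) < PEEK_LIFETIME_S

print(animo_visible(0, 5))                # True: still within 10 s
print(animo_visible(0, 12))               # False: ignored past the lifetime
print(animo_visible(0, 12, tapped=True))  # True: tapping played the animation
```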

3.2. Implementation

We implemented Animo as a “clock face” app for the Fitbit Versa smartwatch, which allowed it to stay on the default screen of the smartwatch. We chose the Fitbit platform because of its compatibility with both Android and iOS, and its “mass appeal,” with over 25 million active users in 2017 (Fitbit, 2018; Palladino, 2018). The Fitbit Versa has an LCD touchscreen, Bluetooth communication, and a heart rate sensor, among other sensors.

The Animo system is composed of a smartwatch app, a server-side app running on the cloud, and a “companion app” that runs within the Fitbit smartphone app (see Figure  2). The companion app enables communication between the Fitbit Versa and a user’s smartphone via Bluetooth, which is used when a user sends an animo.

The companion app sends the animo to the backend service, which routes the animo to the user’s partner. This alerts the receiver’s smartwatch that an animo is available, triggering a vibration. The animations are displayed only on the smartwatch and implemented using vector graphics. Animo requires the sender and receiver to both have the Fitbit Versa and the Fitbit app running on their phone with Bluetooth and data connection on.

Figure 2. Components of the Animo system.
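The message path just described, from sender's watch through the companion app and backend to the partner's watch, can be sketched as below. Class and method names are illustrative assumptions, not the Animo codebase:

```python
# Minimal sketch of the routing step: the backend receives an animo
# from the sender's companion app and delivers it to the sender's one
# configured partner, whose watch is then alerted (vibration).

class Backend:
    """Routes each animo to the sender's single configured partner."""

    def __init__(self, partners):
        self.partners = partners  # user -> partner (one-to-one pairs)
        self.inbox = {}           # partner -> pending animos

    def route(self, sender, animo):
        partner = self.partners[sender]
        self.inbox.setdefault(partner, []).append(animo)
        return partner  # the backend then alerts this user's watch

# A hypothetical dyad, as in Figure 1:
backend = Backend({"staci": "mario", "mario": "staci"})
receiver = backend.route("staci", {"energy": "high", "color": "red"})
print(receiver)  # mario
```

The Bluetooth hop between watch and phone, and the vibration on delivery, are omitted here; the sketch only captures the one-to-one routing constraint.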

4. Methods

To test Animo in situ, we deployed the app in a two-week field study, allowing participants to freely use the app in order to observe patterns of usage that naturally emerge.

4.1. Participants

dyads                                          animos
participants   genders   relationship          sent   read (%)    replied (%)
P1,  P2        M, M      friends*              —      —           —
P3,  P4        M, M      friends*              220    40 (18%)    18 (8%)
P5,  P6        F, F      friends*              127    65 (51%)    25 (20%)
P7,  P8        F, F      friends               —      —           —
P9,  P10       M, M      friends               175    115 (66%)   59 (34%)
P11, P12       F, F      friends*              173    101 (58%)   43 (25%)
P13, P14       M, F      significant others    258    77 (30%)    66 (26%)
P15, P16       M, F      spouses               68     34 (50%)    7 (10%)
P17, P18       F, M      significant others    45     14 (31%)    6 (17%)
P19, P20       F, F      friends               —      —           —
P21, P22       M, F      significant others    210    113 (54%)   60 (29%)
P23, P24       M, F      spouses               77     35 (45%)    16 (21%)
P25, P26       F, F      roommates             108    43 (40%)    6 (6%)
P27, P28       F, M      spouses               33     3 (9%)      2 (6%)
P31, P32       F, F      friends*              168    87 (52%)    39 (23%)
P33, P34       M, M      coworkers             375    159 (42%)   35 (9%)
P35, P36       M, F      spouses               181    43 (24%)    25 (14%)
P37, P38       M, F      spouses               115    49 (43%)    29 (25%)
P39, P40       F, F      roommates             90     27 (30%)    8 (9%)
P41, P42       F, M      coworkers             67     35 (52%)    4 (6%)
Table 1. Participant dyads, including the three dyads that dropped out (rows without usage data). Some friends (*) reported also being coworkers.

We recruited 20 dyads, or 40 participants. We removed data from three dyads, leaving a total of 17 dyads (see Table  1). Participants were removed either because they experienced major technical issues (e.g., loss of connection between their phone and the smartwatch), or because they were traveling without connectivity for a majority of the study.

We recruited participants through the mailing lists of a technology company, inviting people to participate in a two-week experiment about “mood.” We did not pay participants. We asked participants to choose a partner to join them in the study. This partner did not have to be affiliated with the company. In order to have a diverse sample, we recruited participants from three different offices in the United States: New York City, Seattle, and Los Angeles.

Participants varied in their backgrounds and demographics. Their occupations included homemaker, professional server, program manager, business recruiter, software engineer, neuroscientist, and others. Their ages ranged from 19 to 48 years old (M = 30.4 years, SD = 6.0 years). Participants’ gender and relationship breakdown can be seen in Table 1. Sixteen participants identified as Asian, 12 as White/Caucasian, two as Hispanic, and three as mixed White/Caucasian and Asian or White/Caucasian and Hispanic. Fifteen participants owned a smartwatch (e.g., Apple Watch, Google Wear OS), with nine of them using it frequently for activity tracking, phone notifications, or heart rate monitoring. Participants’ prior usage of smartwatches did not influence our results, so we included all participants who owned smartwatches in our final analyses.

4.2. Procedure

4.2.1. Onboarding

Each dyad was onboarded together in one of the offices of the technology company. Participants first created a Fitbit account and added each other as friends on Fitbit, then individually completed a questionnaire to describe their backgrounds (see supplemental materials for the questionnaire).

Next, experimenters equipped participants with a smartwatch and took their heart rates during a calming task (individually watching a breathing exercise video) and during a stressful task (counting down from 1022 in steps of 13 (Birkett, 2011) in front of their partner and the experimenters). We used the average of the heart rates recorded during each task to determine animo arousal, i.e., the high and low heart rate baselines.
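The calibration step can be sketched as below: the mean heart rate during the calming task becomes the low baseline, and the mean during the stressful task the high one. The sample readings are invented for illustration:

```python
from statistics import mean

# Sketch of per-user baseline calibration from the two onboarding
# tasks (calming video vs. stressful counting task).

def baselines(calm_readings, stress_readings):
    """Return (low, high) heart-rate baselines from the two tasks."""
    return mean(calm_readings), mean(stress_readings)

low, high = baselines([62, 64, 60], [92, 96, 100])
print(low, high)  # the low and high baselines for this user
```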

After recording the heart rates, experimenters explained to participants how to use Animo, including how to send and view received animos. Experimenters purposefully did not explain the meaning behind the different animos, and instead instructed participants to interpret the animos themselves. We made this decision to inform our research questions, allowing participants to flexibly define the animos to understand how they would create those definitions. Additionally, prior work highlights the importance of allowing people to create meaning together from their biosignals (Liu et al., 2017b). Finally, participants could leave and use Animo freely for two weeks.

4.2.2. Animo Usage

We recorded a variety of data to capture participants’ Animo usage throughout the study. This included heart rate data, animo states, and animos sent, received, viewed, and sent as responses. Additionally, inspired by diary studies (Ohly et al., 2010), we sent brief daily surveys about their usage (see supplemental materials for the survey). After one week, participants completed a mid-study survey to clarify responses in their daily surveys and provide initial thoughts on Animo.

4.2.3. Offboarding

After two weeks, participants returned their watches and were individually interviewed. The interview was semi-structured and elicited participants’ thoughts and feedback on their experiences with Animo (see supplemental materials). To help participants recall their experiences, we showed them the five animos they sent most often and the five they received most often. In the feedback section, we also showed example sketches and mock-ups for future Animo designs, to probe participants on specific aspects of Animo that they enjoyed or wanted to improve (e.g., what animos looked like, how often they should be sent).

4.3. Analysis

We analyzed the responses to the daily surveys, mid-study surveys, and transcriptions of the audio-recorded exit interviews, using participants as the unit of analysis. Two researchers independently performed open-coding to label responses from a random sample of participants. They developed codes according to similarities in participants’ overall usage of Animo, experiences sending/receiving animos and subsequent reactions, process of understanding the animos, and feedback for Animo. The researchers met frequently to discuss these codes and create a codebook. Once they agreed on the codebook, one researcher used the codebook to code the rest of the participants. Next, we grouped related codes together and formed themes around participants’ communication patterns and how their understanding of the animos affected those patterns and their attitudes towards Animo. During the writing process of the paper, we refined the themes around our main research questions.

5. Results

Participants used Animo frequently throughout the study, averaging five animos sent per user per day despite not being required to do so and not receiving compensation for participating. Four participants even continued using Animo after completing the study. Across all participants, a total of 2,490 animos were sent, and 1,040 (41%) were read (participants received their partner’s animo, and tapped on it to view the animation). Of the animos read, 43% received a reply on average (participants sent an animo back within 10 minutes after reading one). Of the animos sent, 5.6% were lost due to connectivity issues, such as unstable Bluetooth connection between users’ phones and the smartwatch or participants traveling to areas with limited network connectivity.
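The reply metric described above (an animo sent back within 10 minutes of reading one) can be reconstructed from event logs as below. The event format is an assumption for this sketch:

```python
# Illustrative reconstruction of the reply rate: a read animo counts as
# replied to if the reader sent an animo back within 10 minutes.
# Events are (user, timestamp_seconds) tuples, invented for this sketch.

REPLY_WINDOW_S = 600  # 10 minutes

def reply_rate(read_events, send_events):
    """Fraction of read animos answered within the reply window."""
    if not read_events:
        return 0.0
    replied = sum(
        1
        for user, read_t in read_events
        if any(u == user and 0 <= t - read_t <= REPLY_WINDOW_S
               for u, t in send_events)
    )
    return replied / len(read_events)

reads = [("p13", 100), ("p14", 200)]
sends = [("p13", 400), ("p14", 900)]
print(reply_rate(reads, sends))  # 0.5: only p13 replied within 10 min
```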

In the following, we describe our results from participants’ responses to the surveys and interviews, providing a richer view into the reasons behind participants’ Animo usage. We detail the common themes that emerged in our results, and within those themes, include interesting examples of Animo usage that participants shared.

5.1. Connecting in New Moments

Animo’s design allowed participants to more easily stay connected with each other. Its convenience and constant physical access afforded communication when they typically did not or were not able to communicate with their partner.

5.1.1. Seeing Animo is a Reminder to Communicate

Participants found that looking at their watch reminded them of their partner, and made their partner more salient. Since the smartwatch is easily accessible on their wrist, participants only needed to glance down to see their animo. They found themselves doing so not only by haptic prompting (i.e., the watch vibrating when the animo state changes or when receiving an animo), but when they were bored, had “down time,” or simply wanted to check the time.

For instance, P22 was traveling in a different city than P21, her significant other, during the first week of the study. She noted that while she was away from P21, the “watch represented a connection to [him]”, and “increased communication when there wouldn’t normally have been communication”:

“…because, like, the thing on my wrist was [him]. It, like, reminded me that, like, there was a prompt to communicate.” —P22

Looking at Animo thus prompted participants to think of their partner, and sending animos let their partner know that. As a result, participants described feeling happy and nice whenever they received their partner’s animo:

“It’s, like, kind of like getting a ‘like’ on Twitter or on Instagram. You’re, like, ‘Oh, somebody thought about me!’ And they’re not really thinking about you, but they’re trying to, like, show you your existence. I like that you’re there.” —P11

5.1.2. Communication is Convenient through Animo

Participants also used Animo when they were already thinking of their partner but unable to communicate through other means. Tapping on their watch was more convenient than “patiently typing a message” (P24) on the phone or “[having] a computer ready to go” (P15). Animo sending occurred frequently when participants were too busy attending to something else to communicate otherwise. For example, even when P22 was busy traveling, her partner (P21) felt he could easily keep in touch:

“I think she was out with her… parents and not necessarily able to text back and forth, but you could still send each other animos…” —P21

Similarly, participants felt that they were able to send animos during their busy work day, when they typically were not physically co-located with their partners. Our data on Animo usage supports this (Figure 3), with the highest number of animos sent and read during work days (Monday-Friday) and work hours (9am-6pm).

Since tapping on a watch is less noticeable than taking out a phone or speaking, Animo provided a private communication channel while in public. For example, when P5 and P6 were in a stressful work meeting together with other colleagues, P6 used Animo to communicate with P5 whenever she thought they would both be annoyed. She described sending animos to let P5 know she was “thinking about her without being obvious in [the] meeting.”

Our results suggest that Animo may enhance the salience of partners and increase communication. Animo’s unobtrusiveness and ease of interaction on the smartwatch allowed participants to connect in moments they were not together, were too busy, or needed privacy in a public space.

5.2. Creating New Understanding

Through Animo, participants felt they could elicit new information from their partners about their states. Receiving animos gave participants insight on their partner’s state and opened up new pathways for them to discuss and understand each other’s state. This insight aligns with prior research on sharing heart rate (Liu et al., 2017b; Hassib et al., 2017), where participants felt that shared biosignals functioned as an emotional expression or status update. Our work extends this prior work by detailing how status awareness develops through shared biosignals.

Figure 3. Usage patterns by time of day. Working hours are highlighted in gray.

5.2.1. Animo can Start Conversations about Your Day

Animo triggered participants to start new conversations with each other, where they checked in with each other based on the animos they saw throughout the day. For example, P13 and P14, who were in a long-distance relationship and only saw each other every few weeks, used animos as a conversation starter while they were apart:

“She’d get one that was white. She’s like, ’You’re relaxing right now?’ I said, ’Yeah, I’m at home right now.’ I was just reading and it prompted other conversation…to check in with each other and see what we’re up to.” —P13

Even when participants were too busy to have these kinds of conversations during the day, the animos provided a “tapestry” or “glimpse” (P18) of their day. Participants then talked about the animos with their partners when they later saw them face-to-face. This was especially common for participants that lived together (i.e., significant others and roommates). For example, P25 and P26 were roommates who usually briefed each other on the upcoming events of their day before they left for work. P25 once sent an animo to express that she was stressed before an interview. P26 described following-up on that animo:

“[W]hen we saw each other in the evening, uh, we would like mention that we had sent them…so I like asked if she was in the interview when she sent it to me so we talked about that. So it just kind of prompted discussions about our day.” —P26

When participants received an animo that triggered their concern, they would start conversations immediately. This occurred when participants thought their partner was stressed. For example, P14 learned that her partner had a frustrating experience with his car. Soon after hearing about it, she received a red animo (which she interpreted as stress), and called her partner immediately because she was worried.

While Animo helped start conversations, participants felt it could not support full conversations on its own. For instance, when participants received animos, sending an animo back only represented an acknowledgement of receipt. P15 felt that Animo acted as a “first level of communication,” to start conversation, and desired a way to easily access or inform the next level of communication, where conversations could actually take place:

“[If] I see this yellow jumping dot, it probably means that you know my wife wants to talk about something fun and then oh let me try to call her. So, if there was something, like, oh whenever I share this I want my wife to be able to call me back…. If there was some way to establish this…second level communication would be nice.”


5.2.2. Animo can Clarify Ambiguous Conversations

In a less common yet interesting case, Animo helped participants understand their partner’s feelings when they had difficulties interpreting their behavior. P42, who had a coworker/work-friend relationship with P41, used Animo to validate his thoughts on how P41 felt. After an in-person conversation with P41 where she seemed “riled up,” P42 checked the animos she sent:

“It was interesting to see if…one of us was more upset about something…it was good to know if this person was actually worked up or if this was a show, like a front.” —P42

Though P42’s experience was a unique example in our results, it reveals Animo’s potential to prompt deeper understanding between people by starting new kinds of conversations and clarifying ambiguous feelings. This finding supports and extends results from prior research showing that shared heart rate can provide emotional status cues and open up communication about those cues (Liu et al., 2017b).

5.3. Navigating Open-Ended Interpretation

The Animo system provides an abstracted representation of sensed mood. Given the level of ambiguity we imbued in the animos (e.g., random colors) and the lack of provided definitions, participants’ understanding of that abstraction varied.

We found that participants’ interpretations did not necessarily align with the results of our pretest of the animos–instead, participants situated animos in social contexts. Participants’ interpretations affected whether they found the animos meaningful, and determined their engagement with Animo. Below, we detail how Animo’s open-ended nature impacted participants’ ability to meaningfully communicate.

5.3.1. Playful Imagination and Discussion

Participants enjoyed having free rein to understand the animos for themselves. In particular, they appreciated that Animo did not necessarily tell them their mood, but encouraged them to reflect on their mood themselves. P15 compared this to emojis:

“I like that animos…looked unintentional in a way that they don’t necessarily imply something you know, whereas like an emoji definitely implies something. It doesn’t make me feel I’m being forced in feeling something in some way.” —P15

Participants found it fun to not only think about how their own animo could relate to their mood, but also imagine what their partner might be doing, and why they chose to send the animo they did. For example, P21 felt that because the animos were not always accurate, there was more meaning to the ones they selected:

“It’s just fun to receive them because…she looked at it and then she thought that that reflected her mood and then chose to send it. So, it feels meaningful because of that…she chose that one specifically.” —P21

Participants also enjoyed the process of decoding the animos with their partners. They noted that they became more aware of each other’s feelings, “not just with the animos but…in talking to each other” (P6). Even seeing less accurate animos spurred these discussions:

“I mean, like, sometimes we were just, like, laughing about it. I think, like, it allowed us to kind of just have conversations based on our feelings and how we were feeling in the moment.” —P37

5.3.2. Finding Mismatched Meanings

Though most participants found meaning in the animos, feeling that the animos at least somewhat reflected their state, the meaning they gleaned did not always match the meaning their partner gleaned.

Missing a shared language.

Participants were not always able to have conversations to jointly reflect on animos with their partner. When this happened, they would be unable to determine whether they agreed on the meanings they attributed to the animos. Instead, they tended to reflect according to their own beliefs.

For example, P11 and P12 were close friends and coworkers who would send animos to each other both as a “poke” and to show that something interesting was happening. P11, who would send her animo to let P12 know what her mood was, stated, “I wanted her to know that it was matching my mood but, um, I don’t think she knew.” Indeed, P12, who believed the animos were more related to physical activity rather than mood, instead reflected on P11’s activity:

“So, like, for the white ones… she’s probably just lying in bed or sitting around somewhere doing something. For active ones…maybe she’s like jumping around or dancing or, like, walking about or exercising when she sent it.” —P12

Unintended interpretations.

Diverging opinions on the meaning of the animos sometimes led to unintended interpretations. For example, P42 viewed animos as representative of his mood–to the extent that he felt Animo was making him more aware of it. He similarly believed P41’s animos represented her mood, and thought she sent her animos to show him her mood. In actuality, P41 did not find meaning in the animos and sent them at random times. This suggests that an animo could reveal more information than a sender expects (recall that P42 used Animo to validate his thoughts on P41’s feelings), and that a receiver can interpret an animo in ways that stray from a sender’s intentions.

Giving animos context.

Some participants realized their partner might interpret their animos differently than intended. To counter this, they would send additional information through other communication channels to clarify animos they sent. For example, P38 sent his partner a red animo to show that he was excited. However, he recognized that red could be interpreted as an angry color; he therefore sent his partner a video to provide clarifying context.

5.3.3. Animo as Content-Less Notification

Ten participants could not find meaning in the animos, and subsequently felt Animo had limited communicative ability. These participants sent animos to simply connect and say “hi” to their partner.

Dyads who believed their animos lacked meaning quickly got bored of Animo and used it less. P39 and P40, for instance, were roommates who both felt frustrated with the system because it made no sense to them and did not seem to match how they felt (e.g., seeing red animos they viewed as angry when they actually felt happy). They would send their animo even when it did not match their feelings because their partner “wouldn’t care, wouldn’t realize, or wouldn’t read into it” (P39). They instead sent animos randomly as a “hi.” Communicating in this way had limitations, as there are only so many times people will say “hi” back and forth:

“…I just did not want to keep it going. I was like uh, ‘hi,’ ‘hi’, and ‘hi’ is fine.” —P39

Overall, we found that Animo’s open-endedness had benefits and tradeoffs. Participants could decide what the animos meant to them, and reflect on those meanings as part of new conversations. However, not all participants were able to interpret animos, and even when they did, their interpretations did not always match their partner’s without a clarifying conversation. This suggests a need to convey the intention of messages sent in systems like Animo, while maintaining the value brought by their brevity.

6. Discussion

Our results show that Animo enabled lightweight social connection, where participants found it fun and easy to keep in touch and attuned to each other’s presence and state. However, Animo experienced some challenges in functioning as a full communication platform, due to its minimal and somewhat ambiguous nature. Repurposing and expanding Buschek and colleagues’ design space for augmented mobile messaging (Buschek et al., 2018), we discuss design implications for biosignals smartwatch communication systems based on opportunities and challenges we saw in Animo features. We summarize recommendations for this new design space in Table 2.

Insights on existing design dimensions

Content Provider (System — User — Mixed). Recommendation: Combine system- and user-provided content, to help users create their own meaning from their biosignals rather than solely relying on the system. Support from Animo: Developing meaning for animos gave users creative liberty rather than being “forced” into feeling a certain way based on what the system told them.

Presentation Abstraction (Low — Med — High). Recommendation: For limited content, explore expressive yet simple representations that contain clear and distinct information. Support from Animo: Expressing oneself through the open-ended animos was fun, but also inhibited developing a shared language because they could mean different things.

Presentation Granularity (Person — Message — Communication). Recommendation: Communication-based granularity may promote playfulness, connectedness, and shared meaning. Example: messages that jump around each other when communication is exciting. Support from Animo: Animos were cute and playful, but primarily represented individual users’ moods, which users had to convey to their partner.

Suggested new design dimensions

Sharing Receiver (One — Many). Recommendation: Single-receiver is more intimate, and the watch becomes a reminder of that person; multi-receiver allows for keeping in touch with more people. Support from Animo: Having one partner made animos feel like a personal and private way to communicate. Some mentioned wanting multiple partners to check in on.

Response Richness (Simple — Rich). Recommendation: Explore richer opportunities for receiver responses, beyond just sending a message back. Example: a simple response could be an acknowledgement of message receipt; a richer response could be in-app short text or animation. Support from Animo: Just sending an animo back was limited. Participants wanted more unique and richer responses that are inherently tied to the original sent message.

Response Channel (Single — Multiple). Recommendation: Link the watch to other channels for deeper conversation; the watch can act as a lightweight first level of communication, e.g., initiating conversation or simply “being there.” Support from Animo: Since Animo communication was lightweight, participants typically followed up in later conversations in person, through text, or on the phone. Participants wanted a way to more smoothly enter those conversations directly from the watch.

Table 2. Recommendations for the biosignals smartwatch communication design space. Adapted from (Buschek et al., 2018).

6.1. Content and Presentation: Being Expressive yet Interpretable

Buschek and colleagues describe a design space for augmenting text messaging with different types of context (Buschek et al., 2018). As a related but distinct form of messaging, biosignals smartwatch communication focuses on biosignals as the content of the message itself. Our results show that Animo promoted lightweight social connection through the content provider and its presentation to users, where some randomness and abstractness allowed users to have fun creating and discussing their own meanings together. At the same time, being too abstract and ambiguous can lead to challenges in finding meaning in the content.

6.1.1. Expressive Meaning-Making

Animo enabled participants to express themselves, even though the system intentionally provided them with a partially random mood representation. Participants found the process of interpretation fun and meaningful, both when they saw animos that did and did not reflect their mood. The ambiguity allowed them to be expressive by developing their “own vernacular” (P27), as opposed to using pre-existing ones. This supports recent research on approaches to emotional biosensing, which questions systems that use biosignals in an attempt to determine and tell people how they feel according to predefined emotional categories (Howell et al., 2018a), and suggests potential for people to have meaningful discussions around biosignals instead (Liu et al., 2017b). In the same vein, our results demonstrate the importance of enabling people to reflect on their biosignals and create their own meanings for them, rather than a system simply telling them what they mean. Additionally, the playfulness and expressiveness of Animo supported collective meaning-making experiences for participants and their partners, which can ultimately facilitate communication and social connectedness (Hsieh and Tseng, 2017; Xu et al., 2017).

6.1.2. Limitations of Abstraction

While participants enjoyed being expressive through the highly abstract animo designs, animos were sometimes difficult to interpret. We provided animos as the sole content to reinforce Animo’s lightweight nature; however, when that content was not understandable, communication became meaningless. Prior work suggests that communication that lacks content is limited: while it can contain a “symbolic” message (e.g., “hi”), it has less value and impact on relationships (Burke and Kraut, 2014). Based on our results, two factors appeared to affect the meaning of the content:

Cognitive effort

Decoding the information provided within the animos required more cognitive effort than expected. Though we had designed animos to convey mood through color and motion, most participants focused on only color. Using both color and motion to track animo states proved difficult when participants saw only one state at a time. Subsequently, they often assigned meaning to animos according to only one dimension of the valence-arousal circumplex (Posner et al., 2005) (e.g., red animos representing high heart rate, rather than negative high heart rate), which negatively impacted their ability to interpret mood from the animos.
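To make this two-dimensional decoding burden concrete, consider a hypothetical circumplex-style mapping in which arousal is derived from heart rate and valence is supplied separately. The function, thresholds, and color/motion choices below are our own illustrative assumptions, not the actual Animo implementation:

```python
# Hypothetical sketch of a circumplex-style animo mapping. The function
# name, thresholds, and color/motion assignments are illustrative
# assumptions, not the actual Animo implementation.

def animo_state(heart_rate, resting_rate, valence):
    """Map a heart-rate sample and a valence score (-1..1) to a
    (color, motion) pair on the valence-arousal circumplex."""
    # Arousal: deviation of heart rate from a resting baseline.
    arousal = (heart_rate - resting_rate) / resting_rate
    # Color encodes arousal only -- the single dimension most
    # participants attended to (e.g., red read as "high heart rate").
    if arousal > 0.2:
        color = "red"
    elif arousal < -0.1:
        color = "blue"
    else:
        color = "yellow"
    # Motion encodes valence, the dimension participants tended to miss.
    motion = "bouncing" if valence >= 0 else "drooping"
    return color, motion
```

Under a mapping like this, recovering mood requires attending to both outputs at once; a viewer who tracks only the color recovers arousal but not whether it is positive or negative, mirroring the one-dimensional interpretations we observed.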

Limited context

Participants were able to express themselves by creating their own meanings for the animos; however, they did not always converge on those meanings, causing them to miss the sender’s intentions. Participants lacked a shared language in which to communicate with the animos.

6.1.3. Recommendations for Content and Presentation

Content Provider

To promote expressiveness, designers should consider a “mixed” content provider with tools for users to subjectively interpret their biosignals, rather than solely focus efforts towards a system that accurately infers information from biosignals. Users could identify with a message more if they have the ability to define what it means to them, and receivers could likewise recognize that it has meaning for them.

Presentation Abstraction

High abstraction in presentation can encourage playfulness and expressiveness, but can be difficult to understand. For lightweight communication, simplifying the presentation is important. We recommend minimizing the number of changes a user needs to attend to. Adding small context cues could help distinguish between different meanings. For instance, participants suggested including short captions, such as “hi” or “good morning” (P14), as well as using activity- and emotion-specific animos (P24) to distinguish between biosignals affected by physical or emotional stimuli. We also recommend exploring non-visual language. For example, a single tap could represent a “hi,” while a long press or double-tap could represent a “heavier” event that requires response, such as something stressful.
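A non-visual language like the one suggested above could be as simple as a lookup from tap gesture to message weight. The vocabulary below is a sketch under our own assumptions; the gesture names and meanings are illustrative and not features of Animo:

```python
# Sketch of a suggested tap-based haptic vocabulary. Gesture names and
# their assigned meanings are illustrative assumptions, not part of Animo.
TAP_VOCAB = {
    "single_tap": {"meaning": "hi", "expects_response": False},
    "long_press": {"meaning": "heavy event", "expects_response": True},
    "double_tap": {"meaning": "heavy event", "expects_response": True},
}

def decode_tap(gesture):
    """Return the message weight for a received tap gesture."""
    return TAP_VOCAB.get(gesture, {"meaning": "unknown", "expects_response": False})
```

The point of such a vocabulary is that a lightweight “hi” and a “heavier” event that requests a response can be distinguished by touch alone, without any on-screen content.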

Presentation Granularity

An animo represents the mood of an individual; thus, participants needed to convey how their animo reflected their mood to their partner. To encourage the development of meanings shared between partners, we recommend exploring communication-based presentation granularity (unlike (Buschek et al., 2018), we use “communication” instead of “conversation,” since conversation may not necessarily occur through the smartwatch), to represent how communication develops between two users. For instance, participants suggested exploring playful ways to highlight that users are connected through the system and “convey a sense of togetherness” (P9). This could be through animos interacting with each other (e.g., dancing) to show how users’ moods could interact. Other examples could include enhancing messages according to communication frequency, or leaving behind “gifts” that serve as a reminder of communication that occurred.

6.2. Sharing: “Being There”

Animo used the smartwatch as a sharing channel, and allowed participants to directly share their heart rate with each other. This enabled a new way for people to connect by physically sharing information from their body. Our results suggest that this form of sharing enhanced social presence between participants.

6.2.1. Social Presence

Social presence has various definitions, but for our purposes, we adopt Biocca and colleagues’ succinct definition: “the sense of being with another” (Biocca et al., 2003). Factors such as a lack of immediacy and intimacy can reduce social presence when people communicate through mediated channels (Gunawardena and Zittle, 1997; Gunawardena, 1995). Promoting social presence should promote feelings of connectedness (IJsselsteijn et al., 2003).

Past research shows that even with minimal communication content, people can feel as if the other person is present (Kaye et al., 2005; Chang et al., 2001). Participants felt that their partners were “there” on their wrist, especially when they were not physically together. This stemmed from several factors:

Mediated touch

The smartwatch is worn on the wrist. When a user receives an animo, they can physically feel it through haptic feedback. Participants mentioned using Animo as a “poke”; haptic feedback can make them feel like they are being poked. Our results suggest the haptic features of the smartwatch in communication can act as a “mediated social touch,” which can increase social presence (Van Erp and Toet, 2015; Beelen et al., 2013).


Immediacy

Immediacy is already valued in smartwatches: users enjoy notifications for their ability to quickly check incoming calls and messages (Jeong et al., 2017). Animo leverages this immediacy for two-way communication: participants found that their partners were more immediately accessible to them. They could easily initiate or respond to communication by tapping on the watch to send or receive an animo, without reaching for their phone or computer, or typing a message out. Animo’s form factor and simple design made it “effortless” (P16) to use, helping participants communicate more frequently, even during busy times.


Intimacy

In addition to the affordances of the smartwatch, Animo’s sharing design affords intimate communication between close ties, such as significant others and close friends. Specifically, Animo is one-to-one. As P11 mentioned, this makes communicating feel “more personal” because only one other person can send animos; they act like a “hidden message” (P27). Moreover, the message sent is “mood” derived from heart rate, personal and private information that can increase feelings of intimacy and vulnerability when shared (Janssen et al., 2010; Liu et al., 2017b; Slovák et al., 2012). Similar to prior work (Hassib et al., 2017; Liu et al., 2017a), participants noted they would prefer to share animos with their closest contacts. P25 mentioned that she had only known her partner for a few months, and would have liked to participate with her best friend instead.

6.2.2. Recommendations for Sharing

We suggest a new design dimension in biosignals smartwatch communication: sharing receiver. Animo allowed only one receiver, which created an intimate communication experience preferred by close partners. For an experience geared towards relationships with varying degrees of closeness, designers might consider multiple receivers for animos, which would allow senders to keep in touch with several people at once.

6.3. Response: Triggering Conversation

Similar to prior work, sending and receiving animos helped participants feel more aware of and connected with their partners (Liu et al., 2017b; Hassib et al., 2017). This supports research suggesting that keeping in touch with a partner’s activities can facilitate feelings of connectedness, especially when remote (Lottridge et al., 2009; Bales et al., 2011). At the same time, since Animo was so minimal, it was unable to go beyond these short status updates to support full conversations between partners. Instead, participants used the animos they saw as opportunities to converse with each other through other channels, such as through text when they received animos or face-to-face at later times when they physically met.

Our results suggest that systems like Animo are best used as conversation starters, rather than platforms for conversation. For a seamless communication experience, future work should consider how these systems can smoothly transition from a lightweight conversation starter to heavier conversation. In particular, we suggest exploring how receivers can better respond to biosignals messages to inform or initiate future conversations.

6.3.1. Recommendations for Response

We recommend response as a new design dimension with two sub-dimensions: response richness and response channel. The former refers to determining situations in which simple or rich responses are appropriate. Animo only enabled simple responses, where participants could send animos back to acknowledge they received their partner’s animo. Richer and more diverse responses in-app could better clarify the intent behind the received animo, and guide follow-up conversation (e.g., if a receiver recognizes a sender’s animo as an expression of stress, they can provide support in a phone call). For example, P18 suggested enabling short-form replies:

“Like, if I received an Animo and I could just respond with, like, three or four questions without having to type anything, but it’d just be like, ‘Sad, smiley, question mark.’” —P18

Response channel refers to the channel through which users can respond to a biosignals message. Animo used a single channel, where participants responded to animos within the app on the smartwatch. Another option is to link the app to other communication channels, such as an in-app call button to link to the phone, or pinned animos that users can send in text conversations as a reference. Having multiple linked communication channels could support smoother entry into full conversations from lightweight systems like Animo.

7. Limitations

Our research provides valuable insights for using smartwatches as a communication platform with biosignals as content. However, our work has some limitations.

We focused on understanding how people would use a mood-sharing smartwatch app and the opportunities and challenges that would arise. Therefore, we chose to create Animo as a minimal viable prototype, putting our efforts towards the design of the system rather than incorporating accurate mood detection. Future work should investigate how to incorporate enhanced mood detection. For example, in our work, we represented mood based primarily on physiological arousal, but did not determine whether that mood was positive or negative. One way to incorporate valence is to provide options for users to manually select the valence of their mood representation. Providing manual valence options could affect how people understand and find meaning in their biosignals, and subsequently reveal different communication patterns.

Additionally, the demographics of our sample may have biased our results. While the average age of our participants aligns with those who are more likely to own smartwatches, our sample does not include younger adults or children (e.g., 18 and below). Participants were also self-selected, and more than half of them worked at technology companies (and the rest had a partner who did). They likely have more familiarity with technology and may have shown a greater interest in wearables and biosignals than the general public. Our results may not generalize to younger people or people with less familiarity with technology, sensors, or smartwatches.

8. Conclusion

We present Animo, a smartwatch app that enables people to communicate their heart rate, in the form of a colored animated shape, to another person. We detail the design and implementation of Animo, as well as findings from a two-week field study with 17 dyads. Participants connected more through Animo, and became more aware of their partner’s presence and state. Participants communicated in new ways with each other, using Animo both as a lightweight “poke” and a conversation starter. At the same time, Animo revealed design tensions in sensor-based smartwatch communication apps. While participants enjoyed its simplicity, they had to compensate for the lack of context in follow-up communication. They still desired fuller communication with more information and interaction capabilities. We outline a unique design space for smartwatch-based biosignal communication systems, and provide insights on designing for lightweight social connection within this space, including suggestions for enhancing expressiveness, interpretability, and social presence.


  • Amiri et al. (2017) Amir Mohammad Amiri, Nicholas Peltier, Cody Goldberg, Yan Sun, Anoo Nathan, Shivayogi V Hiremath, and Kunal Mankodiya. 2017. WearSense: Detecting Autism Stereotypic Behaviors through Smartwatches. Healthcare 5, 1 (feb 2017), 11.
  • Anderson et al. (2013) Keith Anderson, Elisabeth André, Tobias Baur, Sara Bernardini, Mathieu Chollet, Evi Chryssafidou, Ionut Damian, Cathy Ennis, Arjan Egges, Patrick Gebhard, et al. 2013. The TARDIS framework: intelligent virtual agents for social coaching in job interviews. In Advances in Computer Entertainment. Springer, Netherlands, 476–491.
  • Bales et al. (2011) Elizabeth Bales, Kevin A Li, and William Griswold. 2011. CoupleVIBE: mobile implicit communication to improve awareness for (long-distance) couples. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (CSCW ’11). ACM, Hangzhou, China, 65–74.
  • Beelen et al. (2013) Thomas Beelen, Robert Blaauboer, Noraly Bovenmars, Bob Loos, Lukas Zielonka, Robby Van Delden, Gijs Huisman, and Dennis Reidsma. 2013. The art of tug of war: investigating the influence of remote touch on social presence in a distributed rope pulling game. In Advances in Computer Entertainment. Springer, Cham, 246–257.
  • Biocca et al. (2003) Frank Biocca, Chad Harms, and Judee K Burgoon. 2003. Toward a more robust theory and measure of social presence: Review and suggested criteria. Presence: Teleoperators & virtual environments 12, 5 (2003), 456–480.
  • Birkett (2011) Melissa A Birkett. 2011. The Trier Social Stress Test protocol for inducing psychological stress.
  • Burke and Kraut (2014) Moira Burke and Robert E Kraut. 2014. Growing closer on facebook: changes in tie strength through social network site use. In Proceedings of the SIGCHI conference on human factors in computing systems (CHI ’14). ACM Press, Toronto, Ontario, Canada, 4187–4196.
  • Buschek et al. (2018) Daniel Buschek, Mariam Hassib, and Florian Alt. 2018. Personal Mobile Messaging in Context: Chat Augmentations for Expressiveness and Awareness. ACM Transactions on Computer-Human Interaction (TOCHI) 25, 4 (2018), 23.
  • Capodieci et al. (2018) Antonio Capodieci, Pascal Budner, Joscha Eirich, Peter Gloor, and Luca Mainetti. 2018. Dynamically Adapting the Environment for Elderly People Through Smartwatch-Based Mood Detection. In Collaborative Innovation Networks: Building Adaptive and Resilient Organizations. Springer International Publishing, Cham, 65–73.
  • Chang et al. (2001) Angela Chang, Ben Resner, Brad Koerner, XingChen Wang, and Hiroshi Ishii. 2001. LumiTouch: an emotional communication device. In CHI’01 extended abstracts on Human factors in computing systems. ACM Press, Seattle, Washington, 313–314.
  • Curmi et al. (2013) Franco Curmi, Maria Angela Ferrario, Jen Southern, and Jon Whittle. 2013. HeartLink: open broadcast of live biometric data to social networks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Paris, France, 1749–1758.
  • Derks et al. (2008) Daantje Derks, Agneta H Fischer, and Arjan ER Bos. 2008. The role of emotion in computer-mediated communication: A review. Computers in Human Behavior 24, 3 (2008), 766–785.
  • Dunlop et al. (2014) Mark D Dunlop, Andreas Komninos, and Naveen Durga. 2014. Towards high quality text entry on smartwatches. In CHI’14 Extended Abstracts on Human Factors in Computing Systems. ACM, Toronto, Ontario, Canada, 2365–2370.
  • Epstein et al. (2015) Daniel A Epstein, Bradley H Jacobson, Elizabeth Bales, David W McDonald, and Sean A Munson. 2015. From nobody cares to way to go!: A Design Framework for Social Sharing in Personal Informatics. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. ACM, Vancouver, BC, Canada, 1622–1636.
  • Fitbit (2018) Fitbit, Inc. 2018. Fitbit Community Grows to More Than 25 Million Active Users in 2017.
  • Frey et al. (2018) Jérémy Frey, May Grabli, Ronit Slyper, and Jessica R Cauchard. 2018. Breeze: Sharing Biofeedback Through Wearable Technologies. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal QC, Canada, 645.
  • Gong et al. (2018) Jun Gong, Zheer Xu, Qifan Guo, Teddy Seyed, Xiang ’Anthony’ Chen, Xiaojun Bi, and Xing-Dong Yang. 2018. WrisText: One-handed Text Entry on Smartwatch Using Wrist Gestures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 181, 14 pages.
  • Griffiths et al. (2014) Erin Griffiths, T Scott Saponas, and AJ Brush. 2014. Health chair: implicitly sensing heart and respiratory rate. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, Seattle, Washington, 661–671.
  • Gunawardena (1995) Charlotte N Gunawardena. 1995. Social presence theory and implications for interaction and collaborative learning in computer conferences. International journal of educational telecommunications 1, 2 (1995), 147–166.
  • Gunawardena and Zittle (1997) Charlotte N Gunawardena and Frank J Zittle. 1997. Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. American journal of distance education 11, 3 (1997), 8–26.
  • Hassib et al. (2017) Mariam Hassib, Daniel Buschek, Paweł W Wozniak, and Florian Alt. 2017. HeartChat: Heart Rate Augmented Mobile Chat to Support Empathy and Awareness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, Denver, Colorado, USA, 2239–2251.
  • Howell et al. (2018a) Noura Howell, John Chuang, Abigail De Kosnik, Greg Niemeyer, and Kimiko Ryokai. 2018a. Emotional Biosensing: Exploring Critical Alternatives. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 69.
  • Howell et al. (2016) Noura Howell, Laura Devendorf, Rundong Kevin Tian, Tomás Vega Galvez, Nan-Wei Gong, Ivan Poupyrev, Eric Paulos, and Kimiko Ryokai. 2016. Biosignals as Social Cues: Ambiguity and Emotional Interpretation in Social Displays of Skin Conductance. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, Brisbane, QLD, Australia, 865–870.
  • Howell et al. (2018b) Noura Howell, Laura Devendorf, Tomás Alfonso Vega Gálvez, Rundong Tian, and Kimiko Ryokai. 2018b. Tensions of Data-Driven Reflection: A Case Study of Real-Time Emotional Biosensing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 431, 13 pages.
  • Hsieh and Tseng (2017) Sara H Hsieh and Timmy H Tseng. 2017. Playfulness in mobile instant messaging: Examining the influence of emoticons and text messaging on social interaction. Computers in Human Behavior 69 (2017), 405–414.
  • IJsselsteijn et al. (2003) Wijnand IJsselsteijn, Joy van Baren, and Froukje van Lanen. 2003. Staying in touch: Social presence and connectedness through synchronous and asynchronous communication media. Human-Computer Interaction: Theory and Practice (Part II) 2, 924 (2003), e928.
  • Janssen et al. (2010) Joris H Janssen, Jeremy N Bailenson, Wijnand A IJsselsteijn, and Joyce HDM Westerink. 2010. Intimate Heartbeats: Opportunities for Affective Communication Technology. IEEE Transactions on Affective Computing 1, 2 (jul 2010), 72–80.
  • Jeong et al. (2017) Hayeon Jeong, Heepyung Kim, Rihun Kim, Uichin Lee, and Yong Jeong. 2017. Smartwatch wearing behavior analysis: a longitudinal study. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1, 3 (2017), 60.
  • Kaya and Epps (2004) Naz Kaya and Helen H Epps. 2004. Relationship between Color and Emotion: A Study of College Students. College Student Journal 38, 3 (2004), 396.
  • Kaye et al. (2005) Joseph 'Jofish' Kaye, Mariah K Levitt, Jeffrey Nevins, Jessica Golden, and Vanessa Schmidt. 2005. Communicating intimacy one bit at a time. In CHI'05 Extended Abstracts on Human Factors in Computing Systems. ACM, Portland, OR, USA, 1529–1532.
  • Kim et al. (2016) Minhwan Kim, Kyungah Choi, and Hyeon-Jeong Suk. 2016. Yo!: Enriching emotional quality of single-button messengers through kinetic typography. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, Brisbane, QLD, Australia, 276–280.
  • Konstantinidis et al. (2009) Evdokimos I Konstantinidis, Magda Hitoglou-Antoniadou, Andrej Luneski, Panagiotis D Bamidis, and Maria M Nikolaidou. 2009. Using affective avatars and rich multimedia content for education of children with autism. In Proceedings of the 2nd International Conference on PErvasive Technologies Related to Assistive Environments. ACM, Corfu, Greece, 58.
  • Kreibig (2010) Sylvia D Kreibig. 2010. Autonomic nervous system activity in emotion: A review. Biological Psychology 84, 3 (2010), 394–421.
  • Kuber and Wright (2013) Ravi Kuber and Franklin P Wright. 2013. Augmenting the instant messaging experience through the use of brain–computer interface and gestural technologies. International Journal of Human-Computer Interaction 29, 3 (2013), 178–191.
  • Lee et al. (2006) Joonhwan Lee, Soojin Jun, Jodi Forlizzi, and Scott E Hudson. 2006. Using kinetic typography to convey emotion in text-based interpersonal communication. In Proceedings of the 6th conference on Designing Interactive systems. ACM, University Park, PA, USA, 41–49.
  • Lee et al. (2014) Myungho Lee, Kangsoo Kim, Hyunghwan Rho, and Si Jung Kim. 2014. Empa talk: a physiological data incorporated human-computer interactions. In CHI’14 Extended Abstracts on Human Factors in Computing Systems. ACM, Toronto, Ontario, Canada, 1897–1902.
  • Liu et al. (2017a) Fannie Liu, Laura Dabbish, and Geoff Kaufman. 2017a. Can Biosignals be Expressive? How Visualizations Affect Impression Formation from Shared Brain Activity. Proceedings of the ACM on Human-Computer Interaction 1, CSCW (2017), 71:1–71:21.
  • Liu et al. (2017b) Fannie Liu, Laura Dabbish, and Geoff Kaufman. 2017b. Supporting Social Interactions with an Expressive Heart Rate Sharing Application. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 1, 3 (2017), 77:1–77:26.
  • Lottridge et al. (2009) Danielle Lottridge, Nicolas Masson, and Wendy Mackay. 2009. Sharing empty moments: design for remote couples. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, Boston, MA, USA, 2329–2338.
  • Lu et al. (2016) Xuan Lu, Wei Ai, Xuanzhe Liu, Qian Li, Ning Wang, Gang Huang, and Qiaozhu Mei. 2016. Learning from the ubiquitous language: an empirical analysis of emoji usage of smartphone users. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, Heidelberg, Germany, 770–780.
  • Massa et al. (2015) Paolo Massa, Chiara Leonardi, Bruno Lepri, Fabio Pianesi, and Massimo Zancanaro. 2015. If you are happy and you know it, say "I'm here": investigating parents' location-sharing preferences. In Human-Computer Interaction. Springer, Bamberg, Germany, 315–332.
  • Miller et al. (2016) Hannah Jean Miller, Jacob Thebault-Spieker, Shuo Chang, Isaac Johnson, Loren Terveen, and Brent Hecht. 2016. ”Blissfully Happy” or ”Ready to Fight”: Varying Interpretations of Emoji. In Proceedings of the 10th International Conference on Web and Social Media. AAAI press, Cologne, Germany, 259–268.
  • Ohly et al. (2010) Sandra Ohly, Sabine Sonnentag, Cornelia Niessen, and Dieter Zapf. 2010. Diary studies in organizational research. Journal of Personnel Psychology 9, 2 (jan 2010), 79–93.
  • Oney et al. (2013) Stephen Oney, Chris Harrison, Amy Ogan, and Jason Wiese. 2013. ZoomBoard: a diminutive qwerty soft keyboard using iterative zooming for ultra-small devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Paris, France, 2799–2802.
  • Palladino (2018) Valentina Palladino. 2018. The $199 Fitbit Versa is the company’s new “mass-appeal” smartwatch.
  • Pancardo et al. (2016) Pablo Pancardo, JA Hernández-Nolasco, Francisco D Acosta, and Miguel A Wister. 2016. Personalizing Physical Effort Estimation in Workplaces Using a Wearable Heart Rate Sensor. In Ubiquitous Computing and Ambient Intelligence: 10th International Conference, UCAmI 2016, San Bartolomé de Tirajana, Gran Canaria, Spain, November 29–December 2, 2016, Part II 10. Springer, Gran Canaria, Spain, 111–122.
  • Pizza et al. (2016) Stefania Pizza, Barry Brown, Donald McMillan, and Airi Lampinen. 2016. Smartwatch in vivo. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, San Jose, California, USA, 5456–5469.
  • Posner et al. (2005) Jonathan Posner, James A Russell, and Bradley S Peterson. 2005. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and psychopathology 17, 3 (2005), 715–734.
  • Robinson et al. (2016) Raquel Robinson, Katherine Isbister, and Zachary Rubin. 2016. All the Feels: Introducing Biometric Data to Online Gameplay Streams. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. ACM, Austin, Texas, USA, 261–267.
  • Rooksby et al. (2014) John Rooksby, Mattias Rost, Alistair Morrison, and Matthew Chalmers. 2014. Personal tracking as lived informatics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Toronto, Ontario, Canada, 1163–1172.
  • Rubin et al. (2015) Jonathan Rubin, Hoda Eldardiry, Rui Abreu, Shane Ahern, Honglu Du, Ashish Pattekar, and Daniel G Bobrow. 2015. Towards a mobile and wearable system for predicting panic attacks. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, Osaka, Japan, 529–533.
  • Scissors et al. (2016) Lauren Scissors, Moira Burke, and Steven Wengrovitz. 2016. What’s in a Like?: Attitudes and behaviors around receiving Likes on Facebook. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. ACM, California, USA, 1501–1510.
  • Slovák et al. (2012) Petr Slovák, Joris Janssen, and Geraldine Fitzpatrick. 2012. Understanding heart rate sharing: towards unpacking physiosocial space. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Austin, Texas, USA, 859–868.
  • Smith et al. (2005) Ian Smith, Sunny Consolvo, Anthony Lamarca, Jeffrey Hightower, James Scott, Timothy Sohn, Jeff Hughes, Giovanni Iachello, and Gregory D Abowd. 2005. Social disclosure of place: From location technology to communication practices. In Proceedings of the 3rd International Conference on Pervasive Computing. Springer, Munich, Germany, 134–151.
  • Statista (2018) Statista. 2018. US Wearable Penetration by Age.
  • Tan et al. (2014) Chiew Seng Sean Tan, Johannes Schöning, Kris Luyten, and Karin Coninx. 2014. Investigating the effects of using biofeedback as visual stress indicator during video-mediated collaboration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Toronto, Ontario, Canada, 71–80.
  • Tang et al. (2010) Karen P Tang, Jialiu Lin, Jason I Hong, Daniel P Siewiorek, and Norman Sadeh. 2010. Rethinking location sharing: exploring the implications of social-driven vs. purpose-driven location sharing. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing. ACM, Copenhagen, Denmark, 85–94.
  • Vaizman et al. (2017) Yonatan Vaizman, Katherine Ellis, and Gert Lanckriet. 2017. Recognizing detailed human context in the wild from smartphones and smartwatches. IEEE Pervasive Computing 16, 4 (2017), 62–74.
  • Van Erp and Toet (2015) Jan BF Van Erp and Alexander Toet. 2015. Social touch in human–computer interaction. Frontiers in digital humanities 2 (2015), 2.
  • Walmink et al. (2014) Wouter Walmink, Danielle Wilde, and Florian 'Floyd' Mueller. 2014. Displaying heart rate data on a bicycle helmet to support social exertion experiences. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction. ACM, Munich, Germany, 97–104.
  • Wang et al. (2004) Hua Wang, Helmut Prendinger, and Takeo Igarashi. 2004. Communicating emotions in online chat using physiological sensors and animated text. In CHI'04 Extended Abstracts on Human Factors in Computing Systems. ACM, Vienna, Austria, 1171–1174.
  • Wiseman and Gould (2018) Sarah Wiseman and Sandy JJ Gould. 2018. Repurposing Emoji for Personalised Communication: Why 🍕 means "I love you". In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montreal QC, Canada, 152.
  • Wong et al. (2018) Pui Chung Wong, Kening Zhu, and Hongbo Fu. 2018. FingerT9: Leveraging Thumb-to-finger Interaction for Same-side-hand Text Entry on Smartwatches. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 178, 10 pages.
  • Xu et al. (2017) Bin Xu, Yang Qin, and Dan Cosley. 2017. De-emphasizing Content to Study the Relationship between Meaning, Messages, and Content in IM Systems. In Proceedings of the 2017 Conference on Designing Interactive Systems. ACM, Edinburgh, United Kingdom, 599–610.
  • Xu et al. (2015) Chao Xu, Parth H Pathak, and Prasant Mohapatra. 2015. Finger-writing with smartwatch: A case for finger and hand gesture recognition using smartwatch. In Proceedings of the 16th International Workshop on Mobile Computing Systems and Applications. ACM, Santa Fe, New Mexico, USA, 9–14.
  • Zhou et al. (2017) Rui Zhou, Jasmine Hentschel, and Neha Kumar. 2017. Goodbye text, hello emoji: mobile communication on wechat in China. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, Denver, Colorado, USA, 748–759.