How the Design of YouTube Influences User Sense of Agency

In the attention economy, video apps employ design mechanisms like autoplay that exploit psychological vulnerabilities to maximize watch time. Consequently, many people feel a lack of agency over their app use, which is linked to negative life effects such as loss of sleep. Prior design research has innovated external mechanisms that police multiple apps, such as lockout timers. In this work, we shift the focus to how the internal mechanisms of an app can support user agency, taking the popular YouTube mobile app as a test case. From a survey of 120 U.S. users, we find that autoplay and recommendations primarily undermine sense of agency, while search and playlists support it. From 13 co-design sessions, we find that when users have a specific intention for how they want to use YouTube, they prefer interfaces that support greater agency. We discuss implications for how designers can help users reclaim a sense of agency over their media use.

1. Introduction

At Netflix, we are competing for our customers’ time, so our competitors include Snapchat, YouTube, sleep, etc.
- Reed Hastings, Netflix CEO (Williams, 2018, p.50)

In the attention economy, social media apps employ a variety of design mechanisms–such as eye-catching notification icons, tempting clickbait, and never-ending autoplay–to maximize their share of the user’s time. In this pursuit, designers and tech industry insiders warn that many of these mechanisms exploit psychological vulnerabilities and harm the interests of the user (Lewis, 2017; Burr et al., 2018).

It is no accident then that social media use is often associated with a loss of sense of agency (Baumer et al., 2018). People self-report that their desire to consume media frequently conflicts with their plans or goals and that they fail to resist about three-quarters of the time (Delaney and Lades, 2017). And loss of control is a key component of many measures of problematic technology use (Cheng et al., 2019).

In response, digital wellbeing researchers have innovated what we term external mechanisms that help users manage or monitor their app use, such as lockout timers (Kim et al., 2019) and productivity dashboards (Kim et al., 2016). While these mechanisms apply universally to many different apps, they do not change the internal mechanisms within an app, such as autoplay, that might lead it to be problematic in the first place.

One promising approach is to redesign these mechanisms for a greater sense of agency, i.e., an individual’s experience of being the initiator of their actions in the world (Synofzik et al., 2008). Low sense of agency over technology use is associated with negative life impacts such as a loss of social opportunities, productivity, and sleep (Caplan, 2010) that often motivate digital wellbeing efforts to begin with. Moreover, a lack of sense of agency itself can be understood as a driver of the dissatisfaction that people often feel with their social media use (Marino et al., 2018).

In this work, we take the mobile app for YouTube, the most widely used social media service in the United States (Perrin and Anderson, 2019), as a test case to understand and redesign how internal mechanisms influence sense of agency. The design of YouTube must balance the interests of many different stakeholders. For example, policymakers may wish to exert control over extremist content. Advertisers may wish to control how much time users spend on ads. Designers may wish to control how much time users spend in the app. Content creators may wish to control how much time users spend on their channel. All of these stakeholders merit consideration, however, in this work we focus specifically on users and how design influences the control they feel over the time they spend in the mobile app.

We investigate two research questions in two studies that build upon each other:

  • RQ1: What existing mechanisms in the YouTube mobile app influence sense of agency?

    In a survey, we asked 120 YouTube users which mechanisms make them feel most and least in control of how they spend their time in the YouTube mobile app.

  • RQ2: What changes to these mechanisms might increase sense of agency?

    Based on the responses to the survey, we redesigned four internal mechanisms to change user sense of agency in the YouTube app: recommendations, playlists, search, and autoplay. In co-design sessions, we then asked 13 YouTube users to sketch changes of their own and evaluate our mockups. We also asked how much control they would prefer to have in different situations.

The two contributions of this work are:

  1. We identify the internal design mechanisms that influence users’ sense of agency over how they spend time in the YouTube mobile app and how they might be changed. While some of these mechanisms are expected (e.g., autoplay), others are less so (e.g., playlists) and suggest promising directions for digital wellbeing (e.g., designing to support ‘microplans’ that guide behavior within a single session of use).

  2. We distinguish when designing for a sense of agency is desirable from when it might actually go against what users want. Participants in our co-design sessions preferred greater control when they had a specific intention for using the app (e.g., to cook a recipe) than when they had a non-specific intention (e.g., to relax), in which case they wanted to let the app take control. We propose ways for designers to navigate this mixed preference for different levels of control at different times.

2. Background and Motivation

2.1. Designing to Undermine Sense of Agency

Design practitioners have raised concerns about dark patterns, interfaces that are designed to manipulate a user into behavior that goes against their best interests (Gray et al., 2018; Lukoff et al., 2021). Brignull’s original types of dark patterns focus on financial and privacy harms to the user (Brignull and Darlington). However, given that people routinely report using technology in ways that are a waste of their time and that they later regret (Ko et al., 2015; Hiniker et al., 2016; Lukoff et al., 2018; Ames, 2013), there is a need for research to examine which design patterns prompt such attentional harms for the user. We might term these attention capture dark patterns: designs that manipulate the user into spending time and attention in an app against their best interests.

Tech industry insiders, like the ex-President of Facebook, warn that social media apps are especially likely to innovate and employ design patterns that “consume as much of your time and conscious attention as possible” (Pandey, 2017). For social games, one such proposed pattern is playing by appointment, wherein a player must return to play on a schedule defined by the game, or else lose their precious resources (Zagal et al., 2013). For social media, a common suggestion in popular self-help guides is to take back control by turning off notifications (Kamenetz, 2018). However, it is not yet established that these mechanisms are the ones that lead users to feel a loss of control. For example, some users report that notifications actually reduce their checking habits, since they know they will be alerted when their desired content is ready (Oulasvirta et al., 2012).

YouTube is an important case for better understanding the design mechanisms of attention capture. YouTube has over two billion monthly users worldwide (YouTube) and is extremely popular in the U.S., where about three-quarters of adults report using YouTube on their smartphone, with 32% using it several times a day, 19% about once per day, and 49% less often (Perrin and Anderson, 2019). It is also frequently reported as a source of distraction (Aagaard, 2015), suggesting that it is a good site for the investigation of attention capture dark patterns. In particular, YouTube’s algorithmic recommendations merit special consideration, as they drive more than 70% of watchtime (Solsman, 2018).

2.2. Designing to Support Sense of Agency

Reducing screentime in certain apps is a common measure of success in digital wellbeing tools. The two most popular mobile operating systems, Android and iOS, both come pre-installed with tools for the user to track and limit their time in mobile apps. Within the YouTube app itself, there are also features to manage time spent: ‘Time watched statistics,’ which shows how much time a user has spent on YouTube in each of the last 7 days, and the ‘Take a break reminder,’ which periodically prompts the user to take a rest. A strength of addressing digital wellbeing via such screentime tools is that time spent is easy to track and easy to understand.

However, a weakness of this approach is that reducing screentime is often a poor proxy for what users actually want. Instead, user intentions are often highly specific, such as wanting to reduce the time spent on targeted features of an app (e.g., on the Facebook newsfeed, but not in Facebook groups) or in certain contexts (e.g., when with family, but not when commuting on the bus) (Lukoff et al., 2018; Lyngs et al., 2020; Hiniker et al., 2016).

Within YouTube, there are two digital wellbeing features that do move beyond time spent controls and offer more granular control. The ‘Notifications digest’ lets a user bundle push notifications together into a single notification each day, which may reduce the triggers that lead to non-conscious, habitual use (Lyngs et al., 2018). ‘Autoplay toggle’ lets a user decide to stop the next video from playing automatically; this may preserve the natural stopping point that comes at the end of the video, a mechanism that has been shown to help users set more deliberate boundaries around use (Hiniker et al., 2018). While the notification digest and the autoplay toggle clearly do more than just track and limit time, it is not immediately clear by what measure of success they might be evaluated.

One promising alternative to the screentime paradigm is to design for sense of agency, the focus of this paper. Sense of agency is a construct that refers to an individual’s experience of being the initiator of their actions in the world (Synofzik et al., 2008). Sense of agency can be broken down into feelings of agency, that is, the in-the-moment perception of control, and judgments of agency, that is, the post hoc, explicit attribution of an action to the self or other (Synofzik et al., 2008). In the present paper, we focus on the latter, judgments of agency.

Sense of agency matters for digital wellbeing in at least three ways. First, supporting user control is a common principle in HCI design guidelines (Coyle et al., 2012; Nielsen, 1994; Shneiderman and Plaisant, 2004). Designing for an internal locus of control is one of Shneiderman and Plaisant’s Eight Golden Rules of Interface Design, arising from the observation that users want the sense that they are in charge of an interface and that the interface responds to their actions (Shneiderman and Plaisant, 2004). Second, a low sense of control over technology use predicts greater negative life effects, e.g., internet use leading to missed social activities (Caplan, 2010) and smartphone use leading to the loss of a career opportunity or significant relationship (Jeong et al., 2016). Scales of problematic technology use generally measure both (a) lack of control and (b) negative life impacts, suggesting that ‘the problem’ is a combination of these two factors (Cheng et al., 2019; Cash et al., 2012). Third, and perhaps most importantly, sense of agency matters in its own right. Feeling in control of one’s actions is integral to autonomy, one of the three basic human needs outlined in self-determination theory (Ryan and Deci, 2006). More specific to technology use, it is also central to user (dis)satisfaction with smartphones (Davis et al., 2019; Harmon and Mazmanian, 2013) and Facebook use (Cheng et al., 2019; Marino et al., 2018).

Prior work has investigated different ways that interfaces can support sense of agency. First, some input modalities seem to support a greater sense of agency than others (e.g., keyboard input versus voice commands) (Limerick et al., 2015). Second, a system’s feedback should match a user’s predicted feedback (Limerick et al., 2014). Third, a study of flight navigation systems found that increasing the level of automation reduced sense of agency (Berberian et al., 2012). These lessons might be revisited in the domain of digital wellbeing, as how an interface modulates sense of agency may vary with context (Limerick et al., 2014).

2.3. Design Mechanisms for Digital Wellbeing

The mechanisms of digital wellbeing interventions can be placed along a spectrum (see Figure 1). (We use the term mechanism to describe one component of a larger design, although some digital wellbeing designs do consist of a single mechanism.) At one end are external mechanisms that monitor or police apps, such as screentime statistics and lockout timers. A hallmark of an external mechanism is that it functions identically across multiple apps, as in a timer that locks the user out of social media, gaming, and video apps. However, external mechanisms do not significantly change the experience within individual apps.

Figure 1. Mechanisms that influence how people spend their time in apps can be placed along a spectrum, as in these examples. External mechanisms monitor or police apps, while internal mechanisms redesign or rebuild the experience within a problematic app. Internal mechanisms offer designers a more targeted way of supporting user agency.

At the other end of the spectrum, internal mechanisms contribute to the redesign or rebuild of an experience. For example, Focus Mode in Microsoft Word redesigns the writing process by hiding all formatting options (Baab-Muguira, 2017). Going a step further, the standalone app Flowstate not only offers a minimal interface, but also deletes all text on the page if the user stops writing for longer than seven seconds (Statt, 2016). Internal mechanisms fundamentally change the experience within a problematic app, or rebuild it into a new experience entirely.

At present, design researchers have innovated many tools on the external side of the spectrum, that monitor and police multiple apps in the same way (Kim et al., 2019, 2019; Collins et al., 2014; Monge Roffarello and De Russis, 2019; Okeke et al., 2018). Likewise, industry designers have built tools that apply the same time lockout mechanism to all apps, such as the screentime tools that come pre-installed on Android and iOS.

In contrast to external mechanisms, the space of internal mechanisms is relatively underexplored (see (Lottridge et al., 2012; Harambam et al., 2019) for notable exceptions), but holds particular promise for increasing user agency in two ways. First, designers can craft more targeted interventions with internal mechanisms than with external ones. External mechanisms, such as locking the user out of a device, often require sacrifices that users are reluctant to accept (Tran et al., 2019; Kim et al., 2019). Whereas an external mechanism might block the Facebook app after time is up, a more internal mechanism could reconfigure the newsfeed to show only content from close personal friends. A redesign of internal mechanisms may be able to remove problematic aspects from an app, while still retaining its benefits.

Second, internal mechanisms shift the focus from fighting distractions to aligning interests. External mechanisms often respond to the temptations of problematic apps with microboundaries (Cox et al., 2016) or restraints on interactions (Park et al., 2018). However, this sets up an arms race in which the designers of digital wellbeing tools are always in a defensive position. An alternative is for designers to reenvision the internal mechanisms that lead to compulsive use in the first place (Tran et al., 2019). Looking at the mechanisms inside of specific apps may encourage designers to not just block existing mechanisms but to innovate better ones, such as Flowstate’s seven-seconds rule for writing. This paper presents an examination of how such internal mechanisms can be redesigned to support sense of agency.

3. Study 1: Survey of 120 YouTube users

Study 1 examines how existing mechanisms in the YouTube mobile app support or undermine sense of agency (RQ1). We decided to start by investigating users’ experiences in the current app before proceeding to design and evaluate potential changes in Study 2 (RQ2). Both studies were approved by the University of Washington’s Institutional Review Board.

3.1. Participants

3.1.1. Recruitment.

To obtain a general sample of users of the YouTube mobile app, we recruited from Amazon Mechanical Turk workers in the United States. Participants were invited to “Help us understand how people spend their time on the YouTube mobile app.” They were required to meet four inclusion criteria:

  1. A task approval rating greater than 98% for their prior work on Mechanical Turk, indicating a history of high-quality responses.

  2. Own a smartphone. Three members of our research team tested the YouTube mobile app on both Android and iPhone and found that the app has nearly identical features and only minor stylistic differences, so we accepted users of both types of devices as participants (80 Android, 40 iPhone users).

  3. Spend a minimum of 3 hours on YouTube in the past week (across all devices), according to their time watched statistics in the YouTube app. In the survey, participants saw instructions with screenshots that showed where to find this statistic in the app, confirmed that they had found it, and then entered it into the survey. To see time watched statistics, users must be signed into the app.

  4. Of the time they spend on YouTube, 20% or more is on their smartphone (self-estimated).

3.1.2. Demographics.

A total of 120 participants met the inclusion criteria and completed the survey (see demographics in Table 1). We excluded responses from an additional 7 participants who started but did not complete the survey. We oversampled men, Asians, and young people relative to the 2019 estimates of the United States Census Bureau (United States Census Bureau, 2019). Other participant samples may use the YouTube mobile app differently, e.g., users in emerging countries for whom a smartphone is often their only device for watching videos (Silver et al., 2019). Further research is required to determine whether our results apply to other populations.

Gender identity Man (63%), Woman (36%), Non-binary (0%), Prefer not to say (1%)
Age range 18-24 (8%), 25-34 (41%), 35-44 (40%), 45-54 (11%), 55+ (1%)
Education High school (22%), Associate degree (22%), Bachelor’s degree (46%), Advanced degree (11%)
Household income (US) <$25K (14%), $25-50K (23%), $50-75K (30%), $75-125K (20%), >$125K (11%), Prefer not to say (2%)
Race (choose one or more) White (69%), Asian (17%), Black (9%), Hispanic/Latino (4%), Native American (2%)
Table 1. Demographics of the 120 survey participants

3.1.3. YouTube use.

Participants spent a median of 101 minutes per day (interquartile range: 57-156) on YouTube across all devices in the week prior to the survey. Of this time, participants estimated they spent a median of 50% (interquartile range: 30-75%) in the mobile app. For comparison, the YouTube press page states that mobile accounts for over 70% of watchtime (YouTube). Upon multiplying these two responses together for each participant, we found that participants spent an average of 70 minutes per day in the YouTube mobile app. This is similar to the average for all YouTube users: in 2017, YouTube shared that signed-in users spend an average of more than 60 minutes per day in the mobile app (Matney, 2017). We neglected to ask whether participants were using the paid YouTube Premium service, which removes ads and can play videos offline and in the background; however, Google reports that only 1% of YouTube’s monthly visitors subscribe to this service (Spangler).
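To make this calculation concrete, here is a minimal sketch (the participant values below are hypothetical placeholders, not our survey data; with the real responses, the mean came to roughly 70 minutes per day):

```python
# Minimal sketch of the mobile-minutes calculation (hypothetical values,
# not our survey data). Each participant reported total daily YouTube
# minutes and the share of that time spent in the mobile app; multiplying
# the two gives per-participant mobile minutes, which are then averaged.
responses = [(101, 0.50), (156, 0.30), (57, 0.75)]  # (minutes/day, mobile share)

mobile_minutes = [total * share for total, share in responses]
mean_mobile_minutes = sum(mobile_minutes) / len(mobile_minutes)
print(f"Mean minutes/day in the mobile app: {mean_mobile_minutes:.0f}")
```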

3.2. Procedure

Participants answered questions in an online survey. The initial questions asked about our four inclusion criteria. Eligible participants continued on to background questions about their demographics and YouTube use. The complete survey wording, along with all of the other appendices for this study, can be found at: https://osf.io/w3hmd

To investigate RQ1, one question table asked about things that made participants feel most in control of how they spend their time on YouTube (see Table 2). A second question table asked about things that made them feel least in control. The order of these two question tables was randomized. In terms of wording, we chose to ask about feeling “in control,” as this is how sense of agency has been measured in previous studies of sense of agency in HCI (e.g., (Metcalfe and Greene, 2007)) and on a self-report scale (Tapal et al., 2017). We used the informal term “things” because, in piloting the survey, we found that testers were unsure about whether certain things (e.g., recommendations and ads) counted as mechanisms of the app, and we did not want to provide examples that would bias responses. In total, each participant was required to submit 6 responses for things that influenced their sense of agency on YouTube (3 for most in control, 3 for least in control).

Thing Question: “What are 3 things about the mobile app that lead you to feel most in control over how you spend your time on YouTube?”
Explain Question: “How does this thing make you feel more in control of how you spend your time on YouTube?”

Thing 1: “I am able to quickly access my subscribed channels.” | Explanation: “I don’t spend uncontrolled amounts of time browsing through videos that may or may not be related to what I want to watch.”

Thing 2: “I am able to get notifications of certain channels or videos getting posted.” | Explanation: “I will know exactly when a new video goes up that I may be interested in watching. This way I am not randomly checking for uploads and spending extra time searching and browsing.”

Thing 3: “Screen/watch time.” | Explanation: “I can follow trends and tell when I am spending more time than usual on the app.”

Table 2. The wording and format of the “more in control” question in the survey. The example responses here come from a single study participant. All participants also completed a second version of this question table, with the text modified from “most” to “least” in the Thing Question and from “more” to “less” in the Explain Question.

Participants were compensated $6.00 for answering all questions, an amount that exceeds the U.S. minimum wage ($7.25 per hour). The survey took a median of 21 minutes to complete (interquartile range: 15-29).

3.3. Coding reliability thematic analysis

We conducted a coding reliability thematic analysis (Boyatzis, 1998; Braun et al., 2018), in which we first established reliable codes for design mechanisms and then used them to generate themes that captured shared meanings. We started by iteratively coding the 720 responses (6 per participant). Each thing was analyzed as a single response, combining answers to the Thing Question and the Explain Question (i.e., one row in Table 2). In our first pass, two researchers individually reviewed all responses and met to develop initial codes. At this stage, we eliminated 112 responses without any substantive content, e.g., “I can’t think of anything else.” Of the 112 responses without substance, 55 came from “less in control” and 57 from “more.”

We further limited coding to responses that specified a mechanism within the interface of the YouTube mobile app, i.e., something the app’s designers could directly change. This included responses such as “Recommended videos - Being shown recommended videos is like a moth to a light for me,” which was coded as ‘recommendations’. It excluded responses about situational factors that are largely outside of the control of the designer, such as “I make my own decisions - I am a conscious person who can make decisions on what I do.” This eliminated 141 more responses (59 from “less in control” and 82 from “more in control”). Interestingly, “more in control” included 28 responses that we coded as willpower, e.g., “I make my own decisions,” with only 1 such response for “less”. This suggests a potential self-serving bias (Forsyth, 2008) wherein in-control behavior is attributed to one’s own willpower whereas out-of-control behavior is attributed to external factors. The other responses that we removed were about characteristics of mobile phones (e.g., “The app is easy to access and tempt me on my phone…”) and usability issues (e.g., “it crashes on me every other day or so” and “it consumes a lot of battery life”) that are not specific to the interface of the YouTube mobile app. After excluding these responses, we continued with coding the 467 responses that referenced a specific design mechanism.
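A quick sanity check of the filtering arithmetic described above, using only the counts reported in this section:

```python
# Response-filtering arithmetic from the coding process described above.
total_responses = 120 * 6     # 120 participants x 6 responses each = 720
no_substance = 112            # e.g., "I can't think of anything else."
not_interface_specific = 141  # willpower, phone characteristics, usability
remaining = total_responses - no_substance - not_interface_specific
assert remaining == 467       # responses referencing a specific design mechanism
print(remaining)
```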

In our second pass, we applied the initial codes to 120 randomly selected responses and met to discuss. Since one mechanism (recommendations) came up more often than all others, we developed three subcodes for how recommendations affected participant experiences on YouTube. After merging similar codes, our codebook consisted of 21 design mechanisms, such as autoplay, playlists, and multiple device sync. In our third pass, we each independently coded the same 50 randomly selected responses. Interrater reliability was assessed using Cohen’s kappa, with κ = 0.73 indicating substantial agreement (Landis and Koch, 1977). In our fourth pass, we each coded half of the remaining responses, discussed the final counts, and selected several representative quotes for each code. The first author then wrote up a draft of the coding results and reviewed it together with the other authors. We mapped codes (design mechanisms) to potential themes, generating three higher-level themes that structured our final writeup. In our analysis and writeup, we noted cases where responses for an individual code were split with regards to a theme, e.g., ‘notifications’ sometimes supported and sometimes undermined ‘planning ahead’.
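As an illustration of this reliability check, the sketch below computes Cohen’s kappa with scikit-learn; the coder labels are invented placeholders, not our actual codes:

```python
# Sketch of the interrater reliability check (invented labels, not our data).
# Cohen's kappa measures agreement between two coders beyond chance; values
# of 0.61-0.80 are conventionally read as substantial agreement
# (Landis and Koch, 1977).
from sklearn.metrics import cohen_kappa_score

coder_a = ["autoplay", "playlists", "recommendations", "search", "autoplay"]
coder_b = ["autoplay", "playlists", "recommendations", "ads", "autoplay"]

print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```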

3.4. Results and Analysis

3.4.1. Design Mechanisms.

467 responses referenced a specific design mechanism (246 for less in control, 221 for more in control). Nine mechanisms were described as influencing sense of agency 15 or more times and are the focus of our analysis; together these nine covered 392 of the 467 responses (84%) that referenced a design mechanism. Mechanisms mentioned fewer than 15 times included content moderation (12 responses), playing videos in the background (12), syncing across multiple devices (9), comments (9), ratings (8), and YouTube’s ‘Take a break reminders’ (5). The 6 remaining mechanisms were mentioned fewer than 5 times each. Figure 2 provides a glanceable view of how many times each of these nine mechanisms was mentioned as leading to more or less control. Table 3 shows the same data with a description and example response for each mechanism. Appendix I contains annotated screenshots that show the exact implementation of these nine mechanisms in the YouTube mobile app as they appeared when participants provided their feedback.

In summary, recommendations were the most frequently mentioned mechanism, accounting for 27% of all responses. Recommendations, ads, and autoplay primarily made respondents feel less in control. Playlists, search, subscriptions, play controls, and watch history & stats primarily made respondents feel more in control. Notifications were divided, with about half of responses in each direction.

Figure 2. This diverging bar chart shows how many times these nine design mechanisms led participants to feel more control or less control. Recommendations, ads, and autoplay primarily made respondents feel less in control. Playlists, search, subscriptions, play controls, and watch history stats primarily made respondents feel more in control. Notifications were sometimes mentioned as leading to more control and sometimes to less.
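A chart like Figure 2 can be approximated from the data in Table 3 below; the sketch derives per-direction counts from each mechanism’s total responses and its “less in control” percentage (rounding makes the counts approximate):

```python
# Approximate reconstruction of Figure 2 from the counts in Table 3.
import matplotlib.pyplot as plt

mechanisms = {  # name: (total responses, fraction "less in control")
    "Recommendations": (128, 0.77), "Ads": (55, 0.98), "Playlists": (39, 0.00),
    "Search": (36, 0.33), "Subscriptions": (35, 0.00), "Autoplay": (32, 0.87),
    "Watch history & stats": (28, 0.07), "Play controls": (24, 0.12),
    "Notifications": (15, 0.53),
}
names = list(mechanisms)
less = [-round(n * f) for n, f in mechanisms.values()]       # plotted leftward
more = [round(n * (1 - f)) for n, f in mechanisms.values()]  # plotted rightward

fig, ax = plt.subplots()
ax.barh(names, less, color="tab:red", label="Less in control")
ax.barh(names, more, color="tab:blue", label="More in control")
ax.axvline(0, color="black", linewidth=0.8)
ax.invert_yaxis()  # most-mentioned mechanism on top
ax.set_xlabel("Number of responses")
ax.legend()
plt.tight_layout()
plt.show()
```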
Design mechanism | Description | Count of responses | “Less in control” (% of responses) | Representative quote(s)

Recommendations (see 3 subcodes below) | Recommended videos on the home, explore, & video player screens. | 128 | 77% | See the 3 subcode rows below.

/ Irrelevant recommendations | Repetitive, dull, or generic recommendations that the user is not interested in. | 42 (of 128) | 100% | “The related videos are sometimes videos I’ve seen before, over and over.”

/ Relevant recommendations | Engaging or catchy recommendations that the user is interested in. | 45 (of 128) | 53% | “YouTube has very good algorithms that know what I like, when I want it.” vs. “I have a hard time not looking at the suggested videos that the algorithm picks for me… I almost always justify watching just one more video.”

/ Customization settings | Settings to customize location, quantity, or content of recommendations. | 41 (of 128) | 81% | “Not having control over the trending list. I feel like I’m force-fed content.”

Ads | Ads that appear before, during, and after videos in the player. | 55 | 98% | “I feel as if I am forced to watch ads, which can suck up a lot of time.”

Playlists (includes Watch Later) | Creating, saving, and playing a list of videos. Watch Later is a default playlist for all users. Playlists autoplay all videos on the list. | 39 | 0% | “I can create playlists or queue videos in advance to limit what I watch to a specific list instead of endlessly searching around for what I want.”

Search | Searching for videos. | 36 | 33% | “Very efficient and relevant searches.” vs. “Countless videos have nothing to do with my latest search request.”

Subscriptions | Follow specific video creators. | 35 | 0% | “I can choose the content creators I want to follow so that I can limit my time to specific creators I enjoy the most.”

Autoplay | Automatically plays a new video after the current one. Can be toggled on/off. | 32 | 87% | “I feel like I have little control whenever YouTube takes it upon itself to just play whatever it feels like playing.”

Watch history & stats | A chronological record of videos watched and time watched stats in YouTube. | 28 | 7% | “I am able to view EVERYTHING I do in the app. I can keep an eye if I need to change behavior, what type of videos I watch, everything.”

Play controls | Controls to play/pause, seek forward/back, etc. | 24 | 12% | “I can start, pause and stop content streaming easily, at any time.”

Notifications | System and in-app alerts with new subscription content, recommendations, etc. | 15 | 53% | “If I especially like a channel I can know about everything they upload as soon as they do.” vs. “Notifications draw me to YouTube and create my schedule for 20-30 minutes. This creates an addiction.”

Table 3. Nine design mechanisms that were mentioned 15 or more times in response to the survey question: “What are 3 things about the mobile app that lead you to feel [most | least] in control over how you spend your time on YouTube?” Design mechanisms are listed in order of frequency of mention. The most frequently mentioned mechanism, recommendations, is shown with 3 subcodes. The representative quote(s) column shows one typical response for each design mechanism; both a “more in control” and a “less in control” quote are shown if the minority opinion on the direction of control exceeded 20% of total responses.

How Existing Mechanisms Influence Sense of Agency 
The design mechanisms we identified in the YouTube mobile app informed three higher-level themes. First, users experience actions in the app along a spectrum of consent. Second, mechanisms for planning ahead help them feel more in control. Third, the accuracy of YouTube algorithms has mixed consequences for control. The writeup for each theme draws upon examples from our coding of the design mechanisms.

3.4.2. The spectrum of consent.

Participants’ sense of agency depended on whether it felt like they had ‘agreed’ to the actions of the app. Participants gave their active consent through actions such as tapping on a play control: “I’m watching a video that’s taken too long of my time, so I can just pause it and come back to it. I feel control there.” Participants could also issue ongoing consent to the app, e.g., by subscribing to a creator: “My subscriptions show me what I asked to see and I can choose what and when I wish to watch each video.” At the other end of the spectrum were mechanisms like autoplay that acted without consent: “It feels weird for the app to start acting before I’ve told it to do anything.”

Non-consent was often felt as a result of (perceived) deception. For example, users disliked ads, but also expected them and indicated their reluctant consent. However, they seemed more upset when the app was unpredictable or violated expectations, as in: “I understand the reason for the ads, but I don’t get why some are 5 seconds and you can skip them while others are 60 seconds and you can’t.” Other cases where participants felt manipulated included when a small accidental click triggered an ad, when video creators were not upfront about the products they promoted, and when autoplay automatically turned on. Participants disliked when the app openly acted against their interests, but expressed stronger sentiments when they felt that the app also misled them about it.

3.4.3. Planning ahead.

Participants felt more in control when they planned their consumption in advance. Playlists helped participants plan how much to watch (e.g., “I can create playlists or queue videos in advance to limit what I watch to a specific list instead of endlessly searching around for what I want”). Participants described the end of a playlist as a good place to stop, in contrast to browsing recommendations, which they described as endless. Watch Later, a default playlist on YouTube, also let participants control when and where to watch. A guitar teacher described how Watch Later empowered them to save videos on-the-go and watch them later in their music studio. Watch history & stats also supported planning by providing an awareness that participants could use to adjust their behavior: “I can look at my watch history and see how many videos I have watched today. That puts it into perspective if I should spend time doing something else if I am spending too much time on YouTube.” Several participants described using this awareness in conjunction with the Watch Later playlist: “I am able to put a video in my Watch Later playlist if I think I have spent too much time on YouTube for the day.”

By contrast, sense of agency was diminished by mechanisms that prompted and pressured participants with suggestions that were hard to decline. Autoplay and recommendations frequently led to this, as in: “I often spend more time than I meant to because there is a good related video that seems worth watching so ya know, ‘Just one more’ which becomes a couple hours.” The Watch Later playlist again served as a safety valve in ‘just one more’ situations: “Watch Later means I don’t feel pressured into watching a recommended video from autoplay right when I see it.”

Notifications sometimes supported planning and sometimes not. For example, they put participants on the spot: “Based on my viewing history, the app will push me new content and I may not have the fortitude to not click to view.” However, notifications also helped participants plan when to check the app by reducing their fear of missing out: “With notifications I will know exactly when a new video goes up that I may be interested in watching. This way I am not randomly checking for uploads and spending extra time searching and browsing.” This may explain why notifications were split between more in control and less in control responses (47% vs. 53%).

3.4.4. The accuracy of algorithms has mixed consequences for control.

Irrelevant recommendations, i.e., those that were repetitive or unrelated to personal interests, universally undermined sense of agency: “Seeing ‘recommended’ videos that have nothing to do with my viewing history leads to unwanted scrolling and possibly unwanted content.” Similarly, irrelevant search results undermined control because they forced participants to keep scrolling for what they wanted, e.g., “I use specific search terms, but I still have to scan past a lot of vaguely or even unrelated stuff to find what I want.”

For relevant recommendations, participants’ control responses were divided nearly 50-50. In contrast to irrelevant recommendations, relevant ones supported control with their personalization (e.g., “It has some very good algorithms that know what I like, when I want it”) or with suggestions that reached just beyond the users’ comfort zone (e.g., “I can expand my tastes based on my own preference”). However, relevant recommendations sometimes undermined control by being too engaging, i.e., recommending videos that users watch, but that are unplanned and later regretted. This was captured in participants’ use of terms like ‘the wormhole’ (two mentions) and ‘rabbit hole’ (five mentions), as in: “The way that videos get promoted to my home page and have appealing thumbnails–I end up clicking on them and wonder how I got to this place and why I am watching this video. I ended up going down the rabbit hole and watching the video and then others like it and so on.” Some of these recommendations were described as ‘clickbait’ (six mentions) that misled with content that did not meet expectations and sometimes also violated participants’ consent (e.g., by showing inappropriate content). More often though, participants seemed to like the content, but felt that it was too much (e.g., “At times there is no escape when I become interested in documentary after documentary”) or not the right time (e.g., “Some of the church videos are addicting and I keep watching them at night”).

Given their mixed experiences with recommendations, participants expressed frustration with the customization settings at their disposal (or lack thereof). Participants lacked the ability to customize the location, quantity, and content of recommendations. Having recommendations on almost every screen led to a loss of control: “It seems like there are video recommendations everywhere. They are obviously in my home feed; they are in the explore menu; and they are under and beside and within other videos. It often takes me down the rabbit hole.” ‘Up next’ recommendations that appear below the current video (and autoplay after it finishes) were specifically mentioned seven times. The endless quantity of recommendations also made it hard to stop watching. Finally, participants also wanted to control what content is recommended, particularly when recommended content did not match their aspirations: “There are cases in a particular day where I just want to watch cat videos. But I do not want my entire screen to recommend cat videos.” Participants wanted to customize the content of recommendations more directly than just by generating a watch history: “The only thing you can do to control the algorithm is to watch videos. But you get no say how it’ll recommend new ones.”

A minority of responses described recommendation settings that do support sense of agency. For instance, three participants appreciated how the settings menu allows them to mark ‘Not interested’ on specific videos, e.g., “When I’m tempted but know a video is not educational I can hide it.” In this case, the user is in fact interested in the sense that the video recommendation arouses their curiosity and attention. However, they must paradoxically mark it as ‘Not interested’ in order to tell the interface to stop showing videos of this kind, because they conflict with their longer-term goals. YouTube’s settings also allow participants to delete videos from their watch history–which stops them from being used in personalized recommendations–but only one participant mentioned this feature. The vast majority of participants seemed either unaware of YouTube’s existing customization settings for recommendations or found them inadequate.

4. Study 2: Co-design with YouTube users

Study 1 identified existing mechanisms in the YouTube mobile app that influence user sense of agency (RQ1). In Study 2, we sought to understand how changes to these design mechanisms might influence sense of agency (RQ2). We conducted 13 study sessions with individual YouTube users that included two co-design activities: 1) sketching participant-generated changes; and 2) evaluating researcher-generated changes that were based on the results of Study 1. Consistent with a research-through-design approach (Zimmerman and Forlizzi, 2014), the aim of these activities was not to converge upon a single solution but rather to generate knowledge, i.e., what to design for a sense of agency.

4.1. Preparatory Design Work

In preparation for the evaluation co-design activity, five of the authors (KL, HZ, JVL, JC, KF), all advanced-degree students in a technology design program, created mockups of changes to mechanisms in the YouTube mobile app that we expected to impact sense of agency. To generate a wide range of possible changes, we started with a design brainstorm that generated 67 different ideas, e.g., creating a ‘How-to mode’ for viewing only educational content, reducing video playback speed to 50% after a daily time limit is exceeded, or making Watch Later the default action for recommendations. Ideas were reviewed as a group, and favorites could be ‘claimed’ by one author, who further refined them. This generated a total of 33 different sketches. We presented, discussed, and then scored these sketches according to three criteria: expected impact on sense of agency (based on the results of Study 1), novelty relative to existing digital wellbeing tools, and feasibility of implementation. (Feasibility was a criterion to focus on designs that a third-party mobile developer could build using public APIs, an intention we have for our future work.) Expected impact on sense of agency was weighted twice in our scoring.
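A minimal sketch of this scoring scheme (the criterion keys and example scores below are our own illustration, not the exact rubric we used):

```python
# Weighted scoring of design sketches: expected impact on sense of agency
# counts twice; novelty and feasibility count once each (hypothetical scores).
WEIGHTS = {"agency_impact": 2, "novelty": 1, "feasibility": 1}

def weighted_score(scores: dict) -> float:
    """Weighted average of criterion scores (e.g., each rated 1-5)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS) / sum(WEIGHTS.values())

# e.g., one author's ratings for the 'Watch Later as default action' idea
print(weighted_score({"agency_impact": 5, "novelty": 3, "feasibility": 4}))  # 4.25
```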

We created mockups for the seven sketches with the highest average scores. We wanted participants to evaluate a variety of potential changes to each mechanism, so we created three versions of each mockup: low, medium, and high-control. For example, the recommendations mechanism in the YouTube app was redesigned to change the number of recommendations shown on the homepage, with the low-control version showing unlimited recommendations, the medium-control version showing only three recommendations with a button to show more, and the high-control version not showing any recommendations (see images in Table 4). To focus on RQ2, our results and analysis here address only the four mockups (see Table 5) that directly change one of the existing internal mechanisms in YouTube that we identified in Study 1. The other three mockups we tested—activity-goal setting, time-goal setting, and a timer—are more external mechanisms that might apply equally well to other apps. However, we decided to focus this paper on the unique potential of internal mechanisms.

We note that although our research focuses on the level of ‘design mechanisms,’ the details of these designs matter. For instance, although the recommendations in the current version of YouTube seemed to reduce sense of agency in most of the Study 1 responses, a different implementation of ‘recommendations’ might produce different effects. This is true of our mockups too: in our search redesign we showed a task-oriented example query (“How to cook a turkey”), whereas a leisure-oriented example query (e.g., “Funny cat videos”) could have led to different results. We include descriptions of the most relevant details of each of these design mechanisms in the body of the paper, screenshots of their current implementation in the YouTube mobile app in Appendix I, and images of all of our mockups in Appendix II.

Table 4. Mockups of the redesign of the recommendations mechanism. We created three versions of the mockup that we expected to offer different levels of control: a low-control version (unlimited recommendations), a medium-control version (click to show more recommendations), and a high-control version (no recommendations). These three versions of each redesign were evaluated by participants in the co-design evaluation activity.
Redesigned mechanism | Dimension of change | Low-control version | Medium-control version | High-control version | Related experience for users (as described by Study 1 participants) | Comparison to current version of YouTube mobile app

Recommendations | Number of video recommendations on the home screen | Unlimited recommendations | Shows 3 recommendations, then a click-to-show-more button | No recommendations | Endless recommendations often undermine sense of agency | Similar to low-control version

Playlists | Prominence of the button to save a video to the Watch Later playlist | No Watch Later button | Small Watch Later button | Large Watch Later button | The Watch Later playlist lets users plan ahead and reduces the pressure to watch now | Similar to medium-control version

Search | The degree to which search prioritizes “fun” vs. “relevant” results (see Appendix II for more details) | Prioritize “fun” results (intended to be too engaging) | User can toggle between “fun” & “relevant” results | Prioritize “relevant” results | Recommendations and search results that are too engaging sometimes undermine sense of agency | Similar to medium-control version

Autoplay | The degree of user consent required to play the next video recommendation | Autoplay the next recommendation | Show the next recommendation | No next recommendation | Autoplaying videos without consent undermines sense of agency | Similar to low-control version

Table 5. This table describes our redesigns of 4 existing mechanisms in the YouTube app. We created three versions of each mockup that we expected to provide different levels of control to the user: low, medium, and high. Appendix II describes more details about the search redesign and the three additional mockups we created, which we do not report on here.

4.2. Participants

4.2.1. Recruitment.

We recruited YouTube users in Seattle via email lists and social media channels to “Help us understand how people spend their time in the YouTube mobile app.” We did not initially set inclusion criteria for participation (beyond adult YouTube users), as we viewed our co-design activities as exploratory. However, after our initial sessions proved insightful for our team of design researchers, we sent a follow-up survey to participants that asked about demographics and YouTube use. Participants were compensated with a $30 voucher.

4.2.2. Demographics and YouTube use.

13 YouTube users (7 women, 6 men) participated in our sessions. The median age was 29 (range: 18-36). Participants reported using YouTube a median of 52 minutes per day (range: 27-70), again based on checking their time watched statistics in the YouTube mobile app. For reference, this amount of time is slightly lower than the average of signed-in YouTube users (60 minutes) (Matney, 2017) and considerably lower than the median of participants in Study 1 (101 minutes).

4.3. Procedures

Each session included an initial think-aloud demonstration of the participant’s current YouTube use, followed by the sketching and evaluation co-design activities. The median length of a session was 73 minutes (range: 57-105 minutes).

4.3.1. Think-aloud Demonstrations with YouTube App.

In a modified version of a think-aloud protocol (Jääskeläinen, 2010), the participant opened YouTube on their smartphone and talked us through a typical engagement cycle (how they start and stop use) (Tran et al., 2019). Next, they showed and talked us through the mechanisms that made them feel most and least in control of how they spend their time on YouTube.

4.3.2. Co-design Activity 1: Sketching.

To elicit participant-generated ideas, we asked participants to sketch over paper mockups of three key screens: home, search, and video player (see Figure 3). Each screen represented a minimal version of a video app without recommendations, rather than a direct copy of the current YouTube interface. We chose this minimal version to encourage participants to generate new ideas, rather than to evaluate the existing interface (which we did in Study 1). Participants were handed a pen and a copy of one mockup (e.g., the home screen) and were asked, “What would you change on this page to feel more in control of how you spend your time on YouTube?” They then received a second copy of the same mockup and were asked to sketch changes that would make them feel less in control. Each participant created a total of six sketches (two versions of three different screens). As they sketched, participants were asked to explain their thinking (Schrage, 1996).

4.3.3. Co-design Activity 2: Evaluation.

To receive feedback on our changes from YouTube users, we asked participants to evaluate our mockups of the redesigned mechanisms in the YouTube mobile app (see Table 5). For each mockup, the three different versions were placed in front of the participant in a random order; the participant reviewed them for about one minute and then asked any questions they had. We did not tell participants which one was the low, medium, or high-control version. The participant was then asked to rank the three versions in order from the one they would least prefer to use to the one they would most prefer, and to explain why.

Figure 3. Sketches of the home screen of the YouTube mobile app. The participant (P11) explained that in the more in control version, recommendations are based on topics chosen by the user. In the less in control version, the user needs to scroll through recommendations to see the search bar at the bottom of the screen.

4.4. Codebook Thematic Analysis


We used codebook thematic analysis to analyze the data (Braun et al., 2018; Braun and Clarke, 2006), wherein we generated themes that are more interpretive than just a summary of all of the data, but less interpretive than in reflexive thematic analysis, where the researcher’s subject position plays a central role in the analysis (Braun and Clarke, 2019). After each co-design session, the researcher leading the session completed a debriefing form with their top three takeaways and shared participant sketches with the rest of the research team. We held weekly meetings to discuss these data and develop initial ideas. After finishing data collection, all co-design sessions were transcribed. To further familiarize ourselves with the data, three of the authors read the transcripts and again reviewed the sketches. We next independently coded the data using a web app for collaborative coding (Sillito) to generate our set of initial codes. After reviewing this first pass of coding together, we refined and consolidated codes and generated initial themes. Our final set of codes included: user freedom of choice, situational features affecting control, design mechanisms for control, setting clear expectations for the user, and triggers to stop, each of which had further subcodes. We applied our codes to all transcripts and sketches and reviewed the results to create our final themes. For each theme, we extracted vivid exhibits (Bannon et al., 1994), which we used to write analytical memos.

4.5. Results and Analysis

We generated two themes about how participants expected changes to the design mechanisms of YouTube would affect their sense of agency. First, participants wanted design mechanisms that provided more control when they had an intention in mind as opposed to when they just wanted to explore. Second, participants envisioned and wanted mechanisms for active and informed choices to increase control.

4.5.1. Specific intentions call for more control.

When individual participants reviewed the different versions of their own sketches and our mockups, they were often conflicted about how much control they preferred. It depended upon the situation. When they had a specific intention or goal for their YouTube visit (e.g., to cook a recipe), they wanted design mechanisms that provided greater control. When they had a non-specific intention such as relaxing, they preferred design mechanisms that turned control over to YouTube.

For participants, specific intentions varied from watching a video of a favorite dance, to the latest basketball highlight, to a tutorial on solving a Rubik’s Cube. When they had such a specific intention in mind, they wanted greater control than YouTube currently gives them. P4 removed recommendations from their sketch, explaining: “If I have a specific goal, I know what I want, I don’t need recommendations to guide my search, I just want to be in control of my search.” P2 evaluated our redesign of the search mechanism that emphasized results with higher entertainment value by saying,

I’m probably going to click on it because it’s cute and I’m just going to waste so much time. So it’s going to make me feel totally out of control of what I actually wanted to come here for.

In these cases, participants wanted stronger control mechanisms so that the app would not hijack their specific intention.

Sometimes participants held intentions with a moderate level of specificity, in which case they wanted to retain some control but also delegate some to YouTube. Often these intentions were topical, as when P11 wanted to be able to use the app in an active way to search and browse videos about programming, but not in a passive way to follow just any recommendation. Sometimes these intentions were temporal: when working or studying, participants preferred a version of YouTube that helps them watch a moderate number of videos without making them fall down a rabbit hole of “similar related stuff” (P13). To address these cases, participants sketched both changes to internal mechanisms that were specific to YouTube (e.g., limits on the number of recommended videos) and more external mechanisms that might apply across a variety of social media apps (e.g., time reminders).

By contrast, when participants had only a non-specific intention (e.g., to unwind or explore), they wanted YouTube to lead the way. Our redesigns of the recommendations mechanism showed either unlimited, limited, or no video recommendations, to which P2 responded: “If I came here for a specific reason, like my goal is to learn how-to do something, then I prefer this one without recommendations. However, if I just want to watch something that gets my mind off things, I prefer the one where I can choose to show more recommendations.” At times when participants just wanted to be entertained, designing for greater control could actually get in the way. P13 shared, “If you’re not giving me recommendations, and if you’re making me search, then I’m not in control. Or, I’m in control, but the problem is I’m spending more time. There’s no point.”

4.5.2. Active and informed choices.

The Study 1 theme ‘Spectrum of consent’ addressed whether the user had ‘agreed’ to an action taken by the app (e.g., autoplaying the next video). To support control, Study 2 participants envisioned more active choices, where the user felt like they were the one to initiate the action. As a step in this direction, P1 described a home screen that presented, “Six categories we think you’re most interested in, and then you’re at least making the active choice, ‘I want to watch some interviews right now.’” In this design, the app’s algorithm would recommend a set of personalized topics, but the user would be the one to choose between them. A still more active choice was when the user was the one to generate the set of choices in the first place, as in P7’s sketch: “There aren’t a billion recommendations on the home screen. It’s just a search bar. You go straight to what you want to watch, you watch it, and then you’re done.” Participants described search as a paragon of user-led choice, and many foregrounded the search option in their sketches to increase control and hid it in ones to decrease control (see Figure 3).

Many sketches also supported more informed choices. These designs made it easier for users to know what to expect from a video by surfacing metadata like view count, user ratings, and descriptions. Five participants proposed novel metadata, such as an ‘activity time’ filter that would sort how-to videos by the time it takes to perform the activity they teach, e.g., cook a recipe (P12). Another suggested expert ratings as an indicator of quality (P5). Conversely, in sketches to undermine control, it was common to remove video metadata. P12 likened this to the experience at Costco, a supermarket chain that deliberately shows no signs in its stores (NPR, 2015): “If you want to go find cookies, they won’t actually show you where the cookies are so you literally have to go through every single aisle. You have to go find it.”
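To make the proposed ‘activity time’ filter concrete, here is a minimal sketch in Python. The activity_minutes field and all sample values are hypothetical; no such metadata exists in the current app.

```python
from dataclasses import dataclass

@dataclass
class HowToVideo:
    title: str
    view_count: int          # existing metadata surfaced for informed choices
    avg_rating: float        # e.g., mean user rating on a 5-point scale
    activity_minutes: int    # hypothetical 'activity time' field (P12):
                             # how long the taught activity takes to perform

def fit_available_time(videos, minutes_available):
    """Keep only videos whose taught activity fits the user's available
    time, shortest activity first."""
    feasible = [v for v in videos if v.activity_minutes <= minutes_available]
    return sorted(feasible, key=lambda v: v.activity_minutes)

videos = [
    HowToVideo("15-minute weeknight curry", 120_000, 4.6, 15),
    HowToVideo("Sourdough from scratch", 890_000, 4.8, 240),
    HowToVideo("Five-minute salad dressing", 40_000, 4.2, 5),
]

for v in fit_available_time(videos, minutes_available=30):
    print(f"{v.title} ({v.activity_minutes} min, rating {v.avg_rating})")
```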

More choice alone did not lead to more control. In sketches of designs to undermine control, participants covered every corner of the home screen with video recommendations that scrolled infinitely (P11) and in every direction (P5). P13 described, “If they didn’t have [recommended videos], it would be a lot harder to follow these different rabbit holes. I imagine that I would have to intentionally seek out another video, so I wouldn’t feel sucked in as much.” Recommendations prompted a passive form of choice, in which users reacted to the app’s infinite scroll of suggestions, rather than making active choices on their own terms.

5. Overall Discussion

Together, our two studies identify design mechanisms that influence sense of agency in the YouTube mobile app and how they might be changed to increase it. In Study 1, participants reported that, in the current app, recommendations, ads, and autoplay mostly led them to feel less in control, whereas playlists, search, subscriptions, play controls, and watch history & stats mostly made them feel more in control. Across all existing mechanisms, participants felt less in control when the app took actions of its own without their consent (e.g., autoplaying a new video recommendation). Recommendations were of special concern, and participants expressed frustration at their inability to customize the location, quantity, and content of recommendations. In contrast, by helping participants plan ahead, even for just a short while, existing mechanisms like playlists and watch stats made participants feel more in control.

When participants envisioned and evaluated changes in Study 2, they wanted more opportunities to make active choices, rather than respond to a set of choices proposed by the app. This preference was stronger when they had a specific intention in mind (e.g., to watch a certain video or topic), whereas when their intention was more general (e.g., to pass the time) they favored turning control over to YouTube.

We expect that our findings on how design mechanisms influence sense of agency on YouTube are most likely to generalize to other social media and media apps where users (a) report feeling out of control at times (e.g., Facebook (Marino et al., 2018)); and (b) use the app for both specific and non-specific intentions (e.g., Pinterest (Cheng et al., 2019)). We first discuss our findings mostly with respect to our test case of YouTube, before considering implications for digital wellbeing more broadly.

5.1. Rethinking What ‘Relevance’ Means for Recommendations

Participants mentioned recommendations as undermining their sense of agency far more often than any other design mechanism in the YouTube mobile app, suggesting that recommender systems (Resnick and Varian, 1997) should be of central concern to digital wellbeing designers. However, recommendations led to a reduced sense of agency via two very different routes: irrelevance and relevance.

First, recommendations were sometimes irrelevant, showing videos that participants were simply not interested in. However, given rapid advances in artificial intelligence, and in recommender systems like YouTube’s specifically (e.g., (Covington et al., 2016)), one might expect recommendations in social media apps to become increasingly relevant in the coming years.

Second, recommendations were sometimes too ‘relevant,’ which presents a more vexing problem from a digital wellbeing perspective. For example, participants reported that they sometimes saw too many interesting recommendations (e.g., for documentaries or for church videos late at night), which made them feel a loss of control. In this case, YouTube’s algorithm is arguably too good at a local optimization problem: Out of millions of videos, which one is the user most likely to watch? But it misses a more global optimization problem: Out of many possible actions, which one does the user most want to take? In these cases, recommendations appealed to a user’s impulse or short-term desire to watch more videos, but conflicted with their long-term goals, creating a self-control dilemma for the user (Lyngs et al., 2019; Duckworth et al., 2016).
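The contrast can be stated loosely in our own notation (a framing device, not YouTube’s actual objective function):

```latex
% Local problem: over the video catalog V, pick the video the user is
% most likely to watch next.
v^{*} = \arg\max_{v \in V} \; p_{\mathrm{watch}}(v \mid \mathrm{user},\ \mathrm{context})

% Global problem: over the space of possible actions A (watch another
% video, write a term paper, go to sleep, ...), pick the action that
% best serves the user's own long-term goals.
a^{*} = \arg\max_{a \in A} \; u_{\mathrm{user}}(a \mid \mathrm{user},\ \mathrm{context})
```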

Our findings call for rethinking what ‘relevance’ means for recommendations in the context of digital wellbeing. Prior research on recommender systems has argued that “being accurate is not enough,” as a fixation on accuracy can lead designers to ignore important facets of user experience like serendipity (McNee et al., 2006, p.1). For participants in our study, sense of agency was clearly a neglected facet of user experience, as YouTube’s recommendations led them to actions (i.e., watching more videos) they did not feel in control of. To be clear, this does not mean that Google or others should try to create an ‘algorithm for life’ that recommends between watching another video, writing a term paper, and going to sleep.

However, it does suggest that recommender systems could first address the global problem of when to show recommendations, before moving on to the local problem of which items to recommend. For example, a decision not to show recommendations might be informed by the time of day (e.g., 2am is too late), screentime preferences (e.g., when the user has already exceeded their goal of 30 minutes per day on entertainment apps), or explicit user preferences (e.g., only show three recommendations unless I click to show more). In HCI research, sometimes the implication of a user needs assessment is not to design technology, as a new technology might not be appropriate in the context of the larger situation (Baumer and Silberman, 2011). Similarly, for recommender systems, our findings suggest that sometimes the implication is not to recommend. Prior work has addressed how a system can display the level of confidence it has in its recommendations to the user (McNee et al., 2003), but this should be preceded by the more fundamental question of whether to show recommendations in the first place.
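To illustrate, a minimal sketch of such a ‘whether to recommend’ gate, evaluated before any ranking runs. All parameter names, defaults, and thresholds are our own illustrative assumptions, not an actual YouTube API:

```python
from datetime import datetime, time

# A sketch of a 'whether to recommend' gate, checked before any
# 'what to recommend' ranking runs.
def should_show_recommendations(now: datetime,
                                minutes_watched_today: int,
                                daily_limit_minutes: int = 30,
                                quiet_start: time = time(23, 0),
                                quiet_end: time = time(6, 0)) -> bool:
    # Explicit screentime goal already exceeded: show no recommendations.
    if minutes_watched_today >= daily_limit_minutes:
        return False
    # Quiet hours spanning midnight (e.g., 2am is too late).
    t = now.time()
    if t >= quiet_start or t <= quiet_end:
        return False
    return True

# Only if the gate passes does the local ranking problem matter at all.
if should_show_recommendations(datetime.now(), minutes_watched_today=25):
    print("run the ranker; show up to the user's preferred number of items")
else:
    print("show a search-first screen with no recommendations")
```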

Whereas both of the studies in this work elicit user preferences (“what users say”), the dominant paradigm of recommender systems today, including YouTube, is behaviorism: recommendations largely neglect explicit preferences and instead rely on behavior traces (“what users do”) (Ekstrand and Willemsen, 2016). The present bias effect (O’Donoghue and Rabin, 2015) predicts that actual behavior will favor the choice that offers immediate rewards at the expense of long-term goals. In this way, recommender systems reinforce the sometimes problematic behavior of the current self rather than helping people realize their ‘aspirational self’ that reflects long-term goals (Ekstrand and Willemsen, 2016; Lyngs et al., 2018).

Participants also wanted to customize the content of recommendations, e.g., “I do not want my entire screen to recommend cat videos.” Designers might address this by making it easier for users to (a) explicitly state preferences for topics they would like to see or not see; (b) explicitly rate recommendations (e.g., show me more like this one); (c) edit their viewing history to influence future recommendations (e.g., delete all cat videos); or (d) select an algorithmic persona to curate their recommendations (e.g., The Diplomat, who brings news videos from the other side) (Harambam et al., 2019, p.72). The current YouTube app offers limited support for the first three features (e.g., users can select from among topics for recommendations on the home page of the app), but participants in our study seemed mostly either unaware of these customization settings or found them to be inadequate.
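A minimal sketch of options (a)–(c) as a layer of explicit, user-stated preferences applied on top of a behavioral ranker; the data model (blocked_topics, boosted_topics, max_items) is illustrative rather than YouTube’s:

```python
from dataclasses import dataclass, field

# A sketch of explicit, user-stated preferences layered over the output
# of a behavioral ranker. Field names are illustrative assumptions.
@dataclass
class ExplicitPreferences:
    blocked_topics: set = field(default_factory=set)  # (a) "never show me X"
    boosted_topics: set = field(default_factory=set)  # (b) "more like this"
    max_items: int = 3                                # an explicit quantity cap

def apply_preferences(ranked_videos, prefs):
    """ranked_videos: list of (title, topic) pairs from the behavioral
    ranker, best-first. Returns the list after the user's stated preferences."""
    kept = [v for v in ranked_videos if v[1] not in prefs.blocked_topics]
    # Stable sort: boosted topics float to the top, behavioral order otherwise.
    kept.sort(key=lambda v: v[1] not in prefs.boosted_topics)
    return kept[:prefs.max_items]

prefs = ExplicitPreferences(blocked_topics={"cats"},
                            boosted_topics={"programming"})
ranker_output = [("Cute cat compilation", "cats"),
                 ("Intro to recursion", "programming"),
                 ("Late night talk show", "entertainment"),
                 ("Rust lifetimes explained", "programming")]
print(apply_preferences(ranker_output, prefs))
```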

To summarize, we encourage designers of recommender systems to think beyond just optimizing for the item that is most likely to be clicked, watched, or liked. This includes considering when to show recommendations in the first place. It also means exploring how recommendations can support user aspirations rather than just reinforce current behaviors, which requires better measures of long-term preferences. Designers and researchers should continue to explore how to personalize recommendations to satisfy these broader user needs, or provide customization options that put users in control, at least to the extent that they want it.

5.2. Designing to Support Microplanning

Behavior change researchers have long known that plans can help bridge the gap between intentions and behavior. In that literature, plans are usually crafted in advance through careful deliberation and guide behavior for some time into the future (Agapie, 2020). For example, a screentime tool in this mold might ask the user to review and reflect upon their past usage data and develop a plan for their use over the next month. Participants in our study also ‘planned,’ but they did so in a more ad hoc manner. For example, they queued videos in advance to limit what they watched during a single session, or glanced at their Time watched statistics to decide whether to watch another video or add it to their Watch Later playlist.

These types of actions might be called ‘microplanning’: making lightweight plans that guide behavior for a short time, usually just a single session of use. Our naming takes inspiration from Cox et al.’s coining of the term ‘microboundary’ to describe a small obstacle prior to an interaction that prevents us from rushing from one context to another, which serves as a ‘micro’ version of a commitment device that prevents the user from acting hastily and regretting it later (Cox et al., 2016). ‘Microboundary’ has helped center commitment devices, an important concept from behavioral economics in which future choices are restricted to reflect long-term goals (Bryan et al., 2010; Schelling, 1984), in the research and development of digital wellbeing tools, e.g., (Kim et al., 2019, 2019; Lyngs et al., 2019; Pinder et al., 2018).

Similarly, we hope that the concept of ‘microplans’ encourages the use of behavior planning knowledge in the design of digital wellbeing tools. For example, this literature finds that plans are more likely to succeed if they specify where, when, and how a behavior will be enacted (Gollwitzer and Sheeran, 2006). A microplan might incorporate just the where part, and be supported by a video playlist that is tied to a specific location, e.g., song tutorials for my guitar studio. Triggers are also a key component of effective plans (Fogg, 2009), so in this case the playlist might be the primary recommendation in the app anytime the user is within 50 meters of the studio. In another example, Hiniker et al. adapted an evidence-based Plan-Do-Review sequence (Felner et al., 1988) for an app that asked children to plan out their video-watching, finding that it helped them transition to their next activity with ease (Hiniker et al., 2017). In the domain of impulse buying (Moser et al., 2019), an e-commerce site (or third-party extension) might foreground ‘shopping list’ tools to support intentional buying.
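To illustrate the location-tied example above, a minimal sketch of a geofenced microplan: the 50-meter radius and playlist name come from the text, while the coordinates, data model, and distance check are illustrative assumptions:

```python
import math

# A sketch of a location-tied microplan: a playlist bound to a place is
# promoted to the app's primary recommendation whenever the user is
# within the trigger radius.
def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (playlist, trigger latitude, trigger longitude, radius in meters)
microplans = [("Song tutorials for my guitar studio", 47.6553, -122.3035, 50)]

def primary_recommendation(user_lat, user_lon):
    for playlist, lat, lon, radius in microplans:
        if distance_m(user_lat, user_lon, lat, lon) <= radius:
            return playlist
    return None  # no microplan triggered: fall back to the default home screen

print(primary_recommendation(47.6554, -122.3036) or "default home screen")
```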

5.3. Different Levels of Control for Ritualized and Instrumental Use

In Study 2, participants suggested ways that the YouTube mobile app might be redesigned to increase sense of agency (e.g., by reducing the number of recommendations it displays). However, such changes might have adverse effects, as there were also times when participants preferred low-control features. Although HCI design guidelines advise supporting user sense of agency (Nielsen, 1994; Shneiderman and Plaisant, 2004), we should not assume that a greater sense of agency is always desirable.

Specifically, participants preferred higher-control mechanisms when they had a specific intention in mind and lower-control ones when they had a non-specific intention. This finding broadly aligns with two types of viewing that have been identified in uses and gratifications research on television use (Rubin, 1984): (1) ritualized use, open-ended use to gratify diversionary needs; and (2) instrumental use, goal-directed use to gratify informational needs. On this view, the current version of the YouTube app appears to offer good support for ritualized use, but poor support for instrumental use, as participants often felt that their specific intentions were hijacked by its autoplay and endless recommendations.

How might a single app support sense of agency for both ritualized and instrumental use? One approach is a customizable interface that lets the user switch between low and high levels of control. This can be done at the app level, e.g., switching between an Explore Mode and a Focus Mode. Or it can be done at the mechanism level, e.g., YouTube currently offers an on/off toggle for autoplay, but provides no way to toggle recommendations, which were the mechanism most frequently mentioned as leading to a loss of control in Study 1. This approach may be particularly suitable for power users, as prior research indicates that power users prefer interfaces that are customizable (user-tailored) by a toggle, whereas non-power users prefer ones that are personalized (system-tailored) for them (Sundar and Marathe, 2010).
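A minimal sketch of how mechanism-level toggles and app-level modes might be expressed as configuration; the Explore/Focus names follow the text, and all defaults are illustrative assumptions:

```python
from dataclasses import dataclass

# A sketch of mechanism-level control settings: per-mechanism toggles,
# plus app-level modes that preset them. Names and defaults are
# illustrative assumptions, not actual YouTube settings.
@dataclass
class ControlSettings:
    autoplay: bool = True
    show_recommendations: bool = True
    max_recommendations: int = 20  # a quantity limit, per participant sketches

MODES = {
    # Explore Mode: low control, the app leads the way.
    "explore": ControlSettings(autoplay=True, show_recommendations=True),
    # Focus Mode: high control, search-first with no recommendations.
    "focus": ControlSettings(autoplay=False, show_recommendations=False,
                             max_recommendations=0),
}

def apply_mode(name: str) -> ControlSettings:
    """App-level switch; a power user could instead edit toggles directly."""
    return MODES[name]

print(apply_mode("focus"))
```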

A second approach, then, is an interface that is personalized for the user based on a prediction model. Recent work has found that classifiers can be trained to predict these types of media use with high confidence, e.g., for Pinterest (Cheng et al., 2017) and smartphone use (Hiniker et al., 2016). For example, if YouTube expects that the user is visiting for ritualized use, it could remain as is, or even go further to take control, as in its Leanback mode for effortless viewing that autoplays a never-ending stream of high-definition recommendations (Google, ). Both our own findings on autoplay and previous work suggest that such a high level of automation would reduce sense of agency (Berberian et al., 2012), but it may still be the interface that the user prefers in this situation. Conversely, if YouTube has high confidence that the user is visiting for instrumental use, it could present a search-only interface and hide all recommendations. Finally, if it has low confidence in its prediction, it could present a middle-ground interface that shows limited recommendations, or it might err on the side of caution and lead with a search-first interface in case the user has an intention to express.
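This policy amounts to a pair of confidence thresholds around a predicted probability of instrumental use. A minimal sketch, with the classifier output and threshold values as illustrative assumptions (the cited work suggests such prediction is feasible, not that these numbers are right):

```python
# A sketch of confidence-thresholded interface selection based on a
# predicted probability that the current visit is instrumental (vs.
# ritualized). The 0.8/0.2 thresholds are illustrative assumptions.
def choose_interface(p_instrumental: float) -> str:
    if p_instrumental >= 0.8:   # high confidence: instrumental use
        return "search-only interface, all recommendations hidden"
    if p_instrumental <= 0.2:   # high confidence: ritualized use
        return "lean-back interface with autoplay and full recommendations"
    # Low confidence: middle ground, erring toward letting intent be expressed.
    return "search-first interface with limited recommendations"

for p in (0.9, 0.5, 0.1):
    print(f"p_instrumental={p} -> {choose_interface(p)}")
```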

5.4. Towards a Language of Attention Capture Dark Patterns

Our findings address what and when to design to increase sense of agency. However, in the attention economy, what might motivate key stakeholders to support such designs? One step is for the design community to develop a common language of attention capture dark patterns, so that designs which lead to attentional harms can be recognized and named.

Such a lingua franca of attention capture design patterns could be integrated into design education (Gray et al., 2018) and influence designer thinking and reputations, as is done by the name-and-shame campaign of the darkpatterns.org website (Brignull and Darlington, ). At the company level, it could help inspire products that are mindful of the user’s sense of agency. For example, in spite of the incentives of the attention economy, Apple is now working to make privacy a selling point (Hall, 2019), e.g., by preventing developers from tracking users across multiple apps without their active consent (Apple Inc, ). At the regulatory level, a recent review of dark patterns by Narayanan et al. notes that if the design community does not self-regulate by setting standards for itself, it may be regulated by more onerous standards set by others (Narayanan et al., 2020). The U.S. Senate is currently considering how to regulate social media, with one bill that would make it illegal to manipulate a user interface with “the purpose or substantial effect of obscuring, subverting, or impairing user autonomy” (McKay, 2019) and another that would ban autoplay and infinite scroll (Chen, ). For designers, the language of dark patterns is an important way to contribute to a broader critical discussion of design practices in the technology industry (Gray et al., 2018).

We caution that the message of attention capture dark patterns should not be “never X,” but rather “be careful when X.” Participants in both of our studies reported mixed experiences with many design mechanisms, including autoplay and recommendations. An outright ban on these mechanisms is likely to reduce sense of agency in a substantial number of situations where the user just wants to explore. Instead, a nuanced guide to dark patterns might present examples of the problem, followed by counterexamples where such a pattern is appropriate. While this creates a murky gray middle, it also better describes the effects of the design mechanisms that we identified in our studies.

5.5. Limitations

In addition to the previously stated limitations of our participant sampling and our focus on design mechanisms as a unit of analysis, our work has at least four conceptual limitations that could be explored in future work. First, both of our studies asked participants to share their preferences; however, present bias (O’Donoghue and Rabin, 2015) predicts that actual behavior will favor the choice that offers immediate rewards at the expense of long-term goals. An in-situ study of how people respond to redesigns intended to influence sense of agency would yield results on “what users do,” which might need to be reconciled with the present results on “what users say.” Second, time and attention are not the only factors that influence sense of agency. By asking participants in both studies to reflect on feeling “…in control of how you spend your time on YouTube,” we discouraged participants from considering other factors such as privacy (Sundar and Marathe, 2010). In Study 2, this may have primed participants to focus on sense of agency over other factors when evaluating which version of the mockup they preferred. Third, self-reported agency can be quite different from the facts of agency (Coyle et al., 2012; Moore, 2016). For example, many people continue to press ‘placebo buttons’ like the ‘close door’ button in their apartment’s elevator, even when doing so has no effect (Paumgarten, 2014). There is therefore a concern that designs to increase sense of agency may be disconnected from an actual ability to influence the world. Fourth, users are not the only stakeholders on YouTube, and it would be a mistake to optimize for their sense of agency alone. Google, creators, advertisers, and even society itself all have a stake in what happens on YouTube. For instance, radicalizing political videos can make viewers feel as if they have uncovered powerful conspiracies that were previously hidden from them (Roose, 2019); to support sense of agency in this use case would be dangerous. User sense of agency needs to be integrated into larger design frameworks as one important consideration among many for social media apps.

6. Conclusion

Whereas a common approach to digital wellbeing is designing to reduce screentime, this work takes an alternative approach of designing to increase sense of agency. In two studies, we identify mechanisms within the YouTube mobile app that participants report influence their sense of agency and how they want to change them. We find that participants generally prefer mechanisms like autoplay and recommendations to be redesigned for a greater sense of agency than the YouTube mobile app currently provides. For digital wellbeing designers, we highlight a need for recommender systems that better reflect user aspirations rather than just reinforce their current behavior. We also propose mechanisms that support ‘microplanning,’ making lightweight plans to guide a single session of use, to increase user sense of agency. Finally, we propose language that the design community might adopt to recognize design patterns that impose attentional harms upon the user.

Acknowledgements.
This work was funded in part by National Science Foundation award #1849955. We thank Xuecong Xu, Ming Yao Zheng, Kevin Kuo, Tejus Krishnan, Laura Meng, Linda Lai, and Stefania Druga for helping to conceptualize this study and design the mockups.

References

  • J. Aagaard (2015) Drawn to distraction: a qualitative study of off-task use of educational technology. Computers & education 87, pp. 90–97. External Links: Link, ISSN 0360-1315, Document Cited by: §2.1.
  • E. Agapie (2020) Designing for human supported Evidence-Based planning. Ph.D. Thesis, digital.lib.washington.edu. External Links: Link Cited by: §5.2.
  • M. G. Ames (2013) Managing mobile multitasking: the culture of iphones on stanford campus. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, CSCW ’13, New York, NY, USA, pp. 1487–1498. External Links: Link, ISBN 9781450313315, Document Cited by: §2.1.
  • [4] Apple Inc Details for app privacy questions now available - news - apple developer. Note: https://developer.apple.com/news/?id=hx9s63c5 Accessed: 2020-9-13 External Links: Link Cited by: §5.4.
  • C. Baab-Muguira (2017) The stupidly simple productivity hack hiding in microsoft word. Fast Company. Note: https://www.fastcompany.com/3068825/the-stupidly-simple-productivity-hack-hiding-in-microsoft-word Accessed: 2020-9-11 External Links: Link Cited by: §2.3.
  • L. Bannon, J. Bowers, P. Carstensen, J. A. Hughes, K. Kuutti, J. Pycock, T. Rodden, K. Schmidt, D. Shapiro, W. Sharrock, and Others (1994) Informing CSCW system requirements. Lancaster University. External Links: Link Cited by: §4.4.
  • E. P. S. Baumer, P. Adams, V. D. Khovanskaya, T. C. Liao, M. E. Smith, V. Schwanda Sosik, and K. Williams (2013) Limiting, leaving, and (Re)Lapsing: an exploration of facebook non-use practices and experiences. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’13, New York, NY, USA, pp. 3257–3266. External Links: Link, ISBN 9781450318990, Document
  • E. P. S. Baumer and M. S. Silberman (2011) When the implication is not to design (technology). In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2271–2274. External Links: Link, ISBN 9781450302289, Document Cited by: §5.1.
  • E. P. S. Baumer, R. Sun, and P. Schaedler (2018) Departing and returning: sense of agency as an organizing concept for understanding social media Non/Use transitions. Proc. ACM Hum. -Comput. Interact. 2 (CSCW), pp. 23:1–23:19. External Links: Link, ISSN 2573-0142, Document Cited by: §1.
  • B. Berberian, J. Sarrazin, P. Le Blaye, and P. Haggard (2012) Automation technology and sense of control: a window on human agency. PloS one 7 (3), pp. e34075 (en). External Links: Link, ISSN 1932-6203, Document Cited by: §2.2, §5.3.
  • R. E. Boyatzis (1998) Transforming qualitative information: thematic analysis and code development. SAGE (en). External Links: Link, ISBN 9780761909613 Cited by: §3.3.
  • V. Braun, V. Clarke, N. Hayfield, and G. Terry (2018) Thematic analysis. In Handbook of Research Methods in Health Social Sciences, P. Liamputtong (Ed.), pp. 1–18. External Links: Link, ISBN 9789811027796, Document Cited by: §3.3, §4.4.
  • V. Braun and V. Clarke (2006) Using thematic analysis in psychology. Qualitative research in psychology 3 (2), pp. 77–101. External Links: Link, ISSN 1478-0887, Document Cited by: §4.4.
  • V. Braun and V. Clarke (2019) Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health 11 (4), pp. 589–597. External Links: Link, ISSN 2159-676X, Document Cited by: §4.4.
  • [15] H. Brignull and A. Darlington What are dark patterns?. Note: https://www.darkpatterns.org/ Accessed: 2019-9-28 External Links: Link Cited by: §2.1, §5.4.
  • G. Bryan, D. Karlan, and S. Nelson (2010) Commitment devices. Annual review of economics 2 (1), pp. 671–698. External Links: Link, ISSN 1941-1383, Document Cited by: §5.2.
  • C. Burr, N. Cristianini, and J. Ladyman (2018) An analysis of the interaction between intelligent software agents and human users. Minds and Machines 28 (4), pp. 735–774 (en). External Links: Link, ISSN 0924-6495, Document Cited by: §1.
  • [18] calkuta DF tube (distraction free for youtube). Note: https://chrome.google.com/webstore/detail/df-tube-distraction-free/mjdepdfccjgcndkmemponafgioodelna?hl=en Accessed: 2020-8-3 External Links: Link
  • S. E. Caplan (2010) Theory and measurement of generalized problematic internet use: a two-step approach. Computers in human behavior 26 (5), pp. 1089–1097. External Links: Link, ISSN 0747-5632, Document Cited by: §1, §2.2.
  • H. Cash, C. D. Rae, A. H. Steel, and A. Winkler (2012) Internet addiction: a brief summary of research and practice. Current psychiatry reviews 8 (4), pp. 292–298 (en). External Links: Link, ISSN 1573-4005, Document Cited by: §2.2.
  • M. E. Cecchinato, A. L. Cox, and J. Bird (2017) Always on(line)?: user experience of smartwatches and their role within Multi-Device ecologies. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3557–3568. External Links: Link, ISBN 9781450346559, Document
  • [22] A. Chen A new bill would ban making social media too addictive. MIT Technology Review. External Links: Link, ISSN 0040-1692 Cited by: §5.4.
  • J. Cheng, M. Burke, and E. G. Davis (2019) Understanding perceptions of problematic facebook use: when people experience negative life impact and a lack of control. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 199. External Links: Link, ISBN 9781450359702, Document Cited by: §1, §2.2, §5.
  • J. Cheng, C. Lo, and J. Leskovec (2017) Predicting intent using activity logs: how goal specificity and temporal range affect user behavior. In Proceedings of the 26th International Conference on World Wide Web Companion, pp. 593–601. External Links: Link, ISBN 9781450349147, Document Cited by: §5.3.
  • J. E. Cohen (2012) What privacy is for. Harvard law review 126, pp. 1904. External Links: Link, ISSN 0017-811X
  • E. I. M. Collins, A. L. Cox, J. Bird, and C. Cornish-Tresstail (2014) Barriers to engagement with a personal informatics productivity tool. In Proceedings of the 26th Australian Computer-Human Interaction Conference on Designing Futures: The Future of Design, OzCHI ’14, New York, NY, USA, pp. 370–379. External Links: Link, ISBN 9781450306539, Document Cited by: §2.3.
  • P. Covington, J. Adams, and E. Sargin (2016) Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys ’16, New York, NY, USA, pp. 191–198. External Links: Link, ISBN 9781450340359, Document Cited by: §5.1.
  • A. L. Cox, S. J. J. Gould, M. E. Cecchinato, I. Iacovides, and I. Renfree (2016) Design frictions for mindful interactions: the case for microboundaries. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, New York, NY, USA, pp. 1389–1397. External Links: Link, ISBN 9781450340823, Document Cited by: §2.3, §5.2.
  • D. Coyle, J. Moore, P. O. Kristensson, P. Fletcher, and A. Blackwell (2012) I did that! measuring users’ experience of agency in their own actions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, New York, NY, USA, pp. 2025–2034. External Links: Link, ISBN 9781450310154, Document Cited by: §2.2, §5.5.
  • K. Davis, A. Dinhopl, and A. Hiniker (2019) “Everything’s the phone”: understanding the phone’s supercharged role in Parent-Teen relationships. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, pp. 227. External Links: Link Cited by: §2.2.
  • A. Dearden and J. Finlay (2006) Pattern languages in HCI: a critical review. Human–Computer Interaction 21 (1), pp. 49–102. External Links: Link, ISSN 0737-0024, Document
  • L. Delaney and L. K. Lades (2017) Present bias and everyday self-control failures: a day reconstruction study. Journal of behavioral decision making 30 (5), pp. 1157–1167. External Links: Link, ISSN 0894-3257 Cited by: §1.
  • Digital Wellness Warriors (2018) Apple: let developers help iphone users with mental wellbeing. Note: https://www.change.org/p/apple-allow-digital-wellness-developers-to-help-ios-users Accessed: 2020-8-27 External Links: Link
  • C. Dixon (2019) Why shutter YouTube leanback when there are many potential users?. Note: https://nscreenmedia.com/why-shutter-youtube-leanback-browser-experience-now/ Accessed: 2020-9-7 External Links: Link
  • A. L. Duckworth, R. E. White, A. J. Matteucci, A. Shearer, and J. J. Gross (2016) A stitch in time: strategic Self-Control in high school and college students. Journal of educational psychology 108 (3), pp. 329–341 (en). External Links: Link, ISSN 0022-0663, Document Cited by: §5.1.
  • M. D. Ekstrand and M. C. Willemsen (2016) Behaviorism is not enough: better recommendations through listening to users. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys ’16, New York, NY, USA, pp. 221–224. External Links: Link, ISBN 9781450340359, Document Cited by: §5.1, §5.1.
  • R. Felner, A. Adan, R. Price, E. L. Cowen, R. P. Lorion, and J. Ramos-McKay (1988) 14 ounces of prevention: a casebook for practitioners. Cited by: §5.2.
  • B. J. Fogg (2009) Creating persuasive technologies: an eight-step design process. In Proceedings of the 4th International Conference on Persuasive Technology, Persuasive ’09, New York, NY, USA, pp. 1–6. External Links: Link, ISBN 9781605583761, Document Cited by: §5.2.
  • D. R. Forsyth (2008) Self-serving bias. Cited by: §3.3.
  • P. M. Gollwitzer and P. Sheeran (2006) Implementation intentions and goal achievement: a meta‐analysis of effects and processes. In Advances in Experimental Social Psychology, Vol. 38, pp. 69–119. External Links: Link, Document Cited by: §5.2.
  • [41] Google YouTube leanback offers effortless viewing. Note: https://youtube.googleblog.com/2010/07/youtube-leanback-offers-effortless.html Accessed: 2020-9-12 External Links: Link Cited by: §5.3.
  • C. M. Gray, Y. Kou, B. Battles, J. Hoggatt, and A. L. Toombs (2018) The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, New York, NY, USA, pp. 534:1–534:14. External Links: Link, ISBN 9781450356206, Document Cited by: §2.1, §5.4.
  • Z. Hall (2019) Apple makes privacy extremely relatable in fun new iphone ad - 9to5mac. Note: https://9to5mac.com/2019/03/14/iphone-privacy-ad/ Accessed: 2020-9-13 External Links: Link Cited by: §5.4.
  • J. Harambam, D. Bountouridis, M. Makhortykh, and J. van Hoboken (2019) Designing for the better by taking users into account: a qualitative evaluation of user control mechanisms in (news) recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems, RecSys ’19, New York, NY, USA, pp. 69–77. External Links: Link, ISBN 9781450362436, Document Cited by: §2.3, §5.1.
  • E. Harmon and M. Mazmanian (2013) Stories of the smartphone in everyday discourse: conflict, tension & instability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’13, New York, NY, USA, pp. 1051–1060. External Links: Link, ISBN 9781450318990, Document Cited by: §2.2.
  • J. Hill, K. Widdicks, and M. Hazas (2020) Mapping the scope of software interventions for moderate internet use on mobile devices. In Proceedings of the 7th International Conference on ICT for Sustainability, ICT4S2020, New York, NY, USA, pp. 204–212. External Links: Link, ISBN 9781450375955, Document
  • A. Hiniker, S. S. Heung, S. (. Hong, and J. A. Kientz (2018) Coco’s videos: an empirical investigation of Video-Player design features and children’s media use. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, New York, NY, USA, pp. 1–13. External Links: Link, ISBN 9781450356206, Document Cited by: §2.2.
  • A. Hiniker, S. (. Hong, T. Kohno, and J. A. Kientz (2016) MyTime: designing and evaluating an intervention for smartphone Non-Use. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4746–4757. External Links: Link, ISBN 9781450333627, Document Cited by: §2.1, §2.2.
  • A. Hiniker, B. Lee, K. Sobel, and E. K. Choe (2017) Plan & play: supporting intentional media use in early childhood. In Proceedings of the 2017 Conference on Interaction Design and Children, IDC ’17, New York, NY, USA, pp. 85–95. External Links: Link, ISBN 9781450349215, Document Cited by: §5.2.
  • A. Hiniker, S. N. Patel, T. Kohno, and J. A. Kientz (2016) Why would you do that? predicting the uses and gratifications behind smartphone-usage behaviors. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, pp. 634–645. External Links: Link Cited by: §5.3.
  • W. Hofmann, R. F. Baumeister, G. Förster, and K. D. Vohs (2012) Everyday temptations: an experience sampling study of desire, conflict, and self-control. Journal of personality and social psychology 102 (6), pp. 1318–1335 (en). External Links: Link, ISSN 0022-3514, 1939-1315, Document
  • R. Jääskeläinen (2010) Think-aloud protocol. Handbook of translation studies 1, pp. 371–374. External Links: Link Cited by: §4.3.1.
  • S. Jeong, H. Kim, J. Yum, and Y. Hwang (2016) What type of content are smartphone users addicted to?: SNS vs. games. Computers in human behavior 54, pp. 10–17. External Links: Link, ISSN 0747-5632, Document Cited by: §2.2.
  • A. Kamenetz (2018) The art of screen time: how your family can balance digital media and real life. Hachette UK. Cited by: §2.1.
  • J. Kim, H. Jung, M. Ko, and U. Lee (2019) GoalKeeper: exploring interaction lockout mechanisms for regulating smartphone use. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3 (1), pp. 29. External Links: Link, Document Cited by: §1, §2.3, §2.3, §5.2.
  • J. Kim, J. Park, H. Lee, M. Ko, and U. Lee (2019) LocknType: lockout task intervention for discouraging smartphone app use. In ACM CHI, External Links: Link, Document Cited by: §2.3, §5.2.
  • Y. Kim, J. H. Jeon, E. K. Choe, B. Lee, K. Kim, and J. Seo (2016) TimeAware: leveraging framing effects to enhance personal productivity. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI ’16, New York, NY, USA, pp. 272–283. External Links: Link, ISBN 9781450333627, Document Cited by: §1.
  • M. Ko, S. Yang, J. Lee, C. Heizmann, J. Jeong, U. Lee, D. Shin, K. Yatani, J. Song, and K. Chung (2015) NUGU: a group-based intervention app for improving Self-Regulation of limiting smartphone use. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW ’15, New York, NY, USA, pp. 1235–1245. External Links: Link, ISBN 9781450329224, Document Cited by: §2.1.
  • G. Kovacs, D. M. Gregory, Z. Ma, Z. Wu, G. Emami, J. Ray, and M. S. Bernstein (2019) Conservation of procrastination: do productivity interventions save time or just redistribute it?. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 330:1–330:12. External Links: Link, ISBN 9781450359702, Document
  • J. R. Landis and G. G. Koch (1977) The measurement of observer agreement for categorical data. Biometrics 33 (1), pp. 159–174 (en). External Links: Link, ISSN 0006-341X Cited by: §3.3.
  • G. P. Latham and E. A. Locke (1991) Self-regulation through goal setting. Organizational behavior and human decision processes 50 (2), pp. 212–247. External Links: Link, ISSN 0749-5978
  • P. Lewis (2017) ’Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia. The Guardian 6 (10), pp. 2017. External Links: ISSN 0261-3077 Cited by: §1.
  • H. Limerick, D. Coyle, and J. W. Moore (2014) The experience of agency in human-computer interactions: a review. Frontiers in human neuroscience 8, pp. 643 (en). External Links: Link, ISSN 1662-5161, Document Cited by: §2.2.
  • H. Limerick, J. W. Moore, and D. Coyle (2015) Empirical evidence for a diminished sense of agency in speech interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI ’15, New York, NY, USA, pp. 3967–3970. External Links: Link, ISBN 9781450331456, Document Cited by: §2.2.
  • D. Lottridge, E. Marschner, E. Wang, M. Romanovsky, and C. Nass (2012) Browser design impacts multitasking. Proceedings of the Human Factors and Ergonomics Society … Annual Meeting Human Factors and Ergonomics Society. Meeting 56 (1), pp. 1957–1961. External Links: Link, ISSN 1541-9312, Document Cited by: §2.3.
  • K. Lukoff, A. Hiniker, C. M. Gray, A. Mathur, and S. Chivukula (2021) What can CHI do about dark patterns?. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing SystemsCHI ’21: CHI Conference on Human Factors in Computing Systems, New York, NY, USA. External Links: Link, Document Cited by: §2.1.
  • K. Lukoff, C. Yu, J. Kientz, and A. Hiniker (2018) What makes smartphone use meaningful or meaningless?. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2 (1), pp. 22:1–22:26. External Links: Link, ISSN 2474-9567, Document Cited by: §2.1, §2.2.
  • U. Lyngs, R. Binns, M. Van Kleek, et al. (2018) So, tell me what users want, what they really, really want!. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. External Links: Link Cited by: §2.2, §5.1, §5.1.
  • U. Lyngs, K. Lukoff, P. Slovak, R. Binns, A. Slack, M. Inzlicht, M. Van Kleek, and N. Shadbolt (2019) Self-Control in cyberspace: applying dual systems theory to a review of digital Self-Control tools. CHI 2019. External Links: Link, Document Cited by: §5.1, §5.2.
  • U. Lyngs, K. Lukoff, P. Slovak, W. Seymour, H. Webb, M. Jirotka, M. Van Kleek, and N. Shadbolt (2020) ’I just want to hack myself to not get distracted’: evaluating design interventions for Self-Control on facebook. External Links: Link, 2001.04180 Cited by: §2.2.
  • C. Marino, G. Gini, A. Vieno, and M. M. Spada (2018) A comprehensive meta-analysis on problematic facebook use. Computers in human behavior 83, pp. 262–277. External Links: Link, ISSN 0747-5632, Document Cited by: §1, §2.2, §5.
  • [72] V. Marotta and A. Acquisti Online distractions, website blockers, and economic productivity: a randomized field experiment.
  • L. Matney (2017) YouTube has 1.5 billion logged-in monthly users watching a ton of mobile video. TechCrunch. External Links: Link Cited by: §3.1.3, §4.2.2.
  • T. McKay (2019) Senators introduce bill to stop ’dark patterns’ huge platforms use to trick users. Note: https://gizmodo.com/senators-introduce-bill-to-stop-dark-patterns-huge-plat-1833929276 Accessed: 2020-8-27 External Links: Link Cited by: §5.4.
  • S. M. McNee, S. K. Lam, C. Guetzlaff, J. A. Konstan, and J. Riedl (2003) Confidence displays and training in recommender systems. In Proc. INTERACT, Vol. 3, pp. 176–183. External Links: Link Cited by: §5.1.
  • S. M. McNee, J. Riedl, and J. A. Konstan (2006) Being accurate is not enough: how accuracy metrics have hurt recommender systems. In CHI ’06 Extended Abstracts on Human Factors in Computing Systems, CHI EA ’06, New York, NY, USA, pp. 1097–1101. External Links: Link, ISBN 9781595932983, Document Cited by: §5.1.
  • J. Metcalfe and M. J. Greene (2007) Metacognition of agency. Journal of experimental psychology. General 136 (2), pp. 184–199 (en). External Links: Link, ISSN 0096-3445, 0022-1015, Document Cited by: §3.2.
  • A. Monge Roffarello and L. De Russis (2019) The race towards digital wellbeing: issues and opportunities. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 386:1–386:14. External Links: Link, ISBN 9781450359702, Document Cited by: §2.3.
  • J. W. Moore (2016) What is the sense of agency and why does it matter?. Frontiers in psychology 7, pp. 1272 (en). External Links: Link, ISSN 1664-1078, Document Cited by: §5.5.
  • C. Moser, S. Y. Schoenebeck, and P. Resnick (2019) Impulse buying: design practices and consumer needs. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. External Links: Link Cited by: §5.2.
  • A. Narayanan, A. Mathur, M. Chetty, and M. Kshirsagar (2020) Dark patterns: past, present, and future: the evolution of tricky user interfaces. ACM Queue 18 (2), pp. 67–92. External Links: Link, ISSN 1542-7730, Document Cited by: §5.4.
  • J. Nielsen (1994) 10 heuristics for user interface design: article by jakob nielsen. Note: https://www.nngroup.com/articles/ten-usability-heuristics/ Accessed: 2020-2-7 External Links: Link Cited by: §2.2, §5.3.
  • NPR (2015) Episode 653: the Anti-Store. NPR. External Links: Link Cited by: §4.5.2.
  • T. O’Donoghue and M. Rabin (2015) Present bias: lessons learned and to be learned. The American economic review 105 (5), pp. 273–279. External Links: Link, ISSN 0002-8282 Cited by: §5.1, §5.5.
  • F. Okeke, M. Sobolev, N. Dell, and D. Estrin (2018) Good vibrations: can a digital nudge reduce digital overload?. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 4. External Links: Link, ISBN 9781450358989, Document Cited by: §2.3.
  • A. Oulasvirta, T. Rattenbury, L. Ma, and E. Raita (2012) Habits make smartphone use more pervasive. Personal and Ubiquitous Computing 16 (1), pp. 105–114 (en). External Links: Link, ISSN 0949-2054, Document Cited by: §2.1.
  • E. Pandey (2017) Sean parker: facebook was designed to exploit human “vulnerability”. Note: https://www.axios.com/sean-parker-facebook-exploits-a-vulnerability-in-humans-2507917325.html Accessed: 2020-9-15 External Links: Link Cited by: §2.1.
  • J. Park, J. Y. Sim, J. Kim, M. Y. Yi, and U. Lee (2018) Interaction restraint: enforcing adaptive cognitive tasks to restrain problematic user interaction. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA ’18, New York, NY, USA, pp. LBW559:1–LBW559:6. External Links: Link, ISBN 9781450356213, Document Cited by: §2.3.
  • N. Paumgarten (2014) Up and then down. The New Yorker. External Links: Link, ISSN 0028-792X Cited by: §5.5.
  • A. Perrin and M. Anderson (2019) Share of U.S. adults using social media, including facebook, is mostly unchanged since 2018. Note: https://www.pewresearch.org/fact-tank/2019/04/10/share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/ Accessed: 2020-9-14 External Links: Link Cited by: §1, §2.1.
  • C. Pinder, J. Vermeulen, B. R. Cowan, et al. (2018) Digital behaviour change interventions to break and form habits. ACM Transactions on. External Links: Link Cited by: §5.2.
  • P. Resnick and H. R. Varian (1997) Recommender systems. Communications of the ACM 40 (3), pp. 56–58. External Links: Link, ISSN 0001-0782 Cited by: §5.1.
  • K. Roose (2019) The making of a YouTube radical. The New York times. External Links: Link, ISSN 0362-4331 Cited by: §5.5.
  • A. M. Rubin (1984) Ritualized and instrumental television viewing. The Journal of communication 34 (3), pp. 67–77. External Links: Link, ISSN 0021-9916, 1460-2466, Document Cited by: §5.3.
  • R. M. Ryan and E. L. Deci (2006) Self-regulation and the problem of human autonomy: does psychology need choice, self-determination, and will?. Journal of personality 74 (6), pp. 1557–1586. External Links: Link, ISSN 0022-3506 Cited by: §2.2.
  • T. C. Schelling (1984) Self-Command in practice, in policy, and in a theory of rational choice. The American economic review 74 (2), pp. 1–11. External Links: Link, ISSN 0002-8282 Cited by: §5.2.
  • M. Schlosser (2019) Agency. In The Stanford Encyclopedia of Philosophy, E. N. Zalta (Ed.), External Links: Link
  • M. Schrage (1996) Cultures of prototyping. Bringing design to software 4 (1), pp. 1–11. External Links: Link Cited by: §4.3.2.
  • N. D. Schüll (2012) Addiction by design: machine gambling in las vegas. In-Formation Series, Princeton University Press. External Links: Link, ISBN 9780691127552, LCCN 2012004339
  • N. Seaver (2018) Captivating algorithms: recommender systems as traps. Journal of Material Culture, pp. 1359183518820366. External Links: Link, ISSN 1359-1835, Document
  • B. Shneiderman and C. Plaisant (2004) Designing the user interface: strategies for effective Human-Computer interaction (4th edition). Pearson Addison Wesley. External Links: ISBN 9780321197863 Cited by: §2.2, §5.3.
  • B. Shneiderman (1992) Designing the user interface (2nd ed.): strategies for effective human-computer interaction. Addison-Wesley Longman Publishing Co., Inc., USA. External Links: Link, ISBN 9780201572865
  • [103] J. Sillito Saturate app: simple collaborative analysis. Note: http://www.saturateapp.com/ Accessed: 2020-2-NA External Links: Link Cited by: §4.4.
  • L. Silver, A. Smith, C. Johnson, K. Taylor, J. Jiang, A. Monica, and L. Rainie (2019) Use of smartphones and social media is common across most emerging economies. Note: https://www.pewresearch.org/internet/2019/03/07/use-of-smartphones-and-social-media-is-common-across-most-emerging-economies/ Accessed: 2019-2-NA External Links: Link Cited by: §3.1.2.
  • A. Smith, S. Toor, and P. Van Kessel (2018) Many turn to YouTube for children’s content, news, How-To lessons. Note: https://www.pewresearch.org/internet/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/ Accessed: 2020-3-3 External Links: Link
  • F. F. Sniehotta, U. Scholz, and R. Schwarzer (2005) Bridging the intention–behaviour gap: planning, self-efficacy, and action control in the adoption and maintenance of physical exercise. Psychology & Health 20 (2), pp. 143–160. External Links: Link, ISSN 0887-0446, Document
  • J. E. Solsman (2018) Ever get caught in an unexpected hourlong YouTube binge? thank YouTube AI for that. CNET. Note: https://www.cnet.com/news/youtube-ces-2018-neal-mohan/ Accessed: 2020-5-1 External Links: Link Cited by: §2.1.
  • [108] T. Spangler YouTube tops 20 million paying subscribers, YouTube TV has over 2 million customers. Note: https://variety.com/2020/digital/news/youtube-tops-20-million-paying-subscribers-youtube-tv-has-over-2-million-customers-1203491228/ Accessed: 2020-8-26 External Links: Link Cited by: §3.1.3.
  • M. Stanphill (2019) Optimizing for engagement: understanding the use of persuasive technology on internet platforms. Note: U.S. Senate Committee Hearing
  • N. Statt (2016) Flowstate is a writing app that will delete everything if you stop typing. Note: https://www.theverge.com/2016/1/28/10853534/flowstate-writing-app-mac-ios-delete-everything Accessed: 2020-8-10 External Links: Link Cited by: §2.3.
  • S. S. Sundar and S. S. Marathe (2010) Personalization versus customization: the importance of agency, privacy, and power usage. Human communication research 36 (3), pp. 298–322 (en). External Links: Link, ISSN 0360-3989, Document Cited by: §5.3, §5.5.
  • M. Synofzik, G. Vosgerau, and A. Newen (2008) Beyond the comparator model: a multifactorial two-step account of agency. Consciousness and cognition 17 (1), pp. 219–239 (en). External Links: Link, ISSN 1053-8100, 1090-2376, Document Cited by: §1, §2.2.
  • [113] Take control. Note: https://www.humanetech.com/take-control Accessed: 2020-8-3 External Links: Link Cited by: §2.1.
  • A. Tapal, E. Oren, R. Dar, and B. Eitam (2017) The sense of agency scale: a measure of consciously perceived control over one’s mind, body, and the immediate environment. Frontiers in psychology 8, pp. 1552 (en). External Links: Link, ISSN 1664-1078, Document Cited by: §3.2.
  • J. A. Tran, K. S. Yang, K. Davis, and A. Hiniker (2019) Modeling the Engagement-Disengagement cycle of compulsive phone use. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019). External Links: Link, Document Cited by: §2.3, §2.3, §4.3.1.
  • [116] United States Census Bureau QuickFacts: united states. External Links: Link Cited by: §3.1.2.
  • P. Van Kessel, S. Toor, and A. Smith (2019) A week in the life of popular YouTube channels. Note: https://www.pewresearch.org/internet/2019/07/25/a-week-in-the-life-of-popular-youtube-channels/ Accessed: 2020-4-1 External Links: Link
  • J. Williams (2018) Stand out of our light: freedom and resistance in the attention economy. Cambridge University Press (en). External Links: Link, ISBN 9781108429092 Cited by: §1.
  • E. Y. Wu, E. Pedersen, and N. Salehi (2019) Agent, gatekeeper, drug dealer: how content creators craft algorithmic personas. Proc. ACM Hum. -Comput. Interact. 3 (CSCW), pp. 219:1–219:27. External Links: Link, ISSN 2573-0142, Document
  • [120] YouTube YouTube for press. Youtube. Note: https://www.youtube.com/about/press/Accessed: 2020-8-14 External Links: Link Cited by: §2.1, §3.1.3.
  • J. P. Zagal, S. Björk, and C. Lewis (2013) Dark patterns in the design of games. In Foundations of Digital Games 2013, (en). External Links: Link Cited by: §2.1.
  • J. Zimmerman and J. Forlizzi (2014) Research through design in HCI. In Ways of Knowing in HCI, J. S. Olson and W. A. Kellogg (Eds.), pp. 167–189. External Links: Link, ISBN 9781493903788, Document Cited by: §4.