Over the last decade, starting from the introduction of the iPhone and iOS, the adoption of mobile devices (e.g. smartphones) has been unprecedented, with billions of devices already in use. This growth has been fueled by extremely capable devices with functionality driven by rich sensing modalities, significant computational power, and most importantly millions of apps developed for them. We can now use our phones for diverse tasks well beyond communication: from controlling our smart home appliances, to entertainment and content creation, to health and fitness tracking. Recent surveys from Bank of America reported that people ranked their smartphones as more important than PCs, and that a majority of people could not cope without their phone for even a day (Bank of America, 2014).
Given the importance of mobile devices and the app ecosystems around them, understanding app usage modalities is key for reasons ranging from building compelling and better apps to providing an overall improved user experience. For example, iOS Spotlight has a basic suggested app feature that works by monitoring the time and location of previous app use. Similarly, by knowing which apps or content a user might be interested in, the operating system can preload apps to reduce perceived delay (Yan et al., 2012). Additionally, if users with similar app usage patterns, as well as other contextual parameters, can be grouped together they can be recommended new apps that others in their cluster use (Xu et al., 2013).
This second use case is particularly important as there are millions of unique apps in both Apple’s App Store and Google’s Play Store, making the discovery of new apps incredibly hard for users. Similarly, recent work on managing user privacy on Android devices noted that building user profiles of “similar” users can be useful in providing recommendations to the users belonging to the same profile (Liu et al., 2016). Finally, app developers can potentially target or customize their apps for certain user types based on context, for example people using phones only during the day, at night, those who use certain types of apps (e.g. Games), or who use their devices more or less than a certain amount each day. However, for many of the use cases highlighted above we ideally need to understand whether app usage changes over time, and what factors affect it, to account for any long-term temporal effects in user behavior.
Recognizing the importance of mobile device usage, there has been growing research interest in examining usage data from mobile devices. Examples include analyzing how long application sessions last (Banovic et al., 2014; Böhmer et al., 2011), the relationship between mobile app usage and mobile search (Carrascal and Church, 2015), using Markov models to study and represent device usage (Kostakos et al., 2016), contextual factors affecting app usage (Xu et al., 2013; Yan and Chen, 2011; Do et al., 2011; Hintze et al., 2017), the factors leading to repeated usage of an app (Jones et al., 2015), as well as emerging meta-analyses of the trials and best practices for studying mobile device usage (Church et al., 2015). More recently, Zhao et al. analyzed app usage data for Android users over a month to show that user populations are not homogeneous and are in fact comprised of many sub-groups (Zhao et al., 2016).
In this paper, we investigate whether app usage patterns change over time, using a dataset that covers 4 years of users interacting with apps on their smart devices. While various aspects of app usage patterns have been studied over shorter time periods (e.g. up to a few months), it is not well known how these patterns change over a more extended period of time (e.g. multiple years). We analyzed a dataset collected from an ongoing long-term study (Agarwal and Hall, 2013) containing the app usage of a large number of iOS devices worldwide (). This dataset consists of fine-grained app session length data (i.e. how long users used each app) from two different form factors, iPhone and iPad, over a 4-year period from August 2012 to October 2016. To the best of our knowledge, this is one of the first studies to analyze mobile device usage patterns at the scale of years.
We take a systematic approach towards assessing the longitudinal variability in usage patterns across several dimensions. First, we analyze app usage at a population level and at an individual user level. Our population-level analysis studies the global demand and needs of real-world “average” users, such as which apps are popular among the entire population and how this popularity has changed over time. For our individual user-level analysis, we cannot derive such popularity measures; instead, we focus on the interaction between users and their devices. For example, we explore research questions such as: how many apps do users interact with during the course of a week? Does the amount of time a user spends on a certain kind of app change over time? Second, we study the longitudinal variability of app usage at two different categorization levels: individual apps and the category an app is listed under. We analyze category-level usage since the current categorization of apps is largely based on high-level functionality (e.g. Games, Productivity, Travel), allowing us to highlight overall trends and how they have changed over time. We also study the longitudinal patterns of a set of top individual apps, because this allows us to examine user behavior in greater detail by observing users' actions. For example, by looking at what kinds of apps each user keeps using over time, we can refrain from recommending new apps with similar functionality.
Based on our longitudinal analysis of app usage data across the above dimensions, we make the following contributions:
Usage patterns at population level (§4):
Proportional app category usage over time (§4.1): We show that proportional usage across app categories varied significantly over the first half of the dataset, until the end of 2014. One notable change over that period is a sharp decrease in Productivity apps, led by decreasing use of mail apps. After 2015, we show that proportional usage across app categories stabilizes, with relatively minor shifts such as usage of Photo & Video apps increasing while usage of Productivity apps simultaneously decreases. Based on insights of this kind, along with other observations from our dataset, app developers can be advised to focus on apps with more visual entertainment rather than on productivity apps.
App popularity (§4.2): Even though the proportional usage across app categories remained stable after 2015, proportional usage at the granularity of individual apps changed significantly over time. Given a list of top apps, we also find that roughly between of those apps stay in that top list during the entire data collection period (Aug 2012 - Sept 2016), showing that these apps remain continuously popular. We also took a deeper look at which kinds of apps remain in the top list for longer periods of time, providing useful guidance to app developers on identifying what kinds of apps are less likely to be substituted by new apps with similar functionality. For example, based on our analysis, Game apps typically had shorter lifetimes, failing to stay in the top list for an extended period as they were quickly substituted by other Game apps.
Usage patterns at an individual user level (§5):
Individual variability in proportional usage over time (§5.1): We find that each individual's app usage pattern is highly variable over time with regard to their proportional usage of apps in various categories. Using simple relative metrics to compare the longitudinal variability of a typical, representative user against actual users in the dataset, we find that each individual's usage pattern changes more dynamically than the global usage pattern does. This individual variability may complicate building an effective app recommendation engine, since frequent collection of user app usage data is warranted to build accurate profiles.
Working set (§5.2): We find that the working set size of individual users is quite small. More than 90% of the iPhone users in our dataset use between 14 and 18 different apps during the course of a week (the weekly working set size). In addition, we find the working set size for iPads is much smaller, around 5 to 7 apps per week. For mobile platform designers and builders, these numbers provide a reference for deciding how many apps a mobile device should include in its default dashboard for a better user experience.
2. Related Work
Given the growth of mobile devices over the past decade, there have been numerous research efforts examining all aspects of their usage and the app ecosystems surrounding them. One line of work attempts to leverage usage patterns for specific system objectives (e.g. improving network performance, optimizing energy use, providing better recommendations). A second line of work focuses on analyzing app usage patterns to answer fundamental questions about user-app behavior.
2.1. Systems Using App Usage Data
Falaki et al., in their early work, collected and studied app usage as well as network and energy consumption data from 255 users, showing that usage patterns were diverse and could be leveraged to better predict battery drain (Falaki et al., 2010). Yan et al. combined prior app usage with location and time of day to predict future launches and pre-load apps for responsiveness (Yan et al., 2012). MoodScope similarly used app usage along with information such as phone calls and SMS to predict a smartphone user's mood (LiKamWa et al., 2013). CARAT uses crowd-sourced app usage data, including CPU statistics, to alert users to “hogs”, apps with unexpected energy use (an important issue on resource-constrained devices), allowing users to take actions such as avoiding certain apps or changing their device configuration (Oliner et al., 2013). Several researchers have also built systems that use some combination of historical app usage (Shi and Ali, 2012) and contextual information from smartphone sensors, such as location and social behavior, to predict future app usage (Xu et al., 2013) and to recommend other applications that may be relevant (Shi and Ali, 2012; Yan and Chen, 2011). In the context of mobile privacy, Liu et al. combined the app usage data and privacy decisions of 72 smartphone users in a system that provides a privacy “nudge” to users with profiles similar to others' (Liu et al., 2016).
2.2. Studying App Usage Behavior
More directly related to our work, prior research has studied app usage on mobile devices from a behavioral perspective. Li et al. (Li et al., 2015), for example, studied how users find, install, and remove apps, as well as the diversity in their network usage, based on data from a Chinese app store and network usage traces for almost a million users. Xu et al. (Xu et al., 2011) similarly used network traffic measurements from a US cellular provider to study app usage for over 600,000 smartphone users. They showed spatio-temporal usage correlations of apps, diurnal patterns of some apps, and usage correlation across apps. While their dataset was large, it was only collected for a week. At a smaller scale, Bohmer et al. studied 4,000 Android users' app usage over a 3-month period. They observed that smartphone users interacted with their device for an average of 59.23 minutes per day, with the mean app session lasting 71.56 seconds (Böhmer et al., 2011). However, they did not study changes in app usage behavior over time. On a smaller scale still, Shin et al. studied 48 participants over 25 days to detect abnormal app usage from a mental health standpoint (Shin and Dey, 2013). Eagle et al. demonstrated the ability to infer a wide variety of factors, such as relationships, socially significant locations, and organizational rhythms, from 100 mobile phones over a period of 9 months (Eagle and Pentland, 2006). While this dataset is certainly long enough to examine longitudinal factors, such analysis was not the focus of Eagle et al.'s work. Do et al. also performed a small-scale (111 participants) study based on 8 months of phone application usage to model and predict participant phone behavior (Do and Gatica-Perez, 2010). More recently, the work by Hintze et al. (Hintze et al., 2017) showed that location context (office, home, meaningful, and elsewhere) as well as temporal context have a significant correlation with usage patterns.
Interestingly, since they used a dataset (Wagner et al., 2014) spanning 4.5 years of data collection, they also found evidence of longitudinal effects on usage patterns. However, while they considered in their analysis that user behavior may change over time, and attempted to account for these changes, they did not perform an in-depth analysis to quantify to what extent these longitudinal factors influence usage behavior.
Researchers have also examined classifying users based on their app usage. Banovic et al. (Banovic et al., 2014) observed the interactions of 27 users with an email app to classify them into four distinct types. Jones et al. (Jones et al., 2015) identified three groups of users based on a 165-participant, 3-month study analyzing how users revisited the same apps. While these studies into app usage are interesting, their small scale and short duration are limitations given the sheer scale of mobile app usage. More recently, Zhao et al. (Zhao et al., 2016) studied a much larger dataset (n ≈ 106,000) of Android users over a 1-month period to identify types of users, finding 382 distinct user groups with notably distinct behaviors. While valuable, and the basis for some of our work, their study had a relatively coarse-grained notion of usage, recording only the last ten apps used at the end of every hour, with no notion of how long those applications were used, or even launch frequency within that hour-long period. Additionally, their relatively short data collection period prevented any longitudinal analysis of how users change over time (Zhao et al., 2016). In this paper, we show that these longitudinal factors are key to understanding app usage behavior over time.
To summarize, while there have been several studies of app usage, few have had large sets of diverse users, and among those with many subjects, few have collected data over a period long enough to examine longitudinal behavior changes.
3. Dataset

In this section, we briefly introduce our dataset and how it was acquired, and describe the various pre-processing steps we took to protect participants' privacy while minimizing potential biases.
3.1. Overview of Dataset
To study longitudinal differences in app usage, we use a dataset extracted from an ongoing long-term research study on mobile privacy (Agarwal and Hall, 2013; Chitkara et al., 2017). These longer-term research studies required low-level access to the smartphone OS, which was not possible on vanilla, unmodified devices. As a result, the ProtectMyPrivacy (PmP) app was developed for users with jailbroken iOS devices (Agarwal and Hall, 2013) and was made available in the Cydia app store, which was popular with tens of millions of iOS users at its peak. Our dataset thus only includes usage records from jailbroken iOS devices. We acknowledge that this may lead to potential biases, and we discuss the implications later in this section (Section 3.4).
App Session Record: An app session is the time period during which the user has a specific app in the foreground to interact with it. In other words, an app session starts when a user taps the app icon and ends when the user exits back to the home screen or brings another app to the foreground. In our dataset, each app session contains: (i) the name of the launched app, (ii) the app session start time in UTC, (iii) the timezone information of the app session (i.e. the time offset from the system clock), (iv) the date the app was launched and (v) the duration for which it was in the foreground. In total, we had app session records recorded between August 2012 and October 2016.
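The five fields above can be represented as a simple record. A minimal sketch in Python, with hypothetical field names (the dataset's actual schema is not specified here):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AppSession:
    app_name: str        # (i) name of the launched app
    start_utc: datetime  # (ii) session start time in UTC
    utc_offset_min: int  # (iii) timezone offset from the system clock, in minutes
    launch_date: str     # (iv) local date the app was launched
    duration_sec: float  # (v) time the app spent in the foreground, in seconds

# Example record: a 42.5-second session in a UTC-8 timezone.
session = AppSession("com.example.app", datetime(2014, 3, 7, 18, 30), -480,
                     "2014-03-07", 42.5)
```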
Participants: Users in our dataset are real-world iOS users who voluntarily found and installed the ProtectMyPrivacy app. With the approval of our institution's institutional review board (IRB), data was collected only from users who explicitly consented inside the app.
Population: Our dataset originally contains devices, but we filtered out devices that did not send any session data or contained only invalid session data. This pre-processing resulted in devices. Among those devices, there were (79.00%) iPhones, (12.45%) iPads, and (8.56%) iPod touches. In this paper, we focus on analyzing the usage data from the iPhones and iPads.
Note that no demographic details of any form, such as age, gender or nationality, were included in the dataset. Instead, to illustrate the geographical diversity of the users in this dataset, we extracted the system timezone information included with the usage records, showing at a coarse granularity where the users are based. The numbers of devices categorized by timezone/location at continental granularity are shown in Table 2.
Table 2. App label sources in priority order (Priority, Source, # Labeled, # Remaining).
3.2. Data Pre-processing

Given the scale of our dataset, which includes data from a variety of jailbroken OS versions and different hardware devices, it is inevitable that some records constitute outliers that may not indicate actual app usage. To minimize the biasing effect of these outliers, we carefully designed a preprocessing procedure to filter out outlying app session data and to robustly track each user across time and devices.
In addition, since our dataset contains a large number of distinct apps, not only from the Apple App Store but also from other sources including app stores for jailbroken apps (e.g. the Cydia app store, http://www.cydiawater.com/use-cydia-app-store-to-download-free-apps/, accessed 11/2016), we also carefully designed an app-labeling procedure, detailed in Section 3.3, for app category-level analysis.
3.2.1. Session Filtering
To prevent outlying app sessions from skewing our analysis, we removed all outliers (i.e. sessions in the top 0.15% and bottom 0.15% of session lengths). In other words, we only considered sessions that fall into the central 99.7%, leading us to consider only app usage records between 0.1959 seconds and 33189.9 seconds (about 9.2 hours) in length. We empirically chose the lower-end cutoff (the 0.15% shortest session lengths) based on the assumption that apps launched for less than 200ms are not really foreground apps and are more likely the result of a stray keypress. We then chose a similar higher-end cutoff (the 0.15% longest session lengths), noting that apps that stay in the foreground for many hours are quite likely unattended demo apps without any user interaction.
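This symmetric trimming can be sketched as follows. This is a simplified illustration, not the study's actual pipeline; `durations` is a hypothetical list of session lengths in seconds:

```python
def trim_outlier_sessions(durations, tail=0.0015):
    """Keep only sessions inside the central 99.7% of session lengths,
    dropping the shortest 0.15% and the longest 0.15%."""
    ordered = sorted(durations)
    n = len(ordered)
    lo = ordered[int(n * tail)]                     # lower-end cutoff
    hi = ordered[min(n - 1, int(n * (1 - tail)))]   # higher-end cutoff
    return [d for d in durations if lo <= d <= hi]
```

On the full dataset these cutoffs land at 0.1959 and 33189.9 seconds; the exact percentile convention (inclusive vs. exclusive bounds) is an implementation choice.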
3.2.2. Tracking Users Across Devices
Since we collected usage logs over an extended period of time, the dataset not only reflects each individual user's app usage pattern changes but also each device's life cycle progression. For example, users can get a new device and potentially restore their state from an iCloud/iTunes backup. Moreover, it is also possible that users get rid of their used device. These progressions in each device's life cycle can potentially skew our data, because the dataset would include both the previous and the new device at the same time, regarded as two separate users. To prevent such devices from skewing our dataset, we additionally and conservatively filtered out devices that may not reflect actual usage behavior.
Precisely tracking users across multiple devices and their life cycles would require using personally identifiable information such as the Apple ID. However, as noted in Section 3.1, the dataset does not contain any personally identifiable information (PII). To track users in a way that does not require PII, we used a globally unique identifier (GUID), generated locally on the device at install time and stored in the app preference configuration. We call this GUID the install ID. The install ID is automatically transferred to a new device when a user chooses to restore their configuration or data from the previous one. We acknowledge that we cannot precisely track users who choose not to restore their data, but we believe this procedure alleviates the biasing effect of device life cycles.
In our dataset, we treated two devices with different hashed device UUIDs as distinct devices. By default, we regard each distinct device as being used by a unique owner (user). However, if two devices are of the same form factor and share the same install ID, we assume they are used by a single user; when analyzing the data generated by those two devices, we only consider the data from the device that appeared later in time. Additionally, we also had cases where more than one install ID was associated with a single device UUID. We regarded this as the same user removing and later re-installing our PmP app on their device.
In this way, we identified 72,572 distinct users across 77,555 devices and 86,477 install IDs. We note that this overall population appears smaller than the original 166,006 devices we started with, because the PmP app only started recording install IDs in the middle of the data collection period, in January 2014. As a consequence, we lost a number of devices that stopped using the PmP app before this feature was introduced.
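The merging rule above (devices sharing a form factor and an install ID collapse into one user, keeping the later device) can be sketched as follows; record fields and function names are illustrative, not from the study's actual code:

```python
def collapse_devices(devices):
    """Collapse devices that share a form factor and an install ID into a
    single user, keeping only the device that appeared later in time."""
    latest = {}
    for dev in sorted(devices, key=lambda d: d["first_seen"]):
        key = (dev["form_factor"], dev["install_id"])
        latest[key] = dev["uuid_hash"]  # a later device overwrites an earlier one
    return latest  # one (form factor, install ID) pair per user

# Example: two iPhones restored from the same backup count as one user.
users = collapse_devices([
    {"uuid_hash": "dev1", "form_factor": "iPhone", "install_id": "A", "first_seen": 2013},
    {"uuid_hash": "dev2", "form_factor": "iPhone", "install_id": "A", "first_seen": 2015},
    {"uuid_hash": "dev3", "form_factor": "iPad",   "install_id": "A", "first_seen": 2014},
])
```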
3.2.3. Identifying Active Devices/Users
While performing our longitudinal analysis, we found a number of devices showing very little activity to begin with. As those devices skewed our usage patterns substantially, we additionally filtered out devices without at least one app interaction every other day within the given time frame. For example, to filter out devices that do not appear active during a given month, we checked whether the number of days on which a device had interactions exceeds half of the days in that month.
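The activity criterion can be expressed as a simple predicate. This is a sketch under the half-of-the-days assumption stated above; names are illustrative:

```python
from datetime import date

def is_active(interaction_days, window_start, window_end):
    """A device counts as active over a window if it had app interactions
    on at least half of the days in that window."""
    total = (window_end - window_start).days + 1
    active = sum(1 for d in set(interaction_days)
                 if window_start <= d <= window_end)
    return active >= total / 2
```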
3.3. App Labeling
Since we have a large number of unique apps () in our dataset, we categorized each app into one of 25 app categories to obtain higher-level insights into usage patterns. To cope with changes in Apple's categorization over time, we used the app categories defined by the iOS App Store at the end of 2016, coinciding with the end of our dataset. Table 3 lists the categories for reference.
Given the large number of apps, we chose to utilize multiple sources of app labels instead of manually labeling all of them. We considered four different sources of app category labels: manual labeling, the Apple App Store, the Cydia app store (an app store for jailbroken iOS devices), and meta-data residing with the app binary on each device. Because an app can have multiple labels from different sources, we carefully set up a priority based on the credibility and reliability of each source. The priority order and the number of apps labeled using each source are shown in Table 2.
To ensure the correctness of app categorization, we manually labeled the 3,000 most used apps in our dataset. This is based on the observation that app usage time is extremely skewed towards a relatively small set of apps (Chitkara et al., 2017). As such, the top 3,000 apps we manually labeled account for about 91.34% of the total usage time in our dataset (refer to Section 4.2 for further details). Apps that did not appear in any of the label sources mentioned earlier were put into a separate category we named Others. These ‘Other’ apps contributed only 0.46% of app usage in the dataset.
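Resolving each app's category then reduces to a first-match lookup over the prioritized sources. A minimal sketch, assuming the priority ordering runs from manual labels down to on-device meta-data (the actual ordering and source names are given in Table 2):

```python
# Assumed priority order, highest-credibility source first (see Table 2).
LABEL_PRIORITY = ["manual", "apple_appstore", "cydia_appstore", "device_metadata"]

def resolve_category(labels):
    """labels: dict mapping a source name to the category it assigns;
    an app may be missing from some or all sources."""
    for source in LABEL_PRIORITY:
        if source in labels:
            return labels[source]
    return "Others"  # apps absent from every source fall into 'Others'
```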
| Category | # Apps | Example Apps | Category | # Apps | Example Apps |
|---|---|---|---|---|---|
| Business | 16,121 | Voxer, UberDriver, FedEx | Health & Fitness | 11,145 | Daily Yoga, Fitocracy, Fitbit |
| Weather | 2,812 | DarkSky, Yahoo Weather, The Weather Channel | Games | 103,730 | Angry Birds, Infinity Blade, Sudoku |
| Utilities | 32,643 | Settings, Alarm Clock, Speed Test | Finance | 12,390 | Virtual Wallet, PNC Mobile, Discover Mobile |
| Travel | 16,492 | Expedia, Southwest, TripAdvisor | Entertainment | 36,428 | Netflix, HBO GO, Amazon Prime Video |
| Sports | 9,351 | ESPN Sports, Yahoo Sports, Fox Sports | Education | 30,723 | iTunes U, Stack the States, Schoology |
| Social Networks | 16,376 | Facebook, Tumblr, OkCupid | Books | 13,850 | Kindle, iBooks, GoodReads |
| Reference | 11,608 | Wolfram Alpha, Wikipedia, Dictionary.com | Medical | 6,791 | Epocrates, UpToDate, Stress Check |
| Productivity | 15,958 | Workflow, HabitList, SuperNotes | Newsstand | 82 | Marie Claire, Forbes Magazine, LA Times |
| Photo & Video | 17,089 | Camera, YouTube, PhotoVault | Catalogs | 1,918 | Classifieds, Tattoo Designs!, Perfumes |
| News | 12,667 | Apple News, Google, TechCrunch | Food & Drink | 5,925 | Starbucks, Wendy’s, How To Cook Everything |
| Navigation | 7,605 | Apple Maps, Google Maps, Waze | Shopping | 2,498 | Black Friday, SuperSaver, Woot |
| Music | 15,747 | Spotify, Pandora, Google Music | Other | 52,476 | VUZIQ, iPhoneus, ’F— You’ |
| Lifestyle | 28,648 | Tinder, Catholic Calendar, Reader’s Digest | | | |
3.4. Potential Bias in our App Usage Dataset
We recognize that our dataset has been collected from users of jailbroken iOS devices and may not be a representative sample of the entire population of regular, non-jailbroken users. For example, jailbroken users are likely more technical than the average smartphone user, and since our dataset comes from users of a privacy app, they are likely more privacy conscious as well (Agarwal and Hall, 2013). Note that, as was the case for the original study, there is unfortunately no way to collect the app usage data needed for this large-scale longitudinal analysis, at the granularity we need, without a jailbroken device or an otherwise modified OS. Apple has never exposed APIs for fine-grained app usage information on iOS and, based on past experience, has actively removed apps that found indirect ways to obtain it. We believe, however, that despite this potential bias, the dataset we collected and analyzed is still useful for making several important observations based on overall trends in app usage across a four-year data collection period. First, the apps that these users use are primarily regular apps downloaded from the official App Store. Second, while these users are likely more technical, they are still somewhat representative of regular users: they use standard apps, explore apps and switch between them, and are subject to the same external signals that affect app usage among non-jailbroken users (e.g. a new popular app being released, the Pokémon GO phenomenon, etc).
4. Population-level App Usage Over Time
In this section, we present our longitudinal analysis of app usage patterns at the level of the entire population. We first discuss the longitudinal shift in proportional usage time aggregated by app category (Section 4.1). Then, we analyze usage at the individual app level (Section 4.2), studying whether a typical user in our dataset spends the same amount of time on each app every month (Section 4.2.1) and whether there are any substantial changes in the top app lists over time (Section 4.2.2). The first app-level analysis considers the actual percentage of time a typical user spent using each app on their device. We also analyze app popularity in terms of the top most-used list to better understand the temporal dynamics of app popularity.
4.1. Proportional App Category Usage over Time
We first report on the proportional usage averaged across all months to show that the usage patterns for the iPhone and iPad were significantly different. For example, Social Networking apps contributed 29.1% of the proportional usage on the iPhone, compared to only 7.2% of overall usage on the iPad. This is not just because the iPhone has the Phone and SMS apps: those contribute 7.3% and 3.9% respectively, leaving 17.7% of usage for other social networking apps, still higher than the entire Social Networking usage percentage on iPads. As a result, Social Networking was the most used category on the iPhone but only the sixth most used on the iPad. Conversely, four other app categories (Utilities, Games, Entertainment and Photo & Video) showed higher proportional usage on the iPad. More specifically, iPad users spent 30.5%, 17.0%, 11.8% and 8.8% of their time on Utilities, Games, Entertainment and Photo & Video apps, while iPhone users spent 19.2%, 12.9%, 5.6% and 5.1% on those categories, respectively. Excluding Safari, there was no significant difference in Utilities app use (11.6% on iPads and 10.5% on iPhones). However, Safari was used intensively on iPads (18.9%), making a huge difference in the proportional usage of Utilities apps; on the iPhone, users spent 9.5% of their time using Safari.
The average usage pattern was impacted not only by the form factor, but also by the long-term temporal context. Figure 1 shows the proportional time an average user in our dataset used apps from each category, for both form factors, split on a monthly basis over four years (August 2012 to October 2016). We found that the usage of Productivity apps (light green with dot shading in Figure 1) had been notably trending down on both form factors. Productivity apps contributed 31.5% and 20.4% of overall app usage in August 2012 on the iPhone and iPad respectively, but only 4.53% and 4.45% by September 2016. In contrast, users spent more time on several other categories than they had four years earlier. A typical example is Photo & Video apps, whose usage grew from 3.2% and 6.5% in August 2012 to 10.2% and 15.6% in September 2016, respectively.
The longitudinal change in usage of Games and Entertainment apps is interesting, since both showed positive growth but the rate of growth differed between the two form factors. On the iPhone, the usage of Game apps grew from 7.3% in August 2012 to 15.4% in September 2016; the increase on the iPad was more modest, from 13.6% in August 2012 to 20.0% in September 2016. In contrast, on the iPhone, Entertainment app usage increased from 4.1% in August 2012 to 4.9% in October 2016, while on the iPad the increase over the same period was more significant, rising from 7.7% to 11.3%. Our data shows that the rate of usage pattern change can vary significantly between form factors, even when the trends are similar.
Overall, we find that there are longitudinal changes in app usage patterns at the app category level. Notably, the change was faster until the end of 2014 and slowed down from January 2015 onward. We also observe that users spent more time on entertainment (Games, Photo & Video and Entertainment apps) than before, while there was a significant decline in Productivity app usage. This trend was similar on both form factors, even though they have different baseline per-category usage patterns.
4.2. App Popularity
4.2.1. App Popularity by Usage Time
To analyze the longitudinal effects on app usage over time, we considered using each app's usage time per month as a feature. However, we discarded this approach as infeasible due to the sheer number of apps (). Furthermore, we observed that app usage is highly skewed, such that a significantly smaller subset of apps () accounts for a significant proportion of total app usage (46.8% on the iPad and 52.6% on the iPhone). This result indicates that users used a small set of apps extensively, with a significant overlap between the working sets of different users. It also indicates that iPhone users have more skewed app usage than iPad users. The process we followed to arrive at these popular apps is as follows: (i) we calculate the top-10 most used apps on a monthly basis for both form factors; (ii) we concatenate these monthly top-10 app lists; and (iii) we remove duplicates so that each unique app occurs once. In doing so, we ensure that any app that was in the top-10 list at any point, for either form factor, is included.
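The three steps above can be sketched as follows; this is an illustrative implementation, where `monthly_usage` is a hypothetical mapping from month to per-app usage totals:

```python
from collections import Counter

def popular_apps(monthly_usage, k=10):
    """(i) Take the top-k most used apps per month, (ii) concatenate the
    monthly lists, and (iii) deduplicate, preserving first appearance."""
    selected = []
    for month in sorted(monthly_usage):
        top_k = [app for app, _ in Counter(monthly_usage[month]).most_common(k)]
        selected.extend(top_k)
    return list(dict.fromkeys(selected))  # dedupe, keeping first-appearance order
```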
We see a significant difference between iPad usage and iPhone usage. As expected, iPhone users spent a significant proportion of their time on phone calls and SMS. In contrast, these apps were not extensively used on iPads, since most iPad users tend not to have cellular modems.
Also, we generally observed more media-oriented usage on iPads than on iPhones. Beyond the Phone and SMS apps, we found more extensive use of social networking apps (Facebook, WhatsApp, WeChat, Twitter, and LINE) on iPhones. Their usage is fairly stable over time, without any significant increase or decrease. However, only the Facebook app showed up on iPads as an extensively used app. We attribute this to the difference between Facebook and the other dominant social networking apps: Facebook is more media-oriented (capable of video/image sharing), while the other apps are more text-based (messaging). We also found more usage of video apps (e.g. YouTube, Netflix) and web browsers (e.g. Safari, Chrome) on iPads than on iPhones.
There were a few other interesting longitudinal changes over time. Across the two form factors, we observe a remarkable decline in Apple Mail app usage. During the same period, both iPhones and iPads experienced an increase in usage of the YouTube app (brown, unshaded, in Figure 2). In addition, we observe significant usage of Pokémon GO (red, shaded with large circles, around the top right corner of Figure 1) from July 2016 to September 2016. On the iPhone, it was the seventh most used app (3.02%) in August 2016. A similar pattern was found on the iPad, where Pokémon GO was the fourth most used app (3.07%) in September 2016.
In summary, even though we have seen a stable app category-level usage pattern since the start of 2015, our in-depth analysis of app-level usage shows a different picture of longitudinal shift.
4.2.2. App Popularity in top lists
Next, we analyze longitudinal patterns of app popularity, i.e. does app popularity change over time? The goal of this analysis is to see whether apps released later are at a disadvantage in gaining popularity compared to older, entrenched apps. In this section, we investigate how the set of most used apps changes over time.
Longitudinally Popular Apps: We first extracted each month’s top-n list of apps () based on usage time. Then, we calculated the number of months each of these apps stayed in the respective top-n list throughout the timespan of our dataset. We find that a significant number of apps (10% - 20% of top apps) remained in the top list for the entire data collection period (50 months). For example, in the sets of top apps for the iPhone and the iPad, we found that 62 iPhone apps (20.6% of the top 300 apps) and 37 iPad apps (12.3% of the top 300 apps) are consistently seen in each month’s usage for the entire period. We see a similar pattern for the other top-n lists. For the rest of this section, we refer to these apps that appear in the top list every single month as longitudinally popular apps.
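A minimal sketch of this counting step, assuming the monthly top-n lists have already been computed (names are illustrative):

```python
def top_list_persistence(monthly_top_lists):
    """Count, for each app, the number of monthly top-n lists it
    appears in. monthly_top_lists holds one list of app names per
    month."""
    counts = {}
    for top in monthly_top_lists:
        for app in set(top):  # guard against duplicates within a month
            counts[app] = counts.get(app, 0) + 1
    return counts

def longitudinally_popular(monthly_top_lists):
    """Apps present in every single month's top-n list."""
    counts = top_list_persistence(monthly_top_lists)
    months = len(monthly_top_lists)
    return sorted(app for app, c in counts.items() if c == months)
```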
Figure 4 shows the number of apps (y-axis) that stayed on the various top-n app lists, and for how many months (x-axis), across our data collection period of 50 months. The apps that were present on users’ devices before our data collection started (August 2012) and remained until the end (October 2016) form a set of “Popular Apps”, marked on the right side of the graph. As an example, 62 apps were in the top-300 popular apps across the entire period. We also see a steady monotonic decrease in the number of apps in the various top-n lists as the length of time they remain popular increases from 1 month to 50 months. This is expected, as fewer apps can remain popular over longer periods of time.
Deeper look at longitudinally popular apps: Next, we performed a more detailed case study on the top-300 app list to understand the characteristics of “longitudinally popular apps”, i.e. apps that remained on the top-300 list for the entire 50-month period. We find that about one third of these apps are Apple native (pre-installed) apps, indicating their popularity on iOS. Specifically, 16 apps on the iPhone (32.4% of 62 apps) and 12 apps on the iPad (25% of 37 apps) are developed by Apple. Second, we observe that about half of these longitudinally popular apps are Utility and Social Networking apps (51.6% on the iPhone and 40.5% on the iPad). However, Utility and Social Networking apps are not as dominant when we consider the entire set of apps that have appeared on the top-300 app list at least once. There were 2012 apps (iPhone) and 1556 apps (iPad) that showed up on the list at least once over the years, and 47.8% and 44.0% of these apps were categorized as Games on iPhones and iPads, respectively. However, Game apps stayed in the top-300 list for only 5.7 months (iPhone) and 4.5 months (iPad) on average, compared to Social Networking apps at 14.4 months (iPhone) and 14.5 months (iPad), and Utility apps at 11.0 months (iPhone) and 11.7 months (iPad). This indicates that Social Networking and Utility apps dominate when we consider longitudinal popularity, while Game apps tend to have comparably shorter periods of popularity even though they show up on the top-300 lists in multiple months.
Monthly Change of Popular Apps: We observed earlier that about 10% - 20% of the apps appear in the top lists across our entire dataset (50 months). Next, we study the month-to-month variation of these top-n most used app lists to understand how different, or similar, they are. Specifically, we calculate the Jaccard similarity of the top lists for consecutive months to see whether there is a significant change at the entire population level. The Jaccard similarity measure is defined as J(A_i, A_{i+1}) = |A_i ∩ A_{i+1}| / |A_i ∪ A_{i+1}|, where A_i is the top-n list of popular apps in the i-th month. If the sets of top apps in consecutive months are identical, the value is 1 (the higher J, the more similar the sets).
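The month-to-month comparison can be sketched as follows; this is a straightforward implementation of the Jaccard measure, not the paper's actual code:

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two top-n app lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

def consecutive_month_similarity(monthly_top_lists):
    """Similarity between each pair of consecutive monthly top-n lists."""
    return [
        jaccard(monthly_top_lists[i], monthly_top_lists[i + 1])
        for i in range(len(monthly_top_lists) - 1)
    ]
```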
When considering just the top-10 apps, we see average Jaccard similarities of 0.85 (iPhone) and 0.86 (iPad) over the 50 month period, thereby showing high overlap across consecutive months. When looking at the list of top-1000 apps, we observe a Jaccard similarity value of 0.64 (iPhone) and 0.52 (iPad) over our data collection period. We did not find any meaningful longitudinal pattern such as increasing or decreasing Jaccard similarity over time.
5. Individual-level App Usage Over Time
The above sections illustrate how our user population behaved as a whole, including how its app usage behavior changed over time. However, while potentially useful, it is also important to consider the extent to which individual users varied over time. In Section 5.1, we investigate whether each individual’s usage pattern changed over time. Then, we move on to working set analysis in Section 5.2.
5.1. Individual Usage Variance in Monthly Usage
|                  || Average across Users                  || Typical User              || Average across Users                  || Typical User              |
| Category         || Avg. S.D. (%)¹ | Avg. Change (%)²     || S.D. (%)³ | Change (%)⁴   || Avg. S.D. (%)¹ | Avg. Change (%)²     || S.D. (%)³ | Change (%)⁴   |
| Health & Fitness || 1.08           | 116.8                || 0.36      | 22.7          || 0.30           | 90.0                 || 0.069     | 27.9          |
¹ The standard deviation of each individual’s usage over time per category, averaged across all users. ² The average % difference of the standard deviation in usage from mean usage over time (month to month) for each category, averaged across all users. ³ The standard deviation of a typical user’s usage. ⁴ The typical user’s % difference of the standard deviation in usage from mean usage over time (month to month) for each category. High values are highlighted in red; low values are highlighted in blue.
Users had high average variability across categories (87.7% average % change on iPhones and 98.9% on iPads). This varied by category: the variability of the categories in blue was much lower than that of those in red. In general, the app categories with more usage had a higher standard deviation and a lower average change over time. We also observe that a typical user’s variability over time is far lower than that of individual users in the dataset, indicating that individuals’ longitudinal variability extends beyond just the typical usage change. This high degree of longitudinal variability was observed across form factors.
To investigate each individual’s usage pattern, we extracted the proportional time each user spent on each of the app categories every month. We then calculated the standard deviation and the average of these proportions per app category across time. We utilized two different metrics to assess longitudinal variability: the raw standard deviation, as an objective metric of a device’s average variability across time, and the ratio of the standard deviation to the mean (Std-Dev/Mean × 100). The latter metric, which we call “% change from mean” for the rest of this section, indicates the relative scale of the variability with respect to the overall baseline usage. For example, if the “% change from mean” of one app category’s usage is higher than that of another over time, the usage pattern of that app category has more temporal variability. Importantly, we only consider users with more than one month of data in our dataset, as a standard deviation computed from a single data point does not provide meaningful insight and could potentially skew our results.
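A sketch of the two per-category metrics for one user follows; the paper does not state whether the population or sample standard deviation is used, so the population form here is an assumption:

```python
import statistics

def variability_metrics(monthly_shares):
    """Given one user's monthly usage shares (%) for a single app
    category, return (std_dev, pct_change_from_mean). Users with a
    single month of data are excluded, as in the paper."""
    if len(monthly_shares) < 2:
        return None
    sd = statistics.pstdev(monthly_shares)  # population S.D.: an assumption
    mean = statistics.mean(monthly_shares)
    return sd, (sd / mean * 100.0) if mean else 0.0
```

These per-user values are then averaged across all users to obtain the "Average across Users" columns of Table 4.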
We then average these two metrics across all users to assess the average level of each individual device’s variability across our dataset. For reference, we also report the standard deviation and % change values of an average user, derived from the per-category breakdown already shown in Figure 1.
The results of this analysis can be seen in Table 4. Across all categories, users had an average standard deviation of 1.67% and 2.18%, with an average ratio between standard deviation and mean (Std-Dev/Mean × 100) of 87.7% and 98.9%, on iPhones and iPads, respectively. These numbers are roughly twice, or more than twice, the standard deviation and % change from mean values of a typical user shown in Figure 1. This means that an individual user’s usage pattern changes more dramatically than usage does at the population level. In other words, if each individual’s longitudinal app usage pattern had changed over time at the same rate as the global change, the standard deviation and % change from mean values would have been at a similar level. This leads to a deeper insight into each individual’s usage pattern over time: if we drew a graph similar to Figure 1 based on each individual’s usage behavior, we would end up with a graph with larger longitudinal variation across time.
Examining the results in Table 4 in further detail, we find that app categories with more usage (i.e. Social Networking, Utility, Productivity) have larger variance over time, while others with less usage, such as Medical, Newsstand, Catalogs, and Food & Drink, have smaller variance over time. This is expected, because the standard deviation reflects the scale of the original data points. However, despite their high absolute standard deviations, Utility and Social Networking apps on the iPhone have the lowest % change from mean. This indicates that their usage is relatively stable over time, although they still exhibit longitudinal variation of 40.2% (Social Networking) and 39.8% (Utility) from the mean on the iPhone.
This pattern was observed for both individual users and the average user. However, we could not find a similar pattern in individual usage on iPads: the % change from mean of the most used app categories on the iPad (Utility, Games, and Entertainment) was not the lowest, but rather around the average. This implies that the magnitude of longitudinal change differs across form factors and app categories.
5.2. Working Set of Individual Users
Since we previously found, in Section 4.2.2, that the list of popular apps remains similar across the entire data collection period, we next study whether the cause is the relative stability of the set of apps individual users use over time.
Extracting Working Set Size from Users: We define the working set of a user for a certain time period as the list of apps the user launches during that period. Since selecting a time period for this analysis is important, we chose a week: it covers both diurnal and weekday/weekend variations, while not being too long. To calculate the working set of each user over time, we first extract the list of apps they launch in every week-long period (Sunday to Saturday). Then, for each app in this user’s usage data, we count the number of weeks it appeared in the weekly working sets, and divide that by the total number of weeks the user remained in our dataset. In doing so, we calculate the probability of the app being in each weekly working set. Note that for this analysis we filtered out data for users who were present in our dataset for fewer than 10 weeks, because a user who was present for too few weeks may skew our working set analysis.
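The per-user computation described above can be sketched as follows; the names are illustrative, and the bucketing of launches into Sunday-to-Saturday weeks is assumed to have been done already:

```python
def weekly_presence_probability(weekly_launches, min_weeks=10):
    """For one user, compute each app's probability of appearing in a
    weekly working set: (# weeks the app was launched) / (total weeks).
    weekly_launches holds one set of launched app names per week;
    users with fewer than min_weeks weeks of data are filtered out."""
    total = len(weekly_launches)
    if total < min_weeks:
        return None
    counts = {}
    for week in weekly_launches:
        for app in week:
            counts[app] = counts.get(app, 0) + 1
    return {app: c / total for app, c in counts.items()}
```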
Working Set Size of Users: Figure 4 depicts the number of apps that appeared with a probability higher than a threshold p. For example, for p = 0.9, the figure plots a CDF of the number of apps that are launched in at least 90% of the weeks the user is in our dataset, as a function of working set size. A higher p leads to a smaller working set; for p = 1.0, the working set is the list of apps that are used every week. For the case of p = 0.9, the figure shows that around 78.5% of users had a working set of 10 or fewer apps, and about 91.2% of users had 14 or fewer apps on iPhones. If we relax the constraint to a lower threshold, we add only one or two more apps to the working set, which means the working set is already sufficiently covered by the stricter criterion. We conclude that each user’s weekly working set size lies around 14 to 18, covering roughly more than 90% of users’ working sets.
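Given the per-app probabilities from the previous step, the working set size at a given threshold is a simple count (a sketch; the CDF in Figure 4 aggregates this value across users):

```python
def working_set_size(presence_probs, threshold=0.9):
    """Number of apps whose weekly-presence probability meets the
    threshold, i.e. the user's working set size at that threshold."""
    return sum(1 for p in presence_probs.values() if p >= threshold)
```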
We started our analysis with a question: does app usage change over time? We organized our analysis along combinations of two different dimensions of granularity. In this section, we discuss the implications of our results for researchers studying mobile device user behavior as well as for app developers, and provide recommendations based on our results.
6.1. User’s App Usage Changes over Time
6.1.1. Changes in Entire Population Level
The key results of our paper stem from the length of the dataset we used for analysis, and thus our ability to study longitudinal changes. Our results indicate that mobile app usage does change significantly over time: we observe longitudinal variability in average mobile usage in terms of both per-app and per-category usage. These results demonstrate that looking at a population during a short period misses important variables in how app usage changes, and that expanding the research/testing period for mobile app usage behavior can be beneficial.
6.1.2. Changes in Individual User Level
Our analysis of changes in individual app usage behavior month by month showed high ratios between the standard deviation of usage and mean usage, as well as wide variance in the standard deviation of usage across app categories. Interestingly, this high variance over time was not evident when examining the results at the entire population level: while there were some changes (notably between Games and Productivity apps), our results showed that population-level usage was fairly constant.
The high average variance on a monthly basis suggests that results from analyzing user behavior over a shorter period may not be indicative of long-term user behavior, necessitating longer studies. Additionally, the breakdown of average standard deviation in Table 4 may prove useful to both researchers and app developers, depending on the category of app usage being considered. A longer study may be appropriate when there is a high standard deviation, while a shorter study may be sufficient for categories with relatively small standard deviations. Depending on the purpose and accuracy required of a study, extrapolating from short-term data may not yield accurate long-term conclusions.
6.2. Working Set of Apps & Longitudinally Popular Apps
Our analysis of the top-n most used app lists suggests that roughly 10% - 20% of those apps stay in the top list for the whole period. This can provide a useful reference for optimizing systems and infrastructure around the mobile ecosystem, because it implies that content related to those longitudinally popular apps is a good candidate for caching.
Our analysis shows that users have a small working set of apps (up to 18 apps) that they regularly launch over the course of a week. This finding can be leveraged by engineers and designers of mobile systems: knowing how large users’ working sets are can help them determine how many apps to place by default in app dashboards or shortcut panels on mobile user interfaces.
6.3. Impact of Form Factor on Device Usage
Our longitudinal app usage analysis revealed that people use iPads and iPhones differently. Even though this is expected rather than surprising, we quantitatively substantiated this common belief using a fine-grained dataset at scale. We also provide a series of detailed analyses of the differences between the two form factors, which we believe will be a good reference for mobile researchers and developers.
More importantly, one of our major findings is that iPads and iPhones respond differently to longitudinal change. This implies that researchers should model device usage differently based on form factor in the longitudinal context. For example, a large longitudinal growth in a certain app category’s usage on the iPhone does not necessarily imply the same impact on iPads. A study that aims to predict or investigate app usage over time should split its target population by form factor, since parameters derived from an aggregated population of iPad and iPhone users may lead to inaccurate results. Because Android devices have much more diversity in form factors (e.g. a wider range of screen sizes and resolutions), similar studies on Android should be conducted even more carefully to account for this difference.
In this paper, we explored whether app usage has changed over time from various points of view. We evaluated the longitudinal effects on both app-level and app category-level usage. Furthermore, we explored whether app usage changes over time at both the entire-population level and the individual-user level.
In summary, we find that both app-level and app category-level usage patterns change over time across the entire dataset, reflecting a longitudinal shift in users’ demands on smart devices. Along with the global change in typical app usage, we find that each individual’s usage pattern also changes over time, with higher variability than that of a typical user in the dataset. In addition, we observed that users keep a small weekly working set of apps. Finally, we find that there is a subset of longitudinally popular apps in the top-n lists, and that this list remains quite similar across time.
The mobile device world is constantly evolving, and studying mobile device usage behavior will become increasingly challenging as devices become more powerful. For example, background use of navigation and split-screen multitasking on larger displays allow users to use several apps simultaneously to perform unique tasks. This paper contributes to this growing research area, which will help shape the systems and data analysis techniques of the future.
We thank our anonymous reviewers for the feedback and their helpful comments. This work was supported in part by National Science Foundation CSR-1526237, TWC-1564009 and the DARPA Brandeis Program. We would also like to acknowledge the Scott Institute at Carnegie Mellon University and Google for their various gifts supporting this research.
- ProtectMyPrivacy: detecting and mitigating privacy leaks on iOS devices using crowdsourcing. In Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, pp. 97–110.
- Trends in Consumer Mobility Report. Bank of America, 2014.
- ProactiveTasks: The Short of Mobile Device Use Sessions. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, New York, NY, USA, pp. 243–252.
- Falling asleep with Angry Birds, Facebook and Kindle: a large scale study on mobile application usage. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, pp. 47–56.
- An in-situ study of mobile app & mobile search interactions. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2739–2748.
- Does this app really need my location?: context-aware privacy management for smartphones. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1(3), pp. 42:1–42:22.
- Understanding the Challenges of Mobile Phone Usage Data. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, New York, NY, USA, pp. 504–514.
- Smartphone usage in the wild: a large-scale analysis of applications and context. In Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 353–360.
- By their apps you shall understand them: mining large-scale patterns of mobile phone usage. In Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia, pp. 27.
- Reality mining: sensing complex social systems. Personal and Ubiquitous Computing 10(4), pp. 255–268.
- Diversity in Smartphone Usage. In Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, New York, NY, USA, pp. 179–194.
- A large-scale, long-term analysis of mobile device usage characteristics. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1(2), pp. 13:1–13:21.
- Revisitation Analysis of Smartphone App Use. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, New York, NY, USA, pp. 1197–1208.
- Modelling smartphone usage: a markov state transition model. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 486–497.
- Characterizing Smartphone Usage Patterns from Millions of Android Users. In Proceedings of the 2015 ACM Conference on Internet Measurement Conference, New York, NY, USA, pp. 459–472.
- MoodScope: building a mood sensor from smartphone usage patterns. In Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services, pp. 389–402.
- Follow My Recommendations: A Personalized Assistant for Mobile App Permissions. In Twelfth Symposium on Usable Privacy and Security (SOUPS 2016).
- Carat: collaborative energy diagnosis for mobile devices. In Proceedings of the 11th ACM Conference on Embedded Networked Sensor Systems, pp. 10.
- Getjar mobile application recommendations with very sparse datasets. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 204–212.
- Automatically Detecting Problematic Use of Smartphones. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, New York, NY, USA, pp. 335–344.
- Device analyzer: understanding smartphone usage. In Mobile and Ubiquitous Systems: Computing, Networking, and Services: 10th International Conference, MOBIQUITOUS 2013, Tokyo, Japan, December 2-4, 2013, Revised Selected Papers, I. Stojmenovic, Z. Cheng, and S. Guo (Eds.), pp. 195–208.
- Identifying Diverse Usage Behaviors of Smartphone Apps. In Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, New York, NY, USA, pp. 329–344.
- Preference, context and communities: a multi-faceted approach to predicting smartphone app usage patterns. In Proceedings of the 2013 International Symposium on Wearable Computers, pp. 69–76.
- AppJoy: personalized mobile application discovery. In Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, pp. 113–126.
- Fast app launching for mobile devices using predictive user context. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pp. 113–126.
- Discovering Different Kinds of Smartphone Users Through Their Application Usage Behaviors. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, New York, NY, USA, pp. 498–509.