
AppBuddy: Learning to Accomplish Tasks in Mobile Apps via Reinforcement Learning

05/31/2021
by   Maayan Shvo, et al.

Human beings, even small children, quickly become adept at figuring out how to use applications on their mobile devices. Learning to use a new app is often achieved via trial-and-error, accelerated by transfer of knowledge from past experiences with like apps. The prospect of building a smarter smartphone - one that can learn how to achieve tasks using mobile apps - is tantalizing. In this paper we explore the use of Reinforcement Learning (RL) with the goal of advancing this aspiration. We introduce an RL-based framework for learning to accomplish tasks in mobile apps. RL agents are provided with states derived from the underlying representation of on-screen elements, and rewards that are based on progress made in the task. Agents can interact with screen elements by tapping or typing. Our experimental results, over a number of mobile apps, show that RL agents can learn to accomplish multi-step tasks, as well as achieve modest generalization across different apps. More generally, we develop a platform which addresses several engineering challenges to enable an effective RL training environment. Our AppBuddy platform is compatible with OpenAI Gym and includes a suite of mobile apps and benchmark tasks that supports a diversity of RL research in the mobile app setting.


1. Introduction

Billions of people around the world use mobile apps on a daily basis to accomplish a wide variety of tasks. Building smarter smartphones that can learn how to use apps to accomplish tasks has the potential to greatly improve app accessibility and user experience. We explore the use of Reinforcement Learning (RL) to advance this aspiration.

RL has been applied in a diversity of simulated environments with impressive results [lillicrap2015continuous, mnih2015human, silver2016mastering, tessler2017deep]. However, a myriad of challenges can prohibit the application of RL in real-world settings [dulac2020empirical]. Learning to accomplish tasks in mobile apps is one such setting due to the usually large action space (the agent can interact with many elements on every screen), sparse rewards, and slow interaction with the environment (i.e., a physical phone or an emulator) that together make the collection of a large number of experience samples both necessary and arduous.

Recent work proposed to use supervised learning techniques to train computational agents to accomplish tasks in mobile apps [li2020mapping]. A shortcoming of this approach is that it requires the creation of large training sets of labeled data. In contrast, RL agents can learn to solve tasks without human supervision by autonomously learning from interacting with mobile apps, and they can potentially learn better solutions than supervised learning approaches, as shown by previous work (e.g., [silver2017mastering]). Closer to our work are recent efforts that use RL to solve tasks using web interfaces [shi2017world, liu2018reinforcement, gur2018learning, jia2019dom] or that test mobile apps [koroglu2018qbe, degott2019learning, pan2020reinforcement]. We discuss these works in Section 3.

In this paper, we explore whether an RL agent can learn policies that consistently solve tasks in real-world mobile phone apps. Our main contributions are as follows:


  • We formulate the app learning task as an RL problem in which the state and action spaces are derived from the phone’s internal representation of screen elements, and the reward is modeled to incentivize intermediate task steps while still requiring policies that complete the full task.

  • We construct a mobile app learning environment that is engineered to collect experiences from multiple emulators simultaneously. The environment is made compatible with OpenAI Gym [brockman2016openai] to support various RL algorithms. We also build several tools for efficient provisioning of Android emulators, obtaining emulator states, and interacting with the emulators.

  • We experimentally evaluate our RL agent on a suite of benchmarks comprising a number of apps and tasks of varying difficulty. Results (i) demonstrate that RL agents can be successfully trained to accomplish multi-step tasks in mobile apps; (ii) expose the impact of design decisions including reward modeling and number of phone emulators used in training; and (iii) demonstrate the ability of our approach to generalize to similar tasks in unseen apps.

  • We develop the AppBuddy training platform that includes the aforementioned mobile app learning environment together with a suite of mobile app-based benchmarks, allowing researchers and practitioners to train RL agents to accomplish tasks using various apps.

This paper represents an important step towards endowing smartphones with the ability to learn to accomplish tasks using mobile apps. The release of the AppBuddy training platform and suite of benchmarks opens the door to further work on this impactful problem by the broader research community.

2. Preliminaries

We begin by defining the relevant terminology regarding MDPs and Reinforcement Learning. We then describe the Proximal Policy Optimization algorithm, which we use in our experiments.

2.1. Reinforcement Learning (RL)

RL agents learn optimal behaviour by interacting with an environment [sutton2018reinforcement]. The environment is usually modelled as a Markov Decision Process (MDP). An MDP is a tuple $\mathcal{M} = \langle S, A, r, p, \gamma \rangle$, where $S$ is a finite set of states, $A$ is a finite set of actions, $r : S \times A \to \mathbb{R}$ is the reward function, $p(s_{t+1} \mid s_t, a_t)$ is the transition probability distribution, and $\gamma \in (0, 1]$ is the discount factor. At each time step $t$, the agent is in a state $s_t$ and selects an action $a_t$ according to a policy $\pi(a_t \mid s_t)$. A policy is a probability distribution over the possible actions given a state. The agent executes action $a_t$ in the environment and, in response, the environment returns the next state $s_{t+1}$ and an immediate reward $r(s_t, a_t)$. The process then repeats from $s_{t+1}$. The agent’s objective is to find an optimal policy $\pi^*$, that is, a policy that maximizes the expected discounted future reward from every state $s \in S$.

The value function $V^\pi(s)$ is the expected discounted future reward of following policy $\pi$ starting from state $s$. It can be defined recursively as follows:

$$V^\pi(s) = \sum_{a \in A} \pi(a \mid s) \Big( r(s, a) + \gamma \sum_{s' \in S} p(s' \mid s, a) \, V^\pi(s') \Big)$$

2.2. Proximal Policy Optimization (PPO)

Proximal Policy Optimization (PPO) [schulman2017proximal] is a policy gradient method that uses a function approximator (usually a deep neural network) with parameters $\theta$ to estimate a policy $\pi_\theta(a \mid s)$ and its value function $V_\theta(s)$.

PPO then iteratively updates the parameters $\theta$, searching for a better policy (i.e., a policy that collects more reward). To do so, it first collects experiences by running agents in parallel for some fixed number of steps. Each agent collects experiences by sampling actions from the stochastic policy $\pi_\theta$. Then, all those experiences are gathered together and become a training set that PPO uses to improve its current policy $\pi_\theta$. This process then repeats.

To update the parameters $\theta$, PPO uses a loss function that considers three terms. The first term is an entropy bonus that discourages $\pi_\theta$ from becoming a deterministic policy (which is useful for exploration purposes). The second term is the squared error between $V_\theta(s_t)$ and an empirical target $V_t^{\text{targ}}$ for all states $s_t$ in the training set. Note that, for each state $s_t$, we can compute an empirical target for its value function estimate using the rewards that the agent collected from $s_t$ on:

$$V_t^{\text{targ}} = \sum_{k \geq 0} \gamma^k \, r_{t+k}$$

The final term looks to improve the current policy $\pi_\theta$. The key concept here is the advantage estimate $A_t$. Let $a_t$ be the action that an agent selected from state $s_t$ at time $t$; its advantage estimate is defined as $A_t = V_t^{\text{targ}} - V_\theta(s_t)$. This is the difference between how much empirical reward the agent received by executing $a_t$ from state $s_t$ and how much reward the agent was expecting to get from state $s_t$. Intuitively, if $A_t > 0$ (the agent got more reward than expected), PPO will try to increase the probability of selecting action $a_t$ in $s_t$ (and decrease it otherwise). Concretely, this final term is defined as follows:

$$L^{\text{CLIP}}(\theta) = \mathbb{E}_t \Big[ \min \big( \rho_t(\theta) A_t, \; \text{clip}(\rho_t(\theta), 1 - \epsilon, 1 + \epsilon) \, A_t \big) \Big]$$

where $\rho_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\text{old}}}(a_t \mid s_t)$, $\epsilon$ is usually set to 0.2, and $\text{clip}(x, 1 - \epsilon, 1 + \epsilon)$ bounds $x$ to the interval $[1 - \epsilon, 1 + \epsilon]$. Here, $\theta_{\text{old}}$ denotes the parameters before the update (i.e., $\pi_{\theta_{\text{old}}}$ is the policy that collected the experiences) and $\theta$ denotes the parameters after the update. The clip function discourages PPO from making large changes to the current policy (which is relevant for theoretical reasons [schulman2015trust]).
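To make the three terms concrete, below is a minimal NumPy sketch of the full PPO minibatch loss; the variable names are illustrative, and the advantages are computed directly as $V_t^{\text{targ}} - V_\theta(s_t)$, as described above.

```python
import numpy as np

def ppo_loss(log_probs_new, log_probs_old, values, returns, entropy,
             clip_eps=0.2, vf_coef=0.5, ent_coef=0.01):
    """Minimal sketch of the PPO minibatch loss (to be minimized).

    log_probs_new / log_probs_old: log pi_theta(a_t|s_t) under the new / old parameters
    values:  V_theta(s_t) predicted by the value head
    returns: empirical discounted returns V_t^targ
    entropy: mean entropy of pi_theta over the minibatch
    """
    advantages = returns - values                  # A_t = V_t^targ - V_theta(s_t)
    ratio = np.exp(log_probs_new - log_probs_old)  # rho_t(theta)

    # Clipped surrogate objective (negated, since we minimize the loss).
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -np.mean(np.minimum(unclipped, clipped))

    # Squared error between predicted values and empirical returns.
    value_loss = np.mean((returns - values) ** 2)

    # Entropy bonus keeps the policy stochastic, encouraging exploration.
    return policy_loss + vf_coef * value_loss - ent_coef * entropy
```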

3. Related Work

Related to our work, RL has been applied to learning web-based tasks [shi2017world, liu2018reinforcement, gur2018learning, jia2019dom]. As in our setting, learning to accomplish tasks on the web suffers from sparse rewards. In several of these approaches, a Document Object Model (DOM) representation of the current HTML page is used as part of the RL agent’s state, not unlike our use of view hierarchies. While our work shares some of its motivation with this body of work, the expensive and slow interaction with the mobile app environment forced a different approach to the problem.

Also related is a body of work that has applied RL to testing mobile apps (e.g., [koroglu2019reinforcement, pan2020reinforcement]). Similarly to our work, this body of work leverages the underlying representation of apps to facilitate RL training in the mobile setting; however, its goal is fundamentally different. These approaches train and reward RL agents to explore and crash apps (e.g., by identifying valid interactions with on-screen elements or maximizing code coverage by reaching novel states) for testing purposes, rather than rewarding agents for accomplishing sparsely rewarded multi-step tasks, as we do in this work. While [koroglu2019reinforcement] specify concrete test scenarios, their approach uses tabular RL and terminates the learning process when the agent first finds a sequence of actions that satisfies the test scenario. In contrast, here we leverage deep RL to learn policies that consistently accomplish a variety of tasks in mobile apps.

Recent work by [li2020mapping] proposed to use supervised learning techniques to learn how to map natural language instructions to user interface (UI) elements on the screen of a smartphone. They used crowdsourcing to annotate a dataset pairing natural language instructions (obtained by crawling the web) with corresponding UI elements. Using that training data, they learn a model that maps instructions into sequences of UI elements to interact with in order to complete a task. The model assumes that previous steps have been executed correctly when running multi-step tasks. If, for example, the model selects an incorrect UI element at some point along the sequence, then the next prediction will likely fail because the phone is now in an unexpected state. While our work shares its motivation (and some functionality) with [li2020mapping], we take a different computational approach, namely RL, that does not require extensive annotation of a large dataset. Rather, our RL agent explores mobile apps and receives rewards from the environment that guide it towards accomplishing tasks.

4. System Design

Our objective is to explore whether an RL agent can learn to accomplish tasks in mobile apps by interacting with (either a physical or an emulated) phone environment. Mobile apps are interesting and challenging real-world RL benchmarks. Since they are optimized for accessibility, most tasks can be solved after executing a short sequence of actions. However, the branching factor (i.e., action space) is much larger than in a standard RL benchmark, which leads to a very sparse reward signal. Moreover, interacting with Android emulators is slow. These ingredients make mobile apps a challenging benchmark with interesting structure that, hopefully, RL agents will learn to exploit when solving these problems.

In this section, we present the basic building blocks for using RL in mobile apps: the action space, state representation, and reward specification. An overview of the system is shown in Figure 1. The agent (a neural network trained using PPO) interacts with several Android emulators in parallel to collect experiences. At each step, for each emulator, the agent chooses to tap on (or type into) a particular element on screen. After the agent performs the actions, the environments return states derived from the available on-screen information, together with rewards that depend on the current task the agent is solving.

Some tasks might require typing some particular text. For instance, the task “add a new Wi-Fi network named Starbucks” will require the agent to type Starbucks at some point. However, it is practically impossible for an RL agent to discover that typing Starbucks, letter by letter, will cause it to receive a reward. To handle this issue, we follow the same standard as in the web-based task literature [shi2017world] and provide the agent with a list of tokens to choose from when typing text. With that, we now describe the action space, state representation, and rewards used in our work.

Figure 1. Overview of the proposed RL framework. The environment contains a group of emulators, and states are derived from the view hierarchy of the current screen. At each step, the agent chooses an action, which consists of an element ID and a token, and receives a reward based on progress made in the task.
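To make this interaction loop concrete, the following is a minimal sketch of what a single-emulator, OpenAI Gym-compatible environment for this setting could look like. The emulator helper methods (reset_app, interact, task_reward, screen_features) and the constants are illustrative placeholders, not AppBuddy’s actual API.

```python
import numpy as np
import gym
from gym import spaces

N_ELEMENTS = 20   # upper bound on UI elements per screen (n)
N_FEATURES = 871  # per-element feature size (m)
N_TOKENS = 4      # task-specific tokens the agent may type

class MobileAppEnv(gym.Env):
    """Sketch of a Gym environment wrapping a single Android emulator."""

    def __init__(self, emulator):
        self.emulator = emulator
        # An action is a (UI element index, token index) pair.
        self.action_space = spaces.MultiDiscrete([N_ELEMENTS, N_TOKENS])
        # The state is an n x m feature matrix derived from the view hierarchy.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(N_ELEMENTS, N_FEATURES),
            dtype=np.float32)

    def reset(self):
        self.emulator.reset_app()          # return the app to its fresh-install state
        return self._observe()

    def step(self, action):
        element_idx, token_idx = action
        self.emulator.interact(element_idx, token_idx)  # tap, or type the token
        obs = self._observe()
        reward, done = self.emulator.task_reward()      # task-specific reward check
        return obs, reward, done, {}

    def _observe(self):
        # Build the n x m matrix from the current view hierarchy (see Section 4.2).
        return self.emulator.screen_features()
```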

4.1. Action Space

The main challenge in successfully applying RL on smartphones is the extremely large action space. Technically, a user might tap on any pixel on the screen, but allowing that level of granularity would make learning infeasible. Fortunately, we can reduce the action space to the set of elements on the screen by exploiting information from the view hierarchy (the view hierarchy is similar to the DOM tree for web pages).

In Android applications, each view is associated with a rectangle on the screen. The view is responsible both for displaying the pixels in that rectangle and for handling events within it. All the views on a particular screen are organized in a hierarchical tree structure, called the view hierarchy. We show a simple example of a view hierarchy in Figure 2. In this figure, the left part shows a screenshot of the native Android alarm clock app. On the right, we can see part of the view hierarchy at the top and the detailed attributes of the selected view, ‘ImageButton {Add alarm}’, at the bottom. The selected view is also highlighted with a red rectangle in the screenshot on the left.

Figure 2. An example of the view hierarchy for a given screen. The ‘+’ button with the red border on the left-hand side directly corresponds to the highlighted element (‘Add alarm’) in the view hierarchy on the right hand side.

Since the view hierarchy is always available for any Android application, we use its information to define the action space. From this hierarchy, we automatically extract the list of user interface (UI) elements on the screen. The actions available to the agent are then tuples comprising a UI element index (with respect to that list) and a token (as shown in Figure 1). The element index ranges from 0 to $n - 1$, where $n$ is a fixed upper bound on the maximum number of elements on any screen. We say that a UI element is clickable if it reacts when tapped and that a UI element is editable if text may be typed into it. When the agent chooses to interact with some UI element, the agent will type into that element if the element is editable and tap on that element if the element is clickable.
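The extraction of this UI element list can be sketched as follows. The snippet assumes a uiautomator-style XML dump of the view hierarchy (the paper does not specify the exact tooling used), so the node attribute names (text, content-desc, clickable, class) are assumptions tied to that format.

```python
import xml.etree.ElementTree as ET

def extract_ui_elements(view_hierarchy_xml):
    """Flatten a view-hierarchy XML dump into a pre-order list of UI elements."""
    root = ET.fromstring(view_hierarchy_xml)
    elements = []
    for index, node in enumerate(root.iter("node")):  # document order = pre-order traversal
        elements.append({
            "index": index,                                        # pre-order traversal index
            "text": node.get("text", "") or node.get("content-desc", ""),
            "clickable": node.get("clickable") == "true",
            "editable": "EditText" in (node.get("class") or ""),
        })
    return elements
```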

In addition to selecting a UI element index, the agent also chooses a single token from a set of $k$ tokens (in our experiments, $k$ is 4) that are predefined for each task. For instance, if the task is “add a new Wi-Fi network named Starbucks,” then Starbucks will be among the tokens for that task. The chosen token is then typed into the selected UI element if that element is editable. Note that it is also possible to train the agent to select the correct tokens from natural language commands (e.g., as is done by [jia2019dom]). Finally, while we consider only two types of actions in this work, tapping and typing, future work will explore action types such as swiping and long tapping.

4.2. State Representation

As explained above, the action space is defined by the list of UI elements in the current screen, which is extracted from the screen’s view hierarchy. To represent the state, we use the same list of UI elements. Concretely, the state is represented by an $n \times m$ matrix, where each row is a vector of $m$ features representing a particular UI element from the list and $n$ is an upper bound on the maximum number of elements that we expect to see on any screen. In the matrix, each UI element is represented using $m$ features. These features include the textual description of the UI element (which is embedded using a pretrained BERT model [devlin2018bert]). This description is available in the view hierarchy and specifies the purpose of the UI element (e.g., ‘Add alarm’ for the ‘+’ button in Figure 2). We also include information about whether the UI element is clickable or editable and its relative location in the view hierarchy (defined as the element’s pre-order tree traversal index). The relative location features help capture the spatial correlations across different UI elements. The process of extracting these features from the view hierarchy is illustrated in Figure 3. In our experiments, $m$ is 871, i.e., 768 (BERT) + 3 (clickable/editable) + 100 (location in the view hierarchy).

Note that we order the state representation to match the actions, in the sense that selecting action $i$ means interacting with the UI element represented by the $i$-th row of the feature matrix. Also, there are cases where the current screen has fewer than $n$ elements. In those cases, we fill the remaining rows of the feature matrix with zeros and, if the agent chooses to interact with a nonexistent UI element, a no-op action is performed (i.e., an action that does not change the environment).

Figure 3. Each element is represented in our state representation by features derived from the view hierarchy of the current screen.
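As a rough illustration of how such a feature matrix could be assembled, the sketch below uses the Hugging Face transformers library for the BERT embedding. The choice of bert-base-uncased, the use of the [CLS] vector, and the third clickable/editable flag are assumptions; the paper only names the 768-dimensional text embedding, clickable/editable information, and a 100-dimensional location encoding.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

N_ELEMENTS, N_LOCATION = 20, 100   # n (elements per screen) and size of the location one-hot
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed_text(text):
    """768-dim embedding of a UI element's textual description (the [CLS] vector)."""
    inputs = tokenizer(text or "", return_tensors="pt", truncation=True)
    with torch.no_grad():
        return bert(**inputs).last_hidden_state[0, 0].numpy()

def element_features(elem):
    """Concatenate text embedding, clickable/editable flags, and a location one-hot."""
    flags = np.array([elem["clickable"], elem["editable"],
                      elem["clickable"] or elem["editable"]],  # third flag is a placeholder
                     dtype=np.float32)
    location = np.zeros(N_LOCATION, dtype=np.float32)
    location[min(elem["index"], N_LOCATION - 1)] = 1.0          # pre-order traversal index
    return np.concatenate([embed_text(elem["text"]), flags, location])  # 768 + 3 + 100 = 871

def state_matrix(elements):
    """Stack per-element features into the fixed-size n x m state; pad with zeros."""
    state = np.zeros((N_ELEMENTS, 768 + 3 + N_LOCATION), dtype=np.float32)
    for row, elem in enumerate(elements[:N_ELEMENTS]):
        state[row] = element_features(elem)
    return state
```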

4.3. Reward Specification

When learning to accomplish tasks in mobile apps, the reward is extremely sparse if the agent is only given a positive reward when accomplishing the task and a reward of 0 otherwise. To mitigate this reward sparsity, we specify the reward function $R$ such that the agent is given an intermediate reward for reaching certain states in the app that correspond to sub-goals on the way to accomplishing the task. $R$ is calculated as follows:

$$R = \sum_{i=1}^{N} \mathbb{1}_i \, r_i + \mathbb{1}_{\text{task}} \, r_{\text{task}} \qquad (1)$$

where $\mathbb{1}_i$ indicates whether the agent has reached a certain state and $r_i$ represents the corresponding intermediate reward. $N$ is the total number of intermediate steps for which the agent will receive positive intermediate rewards; the value of $N$ is task-specific. Note that an intermediate reward may only be given once in an episode; $\mathbb{1}_i$ is set to 0 after that. $r_{\text{task}}$ is the reward returned by the environment when the task is complete (based on the view hierarchy extracted from the emulator). (We also tried using reward shaping [ng1999policy] to generate intermediate rewards, but the results were inferior.)

For example, in order to accomplish a task in the settings app in our benchmarks, an agent must add a new Wi-Fi network named ‘Starbucks’. $\mathbb{1}_1 = 1$ when the agent reaches the ‘Wi-Fi settings’ screen and $\mathbb{1}_2 = 1$ when the agent reaches the ‘add new Wi-Fi network’ screen. $\mathbb{1}_{\text{task}} = 1$ if the agent has added a new Wi-Fi network called ‘Starbucks’ and it now appears on the screen. In the next section, we discuss our experiments, where we show the impact of excluding intermediate rewards (i.e., where all $r_i$ are 0 and, hence, the agent receives only a sparse reward for accomplishing the task).
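One possible implementation of Equation (1), with indicators that fire at most once per episode, is sketched below; the predicate functions over the view hierarchy are illustrative placeholders.

```python
class TaskReward:
    """Sketch of Equation (1): one-time intermediate rewards plus a final task
    reward, all derived from the current view hierarchy."""

    def __init__(self, subgoal_checks, subgoal_rewards, task_check, task_reward):
        self.subgoal_checks = subgoal_checks     # predicates over the view hierarchy
        self.subgoal_rewards = subgoal_rewards   # r_1, ..., r_N
        self.task_check = task_check             # predicate for task completion
        self.task_reward = task_reward           # r_task
        self.reached = [False] * len(subgoal_checks)

    def reset(self):
        self.reached = [False] * len(self.reached)

    def __call__(self, view_hierarchy):
        reward = 0.0
        for i, check in enumerate(self.subgoal_checks):
            if not self.reached[i] and check(view_hierarchy):
                self.reached[i] = True           # each intermediate reward is given once
                reward += self.subgoal_rewards[i]
        done = self.task_check(view_hierarchy)
        if done:
            reward += self.task_reward
        return reward, done
```

For the settings example above, subgoal_checks would contain predicates such as "the Wi-Fi settings screen is visible" and "the add-new-network screen is visible", while task_check would verify that a network named Starbucks now appears on the screen.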

5. Experimental Evaluation

The objectives of our evaluation were: 1) to show that RL can be used to learn policies that accomplish tasks in mobile apps; 2) to expose the impact of design decisions including reward modeling and number of phone emulators used in training; and 3) to demonstrate our approach’s potential for generalization to similar tasks in unseen apps.

Task                 Steps   Policy Updates
Settings - Easy        1          10
Settings - Medium      2          25
Settings - Hard        3          25
Alarm - Easy           3          25
Alarm - Medium         6          50
Alarm - Hard           9          75
Split - Easy           4          25
Split - Medium         8          50
Split - Hard          13          75
Shopping - Easy        2          25
Shopping - Medium      4          30
Shopping - Hard        6          50
Table 1. For each task, Steps is the minimum number of steps needed to complete the task, and Policy Updates is the number of PPO policy updates performed during training.

5.1. Experimental Setup

We ran experiments using PPO2 (an implementation of PPO made for GPUs, from the baselines library). The emulators were provisioned using Docker-Android [docker-android] with headless mode and KVM acceleration enabled. With this setup, we were able to train the agent with tens of emulators on a single machine. Below, we provide a high-level description of the domains, hyperparameters, experimental protocol, and evaluation metric. Further details are in Appendix A.
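For concreteness, a training run of this kind might be wired up roughly as in the sketch below, which uses the stable-baselines PPO2 implementation with one Gym environment per emulator. MobileAppEnv and make_emulator are the illustrative placeholders from Section 4, the constant and hyperparameter values mirror Table 3, and the policy-network architecture customization from Appendix A is omitted.

```python
from stable_baselines import PPO2
from stable_baselines.common.vec_env import SubprocVecEnv

N_EMULATORS = 35  # one Gym environment per Android emulator

def make_env(rank):
    def _init():
        return MobileAppEnv(make_emulator(rank))  # hypothetical emulator factory
    return _init

# Run the emulator-backed environments in parallel processes.
env = SubprocVecEnv([make_env(i) for i in range(N_EMULATORS)])

model = PPO2(
    "MlpPolicy", env,
    gamma=0.99,        # discount
    noptepochs=4,      # epochs per policy update
    ent_coef=0.01,     # entropy bonus coefficient
    vf_coef=0.5,       # value-function loss coefficient
    cliprange=0.2,     # PPO clipping range
)
model.learn(total_timesteps=100_000)
```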


Benchmarks. We experiment with 4 mobile apps, where each app includes 3 tasks of varying difficulty. Descriptions of the tasks and the intermediate rewards can be found in Table 2 and Appendix A, respectively.


  • Expense splitting app (obtained from https://bit.ly/3bmhhQf): create groups of people with whom to split various expenses.

  • Shopping list app (obtained from https://bit.ly/3hVANEr): create new checklists, add items to a list, check items off, and remove items from a list.

  • Alarm clock app (obtained from https://bit.ly/3nngfpu): add, remove, or modify alarms.

  • Android networks and Internet settings: add new wireless networks, configure mobile network settings, disable/enable airplane mode.

The expense splitting, shopping, and alarm clock apps are open source and come from F-Droid (https://f-droid.org), a repository of open source Android apps, while the settings app comes with the Android OS. Table 1 summarizes the tasks and the minimum number of steps required to accomplish each task.

Task Task Description
Settings - Easy Navigate to Wi-Fi settings screen
Settings - Medium Navigate to add new Wi-Fi network screen
Settings - Hard Navigate to add new Wi-Fi network screen and add a new network called Starbucks
Split - Easy Create a new expense splitting group
Split - Medium Create a new expense splitting group and add a new member to it
Split - Hard Create a new expense splitting group, add a new member to it, and create a new expense
Alarm - Easy Set one alarm clock
Alarm - Medium Set two alarm clocks
Alarm - Hard Set three alarm clocks
Shopping - Easy Add a new item to the default list
Shopping - Medium Create a new list
Shopping - Hard Create a new list and add an item to it
Table 2. Descriptions of the app-based tasks used in our experiments.

Baselines. We show empirically how, by adjusting various knobs, the training process for learning policies to accomplish multi-step tasks in our benchmarks is made easier or harder. More specifically, we compare different configurations of our approach along a number of dimensions.


  • Number of emulators: we compare training with 3 versus 35 emulators (and, correspondingly, 3 versus 35 parallel environments in PPO2).

  • Reward specification: we compare between a reward specification that includes intermediate rewards and one that does not. For the latter case, the intermediate rewards $r_i$ are always set to 0 in Equation 1.

  • Episode length: we compare between resetting the environment after 25 and 40 steps.

In each of the comparisons, only one knob is changed and the rest stay fixed. The ‘vanilla’ configuration includes 35 emulators, intermediate rewards, and an episode length of 25.

Experiment Protocol. In each experiment, we ran PPO2 for some number of policy updates and evaluated the agent’s current policy after each update (the number of policy updates per task is listed in Table 1). To evaluate the policy, we estimated its success rate by running the policy 100 times and counting how many times it accomplished the task within 25 steps.

Note that, during training, whenever an episode ends (after 25 or 40 steps or when the task has been accomplished), we reset the emulator by returning the app to a state that is identical to the state of the app immediately following its installation. This hard reset ensures that the agent always has to solve the task from scratch (and cannot take advantage of any progress made in previous episodes).
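The evaluation protocol can be summarized by the following sketch, assuming a single (non-vectorized) environment whose done flag signals task completion rather than a timeout, and a model whose predict method follows the stable-baselines convention of returning (action, state).

```python
def success_rate(model, env, episodes=100, max_steps=25):
    """Estimate the policy's success rate: fraction of rollouts that finish the task."""
    successes = 0
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action, _ = model.predict(obs)        # sample an action from the policy
            obs, reward, done, _ = env.step(action)
            if done:                              # task accomplished within the step budget
                successes += 1
                break
    return successes / episodes
```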

5.2. Results

Figure 4. The success rate of a trained policy (percentage of successful test episodes) as a function of the number of policy updates in various tasks. In Figures (a)-(c), we compare between 3 and 35 emulators. In Figures (d)-(f), we compare between 25 and 40 steps per episode. In Figures (g)-(i), we compare between a reward specification with and without intermediate rewards. In all the figures, error bars correspond to a 90% confidence interval. Figures 6-8 (Appendix B) show the results for the remainder of the tasks.

Figure 4 shows the success rate of a trained policy as a function of the number of policy updates on various tasks. Each row compares two baselines. In addition, we compare against a random baseline that returns a uniformly sampled action at each step. For example, in the first emulator comparison in Figure 4, after 10 policy updates, the trained policy achieves a success rate of approximately 0.1 when trained with 3 emulators and 1.0 when trained with 35 emulators, while the random baseline achieves a success rate of 0.16. As is evident from Figure 4, in most tasks the random baseline was unable to accomplish the task at all.

Number of emulators. Between every two policy updates, 3 emulators gather less experience than 35 emulators. The plots in Figures 4(a)-(c) and Appendix B reflect this and show that, on average, the 3-emulator baseline required more policy updates to achieve a high success rate than the 35-emulator baseline. Moreover, in some of the harder tasks (e.g., Figures 6(b), 6(c), and 6(f) in Appendix B), the 3-emulator baseline was unable to accomplish the task at all.

Reward specification. Figures 4(g)-(i) and Appendix B show that providing the agent with intermediate rewards was crucial for learning to accomplish many of the tasks. In fact, the baseline that receives no intermediate rewards achieves a success rate of zero on most of the tasks.

Episode length. The plots in Figures 4(d)-(f) and Appendix B show that in most tasks, the 25-step and 40-step baselines perform similarly. However, there are cases where only the 40-step baseline manages to accomplish the task, such as in Figure 7(h) in Appendix B.

5.3. Applying Learned Skills to Unseen Apps

Our results indicate that RL can be used to learn policies that accomplish tasks in mobile apps. However, deploying this approach on real phones is still some way off. Consider, for instance, a human user of an alarm clock app. After learning how to use alarm clock app A, the user will typically have an easier time learning how to use alarm clock app B, assuming some similarity between the apps. We would like an RL agent to possess similar capabilities: learn a policy that accomplishes a task in one app and then use that trained policy to accomplish a similar task in a different, yet similar, app.

To experiment with this idea, we begin with a naive approach: we take a policy trained on the easy alarm clock task in our experiments and deploy this trained policy in a different app, the native Android alarm clock app. The unseen native app has much in common with the training app, both in form and in function. Nevertheless, this naive approach failed to accomplish the same task (i.e., setting an alarm) in the unseen app, likely because the RL agent did not learn to focus on the appropriate features and instead memorized the element ID and token combination to choose in each state.

To remedy this, we shuffled the on-screen UI elements in the state representation given to the agent during training. In this way, the agent can no longer simply memorize the element ID to select and must instead learn to attend to the relevant features of each element in the state representation (e.g., the BERT-embedded textual description). However, this approach also failed because the information encoded in the view hierarchy for each element was not sufficiently similar between the two apps. For example, while the accessibility text accompanying the ‘+’ button in the native alarm clock app reads add alarm (see the blue button in Figure 2 and the value of the content-desc attribute in the view hierarchy), there is no accessibility text accompanying the corresponding ‘+’ button in the open source app. This discrepancy stems from choices made by the developers of each app. We hypothesized that augmenting the accessibility text of the relevant elements in the open source alarm clock app (on which the agent is trained) with text similar to the accessibility text accompanying the corresponding elements of the unseen app would help the agent generalize to the unseen app.

Generalization Experiments. To test our hypothesis, we trained a policy on the open source alarm clock app while also shuffling the state representation, which now included relevant text in the accessibility field, as described above. We compared the baseline used in our experiments (without shuffling the state representation) against a baseline in which the state representation is shuffled and augmented with relevant text. Figure 5 (left) compares these two baselines on the unseen app in terms of the success rate achieved as a function of the number of policy updates, while Figure 5 (right) makes the same comparison on the training app. Figure 5 (right) shows that training an RL agent on the training app without shuffling, unsurprisingly, achieves a high success rate after fewer policy updates than an RL agent that is given a shuffled state representation. However, as shown in Figure 5 (left), only by shuffling the state representation, thereby forcing the agent to attend to the features, was the agent able to generalize to a similar task in the unseen alarm clock app.

Figure 5. The success rate of a trained policy (percentage of successful test episodes) as a function of the number of policy updates. The plots compare between the baseline used in our experiments (i.e., w/o shuffling) and a baseline where the state representation given to the agent is shuffled and augmented with relevant text, on the unseen, test app (left) and on the training app (right).
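A minimal sketch of the shuffling idea, assuming the n x m state matrix from Section 4.2: permute the rows before handing the state to the agent and keep the permutation so that the agent’s chosen row can be mapped back to the real UI element.

```python
import numpy as np

def shuffle_state(state_matrix, rng=np.random):
    """Randomly permute UI-element rows so the agent cannot memorize element IDs.

    Returns the shuffled matrix and the permutation needed to map an action
    (a row index in the shuffled matrix) back to the original element index.
    """
    perm = rng.permutation(state_matrix.shape[0])
    return state_matrix[perm], perm

# In the environment's step(): translate the agent's choice back before acting.
# original_element_idx = perm[chosen_row]
```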

5.4. Discussion

Reward specification. In our experiments, PPO2 needed intermediate rewards in order to solve most of our tasks. This is unfortunate because it increases the complexity of programming reward functions. We encourage future work to explore how to learn policies without using intermediate rewards or how to generate intermediate rewards automatically. For example, previous work has proposed to learn structured representations of reward functions from experience and has shown that these representations can be used to effectively solve partially observable RL problems [toro2019learning].

Training efficiency. Interacting with Android emulators is slow, and it will be important to consider how to speed up such interactions. Interestingly, the bottleneck in our training was resetting the environment (i.e., the emulator). With multiple emulators running on the same machine, parallel resets often caused issues (e.g., the machine was temporarily unable to query an emulator to obtain the current view hierarchy and derive the current state from it). To mitigate this, we cached the initial view hierarchy of each app, which allowed us to avoid querying the emulator during the reset phase.
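A sketch of this caching idea, reusing the illustrative environment from Section 4: the first observation after a hard reset is computed once and reused, so reset() no longer needs to query the emulator (this relies on the fresh-install state always producing the same initial view hierarchy).

```python
class CachedResetEnv:
    """Sketch: cache the app's initial observation so reset() avoids querying
    a potentially busy emulator for the view hierarchy."""

    def __init__(self, env):
        self.env = env
        self._initial_obs = None

    def reset(self):
        self.env.emulator.reset_app()                # hard reset: fresh-install state
        if self._initial_obs is None:
            self._initial_obs = self.env._observe()  # query the emulator only once
        return self._initial_obs

    def step(self, action):
        return self.env.step(action)
```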

Partial observability. Our state representation is derived from the information available on the current screen. However, that information alone might not be enough to determine the optimal action. For instance, the task “add one alarm at 7 am and another at 7 pm” (our alarm-medium task) is partially observable because, when the agent is on the screen for setting a new alarm, it has no information about whether it has already set the other alarm. As such, providing some form of memory to the agent will be key to solving harder tasks in mobile apps.

Generalization. Our results on generalization (Section 5.3) show that RL agents can learn policies that generalize to unseen (but similar) apps. However, we had to manually add meaningful textual descriptions to the view hierarchy to do so. Fortunately, adding such descriptions can be done automatically by using a widget captioning technique [li2020widget]. This is a promising direction for future work.

Beyond View Hierarchies.

In our work, we exclusively derived our state representation from the view hierarchy and ignored visual information from the screen. While this was sufficient for solving the wide variety of problems in our benchmarks, we believe that considering pixel information is an interesting avenue for future work. For example, in many cases, salient information is missing from the view hierarchy and can only be extracted from the screen’s pixels. Previous work leveraged computer vision techniques (e.g., object detection and optical character recognition) to derive information from a smartphone screen, without relying on the underlying view hierarchy [sereshkeh2020vasta].

6. Concluding Remarks

Building smarter smartphones has the potential to broaden accessibility of phone applications and improve the user experience. In this paper, we explored the use of RL to learn to accomplish tasks using mobile apps. RL agent states were derived from the underlying representation of on-screen elements, rewards were based on progress made in the task, and agents could interact with elements on the phone screen by tapping or typing. Our experiments showed that our RL agents could learn to accomplish multi-step tasks in a number of mobile apps, as well as achieve modest generalization across different mobile apps. An important contribution of our work was the development of a mobile app RL environment that is compatible with OpenAI Gym and its provision through the AppBuddy training platform. The release of this training platform together with a suite of benchmarks opens the door to further research into learning to accomplish a diversity of tasks using mobile apps.


AppBuddy: Learning to Accomplish Tasks in Mobile Apps via Reinforcement Learning

Technical Appendix

Appendix A Experimental Setup

Hardware Specification
Our workstation, used both for training and testing, has an Intel(R) Xeon(R) E5-2670 v2 CPU @ 2.50GHz with 20 CPU cores. The total amount of memory is 503 GB. The workstation also has 8 NVIDIA Tesla M40 GPUs, each with 24 GB of GPU memory. The operating system on the workstation is CentOS 7.0.

Hyperparameters
The following hyperparameters are used in all experiments presented in this paper. Hyperparameters not listed here were assigned the default values used in baselines. For the policy network, we used a fully-connected MLP with 3 hidden layers of 1024 units and tanh nonlinearities. $n$, the fixed upper bound on the number of on-screen elements in our state representation, was set empirically to 20.

Hyperparameter         Value
Number of epochs       4
Learning rate
Minibatch size         number of emulators (3 or 35)
Discount (γ)           0.99
VF coefficient         0.5
Clipping range         0.2
Entropy coefficient    0.01
Table 3. Hyperparameters for PPO2 used in our experiments.

Intermediate Rewards used in the Tasks

The following describes the intermediate rewards given to the agent in each task.

  • Settings - Easy

    • No intermediate rewards

  • Settings - Medium

    • The agent is rewarded after navigating to the Wi-Fi settings screen

  • Settings - Hard

    • The agent is rewarded after navigating to the Wi-Fi settings screen and after navigating to the ‘add new Wi-Fi network’ screen

  • Split - Easy

    • The agent is rewarded after navigating to the ‘create new group’ screen and after entering the correct token into the ‘new group name’ editable field

  • Split - Medium

    • The agent is rewarded with the intermediate rewards of the easy split task. The agent is also rewarded when it successfully creates a new group with the correct name, after navigating to the group screen, after navigating to the ‘add new member’ screen, and after entering the correct token into the ‘new member name’ editable field

  • Split - Hard

    • The agent is rewarded with the intermediate rewards of the easy and medium split tasks. The agent is also rewarded when it successfully adds a member with the correct name to the group, after navigating to the ‘add new expense’ screen, after entering either the correct amount of money or the correct expense name, and after entering both the correct amount of money and the correct expense name (in this case, the agent gets a slightly larger reward)

  • Alarm - Easy

    • No intermediate rewards

  • Alarm - Medium

    • The agent is rewarded when the first alarm clock is set properly

  • Alarm - Hard

    • The agent is rewarded when the first alarm clock is set, when the second alarm is set, and when the third alarm is set, and receives a larger intermediate reward when any two alarm clocks are set

  • Shopping - Easy

    • No intermediate rewards

  • Shopping - Medium

    • The agent is rewarded when navigating to the ‘more options’ screen and when navigating to the ‘add new list’ screen

  • Shopping - Hard

    • The agent is rewarded when navigating to the ‘more options’ screen, when navigating to the ‘add new list’ screen, and when successfully adding a new list with the correct name

Appendix B Additional Results

Here we include additional results from our experiments.

B.1. Multi-step Task Learning

Figures 6-8 show additional results comparing pairs of baselines on each task in our suite of benchmarks.

Figure 6. The success rate of a trained policy (percentage of successful test episodes) as a function of the number of policy updates in various tasks. Comparison between 3 and 35 emulators.
Figure 7. The success rate of a trained policy (percentage of successful test episodes) as a function of the number of policy updates in various tasks. Comparison between 25 and 40 steps per episode.
Figure 8. The success rate of a trained policy (percentage of successful test episodes) for different reward specification methods in various tasks. Comparison between a reward specification with and without intermediate rewards.