Data-Efficient Reinforcement Learning for Malaria Control

05/04/2021 · by Lixin Zou, et al. · Association for Computing Machinery · Michigan State University

Sequential decision-making in cost-sensitive tasks is prohibitively daunting, especially for problems with a significant impact on people's daily lives, such as malaria control and treatment recommendation. The main challenge faced by policymakers is to learn a policy from scratch by interacting with a complex environment in only a few trials. This work introduces a practical, data-efficient policy learning method, named Variance-Bonus Monte Carlo Tree Search (VB-MCTS), which can cope with very little data and facilitate learning from scratch in only a few trials. Specifically, the solution is a model-based reinforcement learning method. To avoid model bias, we apply Gaussian Process (GP) regression to estimate the transitions explicitly. With the GP world model, we propose a variance-bonus reward that measures the uncertainty about the world. Adding this reward to the planning with MCTS results in more efficient and effective exploration. Furthermore, the derived polynomial sample complexity indicates that VB-MCTS is sample efficient. Finally, outstanding performance in a competitive world-level RL competition and extensive experimental results verify its advantage over the state of the art on the challenging malaria control task.


1 Introduction

Malaria is a mosquito-borne disease that continues to place a heavy burden on sub-Saharan Africa (SSA) [moran2007malaria]. Recently, there has been significant progress in improving treatment efficiency and reducing the mortality rate of malaria. Unfortunately, due to financial constraints, policymakers face the challenge of ensuring continued success in disease control with insufficient resources. To make intelligent decisions, learning multi-year control policies has been formulated as a Reinforcement Learning (RL) problem [bent2018novel]. Nevertheless, applying RL to malaria control is a tricky issue, since RL usually requires numerous trial-and-error searches to learn from scratch. Unlike simulation-based games, e.g., Atari games [mnih2013playing] and the game of Go [silver2017mastering], endless intervention trials on real regions over many years are unacceptable, since the actual cost in lives and money is enormous. Hence, as in many human-in-the-loop systems [zou2019longterm, zou2020neural, zou2020pseudo], it is too expensive to apply RL directly to learn a malaria intervention policy from scratch.

Therefore, to reduce the heavy burden of malaria in SSA, it is urgent to improve the data efficiency of policy learning. In [bent2018novel], novel exploration techniques, such as the Genetic Algorithm [holland1992genetic], Batch Policy Gradient [sutton2000policy], and Upper/Lower Confidence Bound [auer2010ucb], were first applied to learn malaria control policies from scratch. However, these solutions are introduced under the Stochastic Multi-Armed Bandit (SMAB) setting, which myopically ignores the delayed impact of interventions and might result in serious problems. For example, the large-scale use of spraying may lead to mosquito resistance and bring about the uncontrolled spread of malaria in the coming years. Hence, we are required to optimize disease control policies in the long run, which is far more challenging than exploration in the SMAB setting.

Considering these long-term effects, a finite-horizon continuous-state Markov Decision Process is employed to model disease control in this work. Under this setting, we propose a framework named Variance-Bonus Monte Carlo Tree Search (VB-MCTS) for data-efficient policy search, illustrated in Figure 1. In particular, it is a model-based training framework, which iterates between updating the world model and collecting data. In model training, a Gaussian Process (GP) is used to approximate the state transition function from the collected rollouts. As a non-parametric probabilistic model, the GP avoids model bias and explicitly models the uncertainty about the transitions, i.e., the variance of the state. In data collection, we employ MCTS to generate the policy on the mean MDP plus a variance-bonus reward. The variance-bonus reward decreases the uncertainty at state-action pairs with high potential reward by explicitly motivating the agent to sample the state-actions with the highest upper-bounded reward. Furthermore, the sample complexity of the proposed method indicates that it is a PAC-optimal exploration solution for malaria control. Finally, to verify the effectiveness of our policy search solution, extensive experiments are conducted on the malaria control simulators (https://github.com/IBM/ushiriki-policy-engine-library) [bent2018novel], which are gym-like (https://gym.openai.com) environments used for KDD Cup 2019 [zhou2020kdd]. The outstanding performance in the competition and extensive experimental results demonstrate that our approach achieves unprecedented data efficiency on malaria control compared to state-of-the-art methods.

Figure 1: Overview of the proposed learning system on Malaria Control. The system alternates model training and data collection.

Our main contributions are: (1) We propose a highly data-efficient learning framework for malaria control. Under this framework, policymakers can successfully learn control policies from scratch within 20 rollouts. (2) We derive the sample complexity of the proposed method and verify that VB-MCTS is an efficient PAC-MDP algorithm. (3) Extensive experiments conducted on malaria control demonstrate that our solution outperforms state-of-the-art methods.

2 Related Work

As a highly pathogenic disease, malaria has been widely studied from the perspectives of predicting disease spread, diagnosis, and personalized care planning. However, little work has focused on applying RL to learn cost-effective intervention strategies, which play a crucial role in controlling the spread of malaria [moran2007malaria]. In [bent2018novel], malaria control was first formulated as a stochastic multi-armed bandit (SMAB) problem and solved with novel exploration techniques. Nevertheless, SMAB-based solutions only myopically maximize instant rewards, and ignoring delayed influences might result in disease outbreaks in the future. Therefore, in this work, a comprehensive solution is proposed to facilitate policy learning in a few trials under the finite-horizon MDP setting.

Another related topic is data-efficient RL. To increase data efficiency, we need to extract more information from the available trials [deisenroth2011pilco], which involves utilizing the samples in the most efficient way (i.e., exploitation) and choosing the samples with more information (i.e., exploration). Generally, for exploitation, model-based methods [ha2018recurrent, kamthe2017data] are more sample-efficient but require more computation time for planning. Model-free methods [szita2006learning, krause2016cma, van2009theoretical] are generally computationally light and can be applied without a planner, but they need (sometimes exponentially) more samples and are usually not efficient PAC-MDP algorithms [strehl2009reinforcement]. For exploration, there are two options: (1) Bayesian approaches, which consider a distribution over possible models and act to maximize expected reward; unfortunately, they are intractable in all but very restricted cases, such as the linear policy assumption in PILCO [deisenroth2011pilco]. (2) Intrinsically motivated exploration, which implicitly negotiates the exploration/exploitation dilemma by always exploiting a modified reward that directly accomplishes exploration. However, on the one hand, the vast majority of papers only address the discrete-state case, providing incremental improvements on the complexity bounds, such as MMDP-RB [sorg2012variance], metric-E [kakade2003exploration], and BED [kolter2009near]. On the other hand, for the more realistic continuous-state MDP, over-exploration has been introduced to achieve polynomial sample complexity in many works, such as KWIK [li2011knows] and GP-Rmax [grande2014sample]. These methods explore all regions equally until the reward function is highly accurate everywhere. Drawing on the strengths of existing methods, our solution is a model-based RL framework, which plans efficiently with MCTS and trades off exploitation and exploration via a variance-bonus reward.

3 Proposed Method: VB-MCTS

3.1 Malaria Control as MDP

Finding an optimal malaria control policy can be posed as a reinforcement learning task by modeling it as a Markov Decision Process (MDP). Specifically, we formulate the task as a finite-horizon MDP, defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, r, \gamma)$, with $\mathcal{S}$ the potentially infinite state space, $\mathcal{A}$ the finite set of actions, $\mathcal{T}$ the deterministic transition function, $r$ the reward function, and $\gamma$ the discount factor. In this case, we face the challenge of developing an efficient policy for a population over a 5-year intervention time frame. As shown in Figure 2, the corresponding components in malaria control are defined as follows.

Figure 2: Malaria intervention as MDP.

Action

The actions are the available means of intervention, including the mass distribution of long-lasting insecticide-treated nets (ITNs) and indoor residual spraying (IRS) with pyrethroids in SSA [stuckey2014modeling]. In this work, the action space is constructed as $\mathcal{A} = \{(a^{\mathrm{ITN}}, a^{\mathrm{IRS}})\}$ with $a^{\mathrm{ITN}}, a^{\mathrm{IRS}} \in [0, 1]$, which represent the population coverage of ITNs and IRS in a specific area. Without significantly affecting performance, we discretize the action space at a granularity of 0.1 for simplicity.

Reward

The reward is a scalar $r_t \in \mathbb{R}$ associated with the transition to the next state $s_{t+1}$. In malaria control, it is determined through an economic cost-effectiveness analysis; an overview of the reward calculation is given in [bent2018novel]. Without loss of generality, the reward function is assumed known to us, since an MDP with unknown rewards and unknown transitions can be represented as an MDP with known rewards and unknown transitions by adding additional states to the system.

State

The state contains the important observations for decision making at every time step. In malaria control, these include the number of lives with disability, life expectancy, lives lost, and treatment expenses [bent2018novel]. We set the state to the form $s_t = (r, a, t)$, consisting of the most recent reward $r$, the previous action $a$, and the current intervention timestamp $t$, which covers the most crucial and useful observations for malaria control, as shown in Figure 2. For the start state $s_0$, the reward and action are initialized to 0.

Let $\pi$ denote a deterministic mapping from states to actions, and let $V^{\pi}(s)$ denote the expected discounted reward obtained by following policy $\pi$ from state $s$. The objective is to find a deterministic policy $\pi^*$ that maximizes the expected return at the start state $s_0$:

$$\pi^* = \arg\max_{\pi} V^{\pi}(s_0).$$
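To make the formulation concrete, the following minimal Python sketch shows one way the discretized action space and the state tuple described above could be represented. The class and variable names are our own illustration, not the authors' implementation.

import itertools
from dataclasses import dataclass

# Discretized action space: ITN and IRS population coverages in [0, 1]
# at a granularity of 0.1, as described above (11 x 11 = 121 actions).
COVERAGES = [round(i / 10, 1) for i in range(11)]
ACTIONS = list(itertools.product(COVERAGES, repeat=2))

@dataclass
class MalariaState:
    """State s_t = (most recent reward, previous action, intervention year)."""
    prev_reward: float                 # 0.0 at the start state
    prev_action: tuple[float, float]   # (ITN coverage, IRS coverage), (0.0, 0.0) at the start
    year: int                          # current intervention year, 0 .. 4

    def as_vector(self) -> list[float]:
        return [self.prev_reward, *self.prev_action, float(self.year)]

s0 = MalariaState(prev_reward=0.0, prev_action=(0.0, 0.0), year=0)
print(len(ACTIONS), s0.as_vector())    # 121 [0.0, 0.0, 0.0, 0.0]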

3.2 Model-based Indirect Policy Search

In the following, we detail the key components of the proposed VB-MCTS framework: the world model, the planner, and the variance-bonus reward with its sample complexity.

World Model Learning

The probabilistic world model is implemented as a GP, where we use a predefined feature mapping $\phi(s_t, a_t)$ as the training input and the target state $s_{t+1}$ as the training target. The GP yields one-step predictions

$$\mu(\phi_*) = \mathbf{k}_*^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{y}, \qquad \sigma^2(\phi_*) = k(\phi_*, \phi_*) - \mathbf{k}_*^{\top} (K + \sigma_n^2 I)^{-1} \mathbf{k}_*,$$

where $k(\cdot, \cdot)$ is the kernel function, $\mathbf{k}_* = [k(\phi_*, \phi_1), \dots, k(\phi_*, \phi_n)]^{\top}$ denotes the vector of covariances between the test point $\phi_*$ and all training points $\{\phi_i\}_{i=1}^{n}$, $\mathbf{y}$ is the vector of corresponding training targets, $\sigma_n^2$ is the noise variance, and $K$ is the Gram matrix with entries $K_{ij} = k(\phi_i, \phi_j)$.

Throughout this paper, we consider a zero prior mean function and a squared exponential (SE) kernel with automatic relevance determination. The SE covariance function is defined as

$$k(\phi, \phi') = \sigma_f^2 \exp\Big( -\frac{1}{2} \sum_{i=1}^{d} \frac{(\phi_i - \phi'_i)^2}{\ell_i^2} \Big),$$

where $\sigma_f^2$ is the signal variance of the state transition and $\ell_i > 0$. The characteristic length-scale $\ell_i$ controls the importance of the $i$-th feature. Given the training inputs and the corresponding targets, the hyper-parameters of the GP (length-scales $\ell_i$ and signal variance $\sigma_f^2$) are determined through evidence maximization [williams2006gaussian].
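For concreteness, the sketch below fits such a GP world model with scikit-learn, using an ARD squared-exponential kernel plus a white-noise term. The choice of library and the per-dimension modeling are our assumptions, not details from the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

def fit_gp_world_model(Phi, Y):
    """Fit one GP per target dimension on features Phi -> targets Y.

    Phi: (n, d) array of feature vectors phi(s_t, a_t).
    Y:   (n, m) array of targets (next-state components and reward).
    """
    d = Phi.shape[1]
    # ARD: one length-scale per feature; ConstantKernel plays the role of the
    # signal variance sigma_f^2 and WhiteKernel that of the noise variance sigma_n^2.
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(d)) + WhiteKernel(1e-2)
    models = []
    for j in range(Y.shape[1]):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(Phi, Y[:, j])   # hyper-parameters via marginal-likelihood (evidence) maximization
        models.append(gp)
    return models

def gp_predict(models, phi):
    """Posterior mean and standard deviation per target dimension at feature phi."""
    phi = np.atleast_2d(phi)
    mus, stds = [], []
    for gp in models:
        mu, std = gp.predict(phi, return_std=True)
        mus.append(mu[0])
        stds.append(std[0])
    return np.array(mus), np.array(stds)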

Exploration with Variance-Bonus Reward

The algorithm we propose is itself very straightforward and similar to many previously proposed exploration heuristics [kolter2009near, srinivas2009gaussian, sorg2012variance, grande2014computationally]. We call it the Variance-Bonus Reward, since it chooses actions according to the current mean estimate of the reward plus an additional variance-based bonus for state-actions that have not been well explored:

$$\tilde{r}(s, a) = \mu_r(s, a) + \beta_r \, \sigma_r^2(s, a) + \beta_s \, \sigma_s^2(s, a),$$

where $\beta_r$ and $\beta_s$ are parameters that trade off exploitation and exploration, and $\sigma_s^2(s, a)$ and $\sigma_r^2(s, a)$ are the predicted variances of the state and the reward, respectively. The variance of the reward can be computed exactly following the law of iterated variances.
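Assuming the bonus takes the additive form written above (our reconstruction of the stripped equation), a sketch of the planning reward could look as follows, reusing the per-dimension GPs from the previous sketch; beta_r and beta_s stand in for the trade-off parameters.

import numpy as np

def variance_bonus_reward(reward_gp, state_gps, phi, beta_r=0.5, beta_s=0.5):
    """Mean predicted reward plus variance bonuses for under-explored state-actions.

    reward_gp: GP predicting the reward from phi(s, a).
    state_gps: list of GPs, one per state dimension.
    phi:       feature vector phi(s, a) of the candidate state-action pair.
    """
    phi = np.atleast_2d(phi)
    mu_r, std_r = reward_gp.predict(phi, return_std=True)
    # Aggregate predictive variance over the state dimensions.
    var_s = sum(gp.predict(phi, return_std=True)[1][0] ** 2 for gp in state_gps)
    return float(mu_r[0]) + beta_r * float(std_r[0]) ** 2 + beta_s * var_s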

Planning with Mean MDP + Reward Bonus

MCTS is a strikingly successful planning algorithm [silver2017mastering], which can find the optimal solution given enough computational resources. Since disease control is not a real-time task, we propose to apply MCTS (Figure 1) as the planner for generating the policy that maximizes the variance-bonus reward. During execution, the MCTS planner incrementally builds an asymmetric search tree guided toward the most promising direction by a tree policy. This process consists of four phases: selection, expansion, evaluation, and backup (as shown in Figure 3).

Specifically, each edge $(s, a)$ of the search tree stores an average action value $Q(s, a)$ and a visit count $N(s, a)$. In the selection phase, starting from the root state, the tree is traversed by simulation (that is, descending the tree with the mean prediction of states, without backup). At each time step of each simulation, an action is selected from state $s$ so as to maximize the action value plus a bonus that decays with repeated visits, which encourages exploration in the tree search:

$$a = \arg\max_{a'} \Big( Q(s, a') + c \sqrt{\frac{\ln N(s)}{N(s, a')}} \Big), \qquad N(s) = \sum_{a'} N(s, a'),$$

where $c$ is a constant determining the level of exploration. When the traversal reaches a leaf node at step $L$, the leaf node may be expanded, with each new edge initialized as $N(s_L, a) = 0$ and $Q(s_L, a) = 0$, and the corresponding child nodes initialized with the mean GP prediction of the next state. The leaf node is then evaluated by the average outcome of rollouts, which are played out until the terminal step using fast rollout policies such as a random policy or a greedy policy. At the end of the simulation, the action values and visit counts of all traversed edges are updated, i.e., backup. Each edge accumulates the visit count and the mean evaluation of all simulations passing through it:

$$N(s_t, a_t) \leftarrow N(s_t, a_t) + 1, \qquad Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \frac{G - Q(s_t, a_t)}{N(s_t, a_t)},$$

where $(s_t, a_t)$ with $t \le L$ are the edges in the forward trace and $G$ is the simulation return.

Figure 3: An illustration of MCTS for policy generation.

This cycle of selection, expansion, evaluation, and backup is repeated until the maximum number of iterations has been reached. At this point, the best action is chosen by selecting the action that leads to the highest value (max child):

$$\pi_{\mathrm{MCTS}}(s) = \arg\max_{a} Q(s, a), \qquad (1)$$

where $\pi_{\mathrm{MCTS}}$ is the policy generated by MCTS. Note that the variance-bonus reward $\tilde{r}$ can be replaced with the mean reward $\mu_r$ for generating the best-known policy, since the latter maximizes the expected reward under the posterior learned so far.
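The following is a minimal UCT-style sketch of the planner described above. It operates on a hypothetical world-model object whose mean_step(state, action) method returns the GP posterior-mean next state and the (variance-bonus or mean) reward; ACTIONS is the discretized action set from the earlier sketch, and the constants only loosely mirror the text.

import math
import random

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}                    # action -> child Node
        self.N = {a: 0 for a in ACTIONS}      # edge visit counts
        self.Q = {a: 0.0 for a in ACTIONS}    # edge mean action values

def select_action(node, c=5.0):
    """Selection: action value plus a bonus that decays with repeated visits."""
    total = sum(node.N.values()) + 1
    return max(ACTIONS,
               key=lambda a: node.Q[a] + c * math.sqrt(math.log(total) / (node.N[a] + 1)))

def rollout(model, state, depth, horizon):
    """Evaluation: random fast-rollout policy on the mean world model."""
    ret = 0.0
    while depth < horizon:
        state, r = model.mean_step(state, random.choice(ACTIONS))
        ret += r
        depth += 1
    return ret

def simulate(model, node, horizon):
    if node.depth >= horizon:
        return 0.0
    a = select_action(node)
    next_state, r = model.mean_step(node.state, a)
    if a not in node.children:                            # expansion
        node.children[a] = Node(next_state, node.depth + 1)
        G = r + rollout(model, next_state, node.depth + 1, horizon)
    else:
        G = r + simulate(model, node.children[a], horizon)
    node.N[a] += 1                                        # backup
    node.Q[a] += (G - node.Q[a]) / node.N[a]
    return G

def mcts_plan(model, state, depth, horizon=5, n_iter=10_000):
    """Run MCTS from `state` and return the max-child action, as in Eq. (1)."""
    root = Node(state, depth)
    for _ in range(n_iter):
        simulate(model, root, horizon)
    return max(ACTIONS, key=lambda a: root.Q[a])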

Input: Hyper-parameters β_r, β_s; maximum number of trials N.
Output: The malaria control policy π.
# Initialization.
Generate a random policy π_0.
Apply π_0 in the real world and collect training samples D.
Update the GP world model with D.
Set the number of trials n = 1.
for n = 1, ..., N do
        Initialize the time step t = 0 and the state s_0.
        for t = 0, ..., T - 1 do
               # Data Collection.
               Generate action a_t by MCTS with the variance-bonus reward.
               Apply a_t in the real world and collect (s_t, a_t, r_t, s_{t+1}).
               Update the training samples D ← D ∪ {(s_t, a_t, r_t, s_{t+1})}.
               # Model Training.
               Update the GP world model with D.
               Update t ← t + 1 and s_t ← s_{t+1}.
        end for
        Increase the number of training trials n ← n + 1.
end for
# Generate the final decision.
Initialize the time step t = 0, the state s_0, and the policy π = ∅.
for t = 0, ..., T - 1 do
        Generate action a_t by MCTS with the mean reward.
        Predict s_{t+1} with the Gaussian Process.
        Update t ← t + 1 and s_t ← s_{t+1}.
        Append action a_t to the final decision π.
end for
Return the malaria control policy π.
ALGORITHM 1 Variance-Bonus Monte Carlo Tree Search

Finally, we implement an iterative training procedure, as shown in Algorithm 1, where we specify the order in which each component occurs within the iteration.
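Putting the pieces together, a rough Python sketch of the Algorithm 1 loop might look as below. Here env stands in for the gym-like simulator (a reset/step interface is assumed), while fit_world_model, mcts_plan, and the use_bonus switch are hypothetical wrappers around the earlier sketches rather than the authors' code.

import random

def vb_mcts(env, n_trials=20, horizon=5):
    """Iterative training: alternate data collection and GP model training (sketch)."""
    data = []

    # Initialization: one random rollout collected from the real environment.
    state = env.reset()
    for _ in range(horizon):
        action = random.choice(ACTIONS)
        next_state, reward = env.step(action)
        data.append((state, action, reward, next_state))
        state = next_state
    model = fit_world_model(data)                 # GP regression on collected rollouts

    # Remaining trials: plan with the variance-bonus reward, refit after each step.
    for _ in range(n_trials - 1):
        state = env.reset()
        for t in range(horizon):
            action = mcts_plan(model, state, depth=t, horizon=horizon)
            next_state, reward = env.step(action)
            data.append((state, action, reward, next_state))
            model = fit_world_model(data)         # model training
            state = next_state

    # Final decision: plan with the mean reward only (no variance bonus),
    # simulating forward with the GP posterior mean.
    model.use_bonus = False                       # hypothetical switch to the mean reward
    policy, state = [], env.reset()
    for t in range(horizon):
        action = mcts_plan(model, state, depth=t, horizon=horizon)
        policy.append(action)
        state, _ = model.mean_step(state, action)
    return policy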

3.3 Sample Complexity

In the following, we derive the sample complexity (i.e., the number of samples required to learn near-optimal performance) of our proposed solution. In the worst case, when the reward function is equal everywhere and all state-action pairs are explored equally, VB-MCTS has the same complexity bounds as over-exploration methods such as GP-Rmax [grande2014sample]. In practice, VB-MCTS learns the optimal policy in fewer steps than over-exploration methods, because high-uncertainty regions with low reward are not worth exploring. Following the general PAC-MDP theorem, Theorem 10 in [strehl2009reinforcement], we derive the polynomial sample complexity of VB-MCTS.

Theorem 1.

Assume that the feature space of state-action pairs is a compact domain $\mathcal{X}$ and that the target values are bounded. Assume further that the reward and the action value are Lipschitz continuous w.r.t. the state-action pairs, with Lipschitz constants $L_r$ and $L_Q$ for the reward and the action value, respectively. If VB-MCTS is executed with suitably chosen exploration parameters on any such MDP, then VB-MCTS will follow an $\epsilon$-optimal policy from its current state on all but

(2)

timesteps, with probability at least $1 - \delta$, where the bound (2) is polynomial in these quantities and in the covering number of the domain $\mathcal{X}$, i.e., the cardinality of the minimal set of balls (under a distance measure $d$) that covers $\mathcal{X}$.

Sketch of Proof.

The proof proceeds by showing that the key properties required for a general PAC-MDP algorithm (optimism, accuracy, and learning complexity) [strehl2009reinforcement] are satisfied.

Optimism

Assume that MCTS can find the optimal decision in every state given enough computational resources. Then the action value computed on the mean MDP with the variance-bonus reward upper-bounds the optimal action value. The maximum error from propagating the mean prediction instead of the true state is controlled by the Lipschitz assumption, and the prediction error is upper bounded by the regularization error with high probability (Section A.2 in [grande2014computationally]). The same argument applies to the reward prediction. It follows that Algorithm 1 remains optimistic with high probability.

Accuracy

Following Lemma 1 from [grande2014sample], the prediction error at a state-action pair is bounded in probability whenever the predictive variance of the GP at that pair is sufficiently small.

Learning complexity

(a) For state-action pairs whose features are sufficiently close, the posterior variance of the GP at one pair is bounded by the observations made at the other; consequently, once enough observations fall in a small neighborhood, the posterior variance in that neighborhood drops below the accuracy threshold. (b) Since the feature space is a compact domain, it can be covered by a finite number of balls of fixed radius, i.e., its covering number is finite.

Given (a) and (b), every state-action pair lies in some covering ball; if there are sufficiently many observations in that ball, the accuracy condition in (2) is satisfied. Combining this with Lemma 8 from [strehl2009reinforcement], the total number of updates that occur is bounded polynomially in the covering number with high probability.

Now that the key properties of optimism, accuracy, and learning complexity have been established, the general PAC-MDP Theorem 10 of [strehl2009reinforcement] is invoked. ∎

Theorem 1 guarantees that the number of steps in which the performance of VB-MCTS is significantly worse than that of an optimal policy starting from the current state is at most log-linear in the covering number of the state-action space, with high probability.

4 Experiments

In this section, we conduct extensive experiments on two different OpenMalaria-based [smith2008towards] simulators, SeqDecChallenge and ProveChallenge, which are the testing environments used in KDD Cup 2019. In these simulators, the parameters of the simulation are hidden from the RL agents, since the "simulation parameters" for SSA are unknown to policymakers. Additionally, to simulate the real disease control problem, only 20 trials are allowed before generating the final decision, which is much more challenging than traditional RL tasks. The simulation environments are available at https://github.com/IBM/ushiriki-policy-engine-library.

Agents for Comparison

To show the advantage of VB-MCTS, several benchmark reinforcement learning methods and open-source solutions are deployed for comparison:

Random Policy: The random policy is executed for 20 trials, and the generated policy with the maximum reward is chosen as the final decision.
SMAB: This policy treats the problem as a Stochastic Multi-Armed Bandit and independently optimizes the policy for every year with Thompson sampling [chapelle2011empirical].
CEM: The Cross-Entropy Method is a simple gradient-free policy search method [szita2006learning].
CMA-ES: CMA-ES is a gradient-free evolutionary approach to optimizing non-convex objective functions [krause2016cma].
Q-Learning-GA: It learns the malaria control policy by combining Q-learning and a Genetic Algorithm.
Expected-Sarsa: It collects 13 random episodes and runs expected value SARSA [van2009theoretical] for 7 episodes to improve the best policy using the collected statistics.
GP-Rmax: It uses GP learners to model the transition and reward functions and replaces the value of any state-action pair that is "unknown" with the maximum value [li2011knows, grande2014sample].
GP-MC: It employs a Gaussian Process to regress the world model; the policy is generated by sampling from the posterior and choosing the action with the maximum reward.
VB-MCTS: Our proposed method.

Implementation Details

We build a 14-dimensional feature map for this task, which includes periodic features and a cross-term feature of the state-action pair. Since the predicted variances of the state and the reward are the same in our setting, we empirically set the sum of the exploration/exploitation parameters to a fixed value in the experiments. In MCTS, the exploration constant is set to 5, and only the top 50 highest-reward child nodes are expanded. The number of iterations does not exceed 100,000. For the Gaussian Process, to avoid overfitting, 5-fold cross-validation is performed during the updates of the GP world model. In particular, we use 1 fold for training and 4 folds for validation, which ensures the generalizability of the GP world model over different state-action pairs. Our implementation and all baseline code are available at https://github.com/zoulixin93/VB_MCTS.
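The exact 14-dimensional feature map is not reproduced in the text above; purely as an illustration of what periodic and cross-term features can look like, a hypothetical mapping over the state and action from the earlier sketches might be:

import math

def feature_map(state, action, horizon=5):
    """Hypothetical feature map: raw terms, periodic terms over the intervention
    year, and a cross term between the two coverages.
    Not the paper's actual 14-dimensional mapping."""
    itn, irs = action
    return [
        1.0,
        state.prev_reward,
        *state.prev_action,
        float(state.year),
        itn,
        irs,
        math.sin(2 * math.pi * state.year / horizon),   # periodic features
        math.cos(2 * math.pi * state.year / horizon),
        itn * irs,                                      # cross-term feature
    ]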

 

Agents           SeqDecChallenge
                 Med. Reward   Max Reward   Min Reward
Random Policy    167.79        193.24       135.06
SMAB             209.05        386.28        -6.44
CEM              179.30        214.87       120.92
CMA-ES           185.34        246.12       108.18
Q-Learning-GA    247.75        332.40       171.33
Expected-Sarsa   462.76        495.03       423.93
GP-Rmax          233.95        292.99       200.35
GP-MC            475.99        499.60       435.51
VB-MCTS          533.38        552.78       519.61

Agents           ProveChallenge
                 Med. Reward   Max Reward   Min Reward
Random Policy    248.25        464.92        55.24
SMAB              18.02        135.37       -56.86
CEM              229.61        373.83        20.09
CMA-ES           289.03        314.57        92.95
Q-Learning-GA    242.97        325.24        88.70
Expected-Sarsa   190.08        296.16       140.86
GP-Rmax          287.45        371.49       153.98
GP-MC            300.37        447.15       263.96
VB-MCTS          352.17        492.23       259.97

Table 1: Performance comparisons between different agents.

4.1 Results

Main Results

In Table 1, we report the median, maximum, and minimum reward over 10 independent repeated runs. The results are quite consistent with our intuition. We have the following observations: (1) For this finite-horizon decision-making problem, treating it as an SMAB does not work, and the delayed influence of actions cannot be ignored in malaria control. As presented in Table 1, SMAB's performance has large variance and is even worse than the random policy on ProveChallenge. (2) Overall, the two model-based methods GP-MC and VB-MCTS consistently outperform the model-free methods (CEM, CMA-ES, Q-Learning-GA, and Expected-Sarsa) on SeqDecChallenge and ProveChallenge, which indicates that model-based solutions are generally more data-efficient than model-free solutions. From the results in Table 1, model-free methods are outperformed by model-based methods by a large margin on SeqDecChallenge, and their performance is almost the same as the random policy on ProveChallenge. (3) The proposed VB-MCTS outperforms all baselines on SeqDecChallenge and ProveChallenge. Compared with GP-MC and GP-Rmax, the advantage comes from efficient MCTS planning with the variance-bonus reward.

Figure 4: Malaria policy learning curves of five different agents.

Data Efficiency

This paragraph compares the data efficiency (required trials) of VB-MCTS with other RL methods that learn malaria policies from scratch. In Figures 4(a) and 4(b), we report the agents' performance after collecting each trial episode on SeqDecChallenge and ProveChallenge. The horizontal axis indicates the number of trials; the vertical axis shows the average performance after collecting each episode. Figures 4(a) and 4(b) highlight that our proposed VB-MCTS approach (brown) requires on average only 8 trials, including the first random trial, to achieve the best performance on SeqDecChallenge and ProveChallenge. The results indicate that VB-MCTS outperforms the state-of-the-art methods in both data efficiency and performance.

5 Conclusion and Future Work

We proposed a model-based approach that employs a Gaussian Process to regress the state transitions for data-efficient RL in malaria control. By planning with the variance-bonus reward, our method naturally handles the exploration/exploitation dilemma through efficient MCTS planning. Extensive experiments conducted on the challenging malaria control task demonstrate the advantage of VB-MCTS over the state of the art in both performance and data efficiency. However, the stationary MDP setting may be unrealistic due to the development of disease control tools and the evolution of the disease. Therefore, data-efficient reinforcement learning under nonstationary settings will be a more realistic and more challenging task.

References