Collapsing Bandits and Their Application to Public Health Interventions

07/05/2020 ∙ by Aditya Mate, et al. ∙ University of Virginia, Harvard University

We propose and study Collapsing Bandits, a new restless multi-armed bandit (RMAB) setting in which each arm follows a binary-state Markovian process with a special structure: when an arm is played, the state is fully observed, thus "collapsing" any uncertainty, but when an arm is passive, no observation is made, thus allowing uncertainty to evolve. The goal is to keep as many arms in the "good" state as possible by planning a limited budget of actions per round. Such Collapsing Bandits are natural models for many healthcare domains in which workers must simultaneously monitor patients and deliver interventions in a way that maximizes the health of their patient cohort. Our main contributions are as follows: (i) Building on the Whittle index technique for RMABs, we derive conditions under which the Collapsing Bandits problem is indexable. Our derivation hinges on novel conditions that characterize when the optimal policies may take the form of either "forward" or "reverse" threshold policies. (ii) We exploit the optimality of threshold policies to build fast algorithms for computing the Whittle index, including a closed-form expression. (iii) We evaluate our algorithm on several data distributions, including data from a real-world healthcare task in which a worker must monitor and deliver interventions to maximize their patients' adherence to tuberculosis medication. Our algorithm achieves a 3-order-of-magnitude speedup compared to state-of-the-art RMAB techniques while achieving similar performance.


1 Introduction

Motivation. This paper considers scheduling problems in which a planner must act on k out of N binary-state processes each round. The planner fully observes the state of the processes on which she acts, then all processes undergo an action-dependent Markovian state transition; the state of each process is unobserved until it is acted upon again, resulting in uncertainty. The planner's goal is to maximize the number of processes that are in some "good" state over the course of T rounds. This class of problems is natural in the context of monitoring tasks which arise in many domains such as sensor/machine maintenance Iannello et al. (2012); Glazebrook et al. (2006); Abbou and Makis (2019); Villar (2016), anti-poaching patrols Qian et al. (2016), and especially healthcare. For example, nurses or community health workers are employed to monitor and improve the adherence of patient cohorts to medications for diseases like diabetes Newman et al. (2018), hypertension Brownstein et al. (2007), tuberculosis Rahedi Ong'ang'o et al. (2014); Chang et al. (2013) and HIV Kenya et al. (2013, 2011). Their goal is to keep patients adherent (i.e., in the "good" state), but a health worker can only intervene on (visit) a limited number of patients each day. Health workers can play a similar role in monitoring and delivering interventions for patient mental health, e.g., in the context of depression Löwe et al. (2004); Mundorf et al. (2018) or Alzheimer's Disease Lin et al. (2018).

We adopt the solution framework of restless multi-armed bandits (RMABs), a generalization of multi-armed bandits (MABs) in which a planner may act on k out of N arms each round, where each arm follows a Markov decision process (MDP). Solving an RMAB is PSPACE-hard in general Papadimitriou and Tsitsiklis (1999). Therefore, a common approach is to consider the Lagrangian relaxation of the problem in which the budget constraint is dualized. Solving the relaxed problem gives Lagrange multipliers which act as a greedy index heuristic, known as the Whittle index, for the original problem. The Whittle index approach has been shown to be asymptotically optimal (i.e., as N → ∞ with k/N held fixed) Weber and Weiss (1990) and performs well empirically Ansell et al. (2003), making it a common solution technique for RMABs.

Critically, using the Whittle index approach requires two key components: (i) a fast method for computing the index and (ii) a proof that the problem satisfies a condition known as indexability. Without (i) the approach can be prohibitively slow, and without (ii) performance guarantees are sacrificed. Neither (i) nor (ii) is known for general RMABs. Therefore, to capture the scheduling problems addressed in this work, we introduce a new subclass of RMABs, Collapsing Bandits, distinguished by the following feature: when an arm is played, the agent fully observes its state, "collapsing" any uncertainty, but when an arm is passive, no observation is made and uncertainty evolves. We show that this RMAB subclass is more general than previous models and leads to new theoretical results, including conditions under which the problem is indexable and under which optimal policies follow one of two simple threshold types. We use these results to develop algorithms for quickly computing the Whittle index. In experiments, we analyze the algorithms' performance on (i) data from a real-world healthcare scheduling task, in which our approach ties state-of-the-art performance at a fraction of the runtime, and (ii) various synthetic distributions, on some of which the algorithm achieves performance comparable to the state of the art even outside its optimality conditions.

To summarize, our contributions are as follows: (i) we introduce a new subclass of RMABs, Collapsing Bandits, (ii) we derive theoretical conditions for Whittle indexability and for the optimal policy to be threshold-type, and (iii) we develop an efficient solution that achieves a 3-order-of-magnitude speedup compared to more general state-of-the-art RMAB techniques, without sacrificing performance.

2 Restless Multi-Armed Bandits

An RMAB consists of a set of N arms, each associated with a two-action MDP Puterman (2014). An MDP consists of a set of states S, a set of actions A, a state-dependent reward function r: S → R, and a transition function P, where P(s, a, s′) denotes the probability of transitioning from state s to s′ when action a is taken. An MDP policy π: S → A represents a choice of action to take at each state. We will consider both discounted and average reward criteria. The long-term discounted reward starting from state s_0 is defined as V^π(s_0) = E[ Σ_{t=0}^∞ β^t r(s_t) ], where β ∈ [0, 1) is the discount factor and actions are selected using π. To define average reward, let μ^π denote the occupancy frequency induced by policy π, i.e., the fraction of time spent in each state of the MDP. The average reward of policy π is defined as the expected reward computed over the occupancy frequency: R̄^π = Σ_s μ^π(s) r(s).
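As a concrete sketch of these two criteria, the following toy example (illustrative transition values, not from the paper) evaluates both the discounted and the average reward of a fixed policy on a two-state chain:

```python
import numpy as np

# Evaluate a fixed policy on a toy 2-state chain: P[s, s'] is the transition
# matrix induced by the policy, r[s] the state reward. Values are illustrative.
P = np.array([[0.7, 0.3],   # from "bad" state 0
              [0.1, 0.9]])  # from "good" state 1
r = np.array([0.0, 1.0])    # reward depends only on the state
beta = 0.95                 # discount factor

# Discounted criterion: iterate the Bellman expectation backup to a fixed point.
V = np.zeros(2)
for _ in range(1000):
    V = r + beta * P @ V

# Average criterion: occupancy frequency = stationary distribution of P,
# obtained here as the eigenvector of P^T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
mu = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
mu /= mu.sum()
avg_reward = mu @ r
```

The discounted fixed point equals (I − βP)⁻¹ r, which offers an easy correctness check on the iteration.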

Each arm in an RMAB is an MDP with the action set A = {0, 1}. Action a = 1 (a = 0) is called the active (passive) action and denotes the arm being pulled (not pulled). The agent can pull at most k arms at each time step. The agent's goal is to maximize either her discounted or average reward across the arms over time. Some RMAB problems need to account for partial observability of states. It is sufficient to let the MDP state be the belief state: the probability of being in each latent state (Kaelbling et al., 1998). While intractable in general due to the infinite number of reachable belief states, most partially observable RMABs studied (including our Collapsing Bandits) have polynomially many belief states due to a finite time horizon or other structure.

Related work RMABs have been an attractive framework for studying various stochastic scheduling problems since Whittle indices were introduced Whittle (1988). Because general RMABs are PSPACE-hard Papadimitriou and Tsitsiklis (1999), RMAB studies usually consider restricted classes under which some performance guarantees can be derived. Collapsing Bandits form one such novel class that generalizes some existing results which we note in later sections. Liu and Zhao (2010) develop an efficient Whittle index policy for a 2-state partially observable RMAB subclass in which the state transitions are unaffected by the actions taken and reward is accrued from the active arms only. Akbarzadeh and Mahajan (2019) define a class of bandits with “controlled restarts,” giving indexability results and a method for computing the Whittle index. However, “controlled restarts” define the active action as state independent, a stronger assumption than Collapsing Bandits which allow state-dependent action effects. Glazebrook et al. (2006) give Whittle indexability results for three classes of restless bandits: (1) A machine maintenance regime with deterministic active action effect (we consider stochastic active action effect) (2) A switching regime in which the passive action freezes state transitions (in our setting, states always change regardless of action) (3) A reward depletion/replenishment bandit which deterministically resets to a start state on passive action (we consider stochastic passive action effect). Hsu (2018) and Sombabu et al. (2020) augment the machine maintenance problem from Glazebrook et al. (2006) to include either i.i.d. or Markovian evolving probabilities of an active action having no effect, a limited form of state-dependent action. Meshram et al. (2018) introduce Hidden Markov Bandits which, similar to our approach, consider binary state transitions under partial observability, but do not allow for state dependent rewards on passive arms. 
In sum, our Collapsing Bandits introduce a new, more general RMAB formulation than special subclasses previously considered. Qian et al. (2016) present a generic approach for any indexable RMAB based on solving the (partially observable) MDPs on arms directly. Because we derive a closed form for the Whittle index, our algorithm is orders of magnitude faster.

3 Collapsing Bandits

We introduce Collapsing Bandits (CoB) as a specially structured RMAB with partial observability. In CoB, each arm has binary latent states {0, 1}, representing the bad and good state, respectively. The agent acts during each of T finite rounds (days) t ∈ {1, …, T}. Let a(t) ∈ {0, 1}^N denote the vector of actions taken by the agent on day t. Arm i is said to be active at t if a_i(t) = 1 and passive otherwise. The agent acts on k arms per day, i.e., ‖a(t)‖₁ = k, where k < N because resources are limited. When acting on arm i, the true latent state of i is fully observed by the agent and thus its uncertainty "collapses" to a realization of the binary latent state. We denote this observation as ω ∈ {0, 1}. States of passive arms are completely unobservable by the agent.

Active arms transition according to the transition matrix P^{i,a} and passive arms transition according to P^{i,p}. We drop the superscript i when there is no ambiguity. Our scheduling problem, like many problems in analogous domains, exhibits the following natural structure: (i) processes are more likely to stay "good" than to change from "bad" to "good"; (ii) when acted on, they tend to improve. These natural structures are respectively captured by imposing the following constraints on P^a and P^p for each arm: (i) P_{11} > P_{01} for both the active and passive matrices; (ii) P^a_{01} > P^p_{01} and P^a_{11} > P^p_{11}. To avoid unnecessary complication through edge cases, all transition probabilities are assumed to be nonzero. The agent receives reward Σ_i s_i(t) at each t, where s_i(t) is the latent state of arm i at t. The agent's goal is to maximize the long-term reward, either discounted or average, defined in Sec. 2.
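The structural constraints above are easy to check programmatically. A small sketch (matrix values are illustrative, not estimated from any dataset), using the convention that P[s, 1] is the probability of reaching the good state from state s:

```python
import numpy as np

def satisfies_cob_structure(P_act, P_pas):
    """Check the 'natural' constraints assumed for each Collapsing Bandit arm.

    P_act, P_pas are 2x2 row-stochastic matrices over states {0: bad, 1: good}.
    (i)  Staying good is more likely than recovering: P[1,1] > P[0,1]
         for both matrices.
    (ii) Acting helps from either state: P_act[s,1] > P_pas[s,1].
    """
    cond_i = P_act[1, 1] > P_act[0, 1] and P_pas[1, 1] > P_pas[0, 1]
    cond_ii = P_act[0, 1] > P_pas[0, 1] and P_act[1, 1] > P_pas[1, 1]
    return bool(cond_i and cond_ii)

P_act = np.array([[0.4, 0.6], [0.1, 0.9]])   # illustrative values
P_pas = np.array([[0.8, 0.2], [0.3, 0.7]])
```

Matrices failing either condition fall outside the class the theory below assumes.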

Figure 1: Belief-state MDP under the policy of always being passive. There is one chain for each observation with the head marked black. Belief states deterministically transition down the chains.

Belief-State MDP Representation

In limited-observability settings, belief-state MDPs have organized chain-like structures, which we will exploit. In particular, the only information that affects our belief of an arm being in state 1 is the number of days since that arm was last pulled and the state observed at that time. Therefore, we can arrange these belief states into two "chains" of length T, one for each observation ω ∈ {0, 1}. A sketch of the belief-state chains under the passive action is shown in Fig. 1. Let b_ω(u) denote the belief state, i.e., the probability that the state is 1, if the agent received observation ω when it acted on the process u days ago. Note that b_ω(u) is also the expected reward associated with that belief state, and let B be the set of all belief states.

When the belief-state MDP is allowed to evolve under some policy, the following mechanism arises: first, after an action, the state is observed (uncertainty "collapses"), then one round passes, causing the agent's belief to become b_ω(1), the head of the chain determined by the observation ω. Subsequent passive actions cause the process to transition deterministically down the same chain (though the transition in the latent state is still stochastic). Then, when the process's arm is made active, it transitions to the head of one of the chains with probability equal to the belief that the corresponding observation would be emitted (see Fig. 2(a) for an illustration).

The belief associated with a belief state can be calculated in closed form from the given transition probabilities. Writing b∞ = P^p_01 / (1 − P^p_11 + P^p_01) for the stationary belief of the passive chain, we have:

b_ω(u) = b∞ + (P^p_11 − P^p_01)^{u−1} · (P^a_{ω1} − b∞)   (1)
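The belief chain can be computed either by iterating the one-step passive update or via the fixed point of the affine map, and the two must agree; a sketch (helper names are ours, with P_act/P_pas as 2x2 row-stochastic matrices whose [s, 1] entry is the probability of reaching the good state):

```python
import numpy as np

def belief_chain(omega, u, P_pas, P_act):
    """Belief b_omega(u): P(state = good) u rounds after observing state omega.

    One round after the (active) observation, the belief is the active-
    transition entry P_act[omega, 1]; thereafter it evolves passively:
        b_{k+1} = b_k * P_pas[1,1] + (1 - b_k) * P_pas[0,1].
    """
    b = P_act[omega, 1]
    for _ in range(u - 1):
        b = b * P_pas[1, 1] + (1 - b) * P_pas[0, 1]
    return b

def belief_closed_form(omega, u, P_pas, P_act):
    """Same quantity via the fixed point of the affine passive update (Eq. 1)."""
    lam = P_pas[1, 1] - P_pas[0, 1]          # contraction factor of the update
    b_inf = P_pas[0, 1] / (1.0 - lam)        # stationary belief
    return b_inf + (lam ** (u - 1)) * (P_act[omega, 1] - b_inf)

P_act = np.array([[0.4, 0.6], [0.1, 0.9]])   # illustrative values
P_pas = np.array([[0.8, 0.2], [0.3, 0.7]])
```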

4 Collapsing Bandits: Threshold Policies and Whittle Indexability

Because of the well-known intractability of solving general RMABs, the widely adopted solution concept in the RMAB literature is the Whittle index approach; for a comprehensive description, see Whittle (1988). Intuitively, the Whittle index captures the value of acting on an arm in a particular state by finding the minimum subsidy λ the agent would accept to not act, where the subsidy is some exogenous "donation" of reward for remaining passive. Formally, the modified reward function becomes r_λ(b, a) = r(b) + λ·1{a = 0}, where r(b) = b is the belief-state reward and λ ∈ R is the subsidy. Let V_λ and R̄_λ be the discounted and average reward criteria for this new subsidy setting, respectively. The former is maximized by the discounted value function (we give a value function for the average reward criterion in Fast Whittle Index Computation):

V_λ(b) = max{ λ + b + β·V_λ(τ(b)),   b + β·[ b·V_λ(b_1(1)) + (1 − b)·V_λ(b_0(1)) ] }   (2)

where b_ω(u) is defined in Eq. 1 and τ(b) is shorthand for the one-step passive belief update b·P^p_11 + (1 − b)·P^p_01; write V^p_λ and V^a_λ for the passive and active terms of the max, respectively. In a CoB, the Whittle index of a belief state is the smallest λ s.t. it is equally optimal to be active or passive in the current state. Formally:

W(b) = inf{ λ : V^p_λ(b) = V^a_λ(b) }   (3)
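To make the definition concrete, one can approximate the Whittle index numerically by bisecting on the subsidy and running value iteration on a discretized belief grid. This toy sketch is for intuition only — it is not the paper's fast algorithm, and the transition values are illustrative:

```python
import numpy as np

P_act = np.array([[0.4, 0.6], [0.1, 0.9]])   # illustrative values
P_pas = np.array([[0.8, 0.2], [0.3, 0.7]])
beta = 0.95
grid = np.linspace(0.0, 1.0, 201)
snap = lambda b: int(np.abs(grid - b).argmin())  # nearest grid cell
# Precompute the passive successor of every grid belief, and the chain heads.
nxt_pas = np.array([snap(b * P_pas[1, 1] + (1 - b) * P_pas[0, 1]) for b in grid])
heads = (snap(P_act[0, 1]), snap(P_act[1, 1]))   # b_0(1), b_1(1)

def action_values(lam, iters=400):
    """Value-iterate Eq. 2 with subsidy lam; return (passive Q, active Q)."""
    V = np.zeros_like(grid)
    for _ in range(iters):
        Qp = grid + lam + beta * V[nxt_pas]                                 # passive
        Qa = grid + beta * (grid * V[heads[1]] + (1 - grid) * V[heads[0]])  # active
        V = np.maximum(Qp, Qa)
    return Qp, Qa

def whittle_index(b, lo=-1.0, hi=1.0, tol=1e-3):
    """Smallest subsidy making the passive action (weakly) optimal at b."""
    i = snap(b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        Qp, Qa = action_values(mid)
        if Qp[i] >= Qa[i]:
            hi = mid   # passive already optimal: index is at most mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

This brute-force routine is exactly the kind of per-state search the closed-form computation below avoids.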

Critically, performance guarantees hold only if the problem satisfies indexability Weber and Weiss (1990); Whittle (1988), a condition which says that, for any state, the optimal action cannot switch from passive to active as λ increases. Let Π*(λ) be the set of policies that maximize a given reward criterion under subsidy λ.

Definition 1 (Indexability).

An arm is indexable if the set of belief states for which the passive action is optimal (under some policy in Π*(λ)) monotonically increases from the empty set to the entire state space as λ increases from −∞ to ∞. An RMAB is indexable if every arm is indexable.

The following special type of MDP policy is central to our analysis.

Definition 2 (Threshold Policies).

A policy is a forward (reverse) threshold policy if there exists a threshold b_th such that π(b) = 1 (π(b) = 0) if b < b_th and π(b) = 0 (π(b) = 1) otherwise.

Theorem 1.

If for each arm and any subsidy λ there exists an optimal policy that is a forward or reverse threshold policy, then the Collapsing Bandit is indexable under both discounted and average reward criteria.

Proof Sketch.

Using linearity of the value function in the subsidy λ for any fixed policy, we first argue that when forward (reverse) threshold policies are optimal, proving indexability reduces to showing that the threshold monotonically decreases (increases) with λ. Unfortunately, establishing such a monotonic relationship between the threshold and λ is a well-known challenging task in the literature that often involves problem-specific reasoning Liu and Zhao (2010). Our proof features a sophisticated induction argument exploiting the finite size of the belief space B and relies on tools from real analysis for limit arguments.

All formal proofs can be found in the appendix. We remark that Thm. 1 generalizes the result in the seminal work by Liu and Zhao (2010), who proved indexability for a special class of CoB. In particular, the RMAB in Liu and Zhao (2010) can be viewed as a CoB setting with P^a = P^p, i.e., transitions are independent of actions.

Though the Whittle index is known to be challenging to compute in general Whittle (1988), we are able to design an algorithm that computes the Whittle index efficiently assuming the optimality of threshold policies, which we now describe.

Fast Whittle Index Computation

The main algorithmic idea we use is the Markov chain structure that arises from imposing a forward threshold policy on an MDP. A forward threshold policy can be defined by a tuple giving, for each chain, the first belief state that is less than or equal to some belief threshold b_th. In the two-observation setting we consider, this is a tuple (x_0, x_1), where x_ω is the index of the first belief state in chain ω at which it is optimal to act (i.e., at which the belief is less than or equal to b_th). We now drop the arm superscript for ease of exposition. See Fig. 2(a) for a visualization of the transitions induced by such an example policy. For a forward threshold policy (x_0, x_1), the occupancy frequencies induced for each state are:

u_0(i) = (1 − b_1(x_1)) / [ x_0·(1 − b_1(x_1)) + x_1·b_0(x_0) ],   i = 1, …, x_0   (4)
u_1(i) = b_0(x_0) / [ x_0·(1 − b_1(x_1)) + x_1·b_0(x_0) ],   i = 1, …, x_1   (5)

These equations are derived from standard Markov chain theory; note that the occupancy frequencies do not depend on the subsidy. Let R̄_λ(x_0, x_1) be the average reward of the policy (x_0, x_1) under subsidy λ. We decompose the average reward into the contribution of the state reward and the subsidy:

R̄_λ(x_0, x_1) = Σ_ω Σ_{i=1}^{x_ω} u_ω(i)·b_ω(i) + λ·(1 − u_0(x_0) − u_1(x_1))   (6)

Recall that for any belief state b, the Whittle index is the smallest λ for which the active and passive actions are both optimal. Given forward threshold optimality, this translates to two corresponding threshold policies being equally optimal. Such policies must have adjacent belief states as thresholds, as can be concluded from Lemma 1 in Appendix A. Note that for a belief state b_0(i), the only adjacent threshold policies with active and passive as the optimal action at b_0(i) are (i, x_1) and (i + 1, x_1), respectively. The subsidy which makes these two policies equal in value must thus be the Whittle index for b_0(i), which we obtain by solving R̄_λ(i, x_1) = R̄_λ(i + 1, x_1) for λ (and analogously along the other chain). We use this idea to construct two fast Whittle index algorithms.
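Because the decomposition is affine in the subsidy, equating two adjacent policies reduces to solving a linear equation in λ. A hedged sketch of this computation, using occupancy frequencies derived from flow balance at the chain heads (our reconstruction, not the paper's code; b0 and b1 are the belief chains of Eq. 1 as arrays):

```python
import numpy as np

def avg_reward_coeffs(x0, x1, b0, b1):
    """Return (S, c) such that the average reward is R(lam) = S + lam * c.

    Each state of chain omega is visited with equal frequency f_omega; the
    ratio f1/f0 comes from flow balance at the chain heads.
    """
    ratio = b0[x0 - 1] / (1.0 - b1[x1 - 1])      # f1 / f0
    f0 = 1.0 / (x0 + x1 * ratio)                 # per-state frequency, chain 0
    f1 = f0 * ratio                              # per-state frequency, chain 1
    S = f0 * b0[:x0].sum() + f1 * b1[:x1].sum()  # expected state reward
    c = 1.0 - f0 - f1                            # subsidy accrues when passive
    return S, c

def index_candidate(x0, x1, b0, b1):
    """Subsidy equating adjacent forward threshold policies (x0,x1), (x0+1,x1)."""
    S1, c1 = avg_reward_coeffs(x0, x1, b0, b1)
    S2, c2 = avg_reward_coeffs(x0 + 1, x1, b0, b1)
    return (S1 - S2) / (c2 - c1)

# Example chains (illustrative transition values).
P_act = np.array([[0.4, 0.6], [0.1, 0.9]])
P_pas = np.array([[0.8, 0.2], [0.3, 0.7]])
def chain(omega, T):
    bs = [P_act[omega, 1]]
    for _ in range(T - 1):
        bs.append(bs[-1] * P_pas[1, 1] + (1 - bs[-1]) * P_pas[0, 1])
    return np.array(bs)
b0, b1 = chain(0, 20), chain(1, 20)
```

Solving one such linear equation per belief state is what makes the index computation fast.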

Figure 2: (a) Visualization of a forward threshold policy (x_0, x_1). Black nodes are the heads of the chains and grey nodes are the thresholds. (b) A non-increasing belief (NIB) process has non-increasing belief in both chains. A split belief (SB) process has non-increasing belief after being observed in state 1, but non-decreasing belief after being observed in state 0.

Sequential index computation algorithm

Alg. 1 precomputes the Whittle index of every belief state for each process and has time complexity linear in the horizon T per process. It is optimized for settings in which the Whittle index can be precomputed. However, for online learning settings, we give an alternative method in Appendix F that computes the Whittle index on demand, in closed form.

Initialize counters to the heads of the chains: x_0 ← 1, x_1 ← 1
while x_0 ≤ T or x_1 ≤ T do
       Compute λ_0 such that R̄_{λ_0}(x_0, x_1) = R̄_{λ_0}(x_0 + 1, x_1)
       Compute λ_1 such that R̄_{λ_1}(x_0, x_1) = R̄_{λ_1}(x_0, x_1 + 1)
       Set W(b_0(x_0)) ← λ_0 and W(b_1(x_1)) ← λ_1
       Increment x_0 and x_1
end while
Algorithm 1 Sequential index computation algorithm

Our algorithm also requires that belief is non-increasing along each chain. Formally, we require:

Definition 3 (Non-increasing belief (NIB) processes).

A process has non-increasing belief if, for any u ∈ {1, …, T} and for any ω ∈ {0, 1}, b_ω(u + 1) ≤ b_ω(u).

All possible CoB belief trends are shown in Fig. 2(b) (full derivation omitted for space). We make this distinction because the computation of the Whittle index in Alg. 1 is guaranteed to be exact for NIB processes that are also forward threshold optimal, though we show empirically that our approach works surprisingly well for most distributions. In the next section, we analyze the possible forms of optimal policies to find conditions under which threshold policies are optimal.
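The NIB property can be verified numerically by walking both chains; a minimal sketch (illustrative matrices, with the convention that P[s, 1] is the probability of reaching the good state):

```python
import numpy as np

def is_nib(P_pas, P_act, horizon=50):
    """Check non-increasing belief (NIB) numerically: along both chains,
    the belief b_omega(u) never increases with u (up to the given horizon)."""
    for omega in (0, 1):
        b = P_act[omega, 1]                              # chain head b_omega(1)
        for _ in range(horizon):
            nxt = b * P_pas[1, 1] + (1 - b) * P_pas[0, 1]
            if nxt > b + 1e-12:                          # belief rose: not NIB
                return False
            b = nxt
    return True

P_pas = np.array([[0.8, 0.2], [0.3, 0.7]])               # illustrative values
nib_example = is_nib(P_pas, np.array([[0.4, 0.6], [0.1, 0.9]]))
```

Intuitively, a chain is non-increasing exactly when its head starts at or above the stationary belief of the passive update, which is why a "split belief" process (one head below, one above) can arise.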

Types of Optimal Policies

Figure 3: Components of V_λ(b) in Eq. 2. Since the passive-action value function is convex in b, the active-action value function is linear in b, and the value function is a max over these, at most three optimal policy types are possible.

Analyzing Eq. 2 reveals that at most three types of optimal policies exist. This follows directly from the definition of V_λ(b), which is a max over the passive-action value function and the active-action value function. The former is convex in b, a well-known POMDP result Sondik (1978), and the latter is linear in b. Thus, as shown in Fig. 3, there are three ways in which the value functions of each action may intersect; these define optimal policies of forward, reverse, and dual threshold types, respectively. Forward and reverse threshold policies are defined in Def. 2; dual threshold policies are active between two separate threshold points and passive elsewhere. Not only do threshold policies greatly reduce the optimization search space, they often admit closed-form expressions for the index, as demonstrated earlier in this section. We now derive sufficient conditions on the state transition probabilities under which each type of policy is verifiably optimal.

Theorem 2.

Consider a belief-state MDP corresponding to an arm in a Collapsing Bandit. For any subsidy , there is a forward threshold policy that is optimal under the condition:

(7)
Proof Sketch.

Forward threshold optimality requires that if the optimal action at a belief b is passive, then it must be passive for all b′ > b. This can be established by requiring that the derivative of the passive-action value function w.r.t. b is greater than the derivative of the active-action value function. The main challenge is to distill this requirement down to measurable quantities so the final condition can be easily verified. We accomplish this by leveraging properties of the belief chains b_ω(u) and using induction to derive both upper and lower bounds on the value function, as well as a lower bound on its slope. ∎

Intuitively, the condition requires that the intervention effect on processes in the "bad" state must be large. Note that Liu and Zhao (2010) consider the case where transitions are independent of the action taken (P^a = P^p) and positively correlated (P_11 > P_01), which makes Eq. 7 always true. Thus we generalize their result for threshold optimality.

Theorem 3.

Consider a belief-state MDP corresponding to an arm in a Collapsing Bandit. For any subsidy , there is a reverse threshold policy that is optimal under the condition:

(8)

Intuitively, the condition requires a small intervention effect on processes in the "bad" state, the opposite of the forward threshold optimal requirement. Note that both Thm. 2 and Thm. 3 also serve as conditions for the average reward case as β → 1 (a proof based on Dutta's Theorem Dutta (1991) is given in Appendix D).

Conjecture 1.

Dual threshold policies are never optimal for Collapsing Bandits.

This conjecture is supported by extensive numerical simulations over the random space of state transition probabilities, values of the discount factor β, and values of the subsidy λ; its proof remains an open problem. Note that this conjecture would imply that all Collapsing Bandits are indexable.

5 Experimental Evaluation

We evaluate our algorithm on several domains using both real and synthetic data distributions. We test the following algorithms: Threshold Whittle is the algorithm developed in this paper. Qian et al. (2016), a slow but precise general method for computing the Whittle index, is our main baseline that we improve upon. Random selects k processes to act on at random each round. Myopic acts on the k processes that maximize the expected reward at the immediate next time step, i.e., at time t it picks the k processes with the largest expected gain in next-step reward from acting. Oracle fully observes all states and uses Qian et al. (2016) to calculate Whittle indices. We measure performance in terms of intervention benefit, where 0% corresponds to the reward of a policy that is always passive and 100% corresponds to Oracle. All results are averaged over 50 independent trials.
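For concreteness, the Myopic baseline can be sketched as follows, under one plausible reading of "expected reward at the immediate next time step" — the gain in next-step belief from acting versus staying passive (the function name and matrix values are ours, not the paper's):

```python
import numpy as np

def myopic_selection(beliefs, P_act, P_pas, k):
    """Myopic baseline sketch: act on the k arms with the largest expected
    gain in next-step belief (= next-step reward) from acting."""
    b = np.asarray(beliefs, dtype=float)
    nxt_act = b * P_act[1, 1] + (1 - b) * P_act[0, 1]   # belief if acted on
    nxt_pas = b * P_pas[1, 1] + (1 - b) * P_pas[0, 1]   # belief if passive
    gain = nxt_act - nxt_pas
    return np.argsort(-gain)[:k]                        # top-k gains

P_act = np.array([[0.4, 0.6], [0.1, 0.9]])              # illustrative values
P_pas = np.array([[0.8, 0.2], [0.3, 0.7]])
chosen = myopic_selection([0.1, 0.9, 0.5, 0.2], P_act, P_pas, k=2)
```

With shared transition matrices, this gain is affine in the belief, which is why Myopic concentrates on arms at one end of the belief range — the behavior exploited in the synthetic domains below.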

5.1 Real Data: Monitoring Tuberculosis Medication Adherence

We first test on tuberculosis medication adherence monitoring data obtained from Killian et al. (2019), which contains daily adherence information recorded for each real patient in the system. The "good" and "bad" states of an arm (patient) correspond to "Adhering" and "Not Adhering" to medication, respectively. State transition probabilities are estimated from the data. Because this data is noisy and contains only the adherence records and not the intervention (action) information (as the authors state), we perturb the computed average transition matrix by reducing (increasing) the relevant entries by a small amount to obtain the passive (active) transition matrices for the simulation. Reward is measured as the undiscounted sum of patients (arms) in the adherent state over all rounds, where each trial lasts the duration of first-line TB treatment (roughly six months of daily rounds) with N patients and a budget of k calls per day.

In Fig. 4(a), we plot the runtime in seconds vs. the number of patients N. Fig. 4(b) compares the intervention benefit as the number of patients varies. Across these settings, a single trial of Threshold Whittle runs several orders of magnitude faster than Qian et al. while attaining near-identical intervention benefit. Our algorithm is thus orders of magnitude faster than the previous state of the art without sacrificing performance.

We next test Threshold Whittle as the resource level k is varied. Fig. 4(c) shows the performance across several low-resource regimes. Threshold Whittle outperforms Myopic and Random by a large margin in these low-resource settings. We also affirm the robustness of our algorithm to the perturbation parameter used to approximate the real-world active and passive transition matrices from the data, and present the extensive sensitivity analysis in Appendix G. Finally, in Appendix F we couple our algorithm with a Thompson sampling-based learning approach and show it performs well in the realistic case where transition probabilities must be learned online, supporting the deployability of our work.

Figure 4: (a) Threshold Whittle is several orders of magnitude faster than Qian et al. and scales to thousands of patients without sacrificing performance on realistic data (b). (c) Intervention benefit of Threshold Whittle is far larger than naive baselines and nearly as large as Oracle.

5.2 Synthetic Domains

We test our algorithm on four synthetic domains that plausibly characterize other healthcare or related settings and that highlight different phenomena. Specifically, we: (i) identify situations where Myopic fails completely while Whittle remains close to optimal, (ii) analyze the effect of latent-state entropy on policy performance, (iii) identify limitations of Threshold Whittle by constructing processes for which it shows separation from Oracle, and (iv) test the robustness of our algorithm outside of the theoretically guaranteed conditions. To facilitate comparison with the real-data distribution, we simulate trials over the same horizon, where reward is the undiscounted sum of arms in state 1 over all rounds. We consider the space of transition probabilities satisfying the natural constraints outlined in Sec. 3.

Fig. 5a demonstrates a domain characterized by processes that are either self-correcting or non-recoverable. Self-correcting processes have a high probability of transitioning from state 0 to state 1 regardless of the action taken, while non-recoverable processes have a low chance of doing so. We show that when the immediate reward is larger for the former than the latter, Myopic can perform even worse than Random. That is because a myopic policy always prefers to act on the self-correcting processes per their larger immediate reward, while Threshold Whittle, capable of long-term planning, avoids spending resources on these processes. In this regime, the best long-term plan is to always act on the non-recoverable processes to keep them from failing. An analytical explanation of this phenomenon is presented in Appendix E. We use a low resource level in our simulation for Fig. 5a. Note that the performance of Myopic drops as the fraction of self-correcting processes becomes larger, reaching a minimum at an intermediate fraction; beyond this point, Threshold Whittle can no longer completely avoid self-correcting processes and the gap subsequently starts to decrease.

Fig. 5b explores the effect of uncertainty in the latent state on long-term planning. For each point on the x-axis, we draw all transition probabilities near that value. The entropy of the state of a process is maximum near 0.5, making long-term planning most uncertain; as a result, this point shows the biggest gap with Oracle, which can observe all the states in each round. Note that the Myopic and Whittle policies perform similarly, as expected for (nearly) stochastically identical arms.

Fig. 5c studies processes that have a large propensity to transition to state 0 when passive and a correspondingly low active-action impact there, but a significantly larger active-action impact in state 1. This makes it attractive to exclusively act on processes in the good state. This simulates healthcare domains where a fraction of patients degrade rapidly but can recover, and indeed respond very well to interventions if already in a good state. To simulate these, we draw transition matrices with these properties in varying proportions and sample the rest from the real TB adherence data. Because the best plan is to act on processes in state 1, both Myopic and Whittle act on the processes with the largest belief, giving Oracle a significant advantage as it has perfect knowledge of states.

Although we provide theoretical guarantees on our algorithm only for forward threshold optimal processes with non-increasing belief, Fig. 5d reveals that Alg. 1 performs well empirically even with these conditions relaxed. Here, we sample processes uniformly at random from the state transition probability space and use rejection sampling to vary the proportion of threshold optimal processes. Threshold Whittle performs well even when only a small fraction of the processes are forward threshold optimal; we briefly analyze this phenomenon in Appendix H.

Figure 5: (a) Myopic can be trapped into performing even worse than Random while Threshold Whittle remains close to optimal. (b) Long-term planning is least effective when entropy of states is maximum. (c) Myopic and Whittle planning become similar when more processes are prone to failures. (d) Threshold Whittle is surprisingly robust to processes even outside of theoretically guaranteed conditions.

6 Conclusion

We open a new subspace of Restless Bandits, Collapsing Bandits, which applies to a broad range of real-world problems, especially in healthcare delivery. We give new theoretical results that cover a large portion of real-world data as well as an algorithm that runs thousands of times faster than the state of the art without sacrificing performance.

References

  • Abbou and Makis [2019] A. Abbou and V. Makis. Group maintenance: A restless bandits approach. INFORMS Journal on Computing, 31(4):719–731, 2019.
  • Akbarzadeh and Mahajan [2019] N. Akbarzadeh and A. Mahajan. Restless bandits with controlled restarts: Indexability and computation of Whittle index. In IEEE Conference on Decision and Control, 2019.
  • Ansell et al. [2003] P.S. Ansell, K.D. Glazebrook, J. Nino-Mora, and M. O’Keeffe. Whittle’s index policy for a multi-class queueing system with convex holding costs. Mathematical Methods of Operations Research, 57(1):21–39, 2003.
  • Brownstein et al. [2007] J.N. Brownstein, F.M. Chowdhury, S.L. Norris, T. Horsley, L. Jack Jr, X. Zhang, and D. Satterfield. Effectiveness of community health workers in the care of people with hypertension. American journal of preventive medicine, 32(5):435–447, 2007.
  • Chang et al. [2013] A.H. Chang, A. Polesky, and G. Bhatia. House calls by community health workers and public health nurses to improve adherence to isoniazid monotherapy for latent tuberculosis infection: a retrospective study. BMC public health, 13(1):894, 2013.
  • Dutta [1991] P.K. Dutta. What do discounted optima converge to?: A theory of discount rate asymptotics in economic models. Journal of Economic Theory, 55(1):64–94, 1991.
  • Glazebrook et al. [2006] K. D. Glazebrook, D. Ruiz-Hernandez, and C. Kirkbride. Some indexable families of restless bandit problems. Adv. Appl. Probab., 38(3):643–672, 2006.
  • Hsu [2018] Y. Hsu. Age of information: Whittle index for scheduling stochastic arrivals. In IEEE International Symposium on Information Theory, 2018.
  • Iannello et al. [2012] F. Iannello, O. Simeone, and U. Spagnolini. Optimality of myopic scheduling and Whittle indexability for energy harvesting sensors. In 2012 46th Annual Conference on Information Sciences and Systems (CISS), pages 1–6. IEEE, 2012.
  • Kaelbling et al. [1998] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. AIJ, 101(1-2):99–134, 1998.
  • Kenya et al. [2011] S. Kenya, N. Chida, S. Symes, and G. Shor-Posner. Can community health workers improve adherence to highly active antiretroviral therapy in the USA? A review of the literature. HIV medicine, 12(9):525–534, 2011.
  • Kenya et al. [2013] S. Kenya, J. Jones, K. Arheart, E. Kobetz, N. Chida, S. Baer, A. Powell, S. Symes, T. Hunte, A. Monroe, et al. Using community health workers to improve clinical outcomes among people living with HIV: a randomized controlled trial. AIDS and Behavior, 17(9):2927–2934, 2013.
  • Killian et al. [2019] J. A. Killian, B. Wilder, A. Sharma, V. Choudhary, B. Dilkina, and M. Tambe. Learning to prescribe interventions for tuberculosis patients using digital adherence data. In KDD, 2019.
  • Lin et al. [2018] Y. Lin, S. Liu, and S. Huang. Selective sensing of a heterogeneous population of units with dynamic health conditions. IISE Transactions, 50(12):1076–1088, 2018.
  • Liu and Zhao [2010] K. Liu and Q. Zhao. Indexability of restless bandit problems and optimality of Whittle index for dynamic multichannel access. IEEE Transactions on Information Theory, 56(11):5547–5567, 2010.
  • Löwe et al. [2004] B. Löwe, J. Unützer, C.M. Callahan, A.J. Perkins, and K. Kroenke. Monitoring depression treatment outcomes with the patient health questionnaire-9. Medical care, pages 1194–1201, 2004.
  • Meshram et al. [2018] R. Meshram, D. Manjunath, and A. Gopalan. On the Whittle index for restless multiarmed hidden Markov bandits. IEEE Transactions on Automatic Control, 63(9):3046–3053, 2018.
  • Mundorf et al. [2018] C. Mundorf, A. Shankar, T. Moran, S. Heller, A. Hassan, E. Harville, and M. Lichtveld. Reducing the risk of postpartum depression in a low-income community through a community health worker intervention. Maternal and child health journal, 22(4):520–528, 2018.
  • Newman et al. [2018] P.M. Newman, M.F. Franke, J. Arrieta, H. Carrasco, P. Elliott, H. Flores, A. Friedman, S. Graham, L. Martinez, L. Palazuelos, et al. Community health workers improve disease control and medication adherence among patients with diabetes and/or hypertension in Chiapas, Mexico: an observational stepped-wedge study. BMJ global health, 3(1):e000566, 2018.
  • Papadimitriou and Tsitsiklis [1999] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of optimal queuing network control. Math. Oper. Res., 24(2):293–305, 1999.
  • Puterman [2014] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
  • Qian et al. [2016] Y. Qian, C. Zhang, B. Krishnamachari, and M. Tambe. Restless poachers: Handling exploration-exploitation tradeoffs in security domains. In AAMAS, 2016.
  • Rahedi Ong’ang’o et al. [2014] J. Rahedi Ong’ang’o, C. Mwachari, H. Kipruto, and S. Karanja. The effects on tuberculosis treatment adherence from utilising community health workers: a comparison of selected rural and urban settings in Kenya. PLoS One, 9(2):e88937, 2014.
  • Sombabu et al. [2020] B. Sombabu, A. Mate, D. Manjunath, and S. Moharir. Whittle index for AoI-aware scheduling. In 2020 12th International Conference on Communication Systems & Networks (COMSNETS). IEEE, 2020.
  • Sondik [1978] E.J. Sondik. The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs. Operations research, 26(2):282–304, 1978.
  • Villar [2016] S.S. Villar. Indexability and optimal index policies for a class of reinitialising restless bandits. Probability in the engineering and informational sciences, 30(1):1–23, 2016.
  • Weber and Weiss [1990] R. R. Weber and G. Weiss. On an index policy for restless bandits. J. Appl. Probab., 27(3):637–648, 1990.
  • Whittle [1988] P. Whittle. Restless bandits: Activity allocation in a changing world. J. Appl. Probab., 25(A):287–298, 1988.