Action Centered Contextual Bandits

11/09/2017 · by Kristjan Greenewald, et al. · University of Michigan, Harvard University

Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.


1 Introduction

In the theory of sequential decision-making, contextual bandit problems (Tewari & Murphy, 2017) occupy a middle ground between multi-armed bandit problems (Bubeck & Cesa-Bianchi, 2012) and full-blown reinforcement learning (usually modeled using Markov decision processes along with discounted or average reward optimality criteria (Sutton & Barto, 1998; Puterman, 2005)). Unlike bandit algorithms, which cannot use any side-information or context, contextual bandit algorithms can learn to map the context into appropriate actions. However, contextual bandits do not consider the impact of actions on the evolution of future contexts. Nevertheless, in many practical domains where the impact of the learner’s action on future contexts is limited, contextual bandit algorithms have shown great promise. Examples include web advertising (Abe & Nakamura, 1999) and news article selection on web portals (Li et al., 2010).

An influential thread within the contextual bandit literature models the expected reward for any action in a given context using a linear mapping from a $d$-dimensional context vector to a real-valued reward. Algorithms using this assumption include LinUCB and Thompson Sampling, for both of which regret bounds have been derived. These analyses often allow the context sequence to be chosen adversarially, but require the linear model, which links rewards to contexts, to be time-invariant. There has been little effort to extend these algorithms and analyses when the data follow an unknown nonlinear or time-varying model.

In this paper, we consider a particular type of non-stationarity and non-linearity that is motivated by problems arising in mobile health (mHealth). Mobile health is a fast developing field that uses mobile and wearable devices for health care delivery. These devices provide us with a real-time stream of dynamically evolving contextual information about the user (location, calendar, weather, physical activity, internet activity, etc.). Contextual bandit algorithms can learn to map this contextual information to a set of available intervention options (e.g., whether or not to send a medication reminder). However, human behavior is hard to model using stationary, linear models. We make a fundamental assumption in this paper that is quite plausible in the mHealth setting. In these settings, there is almost always a “do nothing” action, usually called action 0. The expected reward for this action is the baseline reward and it can change in a very non-stationary, non-linear fashion. However, the treatment effect of a non-zero action, i.e., the incremental change over the baseline reward due to the action, can often be plausibly modeled using standard stationary, linear models.

We show, both theoretically and empirically, that the performance of an appropriately designed action-centered contextual bandit algorithm is agnostic to the high model complexity of the baseline reward. Instead, we get the same level of performance as expected in a stationary, linear model setting. Note that it might be tempting to make the entire model non-linear and non-stationary. However, the sample complexity of learning very general non-stationary, non-linear models is likely to be so high that they will not be useful in mHealth where data is often noisy, missing, or collected only over a few hundred decision points.

We connect our algorithm design and theoretical analysis to the real world of mHealth by using data from a pilot study of HeartSteps, an Android-based walking intervention. HeartSteps encourages walking by sending individuals contextually-tailored suggestions to be active. Such suggestions can be sent up to five times a day–in the morning, at lunchtime, mid-afternoon, at the end of the workday, and in the evening–and each suggestion is tailored to the user’s current context: location, time of day, day of the week, and weather. HeartSteps contains two types of suggestions: suggestions to go for a walk, and suggestions to simply move around in order to disrupt prolonged sitting. While the initial pilot study of HeartSteps micro-randomized the delivery of activity suggestions (Klasnja et al., 2015; Liao et al., 2015), delivery of activity suggestions is an excellent candidate for the use of contextual bandits, as the effect of delivering (vs. not) a suggestion at any given time is likely to be strongly influenced by the user’s current context, including location, time of day, and weather.

This paper’s main contributions can be summarized as follows. We introduce a variant of the standard linear contextual bandit model that allows the baseline reward model to be quite complex while keeping the treatment effect model simple. We then introduce the idea of using action centering in contextual bandits as a way to decouple the estimation of the above two parts of the model. We show that action centering is effective in dealing with time-varying and non-linear behavior in our model, leading to regret bounds that scale as nicely as previous bounds for linear contextual bandits. Finally, we use data gathered in the recently conducted HeartSteps study to validate our model and theory.

1.1 Related Work

Contextual bandits have been the focus of considerable interest in recent years. Chu et al. (2011) and Agrawal & Goyal (2013) have examined UCB and Thompson sampling methods, respectively, for linear contextual bandits. Works such as Seldin et al. (2011) and Dudik et al. (2011) considered contextual bandits with fixed policy classes. Methods for reducing the regret under complex reward functions include the nonparametric approach of May et al. (2012), the “contextual zooming” approach of Slivkins (2014), the kernel-based method of Valko et al. (2013), and the sparse method of Bastani & Bayati (2015). Each of these approaches has regret that scales with the complexity of the overall reward model including the baseline, and requires the reward function to remain constant over time.

2 Model and Problem Setting

Consider a contextual bandit with a baseline (zero) action and $N$ non-baseline arms (actions or treatments). At each time $t = 1, 2, \ldots$, a context vector $\bar s_t \in \mathbb{R}^{d'}$ is observed, an action $a_t \in \{0, 1, \ldots, N\}$ is chosen, and a reward $r_t(\bar s_t, a_t)$ is observed. The bandit learns a mapping from a state vector $s_{t,a_t}$ depending on $\bar s_t$ and $a_t$ to the expected reward $\mathbb{E}[r_t(\bar s_t, a_t) \mid s_{t,a_t}]$. The state vector $s_{t,a_t} \in \mathbb{R}^d$ is a function of $a_t$ and $\bar s_t$. This form is used to achieve maximum generality, as it allows for infinitely many possible actions so long as the reward can be modeled using a $d$-dimensional $s_{t,a_t}$. In the most unstructured case with $N$ actions, we can simply encode the reward with a $d = N d'$ dimensional $s_{t,a_t} = [I(a_t = 1)\,\bar s_t^T, \ldots, I(a_t = N)\,\bar s_t^T]^T$, where $I(\cdot)$ is the indicator function.
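To make the unstructured encoding concrete, the following minimal sketch (added here for illustration; it is not code from the paper, and the function name and shapes are assumptions) builds $s_{t,a}$ by placing the raw context in the block corresponding to the chosen arm and zeros elsewhere.

```python
import numpy as np

def action_state(context, action, n_actions):
    """Build the unstructured state s_{t,a} by block-stacking (illustrative sketch).

    context:   length-d' raw context vector (bar s_t)
    action:    integer arm in {1, ..., n_actions}
    Returns a length n_actions * d' vector with `context` in the block of `action`
    and zeros elsewhere, i.e. [I(a=1) bar s_t^T, ..., I(a=N) bar s_t^T]^T.
    """
    d_prime = len(context)
    s = np.zeros(n_actions * d_prime)
    block = action - 1                            # arms are 1-indexed, blocks 0-indexed
    s[block * d_prime:(block + 1) * d_prime] = context
    return s

ctx = np.array([0.5, -1.2, 2.0])
print(action_state(ctx, 1, 2))   # [ 0.5 -1.2  2.   0.   0.   0. ]
print(action_state(ctx, 2, 2))   # [ 0.   0.   0.   0.5 -1.2  2. ]
```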

For maximum generality, we assume the context vectors are chosen by an adversary on the basis of the history of arms played, states, and rewards received up to time $t$, i.e., $\mathcal{H}_{t-1} = \{a_\tau,\ \bar s_\tau,\ s_{\tau,a_\tau},\ r_\tau(\bar s_\tau, a_\tau),\ \tau = 1, \ldots, t-1\}$.

Consider the model where the expected reward can be decomposed into a fixed component dependent on action and a time-varying component that does not depend on action:

$$\mathbb{E}\big[r_t(\bar s_t, a_t) \mid s_{t,a_t}, \bar s_t\big] = I(a_t > 0)\, f(s_{t,a_t}) + g_t(\bar s_t),$$

where $I(a_t > 0)\, f(s_{t,a_t}) = 0$ when $a_t = 0$ due to the indicator function $I(\cdot)$. Note that the optimal action depends in no way on the baseline $g_t$, which merely confounds the observation of regret. We hypothesize that the regret bounds for such a contextual bandit asymptotically depend only on the complexity of $f$, not of $g_t$. We emphasize that we do not require any assumptions about or bounds on the complexity or smoothness of $g_t$, allowing $g_t$ to be arbitrarily nonlinear and to change abruptly in time. These conditions create a partially agnostic setting where we have a simple model for the interaction but the baseline cannot be modeled with a simple linear function. In what follows, for simplicity of notation we drop $\bar s_t$ from the argument for $r_t$, writing $r_t(a_t)$ with the dependence on $\bar s_t$ understood.

In this paper, we consider the linear model for the reward difference at time $t$:

$$r_t(a_t) - r_t(0) = I(a_t > 0)\, s_{t,a_t}^T \theta + n_t, \qquad (1)$$

where $n_t$ is zero-mean sub-Gaussian noise with variance $\sigma^2$ and $\theta \in \mathbb{R}^d$ is a vector of coefficients. The goal of the contextual bandit is to estimate $\theta$ at every time $t$ and use the estimate to decide which actions to take under a series of observed contexts. As is common in the literature, we assume that both the baseline and interaction rewards are bounded by a constant for all $t$.

The task of the action-centered contextual bandit is to choose the probabilities $\pi(a, t)$ of playing each arm $a$ at time $t$ so as to maximize the expected differential reward

$$\mathbb{E}\big[r_t(a_t) - r_t(0) \mid \bar s_t\big] = \sum_{a=1}^{N} \pi(a, t)\, s_{t,a}^T \theta. \qquad (2)$$

This task is closely related to obtaining a good estimate of the reward function coefficients $\theta$.

2.1 Probability-constrained optimal policy

In the mHealth setting, a contextual bandit must choose at each time point whether to deliver to the user a behavior-change intervention, and if so, what type of intervention to deliver. Whether or not an intervention, such as an activity suggestion or a medication reminder, is sent is a critical aspect of the user experience. If a bandit sends too few interventions to a user, it risks the user’s disengaging with the system, and if it sends too many, it risks the user’s becoming overwhelmed or desensitized to the system’s prompts. Furthermore, standard contextual bandits will eventually converge to a policy that maps most states to a near-100% chance of sending or not sending an intervention. Such regularity would not only worsen the user’s experience but also ignore the fact that users have changing routines and cannot be perfectly modeled. We are thus motivated to introduce a constraint on the size of the probabilities of delivering an intervention. We constrain $\pi_{\min} \le P(a_t \neq 0 \mid \bar s_t) \le \pi_{\max}$, where $P(a_t \neq 0 \mid \bar s_t)$ is the conditional bandit-chosen probability of delivering an intervention at time $t$. The constants $\pi_{\min}$ and $\pi_{\max}$ are not learned by the algorithm, but chosen using domain science, and might vary for different components of the same mHealth system. We constrain $P(a_t \neq 0 \mid \bar s_t)$, not each $P(a_t = a \mid \bar s_t)$, as which intervention is delivered is less critical to the user experience than being prompted with an intervention in the first place. User habituation can be mitigated by implementing the nonzero actions ($a > 0$) to correspond to several types or categories of messages, with the exact message sent being randomized from a set of differently worded messages.

Conceptually, we can view the bandit as pulling two arms at each time $t$: the probability of sending a message (constrained to lie in $[\pi_{\min}, \pi_{\max}]$) and which message to send if one is sent. While these probability constraints are motivated by domain science, they also enable our proposed action-centering algorithm to effectively orthogonalize the baseline and interaction term rewards, achieving sublinear regret in complex scenarios that often occur in mobile health and other applications and for which existing approaches have large regret.

Under this probability constraint, we can now derive the optimal policy with which to compare the bandit. The policy that maximizes the expected reward (2) will play the optimal action

$$a_t^* = \arg\max_{a \in \{0, 1, \ldots, N\}} I(a > 0)\, s_{t,a}^T \theta$$

with the highest allowed probability. The remainder of the probability is assigned as follows. If the optimal action is nonzero, the optimal policy will then play the zero action with the remaining probability (which is the minimum allowed probability of playing the zero action). If the optimal action is zero, the optimal policy will play the nonzero action with the highest expected reward,

$$\underline{a}_t = \arg\max_{a \in \{1, \ldots, N\}} s_{t,a}^T \theta,$$

with the remaining probability, i.e. $\pi_{\min}$. To summarize, under the constraint $\pi_{\min} \le \sum_{a=1}^{N} \pi(a, t) \le \pi_{\max}$, the expected-reward-maximizing policy plays arm $a$ with probability $\pi^*(a, t)$, where

$$\pi^*(a, t) = \begin{cases} \pi_{\max} & a = a_t^*, \ a_t^* \neq 0, \\ 1 - \pi_{\max} & a = 0, \ a_t^* \neq 0, \\ \pi_{\min} & a = \underline{a}_t, \ a_t^* = 0, \\ 1 - \pi_{\min} & a = 0, \ a_t^* = 0, \\ 0 & \text{otherwise.} \end{cases} \qquad (3)$$
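As an illustration of the constrained optimal policy just described, the sketch below (an illustration added here, not the authors' code; all names are placeholders) returns the nonzero arm toward which the optimal policy randomizes and the probability it receives, with the zero action absorbing the remaining probability.

```python
import numpy as np

def constrained_optimal_policy(effects, pi_min, pi_max):
    """Probability-constrained optimal policy of Section 2.1 (illustrative sketch).

    effects: expected differential rewards s_{t,a}^T theta for arms a = 1..N
             (the zero action has differential reward 0 by definition).
    Returns (arm, prob): the nonzero arm receiving probability mass and the
    probability with which it is played; the zero action gets 1 - prob.
    """
    best_nonzero = int(np.argmax(effects)) + 1        # best arm among a = 1..N
    if effects[best_nonzero - 1] > 0:
        # Optimal action is nonzero: play it with the largest allowed probability.
        return best_nonzero, pi_max
    # Optimal action is zero: the best nonzero arm still gets the minimum
    # allowed intervention probability.
    return best_nonzero, pi_min

arm, prob = constrained_optimal_policy(np.array([-0.3, 0.8]), pi_min=0.2, pi_max=0.8)
print(arm, prob)  # 2 0.8
```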

3 Action-centered contextual bandit

Since the observed reward always contains the sum of the baseline reward and the differential reward we are estimating, and the baseline reward is arbitrarily complex, the main challenge is to isolate the differential reward at each time step. We do this via an action-centering trick, which randomizes the action at each time step, allowing us to construct an estimator whose expectation is proportional to the differential reward $r_t(\bar a_t) - r_t(0)$, where $\bar a_t$ is the nonzero action chosen by the bandit at time $t$ to be randomized against the zero action. For simplicity of notation, we write $\pi_t = P(a_t = \bar a_t \mid \bar s_t, \mathcal{H}_{t-1})$ for the probability that the bandit takes the nonzero action at time $t$.

3.1 Centering the actions - an unbiased estimate

To determine a policy, the bandit must learn the coefficients $\theta$ of the model for the differential reward $r_t(\bar a_t) - r_t(0)$ as a function of $s_{t,\bar a_t}$. If the bandit had access at each time $t$ to the differential reward, we could estimate $\theta$ using a penalized least-squares approach by minimizing

$$\sum_{t} \big(r_t(\bar a_t) - r_t(0) - s_{t,\bar a_t}^T \theta\big)^2 + \|\theta\|_2^2$$

over $\theta$, where $r_t(a)$ is the reward under action $a$ at time $t$ (Agrawal & Goyal, 2013). This corresponds to the Bayesian estimator when the reward is Gaussian. Although we only have access to $r_t(a_t)$, not the differential reward, observe that given $\bar a_t$, the bandit randomizes to $a_t = \bar a_t$ with probability $\pi_t$ and to $a_t = 0$ otherwise. Thus

$$\mathbb{E}\big[(I(a_t > 0) - \pi_t)\, r_t(a_t) \,\big|\, \bar a_t, \bar s_t\big] = \pi_t (1 - \pi_t)\, \mathbb{E}\big[r_t(\bar a_t) - r_t(0) \,\big|\, \bar s_t\big]. \qquad (4)$$

Thus $(I(a_t > 0) - \pi_t)\, r_t(a_t)$, which only uses the observed $r_t(a_t)$, is proportional to an unbiased estimator of $\mathbb{E}[r_t(\bar a_t) - r_t(0)]$. Recalling that $\bar a_t$ and $\pi_t$ are both known since they are chosen by the bandit at time $t$, we create the estimate of the differential reward between $\bar a_t$ and action 0 at time $t$ as

$$\hat r_t(\bar a_t) = (I(a_t > 0) - \pi_t)\, r_t(a_t).$$

The corresponding penalized weighted least-squares estimator for $\theta$ using $\hat r_t(\bar a_t)$ is the minimizer of

$$\sum_{t} w_t \big(\hat r_t(\bar a_t) - \pi_t(1 - \pi_t)\, s_{t,\bar a_t}^T \theta\big)^2 + \|\theta\|_2^2, \qquad (5)$$

where for simplicity of presentation we have used unit penalization $\|\theta\|_2^2$. The weighted least-squares weights are $w_t = [\pi_t(1 - \pi_t)]^{-1}$, since $\mathbb{E}[\hat r_t(\bar a_t) \mid \bar a_t, \bar s_t] = \pi_t(1 - \pi_t)\, s_{t,\bar a_t}^T \theta$ and the standard deviation of $\hat r_t(\bar a_t)$ given $\bar a_t, \bar s_t$ is of order $\sqrt{\pi_t(1 - \pi_t)}$. The minimizer of (5) is

$$\hat\theta = B^{-1} \sum_{t} s_{t,\bar a_t}\, \hat r_t(\bar a_t), \qquad B = I + \sum_{t} \pi_t (1 - \pi_t)\, s_{t,\bar a_t} s_{t,\bar a_t}^T.$$
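The sketch below shows one way the centered pseudo-reward and the penalized weighted least-squares update could be implemented; it follows the estimator form reconstructed above, so treat the exact weighting as an assumption rather than a quotation of the paper, and all names are illustrative.

```python
import numpy as np

def update_estimate(B, f, s_bar, pi_t, a_t, reward):
    """One-step update of the action-centered least-squares statistics (sketch).

    B:      d x d matrix, initialized to the identity (acts as the ridge penalty)
    f:      length-d vector, initialized to zeros
    s_bar:  state s_{t, bar a_t} of the candidate nonzero action
    pi_t:   probability with which bar a_t was played
    a_t:    action actually played (0 or bar a_t)
    reward: observed reward r_t(a_t)
    """
    r_hat = (float(a_t > 0) - pi_t) * reward          # centered pseudo-reward
    B = B + pi_t * (1.0 - pi_t) * np.outer(s_bar, s_bar)
    f = f + r_hat * s_bar
    theta_hat = np.linalg.solve(B, f)                 # minimizer of the penalized WLS
    return B, f, theta_hat

# Hypothetical usage with d = 4 features
d = 4
B, f = np.eye(d), np.zeros(d)
B, f, theta_hat = update_estimate(B, f, np.random.randn(d), pi_t=0.6, a_t=1, reward=2.3)
```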

3.2 Action-Centered Thompson Sampling

Because the Thompson sampling approach generates probabilities of taking an action, rather than directly selecting an action, it is particularly suited to our regression approach. We follow the basic framework of the contextual Thompson sampling approach presented by Agrawal & Goyal (2013), extending and modifying it to incorporate our action-centered estimator and probability constraints.

The critical step in Thompson sampling is randomizing the model coefficients according to the prior $\mathcal{N}(\hat\theta, v^2 B^{-1})$ for $\theta$ at time $t$. A sample $\tilde\theta \sim \mathcal{N}(\hat\theta, v^2 B^{-1})$ is generated, and the action chosen to maximize $s_{t,a}^T \tilde\theta$. The probability that this procedure selects any given action is determined by the distribution of $\tilde\theta$; however, it may select action 0 with a probability not in the required range $[1 - \pi_{\max},\ 1 - \pi_{\min}]$. We thus introduce a two-step hierarchical procedure. After generating the random $\tilde\theta$, we instead choose the candidate nonzero action $\bar a_t$ maximizing the expected reward, $\bar a_t = \arg\max_{1 \le a \le N} s_{t,a}^T \tilde\theta$.

1:  Set $B = I$, $\hat\theta = 0$, $f = 0$; choose $\pi_{\min}$, $\pi_{\max}$, and prior scale $v$.
2:  for $t = 1, 2, \ldots, T$ do
3:     Observe current context $\bar s_t$ and form $s_{t,a}$ for each $a \in \{1, \ldots, N\}$.
4:     Randomly generate $\tilde\theta \sim \mathcal{N}(\hat\theta, v^2 B^{-1})$.
5:     Let $\bar a_t = \arg\max_{1 \le a \le N} s_{t,a}^T \tilde\theta$.
6:     Compute probability $\pi_t$ of taking a nonzero action according to (6).
7:     Play action $\bar a_t$ with probability $\pi_t$, else play $a_t = 0$.
8:     Observe reward $r_t(a_t)$ and update $B \leftarrow B + \pi_t(1 - \pi_t)\, s_{t,\bar a_t} s_{t,\bar a_t}^T$, $f \leftarrow f + s_{t,\bar a_t}\,(I(a_t > 0) - \pi_t)\, r_t(a_t)$, $\hat\theta \leftarrow B^{-1} f$.
9:  end for
Algorithm 1 Action-Centered Thompson Sampling

Then we randomly determine whether to take the nonzero action, choosing $a_t = \bar a_t$ with probability

$$\pi_t = \max\!\big(\pi_{\min},\ \min(\pi_{\max},\ \tilde\pi_t)\big), \qquad (6)$$

and $a_t = 0$ otherwise, where $\tilde\pi_t = P\big(s_{t,\bar a_t}^T \tilde\theta' > 0\big)$. $\tilde\pi_t$ is the probability that the expected relative reward of action $\bar a_t$ is higher than zero for $\tilde\theta' \sim \mathcal{N}(\hat\theta, v^2 B^{-1})$. This probability is easily computed using the normal CDF. Finally, the bandit updates $B$ and $f$, and computes an updated $\hat\theta = B^{-1} f$. Our action-centered Thompson sampling algorithm is summarized in Algorithm 1.
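To show how the two-step procedure and the clipped probability (6) fit together, here is a sketch of a single decision step; the Gaussian draw, the prior scale $v$, and the clipping follow the description above but should be read as an assumed illustration, not the authors' implementation.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ac_thompson_step(B, theta_hat, states, pi_min, pi_max, v=1.0,
                     rng=np.random.default_rng(0)):
    """One decision step of action-centered Thompson sampling (illustrative sketch).

    states: array of shape (N, d); row a-1 holds s_{t,a} for nonzero arms a = 1..N.
    Returns (a_t, bar_a, pi_t): the action played, the candidate nonzero arm, and
    the probability with which that arm was played.
    """
    B_inv = np.linalg.inv(B)
    theta_tilde = rng.multivariate_normal(theta_hat, v ** 2 * B_inv)
    bar_a = int(np.argmax(states @ theta_tilde)) + 1       # candidate nonzero arm
    s = states[bar_a - 1]
    # Probability that a fresh draw theta' ~ N(theta_hat, v^2 B^{-1}) satisfies
    # s^T theta' > 0, clipped to [pi_min, pi_max] as in (6).
    mean = float(s @ theta_hat)
    std = v * sqrt(float(s @ B_inv @ s)) + 1e-12
    pi_t = min(max(normal_cdf(mean / std), pi_min), pi_max)
    a_t = bar_a if rng.random() < pi_t else 0
    return a_t, bar_a, pi_t
```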

4 Regret analysis

Classically, the regret of a bandit is defined as the difference between the reward achieved by taking the optimal action $a_t^*$ and the expected reward received by playing the arm $a_t$ chosen by the bandit,

$$\text{regret}(t) = \mathbb{E}\big[r_t(a_t^*) - r_t(a_t) \,\big|\, \bar s_t, \mathcal{H}_{t-1}\big], \qquad (7)$$

where the expectation is taken conditionally on the context and history. For simplicity, let $\pi_t^*$ be the probability that the optimal policy takes a nonzero action, and recall that $\pi_t$ is the probability the bandit takes a nonzero action. The probability constraint implies that the optimal policy (3) plays the optimal arm with a probability bounded away from 0 and 1, hence definition (7) is no longer meaningful. We can instead create a regret that is the difference in expected rewards conditioned on $\bar s_t$, $\bar a_t$, and $\pi_t$, but not on the randomized action $a_t$:

$$\text{regret}(t) = \pi_t^*\, s_{t,\bar a_t^*}^T \theta - \pi_t\, s_{t,\bar a_t}^T \theta, \qquad (8)$$

where $\bar a_t^* = \arg\max_{1 \le a \le N} s_{t,a}^T \theta$ is the nonzero arm toward which the optimal policy (3) randomizes, and where we have recalled that given $\bar a_t$ and $\pi_t$, the bandit plays action $\bar a_t$ with probability $\pi_t$ and plays the zero action, with differential reward 0, otherwise. The action-centered contextual bandit attempts to minimize the cumulative regret $R(T) = \sum_{t=1}^{T} \text{regret}(t)$ over horizon $T$.
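As a small numerical illustration of definition (8) (hypothetical numbers, not taken from the paper):

```python
# Per-step regret under the probability constraint, definition (8):
# regret(t) = pi*_t * s_{t, bar a*_t}^T theta  -  pi_t * s_{t, bar a_t}^T theta
pi_star, optimal_effect = 0.8, 0.7   # optimal policy: its probability and its arm's effect
pi_t, bandit_effect = 0.5, 0.4       # bandit: chosen probability and its arm's effect
regret_t = pi_star * optimal_effect - pi_t * bandit_effect
print(round(regret_t, 3))            # 0.36
```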

4.1 Regret bound for Action-Centered Thompson Sampling

In the following theorem we show that with high probability, the probability-constrained Thompson sampler has low regret relative to the optimal probability-constrained policy.

Theorem 1.

Consider the action-centered contextual bandit problem, where the baseline reward $g_t$ is potentially time varying, and where at each time $t$ the context $\bar s_t$ given the history $\mathcal{H}_{t-1}$ is chosen by an adversary. Under this regime, the total regret at time $T$ for the action-centered Thompson sampling contextual bandit (Algorithm 1) satisfies

$$R(T) = O\!\left(\frac{d^2}{\epsilon}\sqrt{T^{1+\epsilon}}\Big(\ln(Td)\,\ln\tfrac{1}{\delta}\Big)\right)$$

with probability at least $1 - \delta$, for any $0 < \epsilon < 1$, $0 < \delta < 1$. The constant in the bound is given in the proof.

Observe that this regret bound does not depend on the number of actions $N$, is sublinear in $T$, and scales only with the complexity $d$ of the interaction term, not the complexity of the baseline reward $g_t$. Furthermore, $\epsilon$ can be chosen to give a regret of order $d^2 \sqrt{T}$ up to logarithmic factors.
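As a sanity check on this choice (a calculation added here, not taken from the paper), substituting $\epsilon = 1/\ln T$ into a bound of the form $\frac{d^2}{\epsilon}\sqrt{T^{1+\epsilon}}$ gives

$$\sqrt{T^{1+\epsilon}}\Big|_{\epsilon = 1/\ln T} = \sqrt{T}\cdot T^{\frac{1}{2\ln T}} = \sqrt{e}\,\sqrt{T}, \qquad \frac{1}{\epsilon} = \ln T,$$

so the regret is $O\big(d^2 \sqrt{T}\,\ln T\big)$ up to factors logarithmic in $d$ and $1/\delta$.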

This bound is of the same order as that for the baseline Thompson sampling contextual bandit in the adversarial setting when the baseline is identically zero (Agrawal & Goyal, 2013). When the baseline can be modeled with $m$ features where $m \gg d$, our method achieves $\tilde O(d^2 \sqrt{T})$ regret whereas the standard Thompson sampling approach incurs $\tilde O((d + m)^2 \sqrt{T})$ regret. Furthermore, when the baseline reward is time-varying, the worst-case regret of the standard Thompson sampling approach is linear in $T$, while the regret of our method remains $\tilde O(d^2 \sqrt{T})$.

4.2 Proof of Theorem 1 - Decomposition of the regret

We will first bound the regret (8) at time $t$:

$$\text{regret}(t) = \pi_t^*\, s_{t,\bar a_t^*}^T \theta - \pi_t\, s_{t,\bar a_t}^T \theta = (\pi_t^* - \pi_t)\, s_{t,\bar a_t^*}^T \theta + \pi_t\big(s_{t,\bar a_t^*}^T \theta - s_{t,\bar a_t}^T \theta\big) \qquad (9)$$
$$\le (\pi_t^* - \pi_t)\, s_{t,\bar a_t^*}^T \theta + \pi_{\max}\big(s_{t,\bar a_t^*}^T \theta - s_{t,\bar a_t}^T \theta\big), \qquad (10)$$

where the inequality holds since $\pi_t \le \pi_{\max}$ and $s_{t,\bar a_t^*}^T \theta - s_{t,\bar a_t}^T \theta \ge 0$ by definition. Then

$$R(T) = \sum_{t=1}^{T} \text{regret}(t) \le \underbrace{\sum_{t=1}^{T} (\pi_t^* - \pi_t)\, s_{t,\bar a_t^*}^T \theta}_{I} \;+\; \underbrace{\pi_{\max} \sum_{t=1}^{T} \big(s_{t,\bar a_t^*}^T \theta - s_{t,\bar a_t}^T \theta\big)}_{II}.$$

Observe that we have decomposed the regret into a term $I$ that depends on the choice of the randomization between the zero and nonzero action, and a term $II$ that depends only on the choice of the potential nonzero action prior to the randomization. We bound $I$ using concentration inequalities, and bound $II$ using arguments paralleling those for standard Thompson sampling.

Lemma 1.

Suppose that the conditions of Theorem 1 apply. Then with probability at least , for some constant given in the proof.

Lemma 2.

Suppose that the conditions of Theorem 1 apply. Then term can be bounded as

where the inequality holds with probability at least .

The proofs are contained in Sections D and E of the supplement, respectively. In the derivation, the “pseudo-actions” $\bar a_t$ that Algorithm 1 chooses prior to the baseline-nonzero randomization correspond to the actions in the standard contextual bandit setting. Note that term $I$ involves only the randomization probabilities $\pi_t$, not the choice of $\bar a_t$, hence it is not surprising that its bound is smaller than that for term $II$. Combining Lemmas 1 and 2 via the union bound gives Theorem 1.

5 Results

5.1 Simulated data

We first conduct experiments with simulated data, using $N = 2$ possible nonzero actions. In each experiment, we choose a true reward generative model inspired by data from the HeartSteps study (for details see Section A.1 in the supplement), and generate two length-$T$ sequences of state vectors $\bar s_t$ and $s_{t,a}$, where the $\bar s_t$ are i.i.d. Gaussian and $s_{t,a}$ is formed by stacking columns $I(a = a')\,\bar s_t$ for $a' = 1, \ldots, N$. We consider both nonlinear and nonstationary baselines, while keeping the treatment effect models the same. The bandit under evaluation iterates through the $T$ time points, at each choosing an action and receiving a reward generated according to the chosen model. We set the probability constraints $\pi_{\min}$ and $\pi_{\max}$ to fixed values.

At each time step, the reward under the optimal policy is calculated and compared to the reward received by the bandit to form the regret $\text{regret}(t)$. We can then plot the cumulative regret $\sum_{\tau=1}^{t} \text{regret}(\tau)$.

Figure 1: Nonlinear baseline reward $g_t$, in a scenario with 2 nonzero actions and a reward function based on collected HeartSteps data. (a) Median cumulative regret; (b) median with 1st and 3rd quartiles (dashed). Cumulative regret is shown for the proposed action-centered approach, compared to a baseline contextual bandit; median computed over 100 random trials.

Figure 2: Nonstationary baseline reward $g_t$, in a scenario with 2 nonzero actions and a reward function based on collected HeartSteps data. (a) Median cumulative regret; (b) median with 1st and 3rd quartiles (dashed). Cumulative regret is shown for the proposed action-centered approach, compared to a baseline contextual bandit; median computed over 100 random trials.

In the first experiment, the baseline reward is nonlinear. Specifically, we generate rewards whose baseline component is a nonlinear, peaked function of the first context feature, with the treatment effect given by the linear model (1), where $\theta$ is the fixed coefficient vector listed in supplement Section A.1. This simulates the quite likely scenario that, for a given individual, the baseline reward is higher for small absolute deviations from the mean of the first context feature, i.e., rewards are higher when the feature at the decision point is “near average”, with reward decreasing for abnormally high or low values. We run the benchmark Thompson sampling algorithm (Agrawal & Goyal, 2013) and our proposed action-centered Thompson sampling algorithm, computing the cumulative regrets and taking the median over 500 random trials. The results are shown in Figure 1, demonstrating linear growth of the benchmark Thompson sampling algorithm and significantly lower, sublinear regret for our proposed method.

We then consider a scenario with the baseline reward function changing in time. We generate rewards as a time-varying linear baseline plus the same linear treatment effect, where $\theta$ is the fixed vector as above and the baseline coefficients are generated as smoothly varying Gaussian processes (supplement Section A.1). The cumulative regret is shown in Figure 2, again demonstrating linear regret for the baseline approach and significantly lower, sublinear regret for our proposed action-centering algorithm, as expected.
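For concreteness, the sketch below shows how rewards with a nonlinear or smoothly time-varying baseline might be generated; the exponential-of-absolute-deviation baseline and the random-walk drift are illustrative stand-ins for the exact models in supplement Section A.1, and all coefficients are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_prime, n_arms = 1000, 4, 2
theta = rng.normal(scale=0.2, size=n_arms * d_prime)       # placeholder treatment effects

def differential_reward(ctx, arm):
    s = np.zeros(n_arms * d_prime)
    s[(arm - 1) * d_prime: arm * d_prime] = ctx
    return float(s @ theta)

def reward_nonlinear(ctx, arm, noise=0.1):
    baseline = 2.0 * np.exp(-abs(ctx[0]))                   # peaked at "near average" values
    treat = differential_reward(ctx, arm) if arm > 0 else 0.0
    return baseline + treat + rng.normal(scale=noise)

drift = np.cumsum(rng.normal(scale=0.05, size=(T, d_prime)), axis=0)  # smooth drift over time

def reward_nonstationary(t, ctx, arm, noise=0.1):
    baseline = float(ctx @ drift[t])                        # time-varying linear baseline
    treat = differential_reward(ctx, arm) if arm > 0 else 0.0
    return baseline + treat + rng.normal(scale=noise)

ctx = rng.standard_normal(d_prime)
print(reward_nonlinear(ctx, arm=1), reward_nonstationary(10, ctx, arm=0))
```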

5.2 HeartSteps study data

The HeartSteps study collected the sensor and weather-based features shown in Figure 3 at five decision points per day for each study participant. If the participant was available at a decision point, a message was sent with constant probability 0.6. The sent message could be one of several activity or anti-sedentary messages chosen by the system. The reward for that message was defined to be a log transformation of $x_t$, where $x_t$ is the step count of the participant in the 30 minutes following the suggestion. As noted in the introduction, the baseline reward, i.e., the step count of a subject when no message is sent, not only depends on the state in a complex way but is also likely dependent on a large number of unobserved variables. Because of these unobserved variables, the mapping from the observed state to the reward is believed to be strongly time-varying. Both these characteristics (complex, time-varying baseline reward function) suggest the use of the action-centering approach.

We run our contextual bandit on the HeartSteps data, considering the binary action of whether or not to send a message at a given decision point based on the features listed in Figure 3 in the supplement. Each user is considered independently, for maximum personalization and independence of results. As above, we constrain the probability of sending a message to lie in $[\pi_{\min}, \pi_{\max}]$.

We perform offline evaluation of the bandit using the method of Li et al. (2011), which uses the sequence of states, actions, and rewards in the data to form a near-unbiased estimate of the average expected reward achieved by each algorithm, averaging over all users. We used a total of 33,797 time points to create the reward estimates. The resulting estimates of the improvement in average reward over the baseline randomization, averaged over 100 random seeds of the bandit algorithm, are shown in Figure 4 of the supplement, with the proposed action-centering approach achieving the highest reward. Since the reward is logarithmic in the number of steps, the results imply that the benchmark Thompson sampling approach achieves an average 1.6% increase in step counts over the non-adaptive baseline, while our proposed method achieves an increase of 3.9%.
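A replay-style offline evaluator in the spirit of Li et al. (2011) can be sketched as follows (this is not the authors' evaluation code; the inverse-propensity weighting is included because the HeartSteps logging probabilities are not uniform, and the logged tuples shown are fabricated placeholders).

```python
import numpy as np

def replay_evaluate(logged, policy, rng=np.random.default_rng(0)):
    """Estimate the average reward of `policy` from logged bandit data (sketch).

    logged: iterable of (context, logged_action, reward, logging_prob) tuples,
            where logging_prob is the probability with which logged_action was taken.
    policy: function context -> probability of sending an intervention (action 1).
    """
    total, weight = 0.0, 0.0
    for ctx, a_log, r, p_log in logged:
        a_new = int(rng.random() < policy(ctx))   # sample the candidate policy's action
        if a_new == a_log:                        # keep the logged reward only on a match
            w = 1.0 / p_log                       # inverse propensity of the logged action
            total += w * r
            weight += w
    return total / max(weight, 1e-12)

# Fabricated placeholder logs: (context, action, reward, logging probability)
logs = [(np.zeros(3), 1, 2.1, 0.6), (np.zeros(3), 0, 1.7, 0.4)]
print(replay_evaluate(logs, policy=lambda ctx: 0.5))
```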

6 Conclusion

Motivated by emerging challenges in adaptive decision making in mobile health, in this paper we proposed the action-centered Thompson sampling contextual bandit, exploiting the randomness of the Thompson sampler and an action-centering approach to orthogonalize out the baseline reward. We proved that our approach enjoys low regret bounds that scale only with the complexity of the interaction term, allowing the baseline reward to be arbitrarily complex and time-varying.

Acknowledgments

This work was supported in part by grants R01 AA023187, P50 DA039838, U54EB020404, R01 HL125440 NHLBI/NIA, NSF CAREER IIS-1452099, and a Sloan Research Fellowship.

References

  • Abbasi-Yadkori et al. (2011) Abbasi-Yadkori, Yasin, Pál, Dávid, and Szepesvári, Csaba. Improved algorithms for linear stochastic bandits. In Advances in Neural Information Processing Systems, pp. 2312–2320, 2011.
  • Abe & Nakamura (1999) Abe, Naoki and Nakamura, Atsuyoshi. Learning to optimally schedule internet banner advertisements. In Proceedings of the Sixteenth International Conference on Machine Learning, pp. 12–21. Morgan Kaufmann Publishers Inc., 1999.
  • Agrawal & Goyal (2013) Agrawal, Shipra and Goyal, Navin. Thompson sampling for contextual bandits with linear payoffs. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 127–135, 2013.
  • Auer et al. (2002) Auer, Peter, Cesa-Bianchi, Nicolo, Freund, Yoav, and Schapire, Robert E. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
  • Bastani & Bayati (2015) Bastani, Hamsa and Bayati, Mohsen. Online decision-making with high-dimensional covariates. Available at SSRN 2661896, 2015.
  • Bubeck & Cesa-Bianchi (2012) Bubeck, Sébastien and Cesa-Bianchi, Nicolo. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
  • Chu et al. (2011) Chu, Wei, Li, Lihong, Reyzin, Lev, and Schapire, Robert E. Contextual bandits with linear payoff functions. In International Conference on Artificial Intelligence and Statistics, pp. 208–214, 2011.
  • Dudik et al. (2011) Dudik, Miroslav, Hsu, Daniel, Kale, Satyen, Karampatziakis, Nikos, Langford, John, Reyzin, Lev, and Zhang, Tong. Efficient optimal learning for contextual bandits. In Proceedings of the Twenty-Seventh Conference Annual Conference on Uncertainty in Artificial Intelligence, pp. 169–178. AUAI Press, 2011.
  • Klasnja et al. (2015) Klasnja, Predrag, Hekler, Eric B., Shiffman, Saul, Boruvka, Audrey, Almirall, Daniel, Tewari, Ambuj, and Murphy, Susan A. Microrandomized trials: An experimental design for developing just-in-time adaptive interventions. Health Psychology, 34(Suppl):1220–1228, Dec 2015.
  • Li et al. (2010) Li, Lihong, Chu, Wei, Langford, John, and Schapire, Robert E. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pp. 661–670. ACM, 2010.
  • Li et al. (2011) Li, Lihong, Chu, Wei, Langford, John, and Wang, Xuanhui. Unbiased offline evaluation of contextual-bandit-based news article recommendation algorithms. In Proceedings of the fourth ACM international conference on Web search and data mining, pp. 297–306. ACM, 2011.
  • Liao et al. (2015) Liao, Peng, Klasnja, Predrag, Tewari, Ambuj, and Murphy, Susan A. Sample size calculations for micro-randomized trials in mhealth. Statistics in medicine, 2015.
  • May et al. (2012) May, Benedict C., Korda, Nathan, Lee, Anthony, and Leslie, David S. Optimistic Bayesian sampling in contextual-bandit problems. The Journal of Machine Learning Research, 13(1):2069–2106, 2012.
  • Puterman (2005) Puterman, Martin L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2005.
  • Seldin et al. (2011) Seldin, Yevgeny, Auer, Peter, Shawe-Taylor, John S., Ortner, Ronald, and Laviolette, François. PAC-Bayesian analysis of contextual bandits. In Advances in Neural Information Processing Systems, pp. 1683–1691, 2011.
  • Slivkins (2014) Slivkins, Aleksandrs. Contextual bandits with similarity information. The Journal of Machine Learning Research, 15(1):2533–2568, 2014.
  • Sutton & Barto (1998) Sutton, Richard S and Barto, Andrew G. Reinforcement learning: An introduction. MIT Press, 1998.
  • Tewari & Murphy (2017) Tewari, Ambuj and Murphy, Susan A. From ads to interventions: Contextual bandits in mobile health. In Rehg, Jim, Murphy, Susan A., and Kumar, Santosh (eds.), Mobile Health: Sensors, Analytic Methods, and Applications. Springer, 2017.
  • Valko et al. (2013) Valko, Michal, Korda, Nathan, Munos, Rémi, Flaounas, Ilias, and Cristianini, Nello. Finite-time analysis of kernelised contextual bandits. In Uncertainty in Artificial Intelligence, pp. 654, 2013.

Appendix A HeartSteps feature list

Figure 3 shows the features available to the bandit in the HeartSteps study dataset, and Figure 4 shows the estimated average reward results with error bars.

Feature | Description | Purpose | Interaction model | Baseline model
Number of messages sent | Total number of messages sent to user in prior week | Modeling habituation to intervention | Y | Y
Location indicator 1 | 1 if not at home or work, 0 otherwise | Location relevant to availability to walk | Y | Y
Location indicator 2 | 1 if at work, 0 otherwise | | Y | Y
Step count variability | Historical standard deviation of step counts in the 60-minute window surrounding the decision point, taken over the prior 7 days | Responsiveness in different times of day | Y | Y
Steps in prior 30 minutes | Step count in the 30 minutes prior to the decision point | Measure of recent activity | | Y
Square root of steps yesterday | Square root of the total step count yesterday | Recent commitment/engagement | | Y
Outdoor temperature | Degrees Celsius | Cold weather potentially less appealing | | Y
Figure 3: List of features available to the bandit in the HeartSteps experiment. The features available to model the action interaction (effect of sending an anti-sedentary message) and to model the baseline (reward under no action) are denoted via a “Y” in the corresponding column.
Figure 4: Unbiased estimates of the average reward received by the benchmark Thompson sampling contextual bandit and the proposed action-centered Thompson sampling contextual bandit, relative to the reward received under the pre-specified HeartSteps randomization policy. Also shown are one standard deviation error bars for the computed estimates. The superior performance of the action-centering approach is indicative of its robustness to the high complexity of the baseline subject behavior.

A.1 Simulation model

Figure 5 shows the coefficients used in the main text simulations. The coefficients shown in the figure associated with the first action are obtained via a linear regression analysis of the binary action (sending or not sending a message) HeartSteps intervention data, and the coefficients for the second action are a simple modification of those.

Feature | Action 1 coef. | Action 2 coef.
Number of messages sent | .116 | .116
Location indicator 1 | -.275 | .275
Location indicator 2 | -.233 | -.233
Step count variability | .0425 | .0425
Figure 5: Effect coefficients, based on HeartSteps data, used for the simulation reward model.

For the time-varying simulation, Gaussian processes were used to generate the reward coefficient sequence and the state sequence. We used Gaussian processes since, if the coefficient sequence were IID, the baseline reward would become an IID random variable and hence not be time-varying in a meaningful sense.

We used the Gaussian process

where , , and . The state sequence was generated in the same manner.
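Since only part of the Gaussian-process specification appears above, the sketch below uses a squared-exponential kernel as an assumed stand-in; the kernel form, length scale, and variance are placeholders rather than the paper's values.

```python
import numpy as np

def smooth_gp_sequence(T, dim, length_scale=50.0, variance=1.0, seed=0):
    """Draw `dim` independent GP sample paths of length T with an RBF kernel (sketch)."""
    rng = np.random.default_rng(seed)
    t = np.arange(T)
    K = variance * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / length_scale ** 2)
    K += 1e-8 * np.eye(T)                     # jitter for numerical stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((T, dim))  # shape (T, dim), smooth in t

coeffs = smooth_gp_sequence(T=500, dim=4)
print(coeffs.shape)  # (500, 4)
```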

Appendix B Definitions

In order to proceed with the proof of Theorem 1, we make the following definitions.

Definition 1.

Define the filtration $\mathcal{F}_{t-1}$ as the union of the history $\mathcal{H}_{t-1}$ and the current context $\bar s_t$.

Definition 2.

Let

for all .

Definition 3.

Define , , and .

We divide the arms into saturated and unsaturated actions.

Definition 4 (Saturated vs. unsaturated actions).

Any arm for which is called a saturated arm. If an arm is not saturated, it is called unsaturated. Let be the subset of saturated arms at time .

Observe that the optimal arm is unsaturated by definition.

We can now state the required concentration events and present bounds on the probability they occur.

B.1 Concentration events

Definition 5.

Let be the event that for all

Similarly, let be the event that for all

and be the corresponding event that for all

We can bound the probabilities of the events , , and in the following lemmas. Observe that by definition .

Lemma 3 (Agrawal & Goyal (2013)).

For all , and possible filtrations , .

For we have

Lemma 4.

For all , , .

The proof is given in Section G.

B.2 Supermartingales

Definition 6 (Supermartingale).

A sequence of random variables $(Y_t;\ t \ge 0)$ is called a supermartingale corresponding to a filtration $\mathcal{F}_t$ if, for all $t$, $Y_t$ is $\mathcal{F}_t$-measurable, and

$$\mathbb{E}[Y_t - Y_{t-1} \mid \mathcal{F}_{t-1}] \le 0$$

for all $t \ge 1$.

Lemma 5 (Azuma-Hoeffding inequality).

If, for all $t \ge 1$, a supermartingale $(Y_t;\ t \ge 0)$ corresponding to filtration $\mathcal{F}_t$ satisfies $|Y_t - Y_{t-1}| \le c_t$ for some constants $c_t$, then for any $a \ge 0$,

$$P(Y_T - Y_0 \ge a) \le \exp\!\left(-\frac{a^2}{2\sum_{t=1}^{T} c_t^2}\right).$$

Appendix C Preliminary results

C.1 Lemma 7: Probability of choosing a saturated action

Lemma 6 (Agrawal & Goyal (2013) Lemma 2).

For any filtration such that is true,

We can now prove the following.

Lemma 7.

For any filtration such that is true,

where .

Proof.

Recall that is the action with the largest value of . Hence, if is larger than for all , then is one of the unsaturated actions. Hence

(11)

We know that by definition all saturated arms have . Given an such that holds, we have that either is false or for all

implying

where we have used the definitions of , , and the last inequality follows from Lemma 6 and Lemma 4. Substituting into (11) gives

and

C.2 Lemma 9 - Bound on

Lemma 8.

For , we have that

where is a constant.

Proof.

We apply the following lemma from Auer et al. (2002) and Chu et al. (2011).

Lemma 9.

Let , where is a sequence of vectors. Then, defining , we have

To apply this to , let . Then