1 Introduction
In this paper we consider the problem of sequential sampling from independent statistical populations with unknown distributions. The objective is to maximize the expected outcome per period over an infinite horizon, under a constraint that the expected sampling cost per period does not exceed an upper bound. The sampling cost adds a new dimension to the standard tradeoff between experimentation and profit maximization faced in problems of control under incomplete information: populations with high mean outcomes may be too costly to sample exclusively. Instead, the decision maker must identify the subset of populations with the best combination of outcome versus cost and allocate the sampling effort among them in an optimal manner.
From the mathematical point of view, this class of problems incorporates statistical methodologies into mathematical programming. Indeed, under complete information, the problem of effort allocation under cost constraints is typically formulated as a linear or nonlinear program. However, when some of the problem parameters are not known in advance but must be estimated by experimentation, the decision maker must design adaptive learning and control policies that learn the parameters while keeping the profit sacrificed to the learning process as low as possible.
The model in this paper falls in the general area of multi-armed bandit problems, initiated by Robbins (1952), who proposed a simple adaptive policy for sequentially sampling from two unknown populations in order to maximize the expected outcome per unit time over an infinite horizon. Lai and Robbins (1985) generalize the results by constructing asymptotically efficient adaptive policies, whose average outcome converges at the optimal rate to the optimal value under complete information, and show that the finite-horizon loss due to incomplete information grows at a logarithmic rate. Katehakis and Robbins (1995)
prove that simpler index-based efficient policies exist in the case of normal distributions with unknown means, while
Burnetas and Katehakis (1996) extend the results on efficient policies to the nonparametric case of discrete distributions with known support. In a finite-horizon setting, Kulkarni and Lugosi (2000) develop a minimax version of the Lai and Robbins (1985) results for two populations, while Auer et al. (2002) construct policies which achieve logarithmic regret uniformly over time, rather than only asymptotically.
In all the works mentioned above there is no side constraint on sampling. Problems of adaptive sampling with side constraints are scarce in the literature. Wang (1991)
considers a multi-armed bandit model with constraints and adopts a Bayesian formulation and the Gittins-index approach. The paper proposes several heuristic policies.
Pezeshk and Gittins (1999) also consider the problem of estimating the distribution of a single population with a sampling cost, under the assumption that the number of users who will eventually benefit depends on the outcome of the estimation. Finally, Madani et al. (2004) present a computational complexity analysis for a version of the multi-armed bandit problem with Bernoulli outcomes and Beta priors, where there is a total budget for experimentation which must be allocated to sampling from the different populations.

Another approach, which is closer to the one we adopt here, is to consider the family of stochastic approximation and reinforcement learning algorithms. The general idea is to select the sampled population following a randomized policy whose randomization probabilities are adaptively modified after observing the outcome in each period; the adaptive scheme is based on a stochastic approximation algorithm. Algorithms of this type are analyzed in
Poznyak et al. (2000) for the more general case where the population outcomes have Markovian dynamics instead of being i.i.d.

The contribution of this paper is the construction of a family of policies for which the average outcome per period converges to the optimal value under complete information, for all distributions of individual populations with finite means. In this sense, it generalizes the results of Robbins (1952) by including a sampling cost constraint. The paper is organized as follows. In Section 2, we describe the model in the complete and incomplete information frameworks. In Section 3, we construct a class of adaptive sampling policies and prove that it is consistent. In Section 4, we explore the rate of convergence of the proposed policies using simulation. Section 5 concludes.
2 Model description
Consider the following problem in adaptive sampling. There are independent statistical populations, . Successive samples from population
constitute a sequence of i.i.d. random variables
following a univariate distribution with density with respect to a nondegenerate measure. Then the stochastic model is uniquely determined by the vector
of individual pdf's. Given f, let be the vector of expected values, i.e. . The form of is not known. In each period the experimenter must select a population from which to obtain a single sample. Sampling from population incurs a cost per sample, and without loss of generality we assume , but not all equal. The objective is to maximize the expected average reward per period subject to the constraint that the expected average sampling cost per period over an infinite horizon does not exceed a given upper bound . Without loss of generality we assume . Indeed, if then the problem is infeasible. On the other hand, if then the cost constraint is redundant. Let . Then and .

2.1 Complete information framework
We first analyze the complete information problem. If all
are known, then the problem can be modeled via linear programming. Consider a randomized sampling policy which at each period selects population
with probability , for . To find a policy that maximizes the expected reward, we can formulate the following linear program in standard form. Note that the linear program depends on only through the vector of expected values, i.e. it is the same for all collections of pdf's with the same mean vector. Therefore in the remainder we will denote it as a function of the unknown mean vector .
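The displayed program was lost in extraction; a plausible reconstruction of (2.1) in standard form, using assumed symbols $p_i$ for the randomization probabilities, $\mu_i$ for the population means, $c_i$ for the sampling costs, $C$ for the budget and $s$ for a slack variable, is

\[
z^*(\mu) \;=\; \max \sum_{i} \mu_i p_i
\quad\text{s.t.}\quad
\sum_{i} c_i p_i + s = C, \qquad \sum_{i} p_i = 1, \qquad p_i \ge 0, \; s \ge 0 .
\]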
In the analysis we will also use the dual linear program (DLP) of (2.1),
with two variables and which correspond to the first and second constraints of (2.1), respectively.
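With the same assumed notation, the dual would read

\[
\min \; C y_1 + y_2
\quad\text{s.t.}\quad
c_i y_1 + y_2 \ge \mu_i \;\; \text{for all } i, \qquad y_1 \ge 0 .
\]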
The basic matrix corresponding to a Basic Feasible Solution (BFS) of problem (2.1) may take one of two forms:
In the first case, the basic variables are , for two populations , with , and the basic matrix is
The BFS is then
with
The solution is nondegenerate when and degenerate when or . In the latter case, it corresponds to sampling from a single population or , respectively:
with
The second case of a BFS corresponds to basic variables for a population with . The basic matrix is
In this case the BFS corresponds to sampling from population only
with
The solution is nondegenerate if , otherwise it is degenerate.
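Under the reconstructed standard form sketched above (notation assumed), the two cases can be written explicitly. With basic variables $p_i, p_j$ the basic matrix and BFS are

\[
B = \begin{pmatrix} c_i & c_j \\ 1 & 1 \end{pmatrix},
\qquad
p_i = \frac{C - c_j}{c_i - c_j}, \quad p_j = \frac{c_i - C}{c_i - c_j},
\]

while with basic variables $p_i, s$ they are

\[
B = \begin{pmatrix} c_i & 1 \\ 1 & 0 \end{pmatrix},
\qquad
p_i = 1, \quad s = C - c_i .
\]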
From the above it follows that a BFS is degenerate if for some with . Any basic matrix that includes as a basic variable corresponds to this BFS.
For a BFS let
Then, either for some with , or for some . There is a one-to-one correspondence between basic feasible solutions and sets of this form. We use to denote the set of BFS, or equivalently
Since the feasible region of (2.1) is bounded, is finite.
For a basic matrix , let denote the dual vector corresponding to , i.e., , where , or , depending on the form of .
Regarding optimality, a BFS is optimal if and only if for at least one corresponding basic matrix the reduced costs (dual slacks) are all nonnegative:
A basic matrix satisfying this condition is optimal. Note that if an optimal BFS is degenerate, then not all basic matrices corresponding to it are necessarily optimal.
It is easy to show that the reduced costs can be expressed as linear combinations , where is an appropriately defined vector that does not depend on .
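For instance, under the assumed notation, the dual vector of a basic matrix $B$ is $y = B^{-\mathsf T}\mu_B$, with $\mu_B$ the objective coefficients of the basic variables, so the reduced cost (dual slack) of a nonbasic variable with column $a_k = (c_k, 1)^{\mathsf T}$ is

\[
\bar{c}_k \;=\; y^{\mathsf T} a_k - \mu_k \;=\; \mu_B^{\mathsf T} B^{-1} a_k - \mu_k \;=\; d_k^{\mathsf T}\mu ,
\]

where $d_k$ depends only on $B$ and the sampling costs, not on $\mu$.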
We finally define the set with optimal solutions of (2.1) for a ,
An optimal solution of (2.1) specifies randomization probabilities that guarantee maximization of the average reward subject to the cost constraint. Note that an alternative way to implement the optimal solution, without randomization, is to sample periodically from all populations so that the proportion of samples from each population is equal to . This implementation of the policy is valid provided the randomization probabilities are rational.
2.2 Incomplete information framework
In this paper we assume that the population distributions are unknown. Specifically we make the following assumption.
Assumption 1
The outcome distributions are independent, and the expected values are finite for all populations.
Let be the set of all which satisfy A.1. Class is the effective parameter set in the incomplete information framework. Under incomplete information, a policy such as that in Section 2.1, which depends on the actual value of , is not admissible. Instead we restrict our attention to the class of adaptive policies, which depend only on the past observations of selections and outcomes.
Specifically, let , denote the population selected and the observed outcome at period . Let be the history of actions and observations available at period t.
An adaptive policy is defined as a sequence
of history dependent probability distributions on
, such that the selection at each period depends only on the observed history. Given the history , let denote the number of times population has been sampled during the first n periods
Let be the reward up to period :
and be the total cost up to period :
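With assumed symbols $a_t$ for the population sampled at period $t$ and $X_t$ for the observed outcome, these three quantities could be written as

\[
T_i(n) = \sum_{t=1}^{n} \mathbf{1}\{a_t = i\},
\qquad
S_n = \sum_{t=1}^{n} X_t,
\qquad
K_n = \sum_{t=1}^{n} c_{a_t} .
\]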
These quantities can be used to define the desirable properties of an adaptive policy, namely feasibility and consistency.
Definition 1
A policy is called feasible if
(2)
Definition 2
A policy is called consistent if it is feasible and
Let and denote the class of feasible and consistent policies, respectively. The above properties are reasonable requirements for an adaptive policy. The first ensures that the long-run average sampling cost does not exceed the budget. The second definition means that the long-run average outcome per period achieved by converges with probability one to the optimal expected value that could be achieved under full information, for all possible population distributions satisfying A.1.
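In symbols (all notation assumed, as above, with $K_n$ the total sampling cost and $S_n$ the total reward up to period $n$), feasibility and consistency would read

\[
\limsup_{n\to\infty} \frac{K_n}{n} \le C \quad \text{a.s.},
\qquad\text{and}\qquad
\lim_{n\to\infty} \frac{S_n}{n} = z^*(\mu) \quad \text{a.s.},
\]

for every $f$ satisfying Assumption 1.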
Note that consistency as defined in Definition 2 is equivalent to the notion of strong consistency of an estimator.
3 Construction of a consistent policy
A key question in the incomplete information framework is whether feasible and, more importantly, consistent policies exist and how they can be constructed.
It is very easy to show that feasible policies exist, since the sampling costs are known. Indeed any randomized policy, such as those defined in Section 2.1, with randomization probabilities satisfying the constraints of LP (2.1) is feasible for any distribution . Thus, .
On the other hand, the construction of consistent policies is not trivial. A consistent policy must accomplish three goals: first, to be feasible; second, to estimate the mean outcomes of all populations; and third, in the long run, to sample the nonoptimal populations rarely enough that they do not affect the average profit.
In this section we establish the existence of a class of consistent policies. The construction follows the main idea of Robbins (1952), based on sparse sequences, which is adapted to ensure feasibility.
We start with some definitions. For any population , let be a strongly consistent estimator of , i.e. a.s. Such estimators exist; for example, under Assumption 1 the sample mean is strongly consistent by the strong law of large numbers.
For any , let be the vector of estimates of based on the history up to period . Also let denote the optimal value of the linear program (2.1) in which the estimates are used in place of the unknown mean vector in the objective. This program will be referred to as the Certainty-Equivalence LP. Note that is the set of its optimal BFS.
The solution of corresponds to a sampling policy determined by an optimal vector , so that .
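A minimal sketch of how the certainty-equivalence LP could be solved numerically; the function name and the use of scipy are our own, as the paper does not prescribe an implementation:

```python
import numpy as np
from scipy.optimize import linprog

def solve_ce_lp(mu_hat, costs, budget):
    """Solve the certainty-equivalence LP:
        maximize    sum_i mu_hat[i] * p[i]
        subject to  sum_i costs[i] * p[i] <= budget,
                    sum_i p[i] = 1,   p >= 0.
    Returns the optimal randomization probabilities p."""
    n = len(mu_hat)
    res = linprog(
        c=-np.asarray(mu_hat),               # linprog minimizes, so negate the objective
        A_ub=[list(costs)], b_ub=[budget],   # expected sampling-cost constraint
        A_eq=[[1.0] * n], b_eq=[1.0],        # probabilities sum to one
        bounds=[(0.0, None)] * n,
        method="highs",
    )
    return res.x
```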
We next define a class of sampling policies, which we will show to be consistent. Consider nonoverlapping sparse sequences of positive integers,
such that
(3)
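The displayed condition (3) was lost in extraction; a plausible reconstruction, writing the forced-selection periods of population $i$ as an increasing sequence $\{t_{i,m}\}_{m \ge 1}$ (notation assumed), is that each sequence is sparse:

\[
\lim_{n\to\infty} \frac{\#\{m : t_{i,m} \le n\}}{n} = 0 \quad \text{for every population } i .
\]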
Now define policy which in period selects any population with probability equal to
where is any optimal BFS of the certainty-equivalence LP .
The main idea in is that at periods which coincide with the terms of sequence , population is selected regardless of the history. These instances are referred to as forced selections of population . The purpose of forced selections is to ensure that all populations are sampled infinitely often, so that the estimate vector converges to the true mean as .
On the other hand, because the sequences are sparse, the fraction of forced-selection periods converges to zero for all , so that sampling from the nonoptimal populations does not affect the average outcome in the long run.
In the remaining periods, which do not coincide with a sparse-sequence term, the sampling policy is that suggested by the certainty-equivalence LP, i.e., the experimenter in general randomizes among those populations which, based on the observed history, appear to be optimal.
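A minimal simulation sketch of the policy just described, under the assumed notation and reusing the hypothetical solve_ce_lp helper sketched earlier; the sampling interface and initialization are illustrative only:

```python
import numpy as np

def run_policy(sample, costs, budget, horizon, forced, rng=None):
    """Simulate the adaptive policy: at forced-selection periods sample the
    designated population; otherwise randomize according to the current
    certainty-equivalence LP.  `sample(i)` draws one outcome from population i
    and `forced[i]` is the sparse set of forced-selection periods of i."""
    rng = rng if rng is not None else np.random.default_rng()
    n_pop = len(costs)
    totals = np.zeros(n_pop)   # running sums of outcomes per population
    counts = np.zeros(n_pop)   # numbers of samples per population
    rewards = []
    for t in range(1, horizon + 1):
        a = next((i for i in range(n_pop) if t in forced[i]), None)
        if a is None:
            mu_hat = totals / np.maximum(counts, 1)   # crude sample means (0 until first sample)
            p = solve_ce_lp(mu_hat, costs, budget)    # hypothetical helper from above
            p = np.clip(p, 0, None); p = p / p.sum()  # guard against numerical noise
            a = rng.choice(n_pop, p=p)                # randomize as the CE LP suggests
        x = sample(a)
        totals[a] += x
        counts[a] += 1
        rewards.append(x)
    return np.cumsum(rewards) / np.arange(1, horizon + 1)   # running average outcome
```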
In the next theorem we prove the main result of the paper, namely that . The proof adapts the main idea of Robbins (1952) to the problem with the cost constraint.
Theorem 1
Policy is consistent.
Before proving Theorem 1, we establish an intermediate result which shows that if in some period the certainty-equivalence LP yields an optimal solution that is non-optimal under the true distribution , then the estimate of at least one population mean must be sufficiently far from the true value. We use the supremum norm .
Lemma 1
For any there exists such that for any if and for some , then .
Proof. Since , we have that for any basic matrix corresponding to BFS there exists at least one such that . Therefore,
(4) |
In addition, since , there exists a basic matrix corresponding to , such that for any it is true that , thus,
(5) |
because from the property it follows that .
Now let
where the minimization over is taken over all basic matrices corresponding to BFS .
Then .
Proof of Theorem 1.
For let
denote the number of periods in where a forced selection from population is performed.
Also let,
Since these include all possibilities of selection in a period, it is true that
Now let denote the sum of outcomes in periods where true optimal BFS are used:
To show the theorem we will prove that
(6)
(7)
(8)
First, (6) holds since the sequences are sparse for all . To show (7), note that in periods with no forced selection, in order to sample from a BFS it is necessary (but not sufficient) that , thus
For any and , it follows from Lemma 1 that
Therefore, for
thus,
because , a.s., since is a strongly consistent estimator; thus (7) holds.
Now to show (8) we rewrite as
From this expression it follows that
where .
Since , we have
To show (8) we will prove that
Random variable is increasing in and , thus either or for some . We define the following events:
Now let and . Also let
Then .
Now,
since in this case is bounded for any finite .
Therefore, , thus
Finally,
Thus the proof of the theorem is complete.
4 Rate of Convergence - Simulations
From the results of the previous section it follows that there is significant flexibility in the construction of a consistent sampling policy. Indeed, any collection of sparse sequences of forced-selection periods satisfying (3) guarantees that Theorem 1 holds.
In this section we refine the notion of consistency and examine how the rate of convergence of the average outcome to the optimal value is affected by different types of sparse sequences. Furthermore, since the sensitivity analysis will be performed using simulation, it is more appropriate to use the expected value of the deviation as the convergence criterion. We thus consider the expected difference of the average outcome under a consistent policy from the optimal value:
Note that the almost sure convergence of to proved in Theorem 1 does not imply convergence in expectation, unless further technical assumptions on the unknown distributions are made. For the purpose of our simulation study, we will further assume that the outcomes of any population are absolutely bounded with probability one, i.e., , for some . Under this assumption it is easy to show that Theorem 1 implies
(9)
for any consistent policy and any vector .
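Under the assumed notation, (9) would read

\[
\lim_{n\to\infty} \mathbb{E}_f^{\pi}\!\left[\frac{S_n}{n}\right] \;=\; z^*(\mu) .
\]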
To explore the rate of convergence in (9), we performed a simulation study for a problem with populations. The outcomes of population
follow binomial distribution with parameters
, where . The vector of expected values is thus . The cost vector is and . Under this set of values the optimal policy under complete information is , and , i.e., it is optimal to randomize between populations 2 and 3; the expected sampling cost per period is equal to 5 and the expected average reward per period is equal to 3.

For the above problem we simulated the performance of a consistent policy for sparse sequences of power-function form:
where are appropriately defined constants which ensure that the sequences are not overlapping, and the exponent parameter is common for all populations. We compared the convergence rate in (9) for five values of : (1.2, 1.5, 2, 3, 5). For each value of the corresponding policy was simulated for 1000 scenarios of length periods each, to obtain an estimate of the expected average outcome per period . The results of the simulations are presented in Figure 1.
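The exact power-function form and its constants were lost in extraction; the sketch below shows one concrete choice of non-overlapping, sparse power-form sequences, offered only as an assumption consistent with the description above:

```python
import numpy as np

def power_sequences(n_pop, beta, horizon):
    """One assumed power-form construction of forced-selection periods:
    population i is forced at periods t = floor((n_pop*m + i)**beta), m = 1, 2, ...
    For beta > 1 these sequences are non-overlapping and sparse."""
    seqs = []
    for i in range(n_pop):
        terms, m = set(), 1
        while True:
            t = int(np.floor((n_pop * m + i) ** beta))
            if t > horizon:
                break
            terms.add(t)
            m += 1
        seqs.append(terms)
    return seqs

# Example: the exponents compared in the simulation study.
# for beta in (1.2, 1.5, 2, 3, 5):
#     forced = power_sequences(n_pop=4, beta=beta, horizon=10_000)  # n_pop here is hypothetical
```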
[Figure 1]
We observe in Figure 1 that the convergence is slower for both small and large values of and faster for intermediate values. Especially for the difference is relatively large even after 10000 periods. This is explained as follows. For small values of the forced selections are more frequent. Although this has the desirable effect that the mean estimates for all populations become accurate early on, it also means that non-optimal populations are sampled frequently because of the forced selections. As a result the average outcome may deviate from the true optimal value for a longer time. On the other hand, for large values of the sequences all become very sparse and thus the forced selections are rare. In this case it takes longer for the estimates to converge, and the certainty-equivalence linear programs may produce non-optimal solutions for long intervals.
It follows from the above discussion that intermediate values of are generally preferable, since they offer a better balance between the two effects: fast estimation of all mean values and avoidance of non-optimal populations. This is also evident in the graph, where the value seems to be the best in terms of speed of convergence.
To address the question of accuracy of the comparison of convergence rates based on simulation, Figure 2 presents a 95% confidence region for the average outcome curve corresponding to
, based on 1000 simulated scenarios. The confidence region is generally very narrow (note that the vertical axes have different scales in the two figures), thus the estimate of the expected average outcome is quite accurate. This is also the case for the other curves, therefore the comparison of convergence rates is valid. Furthermore, the length of the confidence interval becomes smaller for larger time periods since, as expected, the convergence to the true value is better for longer scenario durations.
[Figure 2]
Another issue arising from Figure 1 is the following. For the average outcome converges very slowly to , but remains above it for the entire scenario duration. Thus it could be argued that, although the convergence is poor, this policy is actually preferable, because it yields higher average outcomes than the other policies. This also seems to contradict the fact that is the maximum average outcome under complete information, since a sampling policy appears to perform better even under incomplete information.
The reason for this discrepancy is related to the form of the cost constraint (2). The constraint requires the infinite-horizon expected cost per period not to exceed . This does not preclude the possibility that one or more populations with large sampling costs and large expected outcomes could be used for arbitrarily long intervals before switching to a constrained-optimal policy for the remaining infinite horizon. Such policies might achieve average rewards higher than for long intervals; however, this is achieved by “borrowing”, i.e., violating the cost constraint, for long time periods as well. Since (2) is only required to hold in the limit, this behavior of a policy is allowed.
Although the consistent policies of Section 3 are not designed specifically to take advantage of this observation, neither are they designed to avoid it. Therefore it is possible, as happens here for , that a consistent policy achieves higher-than-optimal average outcomes for long time periods before it converges to .
The above discussion shows that the constraint, as expressed in (2), may not be appropriate if, for example, the sampling cost is a tangible amount that must be paid each time an observation is taken and there is a per-period budget for sampling. In this situation a policy may exceed the budget for long time periods and still be feasible, something that may not be viable in practice. In such cases it would be more realistic to impose a stricter average cost constraint, for example to require that (2) hold for all and not only in the limit.
5 Conclusion and Extensions
In this paper we developed a family of consistent adaptive policies for sequentially sampling from independent populations with unknown distributions under an asymptotic average cost constraint. The main idea in the development of this class of policies is to employ a sparse sequence of forced selection periods for each population, to ensure consistent estimation of all unknown means and in the remaining time periods employ the solution obtained from a linear programming problem that uses the estimates instead of the true values. We also performed a simulation study to compare the convergence rate for different policies in this class.
This work can be extended in several directions. First, as it was shown in Section 4, the asymptotic form of the cost constraint is in some sense weak, since it allows the average sampling cost to exceed the upper bound for arbitrarily long time periods and still be satisfied in the limit. A more appropriate, albeit more complex, model would be to require the cost constraint to be satisfied at all time points. The construction of consistent and, more importantly, efficient policies under this stricter version of the constraint is work currently in progress.
Another extension is towards the direction of Markov process control. Instead of assuming distinct independent populations with i.i.d. observations, one might consider an average reward Markovian Decision Process with unknown transition law and/or reward distributions, and one or more nonasymptotic side constraints on the average cost. In this case the problem is to construct consistent and, more importantly, efficient control policies, extending the results of Burnetas and Katehakis (1997) in the constrained case.
Acknowledgement
This research was supported by the Greek Secretariat of Research and Technology under a Greece/Turkey bilateral research collaboration program. The authors thank Nickos Papadatos and George Afendras for useful discussions on the problem of consistent estimation in a random sequence of random variables.
References
- Auer et al. (2002) P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
- Burnetas and Katehakis (1996) A. N. Burnetas and M. N. Katehakis. Optimal adaptive policies for sequential allocation problems. Adv. App. Math., 17:122–142, 1996.
- Burnetas and Katehakis (1997) A. N. Burnetas and M. N. Katehakis. Optimal adaptive policies for markovian decision processes. Math. Oper. Res., 22(1):222–255, 1997.
- Katehakis and Robbins (1995) M. N. Katehakis and H. Robbins. Sequential choice from several populations. Proc. Natl. Acad. Sci. USA, 92:8584–8585, 1995.
- Kulkarni and Lugosi (2000) S. R. Kulkarni and G. Lugosi. Finite-time lower bounds for the two-armed bandit problem. IEEE Transactions on Automatic Control, 45:711–714, 2000.
- Lai and Robbins (1985) T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Adv. App. Math., 6:4–22, 1985.
- Madani et al. (2004) O. Madani, D. Lizotte, and R. Greiner. The budgeted multi-armed bandit problem. Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), 3120:643–645, 2004.
- Pezeshk and Gittins (1999) H. Pezeshk and J. Gittins. Sample size determination in clinical trials. Student, 3(1):19–26, 1999.
- Poznyak et al. (2000) A. Poznyak, K. Najim, and E. Gomez. Self-Learning Control of Finite Markov Chains. CRC Press, New York, 2000.
- Robbins (1952) H. Robbins. Some aspects of the sequential design of experiments. Bull. Amer. Math. Soc., 58:527–536, 1952.
- Wang (1991) Y. G. Wang. Gittins indices and constrained allocation in clinical trials. Biometrika, 78:101–111, 1991.