Ranking and Selection as Stochastic Control

10/07/2017
by Yijie Peng, et al.

Under a Bayesian framework, we formulate the fully sequential sampling and selection decision in statistical ranking and selection as a stochastic control problem, and derive the associated Bellman equation. Using value function approximation, we derive an approximately optimal allocation policy. We show that this policy is not only computationally efficient but also possesses both one-step-ahead and asymptotic optimality for independent normal sampling distributions. Moreover, the proposed allocation policy is easily generalizable in the approximate dynamic programming paradigm.
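To make the setting concrete, the sketch below illustrates a fully sequential Bayesian ranking-and-selection loop for independent normal sampling distributions with known variances: conjugate posterior updates, followed by a simple one-step-ahead allocation rule that scores each alternative by how much one more sample would sharpen the separation between the current best and its competitors. The scoring rule here is a hypothetical stand-in for the paper's value function approximation, not the authors' actual policy; all function names and the prior choice are illustrative assumptions.

```python
import numpy as np

def posterior_update(mu, tau2, y, sigma2):
    """Conjugate normal update: prior N(mu, tau2), observation y ~ N(theta, sigma2)."""
    tau2_new = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
    mu_new = tau2_new * (mu / tau2 + y / sigma2)
    return mu_new, tau2_new

def one_step_allocation(mu, tau2, sigma2):
    """Hypothetical one-step-ahead rule: allocate the next sample to the
    alternative whose extra observation most increases the smallest
    normalized mean gap between the current best and the others
    (a simple stand-in for a value-function-approximation criterion)."""
    k = len(mu)
    best = int(np.argmax(mu))
    scores = np.empty(k)
    for i in range(k):
        t2 = tau2.copy()
        # posterior variance of alternative i if it receives one more sample
        t2[i] = 1.0 / (1.0 / tau2[i] + 1.0 / sigma2[i])
        gaps = [(mu[best] - mu[j]) ** 2 / (t2[best] + t2[j])
                for j in range(k) if j != best]
        scores[i] = min(gaps)
    return int(np.argmax(scores))

def sequential_rs(true_means, sigma2, budget, rng):
    """Run the sequential sampling loop and return the selected alternative."""
    k = len(true_means)
    mu = np.zeros(k)
    tau2 = np.full(k, 100.0)  # diffuse prior
    # one warm-up observation per alternative
    for i in range(k):
        y = rng.normal(true_means[i], np.sqrt(sigma2[i]))
        mu[i], tau2[i] = posterior_update(mu[i], tau2[i], y, sigma2[i])
    # fully sequential allocation of the remaining budget
    for _ in range(budget - k):
        i = one_step_allocation(mu, tau2, sigma2)
        y = rng.normal(true_means[i], np.sqrt(sigma2[i]))
        mu[i], tau2[i] = posterior_update(mu[i], tau2[i], y, sigma2[i])
    return int(np.argmax(mu))
```

Because each allocation decision looks only one step ahead, the per-step cost is linear in the number of alternatives, which is what makes policies of this type computationally attractive in the approximate dynamic programming paradigm the abstract describes.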


Related research

- 05/12/2023 · Efficient Dynamic Allocation Policy for Robust Ranking and Selection under Stochastic Control Framework · This research considers the ranking and selection with input uncertainty...
- 11/30/2021 · Asymptotically Optimal Sampling Policy for Selecting Top-m Alternatives · We consider selecting the top-m alternatives from a finite number of alt...
- 10/08/2022 · Order Selection Problems in Hiring Pipelines · Motivated by hiring pipelines, we study two order selection problems in ...
- 02/04/2023 · Getting to "rate-optimal" in ranking and selection · In their 2004 seminal paper, Glynn and Juneja formally and precisely est...
- 07/15/2022 · Selection of the Most Probable Best · We consider an expected-value ranking and selection problem where all k ...
- 12/15/2022 · Rollout Algorithms and Approximate Dynamic Programming for Bayesian Optimization and Sequential Estimation · We provide a unifying approximate dynamic programming framework that app...
- 06/12/2019 · Knowledge Gradient for Selection with Covariates: Consistency and Computation · Knowledge gradient is a design principle for developing Bayesian sequent...
