
Scalable Discrete Sampling as a Multi-Armed Bandit Problem

by Yutian Chen et al.

Drawing a sample from a discrete distribution is one of the basic building blocks of Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from a high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency, as is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution based on subsampling. We make a novel connection between discrete sampling and the Multi-Armed Bandit problem with a finite reward population, and provide three algorithms with theoretical guarantees. Empirical evaluations demonstrate the robustness and efficiency of the approximate algorithms on both synthetic and real-world large-scale problems.
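To make the bandit connection concrete, here is a minimal sketch (not the paper's exact algorithms) of how subsampling plus arm elimination can approximate a discrete sample. It assumes each category's log-weight is a sum of many per-datum terms lying in a known bounded range, combines the Gumbel-max trick with Hoeffding-style racing, and uses illustrative names (`racing_gumbel_sample`, `bounded_range`, `batch`) that do not come from the paper.

```python
import numpy as np

def racing_gumbel_sample(log_lik_terms, bounded_range, batch=32, delta=0.05, rng=None):
    """Approximately sample k with p(k) proportional to exp(sum_n L[k, n]).

    log_lik_terms: array of shape (K, N); per-datum log-likelihood terms.
    bounded_range: assumed width R of the interval containing each term,
                   used for the Hoeffding confidence radius.

    Gumbel-max trick: argmax_k (sum_n L[k, n] + G_k) with i.i.d. standard
    Gumbel noise G_k is an exact sample from p. Here the sums are estimated
    from a growing subsample drawn without replacement, and arms (categories)
    whose upper confidence bound falls below another arm's lower bound are
    eliminated, so most arms are resolved without touching all N data points.
    """
    rng = np.random.default_rng(rng)
    K, N = log_lik_terms.shape
    gumbels = rng.gumbel(size=K)        # one Gumbel perturbation per category
    perm = rng.permutation(N)           # subsample data without replacement
    active = np.arange(K)
    sums = np.zeros(K)
    n_seen = 0
    while len(active) > 1 and n_seen < N:
        take = perm[n_seen:n_seen + batch]
        n_seen += len(take)
        sums[active] += log_lik_terms[np.ix_(active, take)].sum(axis=1)
        means = sums[active] / n_seen
        # Hoeffding radius for the per-datum mean, scaled to the full sum
        rad = bounded_range * np.sqrt(np.log(2 * K / delta) / (2 * n_seen))
        scores = N * means + gumbels[active]
        lo, hi = scores - N * rad, scores + N * rad
        active = active[hi >= lo.max()]  # keep arms that could still win
    if len(active) > 1:                  # data exhausted: scores are exact
        active = active[[np.argmax(sums[active] + gumbels[active])]]
    return int(active[0])
```

When one category dominates, the confidence intervals separate after a few mini-batches and the sampler stops early; in the worst case it degrades gracefully to an exact Gumbel-max sample over the full data.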
