Satisficing in multi-armed bandit problems

12/23/2015
by Paul Reverdy, et al.

Satisficing is a relaxation of maximizing that allows for less risky decision-making in the face of uncertainty. We propose two sets of satisficing objectives for the multi-armed bandit problem, where the objective is to achieve reward-based decision-making performance above a given threshold. We show that these new problems are equivalent to various standard multi-armed bandit problems with maximizing objectives and use the equivalence to find bounds on performance. The different objectives can result in qualitatively different behavior; for example, agents explore their options continually in one case and only a finite number of times in another. For the case of Gaussian rewards we show an additional equivalence between the two sets of satisficing objectives that allows algorithms developed for one set to be applied to the other. We then develop variants of the Upper Credible Limit (UCL) algorithm that solve the problems with satisficing objectives and show that these modified UCL algorithms achieve efficient satisficing performance.
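To make the satisficing idea concrete, here is a minimal, hypothetical sketch of a UCL-style satisficing rule for Gaussian rewards. It is not the authors' algorithm: the `1/(Kt)` credible-quantile schedule is the one used in the standard UCL algorithm, and the satisficing rule shown (play any arm whose upper credible limit clears the threshold, otherwise fall back to maximizing) is an illustrative assumption. The threshold `M`, the known noise scale `sigma`, and all function names are ours.

```python
import random
from statistics import NormalDist


def satisficing_ucl(arm_means, threshold, horizon, sigma=1.0, seed=0):
    """Illustrative satisficing variant of a UCL-style Gaussian bandit rule.

    At each step t, compute an upper credible limit (UCL) for each arm
    using the 1 - 1/(K*t) credible quantile. If any arm's UCL clears the
    satisficing threshold M, play the first such arm (any satisficing arm
    is acceptable); otherwise fall back to maximizing, i.e. play the arm
    with the largest UCL.
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms     # number of pulls per arm
    means = [0.0] * n_arms    # running empirical mean per arm
    rewards = []

    for t in range(1, horizon + 1):
        # Credible quantile tightens with t, as in the 1/(K*t) UCL schedule.
        alpha = 1.0 / (n_arms * t)
        z = NormalDist().inv_cdf(1.0 - alpha)
        # Unpulled arms get an infinite UCL so each arm is tried once.
        ucl = [
            means[i] + z * sigma / (counts[i] ** 0.5) if counts[i] else float("inf")
            for i in range(n_arms)
        ]
        satisficers = [i for i in range(n_arms) if ucl[i] >= threshold]
        arm = satisficers[0] if satisficers else max(range(n_arms), key=lambda i: ucl[i])

        r = rng.gauss(arm_means[arm], sigma)   # sample a Gaussian reward
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]   # incremental mean update
        rewards.append(r)

    return counts, rewards
```

A satisficing rule like this can settle on the first arm believed good enough rather than insisting on the best arm, which illustrates the abstract's point that satisficing objectives can lead to qualitatively different exploration behavior than maximizing ones.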


