
Thompson Sampling for Contextual Bandit Problems with Auxiliary Safety Constraints

11/02/2019
by   Samuel (Sam) Daulton, et al.

Recent advances in contextual bandit optimization and reinforcement learning have spurred interest in applying these methods to real-world sequential decision-making problems. Real-world applications frequently impose constraints with respect to a currently deployed policy. Most existing constraint-aware algorithms consider problems with a single objective (the reward) and a constraint on the reward with respect to a baseline policy. However, many important applications involve multiple competing objectives and auxiliary constraints. In this paper, we propose a novel Thompson sampling algorithm for multi-outcome contextual bandit problems with auxiliary constraints. We empirically evaluate our algorithm on a synthetic problem. Lastly, we apply our method to a real-world video transcoding problem and provide a practical way to navigate the trade-off between safety and performance using Bayesian optimization.
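The abstract describes Thompson sampling with auxiliary safety constraints only at a high level. As a rough illustration of the general idea — not the paper's algorithm — the sketch below runs Beta-Bernoulli Thompson sampling over a set of arms with two binary outcomes (reward and safety): it samples both outcomes from their posteriors, filters out arms whose sampled safety probability falls below a threshold, and falls back to a baseline arm when no arm is feasible. The non-contextual simplification, the threshold rule, and all names here are assumptions for illustration.

```python
import numpy as np

def constrained_thompson_step(rng, reward_post, safety_post,
                              safety_threshold, fallback_arm):
    """One round of Thompson sampling with an auxiliary safety constraint.

    reward_post, safety_post: arrays of shape (n_arms, 2) holding
    Beta(alpha, beta) posterior parameters for each arm's reward and
    safety outcomes. Arms whose sampled safety probability falls below
    safety_threshold are filtered out; if no arm is feasible, the
    baseline (fallback) arm is played instead.
    """
    reward_samples = rng.beta(reward_post[:, 0], reward_post[:, 1])
    safety_samples = rng.beta(safety_post[:, 0], safety_post[:, 1])
    feasible = safety_samples >= safety_threshold
    if not feasible.any():
        return fallback_arm
    # Maximize sampled reward over the feasible arms only.
    masked = np.where(feasible, reward_samples, -np.inf)
    return int(np.argmax(masked))

def update(post, arm, outcome):
    """Conjugate Beta-Bernoulli update for one observed binary outcome."""
    post[arm, 0] += outcome        # alpha counts successes
    post[arm, 1] += 1 - outcome    # beta counts failures
```

With strongly concentrated posteriors the behavior is easy to see: an arm with high expected reward but low sampled safety is excluded, and the safer arm is chosen even though its reward sample is smaller.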

Related Research

IPO: Interior-point Policy Optimization under Constraints (10/21/2019)
In this paper, we study reinforcement learning (RL) algorithms to solve ...

Bandit Data-driven Optimization: AI for Social Good and Beyond (08/26/2020)
The use of machine learning (ML) systems in real-world applications enta...

Lexicographic Multi-Objective Reinforcement Learning (12/28/2022)
In this work we introduce reinforcement learning techniques for solving ...

Best Arm Identification with Safety Constraints (11/23/2021)
The best arm identification problem in the multi-armed bandit setting is...

Joint AP Probing and Scheduling: A Contextual Bandit Approach (08/06/2021)
We consider a set of APs with unknown data rates that cooperatively serv...

Conformal Off-Policy Prediction in Contextual Bandits (06/09/2022)
Most off-policy evaluation methods for contextual bandits have focused o...

Off-policy Bandits with Deficient Support (06/16/2020)
Learning effective contextual-bandit policies from past actions of a dep...