
Thompson Sampling for Contextual Bandit Problems with Auxiliary Safety Constraints

by Samuel (Sam) Daulton, et al.

Recent advances in contextual bandit optimization and reinforcement learning have garnered interest in applying these methods to real-world sequential decision-making problems. Real-world applications frequently impose constraints with respect to a currently deployed policy. Most existing constraint-aware algorithms consider problems with a single objective (the reward) and a constraint on that reward relative to a baseline policy. However, many important applications involve multiple competing objectives and auxiliary constraints. In this paper, we propose a novel Thompson sampling algorithm for multi-outcome contextual bandit problems with auxiliary constraints. We empirically evaluate our algorithm on a synthetic problem. Lastly, we apply our method to a real-world video transcoding problem and provide a practical way to navigate the trade-off between safety and performance using Bayesian optimization.
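The core idea of constrained Thompson sampling can be illustrated with a minimal sketch: maintain a posterior over each arm's reward and safety outcomes, sample from both posteriors at each round, and choose the highest-sampled-reward arm among those whose sampled safety outcome satisfies the constraint, falling back to a baseline (deployed) arm when no arm looks feasible. This is a generic illustration, not the paper's exact algorithm; the linear-Gaussian models, the threshold constraint, and the names `SAFETY_THRESHOLD` and `BASELINE_ARM` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_arms = 3, 4
# True (unknown) linear models for the reward and an auxiliary safety outcome.
theta_reward = rng.normal(size=(n_arms, d))
theta_safety = rng.normal(size=(n_arms, d))

class Posterior:
    """Bayesian linear regression posterior for one (arm, outcome) pair."""
    def __init__(self, d, noise=0.5):
        self.A = np.eye(d)       # prior precision (identity prior)
        self.b = np.zeros(d)
        self.noise = noise
    def sample(self):
        cov = np.linalg.inv(self.A)
        mean = cov @ self.b
        return rng.multivariate_normal(mean, self.noise ** 2 * cov)
    def update(self, x, y):
        self.A += np.outer(x, x) / self.noise ** 2
        self.b += x * y / self.noise ** 2

reward_post = [Posterior(d) for _ in range(n_arms)]
safety_post = [Posterior(d) for _ in range(n_arms)]

SAFETY_THRESHOLD = 0.0  # assumed constraint: sampled safety outcome >= 0
BASELINE_ARM = 0        # assumed currently deployed arm, used as a fallback

def choose_arm(x):
    """One round of constrained Thompson sampling for context x."""
    sampled_r = [p.sample() @ x for p in reward_post]
    sampled_s = [p.sample() @ x for p in safety_post]
    feasible = [a for a in range(n_arms) if sampled_s[a] >= SAFETY_THRESHOLD]
    if not feasible:
        return BASELINE_ARM
    return max(feasible, key=lambda a: sampled_r[a])

# Simulated interaction loop: observe context, act, observe both outcomes.
for t in range(200):
    x = rng.normal(size=d)
    a = choose_arm(x)
    r = theta_reward[a] @ x + rng.normal(scale=0.5)
    s = theta_safety[a] @ x + rng.normal(scale=0.5)
    reward_post[a].update(x, r)
    safety_post[a].update(x, s)
```

Sampling the safety outcome (rather than using its posterior mean) keeps the exploration behavior of Thompson sampling on the constraint as well as the reward; a more conservative variant could instead require a posterior quantile of the safety outcome to clear the threshold.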

