Online and Scalable Model Selection with Multi-Armed Bandits

01/25/2021
by Jiayi Xie, et al.

Many online applications running on live traffic are powered by machine learning models whose training, validation, and hyper-parameter tuning are conducted on historical data. However, it is common for models that demonstrate strong performance in offline analysis to perform worse when deployed online. This problem is a consequence of the difficulty of training on historical data in non-stationary environments. Moreover, the machine learning metrics used for model selection may not correlate well with the real-world business metrics that determine the success of the applications being tested. These problems are particularly prominent in the Real-Time Bidding (RTB) domain, where ML models power bidding strategies and a change in models is likely to affect the performance of the advertising campaigns they serve. In this work, we present Automatic Model Selector (AMS), a system for scalable online selection of RTB bidding strategies based on real-world performance metrics. AMS employs Multi-Armed Bandits (MAB) to near-simultaneously run and evaluate multiple models against live traffic, allocating the most traffic to the best-performing models while decreasing traffic to those with poorer online performance, thereby minimizing the impact of inferior models on overall campaign performance. The reliance on offline data is avoided; model selections are instead made on a case-by-case basis according to actionable business goals. AMS allows new models to be safely introduced into live campaigns as soon as they are developed, minimizing the risk to overall performance. In live-traffic tests on multiple ad campaigns, the AMS system proved highly effective at improving ad campaign performance.
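The abstract does not specify which bandit policy AMS uses, but the traffic-allocation idea it describes can be sketched with a standard Beta-Bernoulli Thompson sampler: each candidate model is an arm, and a binary business outcome (e.g. a conversion) is the reward. All names, rates, and the reward definition below are illustrative assumptions, not details from the paper.

```python
import random


class ThompsonModelSelector:
    """Beta-Bernoulli Thompson sampling over candidate bidding models.

    Each candidate model is an arm; a "success" stands in for whatever
    binary business metric is observed on live traffic.
    """

    def __init__(self, model_names, seed=None):
        self.rng = random.Random(seed)
        # One Beta(successes, failures) posterior per model, starting at Beta(1, 1).
        self.posteriors = {name: [1, 1] for name in model_names}

    def select(self):
        # Sample a plausible success rate for each model and route the next
        # request to the highest sample; better models win more often over time.
        samples = {
            name: self.rng.betavariate(a, b)
            for name, (a, b) in self.posteriors.items()
        }
        return max(samples, key=samples.get)

    def update(self, name, success):
        # Fold the observed outcome back into that model's posterior.
        self.posteriors[name][0 if success else 1] += 1


# Toy simulation with two hypothetical models; "model_b" has the higher
# true success rate, so it should end up receiving most of the traffic.
true_rates = {"model_a": 0.05, "model_b": 0.12}
selector = ThompsonModelSelector(true_rates, seed=0)
reward_rng = random.Random(1)
counts = {"model_a": 0, "model_b": 0}
for _ in range(5000):
    arm = selector.select()
    counts[arm] += 1
    selector.update(arm, reward_rng.random() < true_rates[arm])
```

Because the posterior for the weaker model stops being sampled highest once enough evidence accumulates, exposure to an inferior model decays automatically, which matches the abstract's goal of minimizing the impact of poor models on overall campaign performance.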

