Conservative Exploration for Semi-Bandits with Linear Generalization: A Product Selection Problem for Urban Warehouses

03/19/2019
by Rong Jin, et al.

The recent rise in popularity of ultra-fast delivery services on retail platforms fuels the increasing use of urban warehouses, whose proximity to customers makes fast deliveries viable. The limited space in urban warehouses poses a problem for online retailers: the number of products (SKUs) they carry is no longer "the more, the better," yet it can still be significantly large, reaching hundreds or thousands in a product category. In this paper, we study algorithms for dynamically identifying, on the fly, a large number of products (i.e., SKUs) with the highest customer purchase probabilities from an ocean of potential products to offer on retailers' ultra-fast delivery platforms. We distill the product selection problem into a semi-bandit model with linear generalization. There are N arms in total, each with a feature vector of dimension d. The player pulls K arms in each period and observes the bandit feedback from each of the pulled arms. We focus on the setting where K is much greater than the number of time periods T or the dimension of the product features d. We first analyze a standard UCB algorithm and show that its regret bound can be expressed as the sum of a T-independent part Õ(K d^{3/2}) and a T-dependent part Õ(d√(KT)), which we refer to as the "fixed cost" and "variable cost," respectively. To reduce the fixed cost for large K, we propose a novel online learning algorithm with more conservative exploration steps and show that its fixed cost is reduced by a factor of d to Õ(K√d). Moreover, we test the algorithms on an industrial dataset from Alibaba Group. Experimental results show that our new algorithm reduces the total regret of the standard UCB algorithm by at least 10%.
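The paper's algorithms are only summarized above. The sketch below is a minimal illustration of the kind of LinUCB-style, top-K semi-bandit selection the abstract describes, not the authors' method or their conservative-exploration variant; the ridge parameter lam, the exploration weight alpha, and the reward_fn callback are illustrative assumptions.

```python
import numpy as np

def semi_bandit_ucb(features, reward_fn, K, T, alpha=1.0, lam=1.0):
    """Minimal LinUCB-style sketch for a semi-bandit with linear generalization.

    features: (N, d) array, one feature vector per arm (SKU).
    reward_fn: callable(arm_indices) -> array of observed rewards, one per pulled arm.
    K: number of arms pulled each period; T: number of periods.
    alpha, lam: illustrative exploration and ridge parameters (not from the paper).
    """
    N, d = features.shape
    A = lam * np.eye(d)   # ridge-regularized Gram matrix
    b = np.zeros(d)       # accumulated feature-weighted rewards
    theta_hat = np.zeros(d)

    for t in range(T):
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b            # least-squares estimate of the linear model
        mean = features @ theta_hat      # estimated purchase probability per arm
        # exploration bonus: sqrt(x^T A^{-1} x) for each arm's feature vector x
        bonus = alpha * np.sqrt(np.einsum('ij,jk,ik->i', features, A_inv, features))
        ucb = mean + bonus

        pulled = np.argsort(-ucb)[:K]    # pull the K arms with the highest UCB scores
        rewards = reward_fn(pulled)      # semi-bandit feedback: one observation per pulled arm

        for i, r in zip(pulled, rewards):
            x = features[i]
            A += np.outer(x, x)
            b += r * x
    return theta_hat
```

Because K arms are pulled per period, the Gram matrix receives K rank-one updates each round; this is what makes the K ≫ T regime in the abstract distinctive compared with the usual single-pull bandit setting.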

research · 08/31/2022
Batch-Size Independent Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms or Independent Arms
In this paper, we study the combinatorial semi-bandits (CMAB) and focus ...

research · 10/01/2022
Speed Up the Cold-Start Learning in Two-Sided Bandits with Many Arms
Multi-armed bandit (MAB) algorithms are efficient approaches to reduce t...

research · 06/02/2021
MNL-Bandit with Knapsacks
We consider a dynamic assortment selection problem where a seller has a ...

research · 06/01/2017
Scalable Generalized Linear Bandits: Online Computation and Hashing
Generalized Linear Bandits (GLBs), a natural extension of the stochastic...

research · 06/03/2021
Sleeping Combinatorial Bandits
In this paper, we study an interesting combination of sleeping and combi...

research · 07/11/2022
Adaptive Behavioral Model Learning for Software Product Lines
Behavioral models enable the analysis of the functionality of software p...
