Learning in Random Utility Models Via Online Decision Problems

12/21/2021
by Emerson Melo, et al.

This paper studies the Random Utility Model (RUM) in environments where the decision maker is imperfectly informed about the payoffs associated with each of the alternatives he faces. By embedding the RUM into an online decision problem, we make four contributions. First, we propose a gradient-based learning algorithm and show that a large class of RUMs is Hannan consistent; that is, the average difference between the expected payoffs generated by a RUM and those of the best fixed policy in hindsight goes to zero as the number of periods increases. Second, we show that the class of Generalized Extreme Value (GEV) models can be implemented with our learning algorithm. Examples in the GEV class include the Nested Logit, Ordered, and Product Differentiation models, among many others. Third, we show that our gradient-based algorithm is the dual, in a convex analysis sense, of the Follow the Regularized Leader (FTRL) algorithm, which is widely used in the machine learning literature. Finally, we discuss how our approach can incorporate recency bias and be used to implement prediction markets in general environments.
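The FTRL connection mentioned above can be illustrated with a minimal sketch (not the paper's algorithm): FTRL with an entropic regularizer plays the logit (softmax) choice probabilities over cumulative payoffs, and its average regret against the best fixed alternative vanishes as the horizon grows, which is exactly the Hannan-consistency property. The horizon `T`, number of alternatives `n`, and the payoff sequence below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5000, 3                              # illustrative horizon and number of alternatives
payoffs = rng.uniform(0, 1, size=(T, n))    # payoff sequence; arbitrary in general, random here

eta = np.sqrt(np.log(n) / T)  # step size for entropic FTRL (multiplicative-weights form)
cum = np.zeros(n)             # cumulative payoff of each alternative so far
total = 0.0                   # expected payoff accumulated by the learner

for t in range(T):
    # Entropic FTRL plays the logit/softmax distribution over cumulative payoffs,
    # i.e. the choice probabilities of a logit RUM with scale 1/eta.
    w = np.exp(eta * (cum - cum.max()))     # shift by max for numerical stability
    p = w / w.sum()
    total += p @ payoffs[t]
    cum += payoffs[t]

best_fixed = cum.max()                  # payoff of the best single alternative in hindsight
avg_regret = (best_fixed - total) / T   # shrinks like O(sqrt(log(n)/T))
print(avg_regret)
```

With this step size the average regret is bounded by a multiple of `sqrt(log(n)/T)`, so it goes to zero as the number of periods grows, matching the Hannan-consistency statement in the abstract.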
