
Inverse Reinforcement Learning for Marketing

12/13/2017
by Igor Halperin, et al.

Learning customer preferences from observed behaviour is an important topic in the marketing literature. Structural models typically treat forward-looking customers or firms as utility-maximizing agents whose utility is estimated using methods of Stochastic Optimal Control. We suggest an alternative approach to studying dynamic consumer demand, based on Inverse Reinforcement Learning (IRL). We develop a version of Maximum Entropy IRL that leads to a highly tractable model formulation, amounting to a low-dimensional convex optimization in the search for optimal model parameters. Using simulations of consumer demand, we show that observational noise for identical customers can easily be confused with apparent consumer heterogeneity.
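The paper's own model formulation is not reproduced on this page, but as a rough orientation the sketch below shows the standard Maximum Entropy IRL recipe the abstract builds on: a reward linear in state features, soft value iteration to obtain a stochastic policy, and a gradient that matches empirical against expected feature counts. For a linear reward this log-likelihood is concave in the weights, which is consistent with the low-dimensional convex optimization mentioned above. The discrete-MDP setting, function name, and hyperparameters are illustrative assumptions, not taken from the paper.

    import numpy as np

    def maxent_irl(P, Phi, trajectories, p0, gamma=0.95, lr=0.1, epochs=200):
        """Recover linear reward weights theta with Maximum Entropy IRL (sketch).

        P            : (A, S, S) array, P[a, s, s'] = transition probability.
        Phi          : (S, d) state-feature matrix; reward(s) = Phi[s] @ theta.
        trajectories : list of equal-length state-index sequences (demonstrations).
        p0           : (S,) initial state distribution.
        """
        A, S, _ = P.shape
        d = Phi.shape[1]
        horizon = len(trajectories[0])

        # Empirical feature expectations from the observed behaviour.
        f_emp = sum(Phi[list(traj)].sum(axis=0) for traj in trajectories)
        f_emp = f_emp / len(trajectories)

        theta = np.zeros(d)
        for _ in range(epochs):
            r = Phi @ theta                                   # reward per state

            # Soft value iteration gives the MaxEnt stochastic policy pi(a | s).
            V = np.zeros(S)
            for _ in range(100):
                Q = r[:, None] + gamma * np.einsum('ast,t->sa', P, V)
                Qmax = Q.max(axis=1, keepdims=True)
                V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
            pi = np.exp(Q - V[:, None])                       # (S, A)

            # Expected state visitation frequencies under pi over the horizon.
            D, svf = p0.copy(), np.zeros(S)
            for _ in range(horizon):
                svf += D
                D = np.einsum('s,sa,ast->t', D, pi, P)

            # Gradient ascent on the concave MaxEnt log-likelihood:
            # empirical feature counts minus expected feature counts.
            theta += lr * (f_emp - Phi.T @ svf)

        return theta

On a small simulated demand MDP, the recovered theta can be compared with the ground-truth reward weights, which is one way to reproduce the kind of noise-versus-heterogeneity experiment the abstract describes.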


08/09/2014

Probabilistic inverse reinforcement learning in unknown environments

We consider the problem of learning by demonstration from agents acting ...
09/06/2022

RMM: An R Package for Customer Choice-Based Revenue Management Models for Sales Transaction Data

We develop an R package RMM to implement a Conditional Logit (CL) model ...
05/25/2021

Trajectory Modeling via Random Utility Inverse Reinforcement Learning

We consider the problem of modeling trajectories of drivers in a road ne...
02/18/2022

Can Interpretable Reinforcement Learning Manage Assets Your Way?

Personalisation of products and services is fast becoming the driver of ...
12/06/2022

Misspecification in Inverse Reinforcement Learning

The aim of Inverse Reinforcement Learning (IRL) is to infer a reward fun...
09/22/2017

Inverse Reinforcement Learning with Conditional Choice Probabilities

We make an important connection to existing results in econometrics to d...
07/19/2020

Same-Day Delivery with Fairness

The demand for same-day delivery (SDD) has increased rapidly in the last...