Monte Carlo Rollout Policy for Recommendation Systems with Dynamic User Behavior

02/08/2021
by Rahul Meshram, et al.

We model online recommendation systems as a hidden Markov multi-state restless multi-armed bandit problem and present a Monte Carlo rollout policy to solve it. We illustrate numerically that the Monte Carlo rollout policy outperforms the myopic policy for arbitrary transition dynamics with no specific structure, but that when structure is imposed on the transition dynamics, the myopic policy performs better than the Monte Carlo rollout policy.
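As a rough illustration of the kind of policy the abstract describes, below is a minimal Python sketch of a Monte Carlo rollout policy for a restless bandit whose hidden arm states are tracked through beliefs. All model parameters (arm and state counts, transition matrices P, rewards R, horizon H, rollout count K, discount GAMMA) and the simplifying assumption that the played arm's state is observed afterwards are illustrative choices, not taken from the paper.

```python
# Minimal sketch: Monte Carlo rollout policy vs. myopic policy for a
# multi-state restless bandit with hidden, belief-tracked arm states.
# All sizes and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_ARMS, N_STATES = 3, 3          # hypothetical problem sizes
H, K, GAMMA = 10, 50, 0.95       # rollout horizon, rollouts per arm, discount

# Random row-stochastic transition matrix and reward vector per arm (illustrative).
P = rng.dirichlet(np.ones(N_STATES), size=(N_ARMS, N_STATES))   # P[i, s, s']
R = rng.uniform(0.0, 1.0, size=(N_ARMS, N_STATES))              # R[i, s]


def propagate(beliefs):
    """All arms are restless: every belief drifts one step, played or not."""
    return np.einsum("is,isj->ij", beliefs, P)


def myopic_arm(beliefs):
    """Base policy: play the arm with the largest expected immediate reward."""
    return int(np.argmax(np.einsum("is,is->i", beliefs, R)))


def rollout_value(beliefs, first_arm):
    """Average discounted return of playing `first_arm` now and then following
    the myopic base policy for the remaining steps, estimated from K simulations."""
    total = 0.0
    for _ in range(K):
        b = beliefs.copy()
        value, arm = 0.0, first_arm
        for t in range(H):
            # Sample a hidden state for the played arm from its belief,
            # collect the reward, and (simplifying assumption) observe the state.
            s = rng.choice(N_STATES, p=b[arm])
            value += (GAMMA ** t) * R[arm, s]
            b[arm] = np.eye(N_STATES)[s]
            b = propagate(b)          # restless dynamics for every arm
            arm = myopic_arm(b)       # base policy from the next step on
        total += value
    return total / K


def mc_rollout_arm(beliefs):
    """One-step lookahead with Monte Carlo rollouts as the evaluator."""
    return int(np.argmax([rollout_value(beliefs, i) for i in range(N_ARMS)]))


if __name__ == "__main__":
    beliefs = np.full((N_ARMS, N_STATES), 1.0 / N_STATES)  # uniform priors
    print("myopic choice: ", myopic_arm(beliefs))
    print("rollout choice:", mc_rollout_arm(beliefs))
```

In this sketch the myopic baseline is the one-step greedy rule the abstract compares against, and the rollout policy simply replaces that one-step evaluation with K simulated H-step returns under the same greedy base policy.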


Related research

07/30/2021
Indexability and Rollout Policy for Multi-State Partially Observable Restless Bandits
Restless multi-armed bandits with partially observable states have applic...

04/15/2019
Cooperation on the Monte Carlo Rule: Prisoner's Dilemma Game on the Grid
In this paper, we investigate the prisoner's dilemma game with Monte Carlo...

10/04/2013
Sequential Monte Carlo Bandits
In this paper we propose a flexible and efficient framework for handling...

10/19/2012
Monte Carlo Matrix Inversion Policy Evaluation
In 1950, Forsythe and Leibler (1950) introduced a statistical technique ...

05/13/2014
Adaptive Monte Carlo via Bandit Allocation
We consider the problem of sequentially choosing between a set of unbias...

06/17/2022
Accelerated Kinetic Monte Carlo Methods for General Nonlocal Traffic Flow Models
This paper presents a class of one-dimensional cellular automata (CA) mo...

05/26/2018
Evaluating Impact of Human Errors on the Availability of Data Storage Systems
In this paper, we investigate the effect of incorrect disk replacement s...
