The Platform Design Problem

09/13/2020
by Christos Papadimitriou, et al.

Online firms deploy suites of software platforms, where each platform is designed to interact with users during a certain activity, such as browsing, chatting, socializing, emailing, or driving. To our knowledge, the economic and incentive structure of this exchange, as well as its algorithmic nature, have not been explored; we initiate their study in this paper. We model this interaction as a Stackelberg game between a Designer and one or more Agents. An Agent is modeled as a Markov chain whose states are activities, and the Agent's utility is assumed to be a linear function of the chain's steady-state distribution. The Designer may design a platform for each of these activities/states; if the Agent adopts a platform, the transition probabilities of the Markov chain change, and with them the Agent's objective. The Designer's utility is a linear function of the steady-state probabilities of the accessible states (that is, the ones for which the platform has been adopted), minus the development cost of the platforms. The Agent's underlying optimization problem (how to choose the states for which to adopt the platform) is an MDP. If this MDP has a simple yet plausible structure (the transition probabilities from one state to another depend only on the target state and on the recurrence probability of the current state), the Agent's problem can be solved by a greedy algorithm. The Designer's optimization problem (designing a custom suite for the Agent so as to maximize, through the Agent's optimal reaction, the Designer's revenue), while NP-hard, admits an FPTAS. Under mild additional assumptions, these results generalize from a single Agent to a distribution of Agents with finite support. Finally, the Designer's optimization problem has an abysmal "price of robustness", suggesting that learning the parameters of the problem is crucial for the Designer.
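To make the Agent's side of the model concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it computes the steady-state distribution of a Markov chain over activities and greedily adopts platforms (each adoption replaces one state's transition row) as long as adoption increases the Agent's linear steady-state utility. All names (`steady_state`, `greedy_adoption`, `platform_rows`) and the example chain are hypothetical; the paper's greedy optimality guarantee holds only under the structural assumption on transitions described above.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi solving pi P = pi, sum(pi) = 1."""
    n = P.shape[0]
    # Stack the balance equations with the normalization constraint.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def greedy_adoption(P_base, platform_rows, u):
    """Greedily adopt platforms while each adoption raises the Agent's
    utility u . pi. platform_rows[i] is the transition row that replaces
    row i of the chain if the platform for state i is adopted."""
    n = P_base.shape[0]
    adopted = set()
    P = P_base.copy()
    best = u @ steady_state(P)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            if i in adopted:
                continue
            Q = P.copy()
            Q[i] = platform_rows[i]
            val = u @ steady_state(Q)
            if val > best + 1e-12:  # adopt only on strict improvement
                best, P = val, Q
                adopted.add(i)
                improved = True
    return adopted, best

# Toy example: two activities; the Agent's utility weights only state 1.
P_base = np.array([[0.9, 0.1],
                   [0.5, 0.5]])
platform_rows = np.array([[0.5, 0.5],   # platform for state 0 shifts mass to state 1
                          [0.9, 0.1]])  # platform for state 1 shifts mass back to state 0
u = np.array([0.0, 1.0])
adopted, best = greedy_adoption(P_base, platform_rows, u)
```

In this toy instance only the platform for state 0 is adopted: it raises the Agent's steady-state utility from 1/6 to 1/2, while adopting the platform for state 1 would lower it.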


