Using Reinforcement Learning to Study Platform Economies under Market Shocks
Driven by rapid digitization and expansive internet access, market-driven platforms (e.g., Amazon, DoorDash, Uber, TaskRabbit) are increasingly prevalent and becoming key drivers of the economy. Across many industries, platforms leverage digital infrastructure to efficiently match producers and consumers, dynamically set prices, and enable economies of scale. This growing prominence makes it important to understand the behavior of platforms, which can induce complex phenomena, especially in the presence of severe market shocks (e.g., during pandemics). In this work, we develop a multi-agent simulation environment that captures key elements of a platform economy, including the kinds of economic shocks that disrupt a traditional, off-platform market. We use deep reinforcement learning (RL) to model the pricing and matching behavior of a platform that optimizes for revenue and various socially aware objectives. We start with tractable motivating examples to build intuition about the dynamics and function of optimal platform policies. We then conduct extensive empirical simulations in multi-period environments, including settings with market shocks. We characterize the effect of a platform on the efficiency and resilience of an economic system under different platform design objectives. We further analyze the consequences of regulation that fixes platform fees, and we study the alignment of a revenue-maximizing platform with social welfare under different platform matching policies. Our RL-based framework thus provides a foundation for understanding platform economies under different designs and for yielding new economic insights that are beyond analytical tractability.
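To make the setup concrete, the following is a minimal sketch of the kind of platform-economy environment the abstract describes, reduced to a single trading period. It is not the paper's implementation: the class name `PlatformEnv`, the per-trade `fee` action, and the `welfare_weight` parameter (which interpolates between a revenue objective and a social-welfare objective) are illustrative assumptions, and an RL agent would learn the fee policy that a fixed sweep only approximates here.

```python
# Illustrative sketch only, assuming a simplified one-period market;
# PlatformEnv, fee, and welfare_weight are hypothetical names, not the paper's code.
import numpy as np


class PlatformEnv:
    """Toy market: sellers with costs, buyers with values, and a platform
    that sets a per-trade fee and greedily matches buyers to sellers."""

    def __init__(self, n_buyers=10, n_sellers=10, welfare_weight=0.0, seed=0):
        self.n_buyers = n_buyers
        self.n_sellers = n_sellers
        # welfare_weight = 0 -> pure revenue objective; 1 -> pure social welfare.
        self.welfare_weight = welfare_weight
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Buyer valuations and seller costs drawn i.i.d.; a market shock could
        # be modeled by shifting these distributions between periods.
        self.values = self.rng.uniform(0.0, 1.0, self.n_buyers)
        self.costs = self.rng.uniform(0.0, 1.0, self.n_sellers)
        return self._obs()

    def step(self, fee):
        """Platform action: a per-trade fee; a buyer-seller pair trades
        on-platform only if the surplus net of the fee is positive."""
        buyers = np.sort(self.values)[::-1]   # highest-value buyers first
        sellers = np.sort(self.costs)         # lowest-cost sellers first
        revenue, welfare = 0.0, 0.0
        for v, c in zip(buyers, sellers):
            if v - c - fee <= 0:              # no further profitable matches
                break
            revenue += fee                    # platform's cut of the trade
            welfare += v - c                  # total surplus (fee is a transfer)
        reward = (1 - self.welfare_weight) * revenue + self.welfare_weight * welfare
        done = True                           # single-period episode
        return self._obs(), reward, done, {"revenue": revenue, "welfare": welfare}

    def _obs(self):
        return np.concatenate([self.values, self.costs])


# Example: sweep fees on one drawn market to see the revenue/welfare trade-off
# that a learned pricing policy would navigate.
env = PlatformEnv(welfare_weight=0.0)
env.reset()
for fee in (0.05, 0.15, 0.30):
    _, reward, _, info = env.step(fee)
    print(f"fee={fee:.2f} revenue={info['revenue']:.2f} welfare={info['welfare']:.2f}")
```

Even in this stripped-down sketch, raising the fee trades match volume (and hence welfare) against per-trade revenue, which is the basic tension between the revenue-maximizing and socially aware platform objectives studied in the paper.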