Using Non-Stationary Bandits for Learning in Repeated Cournot Games with Non-Stationary Demand

01/03/2022
by Kshitija Taywade, et al.

Many past attempts at modeling repeated Cournot games assume that demand is stationary. This does not align with real-world scenarios, in which market demand can evolve over a product's lifetime for a myriad of reasons. In this paper, we model repeated Cournot games with non-stationary demand such that firms/agents each face a separate instance of a non-stationary multi-armed bandit problem. The set of arms/actions that an agent can choose from represents discrete production quantities; here, the action space is ordered. Agents are independent and autonomous and cannot observe anything from the environment; they see only their own rewards after taking an action, and work solely towards maximizing these rewards. We propose a novel algorithm, 'Adaptive with Weighted Exploration (AWE) ϵ-greedy', which is loosely based on the well-known ϵ-greedy approach. This algorithm detects and quantifies changes in rewards due to varying market demand, and adjusts the learning rate and exploration rate in proportion to the degree of change in demand, thus enabling agents to better identify new optimal actions. For efficient exploration, it also deploys a mechanism for weighing actions that takes advantage of the ordered action space. We use simulations to study the emergence of various equilibria in the market. In addition, we study the scalability of our approach in terms of the total number of agents in the system and the size of the action space. We consider both symmetric and asymmetric firms in our models. We find that, using our proposed method, agents are able to swiftly change their course of action according to changes in demand, and they also engage in collusive behavior in many simulations.
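The core ideas in the abstract — change detection on the reward stream, exploration and learning rates scaled by the detected change, and exploration weighted toward arms near the current best in the ordered action space — can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the class name, parameter values, and the sliding-window change detector are all assumptions made for illustration.

```python
import random

class AWEGreedyAgent:
    """Illustrative adaptive epsilon-greedy bandit agent (assumed design,
    not the paper's exact algorithm)."""

    def __init__(self, n_arms, base_eps=0.05, base_lr=0.1, window=20):
        self.n_arms = n_arms
        self.q = [0.0] * n_arms      # per-arm reward estimates
        self.recent = []             # sliding window of recent rewards
        self.base_eps = base_eps
        self.base_lr = base_lr
        self.eps = base_eps
        self.lr = base_lr
        self.window = window
        self.baseline = None         # slowly-updated long-run reward average

    def select_action(self):
        best = max(range(self.n_arms), key=lambda a: self.q[a])
        if random.random() < self.eps:
            # Weighted exploration: exploit the ordered action space by
            # sampling arms with probability decaying in distance from
            # the current greedy arm.
            weights = [1.0 / (1 + abs(a - best)) for a in range(self.n_arms)]
            r = random.random() * sum(weights)
            acc = 0.0
            for a, w in enumerate(weights):
                acc += w
                if r <= acc:
                    return a
            return self.n_arms - 1
        return best

    def update(self, action, reward):
        # Crude change detector: compare the recent mean reward to a
        # long-run baseline; a large gap signals a demand shift.
        self.recent.append(reward)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        recent_mean = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            self.baseline = recent_mean
        change = abs(recent_mean - self.baseline) / (abs(self.baseline) + 1e-8)
        self.baseline += 0.01 * (recent_mean - self.baseline)
        # Scale exploration and learning rates with the detected change,
        # so agents re-explore and re-learn after a demand shift.
        self.eps = min(1.0, self.base_eps * (1 + change))
        self.lr = min(1.0, self.base_lr * (1 + change))
        self.q[action] += self.lr * (reward - self.q[action])
```

In a Cournot simulation, each firm would hold one such agent, pick a production quantity via `select_action()`, and call `update()` with its realized profit; when demand shifts, the widening gap between recent and baseline rewards raises both rates, letting the agent track the new optimum.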


