Performance Analysis of Trial and Error Algorithms
Model-free decentralized optimization and learning are receiving increasing attention from both theoretical and practical perspectives. In particular, two fully decentralized learning algorithms, Trial and Error Learning (TEL) and Optimal Dynamical Learning (ODL), are appealing for a broad class of games. ODL spends a high proportion of time in an optimal state that maximizes the sum of the utilities of all players. TEL spends a high proportion of time in such an optimal state when a Pure Nash Equilibrium (PNE) exists; otherwise, it spends a high proportion of time in a state that maximizes a tradeoff between the sum of the utilities of all players and a predefined stability function. On the other hand, estimating the mean fraction of time spent in the optimum state, as well as the mean time needed to reach it, is challenging due to the high complexity and dimensionality of the underlying Markov chains. In this paper, under a specific system model, these performance metrics are evaluated by proposing an approximation of the considered Markov chains, which overcomes the problem of high dimensionality. A comparison between the two algorithms is then carried out, providing a better understanding of their performance.
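To make the two performance metrics concrete, the following minimal sketch computes them for a small, illustrative Markov chain. The transition matrix P and the index of the optimum state are made-up assumptions for illustration, not the chains induced by TEL or ODL: the mean fraction of time in the optimum state is its stationary probability, and the mean time to reach it is the expected hitting time from each other state.

```python
import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1); a made-up
# example, not the chain induced by TEL or ODL.
P = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.70, 0.20],
    [0.05, 0.25, 0.70],
])
opt = 0  # hypothetical index of the optimum state

# Mean fraction of time in the optimum state = stationary probability
# pi[opt], where pi solves pi P = pi with sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Expected hitting time of the optimum state: h = (I - Q)^{-1} 1,
# where Q is P restricted to the non-optimum states.
others = [i for i in range(len(P)) if i != opt]
Q = P[np.ix_(others, others)]
h = np.linalg.solve(np.eye(len(Q)) - Q, np.ones(len(Q)))

print(f"fraction of time in optimum state: {pi[opt]:.3f}")
for i, state in enumerate(others):
    print(f"mean hitting time from state {state}: {h[i]:.2f} steps")
```

This direct linear-algebra approach scales poorly as the state space grows, which is precisely the high-dimensionality obstacle the paper's approximation of the TEL and ODL chains is designed to overcome.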