Dynamic social learning under graph constraints

07/08/2020
by Konstantin Avrachenkov, et al.

We argue that graph-constrained dynamic choice with reinforcement can be viewed as a scaled version of a special instance of replicator dynamics. The latter also arises as the limiting differential equation for the empirical measures of a vertex reinforced random walk on a directed graph. We use this equivalence to show that for a class of positively α-homogeneous rewards, α > 0, the asymptotic outcome concentrates around the optimum in a certain limiting sense when 'annealed' by letting α↑∞ slowly. We also discuss connections with classical simulated annealing.
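To make the central object concrete, here is a minimal illustrative sketch (not the paper's construction) of a vertex-reinforced random walk on a directed graph: from the current vertex, the next vertex is drawn among out-neighbours with probability proportional to a positively α-homogeneous function of the visit counts, here simply count^α. The graph `g`, the function name `vrrw`, and all parameter values are hypothetical choices for illustration only.

```python
import random
from collections import Counter

def vrrw(graph, alpha=1.0, steps=10000, start=0, seed=0):
    """Simulate a vertex-reinforced random walk on a directed graph.

    graph: dict mapping each vertex to its list of out-neighbours.
    At each step, the next vertex u is chosen among out-neighbours with
    probability proportional to visits[u] ** alpha, i.e. a positively
    alpha-homogeneous reinforcement of the empirical visit counts.
    Returns the normalized empirical visit measure.
    """
    rng = random.Random(seed)
    visits = Counter({v: 1 for v in graph})  # initialize counts at 1
    v = start
    for _ in range(steps):
        nbrs = graph[v]
        weights = [visits[u] ** alpha for u in nbrs]
        v = rng.choices(nbrs, weights=weights)[0]
        visits[v] += 1
    total = sum(visits.values())
    return {u: c / total for u, c in visits.items()}

# A small strongly connected directed graph (adjacency lists).
g = {0: [1, 2], 1: [2, 0], 2: [0, 1]}
mu = vrrw(g, alpha=2.0, steps=5000)
```

Slowly increasing `alpha` across runs mimics the 'annealing' described above: larger α makes the reinforcement concentrate the walk on fewer vertices, analogous to lowering the temperature in classical simulated annealing.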


