On Increasing Self-Confidence in Non-Bayesian Social Learning over Time-Varying Directed Graphs

12/24/2018
by César A. Uribe, et al.

We study the convergence of the log-linear non-Bayesian social learning update rule for a group of agents that collectively seek to identify the parameter that best describes a joint sequence of observations. In contrast to recent literature, we focus on the case where agents assign decaying weights to their neighbors, and where the network is connected not at every time instant but only over finite time intervals. We provide a necessary and sufficient condition on the rate at which agents may decrease these weights while still guaranteeing social learning.
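The update rule studied here combines a geometric (log-linear) average of neighbors' beliefs with a Bayesian update on each agent's private signal. The following sketch illustrates the mechanism; the network, likelihood models, and 1/t decay schedule for the neighbor weights are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_hyp, T = 3, 2, 500
theta_true = 0  # index of the true hypothesis

# Assumed private likelihoods: agent i observes a Bernoulli signal with
# success probability p[i, theta] under hypothesis theta. Agent 1 is
# uninformative on its own and can only learn through the network.
p = np.array([[0.7, 0.3],
              [0.5, 0.5],
              [0.3, 0.7]])

beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

for t in range(1, T + 1):
    # Decaying neighbor weights: self-confidence 1 - eps grows over time.
    # The 1/t schedule is an assumption; the paper characterizes which
    # decay rates still permit learning.
    eps = 1.0 / t
    A = np.full((n_agents, n_agents), eps / (n_agents - 1))
    np.fill_diagonal(A, 1.0 - eps)

    # Log-linear aggregation: weighted geometric average of beliefs.
    log_mix = A @ np.log(beliefs)

    # Bayesian update with each agent's private Bernoulli signal.
    s = (rng.random(n_agents) < p[:, theta_true]).astype(int)
    log_lik = np.where(s[:, None] == 1, np.log(p), np.log(1 - p))

    logb = log_mix + log_lik
    beliefs = np.exp(logb - logb.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, theta_true])  # all agents' beliefs on the true hypothesis
```

With the 1/t schedule the sum of the neighbor weights diverges, and in this toy run even the uninformative agent concentrates its belief on the true hypothesis; with a faster decay (e.g. summable weights) the information flow can dry up before agreement is reached.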


Related research:

- 09/10/2019, Non-Bayesian Social Learning with Uncertain Models over Time-Varying Directed Graphs: We study the problem of non-Bayesian social learning with uncertain mode...
- 01/06/2018, Bayesian Social Learning in a Dynamic Environment: Bayesian agents learn about a moving target, such as a commodity price, ...
- 02/13/2022, Reduced-Form Allocations with Complementarity: A 2-Person Case: We investigate the implementation of reduced-form allocation probabiliti...
- 07/04/2022, Reaching optimal distributed estimation through myopic self-confidence adaptation: Consider discrete-time linear distributed averaging dynamics, whereby ag...
- 09/30/2015, Learning without Recall: A Case for Log-Linear Learning: We analyze a model of learning and belief formation in networks in which...
- 09/23/2016, A Tutorial on Distributed (Non-Bayesian) Learning: Problem, Algorithms and Results: We overview some results on distributed learning with focus on a family ...
- 12/05/2022, Learning Trust Over Directed Graphs in Multiagent Systems (extended version): We address the problem of learning the legitimacy of other agents in a m...
