A Communication-Efficient Algorithm for Exponentially Fast Non-Bayesian Learning in Networks

09/04/2019
by Aritra Mitra et al.

We introduce a simple time-triggered protocol to achieve communication-efficient non-Bayesian learning over a network. Specifically, we consider a scenario where a group of agents interact over a graph with the aim of discerning the true state of the world that generates their joint observation profiles. To address this problem, we propose a novel distributed learning rule wherein agents aggregate neighboring beliefs based on a min-protocol, and the inter-communication intervals grow geometrically at a rate a ≥ 1. Despite such sparse communication, we show that each agent is still able to rule out every false hypothesis exponentially fast with probability 1, as long as a is finite. For the special case when communication occurs at every time-step, i.e., when a=1, we prove that the asymptotic learning rates resulting from our algorithm are network-structure independent, and a strict improvement upon those existing in the literature. In contrast, when a>1, our analysis reveals that the asymptotic learning rates vary across agents, and exhibit a non-trivial dependence on the network topology coupled with the relative entropies of the agents' likelihood models. This motivates us to consider the problem of allocating signal structures to agents to maximize appropriate performance metrics. In certain special cases, we show that the eccentricity centrality and the decay centrality of the underlying graph help identify optimal allocations; for more general scenarios, we bound the deviation from the optimal allocation as a function of the parameter a, and the diameter of the communication graph.
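The learning rule described above can be sketched in code. The following is a minimal, hedged simulation, not the paper's exact pseudocode: the function names, the two-hypothesis signal model, and the precise ordering (a private Bayesian update every step, min-fusion over the closed neighborhood only at communication instants spaced geometrically at rate a) are illustrative assumptions.

```python
import numpy as np

def comm_schedule(a, T):
    """Communication instants up to horizon T: every step if a == 1,
    otherwise instants growing geometrically, roughly a^k."""
    if a == 1:
        return set(range(1, T + 1))
    times, t = set(), 1.0
    while t <= T:
        times.add(int(round(t)))
        t *= a
    return times

def simulate_min_rule(A, likelihoods, true_state, a=2, T=200, seed=0):
    """Sketch of min-protocol non-Bayesian learning.

    A           : (n, n) boolean adjacency matrix of the communication graph
    likelihoods : (n, m, s) array; likelihoods[i, h] is agent i's signal
                  distribution under hypothesis h
    true_state  : index of the hypothesis generating the observations
    """
    rng = np.random.default_rng(seed)
    n, m, s = likelihoods.shape
    beliefs = np.full((n, m), 1.0 / m)   # uniform priors
    schedule = comm_schedule(a, T)
    for t in range(1, T + 1):
        # local Bayesian update with each agent's private signal
        for i in range(n):
            sig = rng.choice(s, p=likelihoods[i, true_state])
            beliefs[i] *= likelihoods[i, :, sig]
            beliefs[i] /= beliefs[i].sum()
        if t in schedule:
            # min-protocol fusion: entrywise min over the closed
            # neighborhood, then renormalize to a probability vector
            fused = np.empty_like(beliefs)
            for i in range(n):
                nbrs = np.append(np.flatnonzero(A[i]), i)
                fused[i] = beliefs[nbrs].min(axis=0)
                fused[i] /= fused[i].sum()
            beliefs = fused
    return beliefs
```

Because a false hypothesis is ruled out as soon as any neighbor has discounted it, the min operation propagates rejection of false hypotheses through the network even when communication rounds are sparse, which is consistent with the exponential rejection rate claimed in the abstract.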

