Asynchronous Bayesian Learning over a Network
We present a practical asynchronous data fusion model for networked agents to perform distributed Bayesian learning without sharing raw data. Our algorithm uses a gossip-based approach in which randomly selected pairs of agents employ unadjusted Langevin dynamics for parameter sampling. We also introduce an event-triggered mechanism to further reduce communication between gossiping agents. These mechanisms drastically reduce communication overhead and help avoid bottlenecks commonly experienced with distributed algorithms. In addition, the algorithm's reduced link utilization is expected to improve resilience to occasional link failures. We establish mathematical guarantees for our algorithm and demonstrate its effectiveness via numerical experiments.
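To make the scheme concrete, the sketch below illustrates one plausible reading of the abstract: each agent runs local unadjusted Langevin steps on its private data, a randomly chosen pair gossips asynchronously, and a pairwise exchange happens only when an event-trigger condition fires. The local target (Bayesian linear regression), step size, pairwise-averaging mixing rule, and drift-based trigger are all illustrative assumptions, not the paper's actual update rules or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_neg_log_post(theta, X, y):
    """Gradient of an assumed negative log-posterior: Gaussian likelihood
    on the agent's private shard plus a standard-normal prior."""
    resid = X @ theta - y
    return X.T @ resid + theta

def ula_step(theta, X, y, eta):
    """One unadjusted Langevin dynamics step on the local posterior."""
    noise = rng.standard_normal(theta.shape)
    return theta - eta * grad_neg_log_post(theta, X, y) + np.sqrt(2.0 * eta) * noise

# Synthetic data split across agents; each agent keeps its shard private.
n_agents, d, eta, delta = 5, 3, 1e-3, 0.05  # delta: assumed trigger threshold
theta_true = rng.standard_normal(d)
data = []
for _ in range(n_agents):
    X = rng.standard_normal((50, d))
    y = X @ theta_true + 0.1 * rng.standard_normal(50)
    data.append((X, y))

theta = [rng.standard_normal(d) for _ in range(n_agents)]  # local samples
last_sent = [t.copy() for t in theta]                      # last transmitted values

for t in range(2000):
    # Each agent takes a local Langevin step on its own data.
    for i in range(n_agents):
        theta[i] = ula_step(theta[i], *data[i], eta)

    # Gossip: one randomly chosen pair of agents wakes up asynchronously.
    i, j = rng.choice(n_agents, size=2, replace=False)

    # Event trigger (assumed form): exchange parameters only if either agent's
    # state has drifted from its last transmitted value by more than delta.
    if (np.linalg.norm(theta[i] - last_sent[i]) > delta
            or np.linalg.norm(theta[j] - last_sent[j]) > delta):
        avg = 0.5 * (theta[i] + theta[j])  # simple pairwise averaging (assumed mixing rule)
        theta[i], theta[j] = avg.copy(), avg.copy()
        last_sent[i], last_sent[j] = avg.copy(), avg.copy()

print("mean of agent samples:", np.mean(theta, axis=0))
print("true parameter:       ", theta_true)
```

Because communication happens only between one triggered pair per wake-up, link utilization stays low, which is the property the abstract credits for the reduced overhead and improved resilience.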