Adaptation in Online Social Learning

03/04/2020
by Virginia Bordignon, et al.

This work studies social learning under non-stationary conditions. Although designed for online inference, classic social learning algorithms perform poorly under drifting conditions. To mitigate this drawback, we propose the Adaptive Social Learning (ASL) strategy, which leverages an adaptive Bayesian update whose degree of adaptation can be modulated by tuning a suitable step-size parameter. The learning performance of the ASL algorithm is examined by means of a steady-state analysis. It is shown that, in the regime of small step-sizes: i) consistent learning is possible; ii) the learning performance can be accurately predicted through a Gaussian approximation.
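The adaptive update described in the abstract can be illustrated with a short sketch. The following is an assumption-laden toy version, not the authors' exact algorithm: each agent discounts its past log-belief by a factor (1 - delta), weights the log-likelihood of the fresh observation by the step-size delta, and then geometrically averages the intermediate beliefs of its neighbors through a left-stochastic combination matrix. The fully connected three-agent network, the two-hypothesis Gaussian observation model, and all variable names are choices made for this example.

```python
import numpy as np

def asl_step(log_beliefs, log_likelihoods, A, delta):
    """One iteration of an adaptive social-learning update (sketch).

    log_beliefs:     (N, H) log-beliefs of N agents over H hypotheses.
    log_likelihoods: (N, H) per-agent log-likelihoods of the new observation.
    A:               (N, N) left-stochastic combination matrix; A[l, k] is the
                     weight agent k assigns to neighbor l.
    delta:           step-size in (0, 1) controlling the degree of adaptation.
    """
    # Adaptive Bayesian update: discount the past by (1 - delta),
    # weight fresh evidence by delta.
    log_psi = (1 - delta) * log_beliefs + delta * log_likelihoods
    # Combination step: geometric averaging over neighbors.
    log_mu = A.T @ log_psi
    # Normalize so each agent's belief sums to one.
    log_mu -= np.logaddexp.reduce(log_mu, axis=1, keepdims=True)
    return log_mu

# Toy simulation: 3 agents, 2 hypotheses (Gaussian mean +1 vs. -1),
# data generated under hypothesis 0.
rng = np.random.default_rng(0)
N, H, delta = 3, 2, 0.1
A = np.full((N, N), 1.0 / N)            # fully connected, uniform weights
log_mu = np.log(np.full((N, H), 0.5))   # uniform initial beliefs
means = np.array([1.0, -1.0])
for _ in range(500):
    x = rng.normal(1.0, 1.0, size=N)    # observations under hypothesis 0
    ll = -0.5 * (x[:, None] - means[None, :]) ** 2
    log_mu = asl_step(log_mu, ll, A, delta)
beliefs = np.exp(log_mu)
```

Because the step-size never vanishes, the beliefs keep fluctuating around the true hypothesis instead of freezing, which is what lets the strategy track drifts; a smaller delta trades slower reaction for smaller steady-state fluctuations.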


Related research

04/06/2020  Adaptive Social Learning
This work proposes a novel strategy for social learning by introducing t...

04/10/2018  TIDBD: Adapting Temporal-difference Step-sizes Through Stochastic Meta-descent
In this paper, we introduce a method for adapting the step-sizes of temp...

03/14/2022  Optimal Aggregation Strategies for Social Learning over Graphs
Adaptive social learning is a useful tool for studying distributed decis...

07/17/2019  Meta-descent for Online, Continual Prediction
This paper investigates different vector step-size adaptation approaches...

06/01/2019  Adaptation and learning over networks under subspace constraints – Part II: Performance Analysis
Part I of this paper considered optimization problems over networks wher...

04/04/2020  Tracking Performance of Online Stochastic Learners
The utilization of online stochastic algorithms is popular in large-scal...

07/23/2020  Online Robust and Adaptive Learning from Data Streams
In online learning from non-stationary data streams, it is both necessar...
