Distributed Estimation via Network Regularization

10/28/2019
by Lingzhou Hong, et al.

We propose a new method for distributed estimation of a linear model by a network of local learners with heterogeneously distributed datasets. Unlike other ensemble learning methods, the proposed method performs model averaging continuously over time in a distributed and asynchronous manner. To ensure robust estimation, it uses a network regularization term that penalizes models with high local variability. We provide a finite-time characterization of the convergence of the weighted ensemble average and compare this result to centralized estimation. We illustrate the general applicability of the method in two examples: estimation of a Markov random field using wireless sensor networks, and modeling the prey escape behavior of birds based on a real-world dataset.
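To make the setup concrete, the sketch below shows distributed least-squares estimation with a quadratic network penalty. It is an illustration under stated assumptions, not the paper's algorithm: it assumes a ring communication graph, a disagreement penalty of the form lambda * sum over neighbors of ||w_i - w_j||^2, and discrete-time asynchronous gradient steps in place of the paper's continuous-time averaging. All names and constants here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_nodes local learners, each holding a small
# heterogeneous dataset generated around a shared ground-truth model.
n_nodes, n_feat, n_local = 5, 3, 20
w_true = rng.normal(size=n_feat)
X = [rng.normal(size=(n_local, n_feat)) for _ in range(n_nodes)]
y = [Xi @ w_true + 0.5 * rng.normal(size=n_local) for Xi in X]

# Ring communication graph (an illustrative choice, not from the paper).
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes]
             for i in range(n_nodes)}

lam, step = 1.0, 0.01            # network-penalty weight, gradient step size
w = [np.zeros(n_feat) for _ in range(n_nodes)]

for t in range(2000):
    i = int(rng.integers(n_nodes))   # asynchronous: one node wakes per tick
    # Gradient of the local least-squares loss ...
    grad = X[i].T @ (X[i] @ w[i] - y[i]) / n_local
    # ... plus the gradient of the network penalty, which pulls the
    # local model toward its neighbors' current models.
    grad += lam * sum(w[i] - w[j] for j in neighbors[i])
    w[i] = w[i] - step * grad

w_avg = np.mean(w, axis=0)       # ensemble average across the network
print("true:", np.round(w_true, 2))
print("avg :", np.round(w_avg, 2))
```

Averaging the local models at the end mirrors the ensemble average whose finite-time convergence the paper characterizes; the weight lam trades off fit to local data against agreement across the network, with large values forcing near-consensus and small values leaving each learner close to its purely local estimate.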

Related research

10/25/2017
Model Averaging for Generalized Linear Model with Covariates that are Missing completely at Random
In this paper, we consider the estimation of generalized linear models w...

11/21/2019
Regularizing Neural Networks by Stochastically Training Layer Ensembles
Dropout and similar stochastic neural network regularization methods are...

05/20/2020
Consensus Driven Learning
As the complexity of our neural network models grow, so too do the data ...

05/01/2018
Consensus-based Distributed Quantile Estimation in Sensor Networks
A quantile is defined as a value below which random draws from a given d...

06/13/2018
Ensemble Pruning based on Objection Maximization with a General Distributed Framework
Ensemble pruning, selecting a subset of individual learners from an orig...

08/31/2022
Inference of Mixed Graphical Models for Dichotomous Phenotypes using Markov Random Field Model
In this article, we propose a new method named fused mixed graphical mod...
