Distributed Estimation, Information Loss and Exponential Families

10/09/2014
by Qiang Liu, et al.

Distributed learning of probabilistic models from multiple data repositories with minimum communication is increasingly important. We study a simple communication-efficient learning framework that first calculates local maximum likelihood estimates (MLEs) on the data subsets, and then combines the local MLEs to achieve the best possible approximation to the global MLE given the whole dataset. We study this framework's statistical properties, showing that the efficiency loss relative to the global setting depends on how far the underlying distribution family deviates from a full exponential family, drawing a connection to the theory of information loss by Fisher, Rao and Efron. We show that this "full-exponential-family-ness" characterizes a lower bound on the error rate of arbitrary combinations of local MLEs, which is achieved by a KL-divergence-based combination method but not by the more common linear combination method. We also study the empirical properties of both methods, showing that the KL method significantly outperforms linear combination in practical settings with issues such as model misspecification, non-convexity, and heterogeneous data partitions.
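To make the two combination strategies concrete, here is a minimal sketch assuming a one-dimensional Gaussian model (a full exponential family with sufficient statistics x and x^2); the function names local_mle, linear_combine and kl_combine are illustrative, not from the paper. Linear combination averages the local parameter estimates directly, while the KL-based combination picks the model whose expected sufficient statistics match the average of the local models':

    import numpy as np

    def local_mle(x):
        # Gaussian MLE on one data shard: sample mean and (biased) sample variance.
        return x.mean(), x.var()

    def linear_combine(params):
        # Linear combination: average the local parameter estimates directly.
        mus, sig2s = zip(*params)
        return np.mean(mus), np.mean(sig2s)

    def kl_combine(params):
        # KL-based combination: find the Gaussian whose expected sufficient
        # statistics E[x] and E[x^2] equal the average over the local models.
        # For a full exponential family this minimizes sum_k KL(p_k || p).
        mus, sig2s = zip(*params)
        m1 = np.mean(mus)                                       # combined E[x]
        m2 = np.mean([s2 + m**2 for m, s2 in zip(mus, sig2s)])  # combined E[x^2]
        return m1, m2 - m1**2

    rng = np.random.default_rng(0)
    data = rng.normal(loc=1.0, scale=2.0, size=10_000)
    shards = np.array_split(data, 10)   # ten local repositories
    params = [local_mle(x) for x in shards]

    print("linear:", linear_combine(params))
    print("KL    :", kl_combine(params))
    print("global:", local_mle(data))   # global MLE on the pooled data

On equal-size shards, the KL combination here recovers the global Gaussian MLE exactly, consistent with the claim that the KL method attains the lower bound for full exponential families, whereas linear averaging of the variances discards the between-shard variation of the means.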
