Distributed Estimation, Information Loss and Exponential Families

by Qiang Liu, et al.

Distributed learning of probabilistic models from multiple data repositories with minimum communication is increasingly important. We study a simple communication-efficient learning framework that first calculates the local maximum likelihood estimates (MLE) based on the data subsets, and then combines the local MLEs to achieve the best possible approximation to the global MLE given the whole dataset. We study this framework's statistical properties, showing that the efficiency loss compared to the global setting relates to how much the underlying distribution families deviate from full exponential families, drawing a connection to the theory of information loss by Fisher, Rao and Efron. We show that the "full-exponential-family-ness" represents the lower bound of the error rate of arbitrary combinations of local MLEs, and is achieved by a KL-divergence-based combination method but not by a more common linear combination method. We also study the empirical properties of both methods, showing that the KL method significantly outperforms linear combination in practical settings with issues such as model misspecification, non-convexity, and heterogeneous data partitions.
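The contrast between the two combination methods can be illustrated with a minimal NumPy sketch (an assumption for illustration, not code from the paper): equal-size data shards drawn from a 1-D Gaussian, which is a full exponential family. Linear combination averages the local parameter estimates directly, while the KL-divergence-based combination reduces, for a full exponential family, to matching the averaged sufficient statistics E[x] and E[x^2], and so recovers the global MLE exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: d machines, each holding an equal-size shard of Gaussian data.
d, n = 10, 1000
data = rng.normal(loc=2.0, scale=3.0, size=(d, n))

# Local MLEs for a Gaussian: (mean, variance) computed on each shard.
local_mu = data.mean(axis=1)
local_var = data.var(axis=1)

# Linear combination: average the local parameter estimates directly.
linear_mu = local_mu.mean()
linear_var = local_var.mean()

# KL-based combination: argmin_theta sum_k KL(p_{theta_k} || p_theta).
# For a full exponential family this is moment matching on the averaged
# sufficient statistics E[x] and E[x^2] of the local models.
m1 = local_mu.mean()                   # averaged E[x]
m2 = (local_var + local_mu**2).mean()  # averaged E[x^2]
kl_mu = m1
kl_var = m2 - m1**2

# With equal shard sizes, the KL combination equals the global MLE exactly;
# the linear average of variances ignores between-shard mean variation.
global_mu = data.mean()
global_var = data.var()
```

Here `kl_mu` and `kl_var` coincide with `global_mu` and `global_var` up to floating-point error, whereas `linear_var` generally does not; outside full exponential families the KL combination is no longer exact, which is where the paper's "full-exponential-family-ness" bound takes over.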






