Distributed linear regression by averaging

by Edgar Dobriban et al.

Modern massive datasets pose an enormous computational burden to practitioners. Distributed computation has emerged as a universal approach to ease the burden: datasets are partitioned over machines, which compute locally and communicate short messages. Distributed data also arises for privacy reasons, such as in medicine. It is therefore important to study how to do statistical inference and machine learning in a distributed setting. In this paper, we study one-step parameter averaging in statistical linear models under data parallelism. We do linear regression on each machine and take a weighted average of the parameters. How much do we lose compared to doing linear regression on the full data? Here we study the performance loss in estimation error, test error, and confidence interval length in high dimensions, where the number of parameters is comparable to the training data size. We discover several key phenomena. First, averaging is not optimal, and we find the exact performance loss. Our results are simple to use in practice. Second, different problems are affected differently by the distributed framework: estimation error and confidence interval length increase substantially, while prediction error increases much less. These results match simulations and a data analysis example. We rely on recent results from random matrix theory, where we develop a new calculus of deterministic equivalents as a tool of broader interest.
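The one-step averaging scheme described above can be sketched in a few lines: fit ordinary least squares on each machine's shard of the data, then average the local coefficient vectors. This is a minimal simulation, not the paper's implementation; the sample sizes, the number of machines, and the uniform 1/k weights (appropriate here because the shards are equal-sized) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a linear model y = X @ beta + noise, with p comparable to n/k
# to mimic the high-dimensional regime studied in the paper.
n, p, k = 2000, 100, 4  # illustrative: samples, parameters, machines
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

def ols(X, y):
    # Least-squares fit on one shard of the data.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Baseline: linear regression on the full data.
beta_full = ols(X, y)

# One-step averaging: local OLS per machine, then a uniform average
# (equal shard sizes, so the weights are all 1/k).
shards = np.array_split(np.arange(n), k)
beta_avg = np.mean([ols(X[idx], y[idx]) for idx in shards], axis=0)

# Compare estimation errors of the two procedures.
mse_full = np.sum((beta_full - beta) ** 2)
mse_avg = np.sum((beta_avg - beta) ** 2)
print(f"full-data OLS error: {mse_full:.4f}, averaged error: {mse_avg:.4f}")
```

In this regime the averaged estimator typically has a larger estimation error than full-data OLS, which is the performance loss the paper quantifies exactly.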




