Distributed linear regression by averaging

09/30/2018
by Edgar Dobriban, et al.

Modern massive datasets pose an enormous computational burden to practitioners. Distributed computation has emerged as a universal approach to ease this burden: datasets are partitioned across machines, which compute locally and communicate short messages. Distributed data also arises for privacy reasons, for instance in medicine. It is therefore important to study how to do statistical inference and machine learning in a distributed setting. In this paper, we study one-step parameter averaging in statistical linear models under data parallelism: we fit linear regression on each machine and take a weighted average of the parameter estimates. How much do we lose compared to fitting linear regression on the full data? We study the performance loss in estimation error, test error, and confidence interval length in high dimensions, where the number of parameters is comparable to the training data size. We discover several key phenomena. First, averaging is not optimal, and we find the exact performance loss. Our results are simple to use in practice. Second, different problems are affected differently by the distributed framework: estimation error and confidence interval length increase substantially, while prediction error increases much less. These results match simulations and a data analysis example. Our proofs rely on recent results from random matrix theory, where we develop a new calculus of deterministic equivalents as a tool of broader interest.
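To make the setup concrete, here is a minimal numpy sketch of the one-step averaging estimator the abstract describes: each machine computes a local least-squares fit, and the fits are combined by a weighted average. The sample-size-proportional weights and the simulation parameters below are illustrative assumptions, not the paper's optimal choices.

```python
# Illustrative sketch (not the paper's code): one-step averaging for
# distributed linear regression. Data are split across k "machines";
# each fits a local least-squares estimate, and the estimates are
# combined by a weighted average.
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 2000, 100, 4            # samples, parameters, machines (arbitrary)
beta = rng.normal(size=p)         # true coefficients
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

# Full-data OLS: the benchmark the paper compares against.
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Local OLS on each machine, then a weighted average of the estimates.
splits = np.array_split(np.arange(n), k)
local = [np.linalg.lstsq(X[idx], y[idx], rcond=None)[0] for idx in splits]
weights = np.array([len(idx) for idx in splits], dtype=float)
weights /= weights.sum()          # sample-size-proportional weights (a simple choice,
                                  # not the optimal weights derived in the paper)
beta_avg = sum(w * b for w, b in zip(weights, local))

print("estimation error, full data:", np.linalg.norm(beta_full - beta))
print("estimation error, averaging:", np.linalg.norm(beta_avg - beta))
```

The printed errors let you compare the averaged estimator against the full-data benchmark; the performance gap between the two is what the paper quantifies exactly in the high-dimensional regime.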


Related research

On the Optimality of Averaging in Distributed Statistical Learning (07/10/2014)
A common approach to statistical learning with big-data is to randomly s...

Generalization Error for Linear Regression under Distributed Learning (04/30/2020)
Distributed learning facilitates the scaling-up of data processing by di...

One-shot distributed ridge regression in high dimensions (03/22/2019)
In many areas, practitioners need to analyze large datasets that challen...

On the asymptotic distribution of model averaging based on information criterion (10/27/2019)
Smoothed AIC (S-AIC) and Smoothed BIC (S-BIC) are very widely used in mo...

Linear Regression with Distributed Learning: A Generalization Error Perspective (01/22/2021)
Distributed learning provides an attractive framework for scaling the le...

Selective Inference with Distributed Data (01/15/2023)
Nowadays, big datasets are spread over many machines which compute in pa...

Analyzing statistical and computational tradeoffs of estimation procedures (06/25/2015)
The recent explosion in the amount and dimensionality of data has exacer...
