Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation

11/14/2013
by Ohad Shamir, et al.

Many machine learning approaches are characterized by information constraints on how they interact with the training data. These include memory and sequential-access constraints (e.g., fast first-order methods for solving stochastic optimization problems); communication constraints (e.g., distributed learning); partial access to the underlying data (e.g., missing features and multi-armed bandits); and more. However, we currently have little understanding of how such information constraints fundamentally affect performance, independently of the semantics of the learning problem. For example, are there learning problems where any algorithm with a small memory footprint (or one that can use only a bounded number of bits from each example, or that operates under certain communication constraints) must perform worse than is possible without such constraints? In this paper, we describe how a single set of results implies positive answers to these questions in several different settings.
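To make the memory and sequential-access constraint concrete, here is a minimal sketch (not taken from the paper; the function name and setup are illustrative): a one-pass stochastic gradient method that estimates the mean of a data stream while storing only a single number, never revisiting past examples.

```python
import random

def one_pass_mean_sgd(stream):
    """Estimate the mean of a data stream under an O(1) memory budget:
    each example is seen once, used for a gradient step, then discarded."""
    w = 0.0
    for t, x in enumerate(stream, start=1):
        # SGD step on the squared loss (w - x)^2 / 2 with step size 1/t.
        w -= (1.0 / t) * (w - x)
    return w

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(10000)]
est = one_pass_mean_sgd(iter(data))
```

With step size 1/t this update reduces to the incremental running average, so the estimate matches the full-memory empirical mean; the question raised by the paper is for which problems such low-memory, single-pass algorithms must lose accuracy compared to unconstrained ones.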


Related research

04/05/2019 · Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits
Best arm identification (or, pure exploration) in multi-armed bandits is...

03/04/2018 · Detecting Correlations with Little Memory and Communication
We study the problem of identifying correlations in multivariate data, u...

02/15/2023 · Genetic multi-armed bandits: a reinforcement learning approach for discrete optimization via simulation
This paper proposes a new algorithm, referred to as GMAB, that combines ...

03/03/2020 · Bounded Regret for Finitely Parameterized Multi-Armed Bandits
We consider the problem of finitely parameterized multi-armed bandits wh...

04/11/2019 · Robust Coreset Construction for Distributed Machine Learning
Motivated by the need of solving machine learning problems over distribu...

12/08/2021 · SASG: Sparsification with Adaptive Stochastic Gradients for Communication-efficient Distributed Learning
Stochastic optimization algorithms implemented on distributed computing ...

11/07/2016 · Learning from Untrusted Data
The vast majority of theoretical results in machine learning and statist...
