
Efficient Protocols for Distributed Classification and Optimization

by Hal Daumé III, et al.

In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daumé III et al., 2012) proposes a general model that bounds the communication required for learning classifiers while allowing for training error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses O(d^2 · 1/ε) words of communication to classify distributed data in arbitrary dimension d, ε-optimally. This readily extends to classification over k nodes with O(kd^2 · 1/ε) words of communication. Our proposed protocol is simple to implement and considerably more efficient than the compared baselines, as demonstrated by our empirical results. In addition, we illustrate general algorithm-design paradigms for efficient learning over distributed data. We show how to solve fixed-dimensional and high-dimensional linear programs efficiently in a distributed setting where constraints may be distributed across nodes. Since many learning problems can be viewed as convex optimization problems whose constraints are generated by individual points, this models many typical distributed learning scenarios. Our techniques make use of a novel connection to multipass streaming, as well as an adaptation of the multiplicative-weight-update framework to a more general distributed setting. As a consequence, our methods extend to the wide range of problems solvable using these techniques.
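To illustrate the multiplicative-weight-update idea behind the protocol, here is a minimal single-machine sketch: maintain a distribution over the labeled points, repeatedly propose the hyperplane induced by the weighted points, and multiplicatively upweight points with small or negative margin. This is only an illustrative reconstruction of the general MWU template, not the paper's two-party protocol or its communication scheme; the function name, learning rate, and stopping rule are assumptions.

```python
import numpy as np

def mwu_separator(X, y, rounds=200, eta=0.5):
    """Illustrative MWU sketch (not the paper's protocol): keep a
    distribution w over labeled points (X, y in {-1, +1}); each round,
    propose the weighted-mean hyperplane and boost violated points."""
    n, d = X.shape
    w = np.ones(n) / n                 # distribution over data points
    h = np.zeros(d)
    for _ in range(rounds):
        h = (w * y) @ X                # candidate hyperplane from weighted points
        margins = y * (X @ h)          # signed margins under h
        if np.all(margins > 0):        # h separates the data; stop
            return h
        # multiplicative update: points with low/negative margin gain weight
        w *= np.exp(-eta * np.clip(margins, -1.0, 1.0))
        w /= w.sum()
    return h

# Toy linearly separable data (illustrative only)
X = np.array([[1., 3.], [2., 1.], [-1., -2.], [-2., -1.]])
y = np.array([1., 1., -1., -1.])
h = mwu_separator(X, y)
```

In the distributed setting described above, the analogous exchange would send only the candidate hyperplane and a bounded number of violated points between parties each round, which is where the communication bound comes from.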


