Max-Diversity Distributed Learning: Theory and Algorithms

12/19/2018
by Yong Liu, et al.

We study the risk performance of distributed learning for regularized empirical risk minimization, establishing a fast convergence rate that substantially improves the error analysis of existing divide-and-conquer based distributed learning. An interesting theoretical finding is that the larger the diversity among the local estimates, the tighter the risk bound. This analysis motivates us to devise an effective max-diversity distributed learning algorithm (MDD). Experimental results show that MDD outperforms existing divide-and-conquer methods, at the cost of slightly more computation time. Together, the theoretical analysis and the empirical results demonstrate that the proposed MDD is sound and effective.
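The abstract does not spell out the MDD procedure, but the divide-and-conquer baseline it improves on is standard: randomly partition the data, fit a regularized local estimator on each block, and average the local estimates. Below is a minimal Python sketch of that baseline together with a hypothetical max-diversity variant illustrating the abstract's finding that more diverse local estimates yield a tighter risk bound. The choice of KernelRidge as the local learner, the prediction-variance diversity proxy, and the random-partition search are all illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_local_estimates(X, y, partition, alpha=1.0):
    # Divide-and-conquer ERM: one regularized local estimator per block.
    return [KernelRidge(alpha=alpha, kernel="rbf").fit(X[idx], y[idx])
            for idx in partition]

def averaged_predict(models, X_test):
    # The global estimate is the uniform average of the local estimates.
    return np.mean([m.predict(X_test) for m in models], axis=0)

def diversity_score(models, X_val):
    # Hypothetical diversity proxy: mean variance of the local
    # predictions on held-out points (the abstract does not define
    # the paper's diversity measure).
    preds = np.stack([m.predict(X_val) for m in models])  # (m, n_val)
    return preds.var(axis=0).mean()

def max_diversity_fit(X, y, X_val, m=4, n_trials=10, alpha=1.0, seed=0):
    # Hypothetical MDD-style search: among several random partitions,
    # keep the one whose local estimates are most diverse, reflecting
    # the finding that larger diversity tightens the risk bound.
    rng = np.random.default_rng(seed)
    best_models, best_div = None, -np.inf
    for _ in range(n_trials):
        partition = np.array_split(rng.permutation(len(X)), m)
        models = fit_local_estimates(X, y, partition, alpha)
        div = diversity_score(models, X_val)
        if div > best_div:
            best_models, best_div = models, div
    return best_models
```

Running max_diversity_fit and then averaged_predict costs roughly n_trials times the baseline's training time, which is consistent with the abstract's remark that MDD needs "a bit more time" than plain divide-and-conquer.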
