COKE: Communication-Censored Kernel Learning for Decentralized Non-parametric Learning

01/28/2020
by Ping Xu, et al.

This paper studies the decentralized optimization and learning problem in which multiple interconnected agents aim to learn an optimal decision function defined over a reproducing kernel Hilbert space (RKHS) by jointly minimizing a global objective function, with access only to locally observed data. As a non-parametric approach, kernel learning faces a major challenge in distributed implementation: the decision variables of the local objective functions are data-dependent and of different sizes, and therefore cannot be optimized under the decentralized consensus framework without exchanging raw data among agents. To circumvent this challenge and preserve data privacy, we leverage the random feature (RF) approximation to map the large-volume data represented in the RKHS into a smaller RF space, which yields same-size local parameters that can be exchanged and enables the distributed agents to reach consensus on the decision function parameterized in the RF space. For fast convergence, we design an iterative algorithm for Decentralized Kernel Learning via the Alternating direction method of multipliers (DKLA). We further develop a COmmunication-censored KErnel learning (COKE) algorithm that reduces the communication load of DKLA by applying a communication-censoring strategy, which prevents an agent from transmitting at a given iteration unless its local update is deemed sufficiently informative. We provide theoretical results on the linear convergence guarantees and generalization performance of both DKLA and COKE, and conduct comprehensive tests on both synthetic and real datasets to verify the communication efficiency and learning effectiveness of COKE.
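To make the two key ingredients concrete, the sketch below illustrates (i) a random Fourier feature map that approximates a Gaussian kernel, so every agent works with a fixed-size parameter vector regardless of its local sample count, and (ii) a communication-censoring rule under which an agent broadcasts its iterate only when it has changed sufficiently since its last transmission. This is a minimal illustration, not the paper's DKLA/COKE updates: it substitutes a simple gradient-plus-consensus step for the full ADMM iteration, and the ring topology, step size, and decaying threshold are assumptions chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Random Fourier feature map approximating a Gaussian kernel ----
# z(x) = sqrt(2/D) * cos(W x + b), with W ~ N(0, 1/sigma^2) and b ~ U[0, 2*pi),
# so that z(x)^T z(x') approximates exp(-||x - x'||^2 / (2 sigma^2)).
def make_rf_map(d, D, sigma):
    W = rng.normal(scale=1.0 / sigma, size=(D, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return lambda X: np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# ---- Toy decentralized setup: J agents on a ring, each with local data ----
J, d, D, n_local, sigma, lam = 5, 2, 50, 40, 1.0, 1e-2
phi = make_rf_map(d, D, sigma)
neighbors = {j: [(j - 1) % J, (j + 1) % J] for j in range(J)}

data = []
for j in range(J):
    Xj = rng.normal(size=(n_local, d))
    yj = np.sin(Xj[:, 0]) + 0.1 * rng.normal(size=n_local)   # common nonlinear target
    data.append((phi(Xj), yj))                               # map once into the RF space

# ---- Censored decentralized consensus on the RF-space parameter theta ----
theta = np.zeros((J, D))          # local iterates (same size for every agent)
state = np.zeros((J, D))          # last value each agent actually broadcast
eta, rho, tau0 = 0.05, 0.9, 1.0   # step size, threshold decay, initial threshold
transmissions = 0

for k in range(200):
    tau = tau0 * rho ** k         # decaying censoring threshold (illustrative choice)
    # 1) Each agent transmits only if its iterate changed enough since its last broadcast.
    for j in range(J):
        if np.linalg.norm(theta[j] - state[j]) >= tau:
            state[j] = theta[j].copy()   # broadcast the new value to neighbors
            transmissions += 1
        # otherwise: censored -- neighbors keep using the stale state[j]
    # 2) Local gradient step on the ridge loss plus a consensus pull toward neighbor states.
    for j in range(J):
        Zj, yj = data[j]
        grad = Zj.T @ (Zj @ theta[j] - yj) / len(yj) + lam * theta[j]
        consensus = sum(theta[j] - state[i] for i in neighbors[j])
        theta[j] -= eta * (grad + 0.5 * consensus)

print(f"transmissions used: {transmissions} of {200 * J} possible")
```

In this toy run, the censoring test in step 1 is what distinguishes a COKE-style scheme from its uncensored counterpart: skipped broadcasts cost nothing in communication, and neighbors simply reuse the most recent transmitted value.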

Related research

08/04/2022 · QC-ODKLA: Quantized and Communication-Censored Online Decentralized Kernel Learning via Linearized ADMM
This paper focuses on online kernel learning over a decentralized networ...

11/29/2022 · A Decentralized Framework for Kernel PCA with Projection Consensus Constraints
This paper studies kernel PCA in a decentralized setting, where data are...

05/05/2023 · Decentralized diffusion-based learning under non-parametric limited prior knowledge
We study the problem of diffusion-based network learning of a nonlinear ...

09/06/2023 · Adaptive Consensus: A network pruning approach for decentralized optimization
We consider network-based decentralized optimization problems, where eac...

01/28/2023 · Decentralized Entropic Optimal Transport for Privacy-preserving Distributed Distribution Comparison
Privacy-preserving distributed distribution comparison measures the dist...

10/11/2017 · Decentralized Online Learning with Kernels
We consider multi-agent stochastic optimization problems over reproducin...

08/01/2019 · Adaptive Kernel Learning in Heterogeneous Networks
We consider the framework of learning over decentralized networks, where...
