
Adaptive Kernel Learning in Heterogeneous Networks
We consider the framework of learning over decentralized networks, where nodes observe unique, possibly correlated, observation streams. We focus on the case where agents learn a regression function that belongs to a reproducing kernel Hilbert space (RKHS). In this setting, a decentralized network aims to learn nonlinear statistical models that are optimal with respect to a global stochastic convex functional that aggregates data across the network, while each node has access only to its local data stream. We incentivize coordination while respecting network heterogeneity through the introduction of nonlinear proximity constraints. To solve this problem, we propose a functional variant of the stochastic primal-dual (Arrow-Hurwicz) method, which yields a decentralized algorithm. To handle the fact that the RKHS parameterization has complexity proportional to the iteration index, we project the primal iterates onto Hilbert subspaces that are greedily constructed from each node's observation sequence. The resulting proximal stochastic variant of Arrow-Hurwicz, dubbed Heterogeneous Adaptive Learning with Kernels (HALK), is shown to converge in expectation, in terms of both primal suboptimality and constraint violation, to a neighborhood whose radius depends on a given constant step-size selection. Simulations on a correlated spatiotemporal random field estimation problem validate our theoretical results, which are borne out in practice by networked oceanic sensing buoys estimating temperature and salinity from depth measurements.
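To make the primal-dual structure concrete, the following is a minimal single-neighbor-pair sketch of one such update, not the paper's HALK implementation: it assumes a Gaussian kernel, a scalar absolute-value proximity constraint, and a simple coherence-based merging rule in place of the paper's greedy subspace projection. All names (`KernelAgent`, `halk_step`, the step sizes, and the constraint tolerance) are illustrative choices, not quantities from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, bw=0.3):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * bw^2))."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * bw ** 2))

class KernelAgent:
    """One node's estimate f(x) = sum_j w_j k(d_j, x) over a sparse dictionary."""
    def __init__(self, bw=0.3, coherence_tol=0.95):
        self.bw = bw
        self.tol = coherence_tol
        self.points = []   # retained sample points d_j
        self.weights = []  # expansion coefficients w_j

    def predict(self, x):
        return sum(w * gaussian_kernel(d, x, self.bw)
                   for d, w in zip(self.points, self.weights))

    def sparse_add(self, x, coeff):
        # Coherence check: fold the update into an existing atom when the new
        # point is nearly indistinguishable from one already stored. This is a
        # crude stand-in for the paper's greedy subspace projection, serving the
        # same purpose: keeping the model order from growing with the iteration index.
        for j, d in enumerate(self.points):
            if gaussian_kernel(d, x, self.bw) > self.tol:
                self.weights[j] += coeff
                return
        self.points.append(np.asarray(x, dtype=float))
        self.weights.append(coeff)

def halk_step(agent_i, agent_j, x, y, lam, eta=0.1, gamma=0.05, tol=0.1):
    """One stochastic Arrow-Hurwicz step for node i against neighbor j.

    Primal: functional SGD on the local square loss plus the dual-weighted
            proximity term for the constraint |f_i(x) - f_j(x)| <= tol.
    Dual:   projected gradient ascent on the constraint violation, lam >= 0.
    """
    fi, fj = agent_i.predict(x), agent_j.predict(x)
    gap = fi - fj
    # The functional gradient of both terms at sample x is a kernel atom
    # centered at x; take a descent step and sparsify immediately.
    agent_i.sparse_add(x, -eta * ((fi - y) + lam * np.sign(gap)))
    # Dual ascent on g = |f_i(x) - f_j(x)| - tol, projected onto lam >= 0.
    return max(0.0, lam + gamma * (abs(gap) - tol))

# Two agents observing noisy samples of the same scalar field, coupled by the
# proximity constraint; each runs a symmetric step against its neighbor.
rng = np.random.default_rng(0)
a, b, lam = KernelAgent(), KernelAgent(), 0.0
for _ in range(500):
    x = rng.uniform(-1.0, 1.0, size=1)
    y = np.sin(3.0 * x[0]) + 0.05 * rng.standard_normal()
    lam = halk_step(a, b, x, y, lam)
    lam = halk_step(b, a, x, y, lam)
```

The point of the sketch is the memory behavior the abstract highlights: without `sparse_add`, each stochastic step would append one kernel atom per sample, so the representation would grow linearly with the iteration index; with the merging rule, the dictionary size stays bounded by the covering number of the input domain.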