Adaptation and learning over networks under subspace constraints

05/21/2019
by   Roula Nassif, et al.

This paper considers optimization problems over networks where agents have individual objectives to meet, or individual parameter vectors to estimate, subject to subspace constraints that require the objectives across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus optimization as a special case, and allows for more general task-relatedness models such as smoothness. While such formulations can be solved via projected gradient descent, the resulting algorithm is not distributed. Starting from the centralized solution, we propose an iterative and distributed implementation of the projection step, which runs in parallel with the stochastic gradient descent update. We establish that, for small step-sizes μ, the proposed distributed adaptive strategy leads to small estimation errors on the order of μ. We also examine steady-state performance. The results explicitly reveal the influence of the gradient noise, data characteristics, and subspace constraints on the network performance. They also show that, in the small step-size regime, the iterates generated by the distributed algorithm achieve the centralized steady-state performance. Finally, we apply the proposed strategy to distributed adaptive beamforming.
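To give a concrete picture of the scheme described above, the following is a minimal, hypothetical Python sketch for the consensus special case: each agent performs a stochastic (LMS-type) gradient step on its own streaming data, and a doubly stochastic combination matrix A, whose powers converge to the projector onto the consensus subspace, implements the projection step iteratively and in a distributed manner. The network size, topology, data model, step-size, and variable names are illustrative assumptions, not the paper's experimental setup.

```python
# Illustrative sketch (not the paper's code): distributed adaptation under a
# subspace constraint, specialized to the consensus subspace.
import numpy as np

rng = np.random.default_rng(0)

N, M, mu, iters = 10, 5, 0.01, 5000   # agents, parameter size, step-size, iterations
w_true = rng.standard_normal(M)       # common model the agents should agree on

# Combination matrix A: doubly stochastic and sparse (ring topology), so that
# A^i -> (1/N) * ones((N, N)), the projector onto the consensus subspace.
A = np.eye(N) * 0.5
for k in range(N):
    A[k, (k + 1) % N] = A[k, (k - 1) % N] = 0.25

w = np.zeros((N, M))                  # local estimates w_{k,i}
for _ in range(iters):
    psi = np.empty_like(w)
    for k in range(N):
        # Streaming data at agent k: d_k(i) = u_k(i)^T w_true + noise.
        u = rng.standard_normal(M)
        d = u @ w_true + 0.1 * rng.standard_normal()
        grad_hat = -(d - u @ w[k]) * u      # instantaneous (stochastic) gradient
        psi[k] = w[k] - mu * grad_hat       # adaptation step at agent k
    w = A @ psi                             # combination step: one local, in-network
                                            # iteration of the projection

print("max deviation from true model:", np.abs(w - w_true).max())
```

In this sketch, replacing the exact projection by a single multiplication with A at every iteration is what makes the update distributed: each agent only combines estimates from its immediate neighbors, while repeated applications of A over time drive the network toward the constraint subspace, consistent with the small-μ error behavior stated in the abstract.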

Related research:
- Adaptation and learning over networks under subspace constraints -- Part I: Stability Analysis (05/21/2019)
- Quantization for decentralized learning under subspace constraints (09/16/2022)
- Adaptation and learning over networks under subspace constraints -- Part II: Performance Analysis (06/01/2019)
- On the Performance of Exact Diffusion over Adaptive Networks (03/26/2019)
- Learning over Multitask Graphs -- Part I: Stability Analysis (05/22/2018)
- Learning over Multitask Graphs -- Part II: Performance Analysis (05/22/2018)
- Decentralized learning in the presence of low-rank noise (03/18/2022)
