A Parallel Distributed Algorithm for the Power SVD Method

08/13/2021
by Jiaying Li, et al.

In this work, we study how to implement a distributed algorithm for the power method in a parallel manner. The existing distributed power method usually updates the eigenvectors sequentially, which has two obvious disadvantages: 1) to compute the h-th eigenvector, it must wait for the results of the previous (h-1) eigenvectors, delaying the acquisition of all eigenvalues; 2) computing each eigenvector requires a certain amount of information exchange among neighboring nodes at every power iteration, which can become unbearable when the number of eigenvectors or the number of nodes is large. This motivates us to propose a parallel distributed power method, which computes all the eigenvectors simultaneously at each power iteration so that more information is exchanged in a single round of communication. We are particularly interested in the distributed power method for both eigenvalue decomposition (EVD) and singular value decomposition (SVD), wherein the distributed process proceeds via a gossip algorithm. It can be shown that, under the same conditions, the communication cost of the gossip-based parallel method is only 1/H times that of its sequential counterpart, where H is the number of eigenvectors to be computed, while the convergence time and error performance of the proposed parallel method are both comparable to those of the sequential method.
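The core idea above — applying the power iteration to all H eigenvector estimates at once, with a single averaging (gossip) round per iteration instead of one per eigenvector — can be illustrated with a minimal sketch. The setup below is an assumption for illustration, not the paper's implementation: each of N nodes holds a local data slice A_i, the target is the top-H eigenvectors of the average covariance, and the gossip consensus step is replaced by an exact mean across nodes for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of N nodes holds a local data slice A_i;
# the goal is the top-H eigenpairs of C = (1/N) * sum_i A_i^T A_i.
N, d, H = 4, 8, 3
A = [rng.standard_normal((20, d)) for _ in range(N)]
C = sum(a.T @ a for a in A) / N

# Shared initial guess: H orthonormal columns.
Q = np.linalg.qr(rng.standard_normal((d, H)))[0]

for _ in range(300):
    # Each node applies its local matrix to ALL H columns at once...
    local = [a.T @ (a @ Q) for a in A]
    # ...and one averaging round per iteration aggregates the N local
    # products (standing in for a gossip consensus step).
    Z = sum(local) / N
    # Orthonormalize so the H columns do not all collapse onto the
    # dominant eigenvector.
    Q, _ = np.linalg.qr(Z)

# Rayleigh quotients give the eigenvalue estimates.
eigvals = np.diag(Q.T @ C @ Q)
```

A sequential scheme would instead run this loop H times, once per eigenvector with deflation, paying H times as many communication rounds — which is the 1/H cost reduction the abstract refers to.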


