Distributed Computing With the Cloud

09/27/2021
by Yehuda Afek, et al.

We investigate the effect of omnipresent cloud storage on distributed computing. We specify a network model with links of prescribed bandwidth that connect standard processing nodes, and, in addition, passive storage nodes. Each passive node represents a cloud storage system, such as Dropbox, Google Drive, etc. We study a few tasks in this model, assuming a single cloud node connected to all other nodes, which are connected to each other arbitrarily. We give implementations for the basic tasks of collaboratively writing to and reading from the cloud, and for more advanced applications such as matrix multiplication and federated learning. Our results show that utilizing node-cloud links as well as node-node links can considerably speed up computations, compared to the case where processors communicate either only through the cloud or only through the network links. We provide results for general directed graphs, and for graphs with “fat” links between processing nodes. For the general case, we provide optimal algorithms for uploading and downloading files using flow techniques. We use these primitives to derive algorithms for combining, where every processing node has an input value and the task is to compute a combined value under some given associative operator. In the case of fat links, we assume that links between processors are bidirectional and have high bandwidth, and we give near-optimal algorithms for any commutative combining operator (such as vector addition). For the task of matrix multiplication (or other non-commutative combining operators), where the inputs are ordered, we present sharp results in the simple “wheel” network, where processing nodes are arranged in a ring and are all connected to a single cloud node.
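To make the combining task concrete, below is a minimal, hedged Python sketch (not the paper's algorithm) of processing nodes that each hold an input vector and fold their inputs under an associative, commutative operator before placing the result on a cloud node. The function names, the naive schedule, and the dictionary standing in for cloud storage are illustrative assumptions, not constructs from the paper.

```python
# Toy illustration of the "combining" task: each processing node holds an
# input vector, and the goal is a single combined value under an associative,
# commutative operator (vector addition here), made available on the cloud.
# The schedule below (combine over node-node links, then upload once over a
# node-cloud link) is an assumed, naive strategy for illustration only.

from functools import reduce


def combine(inputs, op):
    """Fold the processors' inputs with an associative operator."""
    return reduce(op, inputs)


def vector_add(a, b):
    """Commutative combining operator: elementwise vector addition."""
    return [x + y for x, y in zip(a, b)]


# Each processing node i holds an input vector.
processor_inputs = [[1, 2], [3, 4], [5, 6], [7, 8]]

# Combine along node-node links, then "upload" the single combined value
# over the node-cloud link (a plain dict stands in for the cloud node).
cloud_storage = {}
cloud_storage["combined"] = combine(processor_inputs, vector_add)

print(cloud_storage["combined"])  # [16, 20]
```

In the paper's setting the interesting question is how to schedule such a combining computation over links of prescribed bandwidth, using both node-node and node-cloud links; the sketch only fixes the task being computed.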


