Cross Subspace Alignment Codes for Coded Distributed Batch Matrix Multiplication

09/30/2019
by Zhuqing Jia, et al.

The goal of coded distributed matrix multiplication (CDMM) is to efficiently multiply matrices A and B by distributing the computation across S servers (through a coding scheme), such that the responses from any R servers (R is called the recovery threshold) suffice for the user to recover AB. CDMM algorithms seek to optimize the tradeoff among six quantities of interest: recovery threshold, upload cost, download cost, encoding complexity, decoding complexity, and server computation complexity. Existing CDMM codes, such as Polynomial codes, MatDot codes, PolyDot codes, Generalized PolyDot codes, and Entangled Polynomial codes, all focus on multiplying a single matrix A with a single matrix B. Batch matrix multiplication of A_1,...,A_L with B_1,...,B_L to compute A_1B_1,...,A_LB_L can naturally be accomplished with CDMM codes by computing each product A_lB_l, l∈[L], separately. But is it possible to do significantly better? Somewhat surprisingly, this work shows that joint coding of the batch of matrices offers significant advantages over separate coding. To this end, Cross Subspace Alignment (CSA) codes are introduced, which code across the matrices in a batch instead of partitioning each individual matrix as done in existing CDMM codes. Given a recovery threshold R, CSA codes have the same server computation complexity per matrix multiplication as existing CDMM codes, but they offer a significant improvement over all existing CDMM codes in the tradeoff between upload and download costs. A corresponding improvement in the tradeoff between encoding and decoding complexity is also observed. The gain from batch processing is reminiscent of gains from multi-letterization in information theory, vector codes in network coding, and symbol extensions in interference alignment schemes.
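To make the idea of coding across a batch concrete, below is a minimal real-valued sketch in the style of a CSA code for L = 3 products (the actual codes operate over finite fields; the constants f_l, the evaluation points α_s, and all helper names are illustrative assumptions, not the paper's notation verbatim). Each server multiplies one pair of small coded matrices, and the answers from any R = 2L - 1 servers are decoded by inverting a Cauchy-Vandermonde system:

```python
import numpy as np

# Illustrative CSA-style batch code for L = 3 products A_l B_l.
# Recovery threshold R = 2L - 1 = 5; real arithmetic stands in for a finite field.
rng = np.random.default_rng(1)
L = 3
As = rng.standard_normal((L, 2, 3))          # batch of A matrices
Bs = rng.standard_normal((L, 3, 2))          # batch of B matrices

f = np.arange(1.0, L + 1)                    # distinct constants f_1, ..., f_L
S = 2 * L                                    # 6 servers (one straggler tolerated)
alpha = np.arange(L + 1.0, L + 1 + S)        # per-server evaluation points, disjoint from f
R = 2 * L - 1

def shares(a):
    """Coded shares sent to the server with evaluation point a."""
    delta = np.prod(f - a)                   # Delta(a) = prod_l (f_l - a)
    A_tilde = delta * sum(As[l] / (f[l] - a) for l in range(L))
    B_tilde = sum(Bs[l] / (f[l] - a) for l in range(L))
    return A_tilde, B_tilde

# Each server returns a single small matrix product of its two shares.
answers = {}
for s in range(S):
    A_tilde, B_tilde = shares(alpha[s])
    answers[s] = A_tilde @ B_tilde

# Partial fractions show the answer at point a equals
#   sum_l c_l A_l B_l / (f_l - a)  +  sum_{i=0}^{L-2} a^i N_i,
# where c_l = prod_{l' != l} (f_l' - f_l) and the N_i are unwanted interference
# terms aligned into L - 1 dimensions, so any R = 2L - 1 answers determine the
# batch of products via a Cauchy-Vandermonde linear system.
chosen = [0, 2, 3, 4, 5]                     # pretend server 1 straggled
a_sel = alpha[chosen]
G = np.hstack([1.0 / (f[None, :] - a_sel[:, None]),          # Cauchy block (R x L)
               np.vander(a_sel, N=L - 1, increasing=True)])  # Vandermonde block (R x (L-1))
stacked = np.stack([answers[s] for s in chosen]).reshape(R, -1)
u = np.linalg.solve(G, stacked)              # first L rows: c_l * vec(A_l B_l)
c = np.array([np.prod(np.delete(f, l) - f[l]) for l in range(L)])

products = [(u[l] / c[l]).reshape(2, 2) for l in range(L)]
for l in range(L):
    assert np.allclose(products[l], As[l] @ Bs[l])
```

Note how the download cost reflects the batch advantage: 2L - 1 small answers yield L products, i.e. just under 2 downloads per product, whereas coding each product separately with a comparable scheme would pay its full recovery threshold per product.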

