Learning a Code: Machine Learning for Approximate Non-Linear Coded Computation

06/04/2018
by Jack Kosaian, et al.

Machine learning algorithms are typically run on large-scale, distributed compute infrastructure that routinely faces unavailabilities such as failures and temporary slowdowns. Adding redundant computation using coding-theoretic tools called "codes" is an emerging technique for alleviating the adverse effects of such unavailabilities. A code consists of an encoding function that proactively introduces redundant computation and a decoding function that reconstructs unavailable outputs from the available ones. Past work focuses on using codes to provide resilience for linear computations and specific iterative optimization algorithms. However, the computations performed by a variety of applications, including inference with state-of-the-art machine learning models such as neural networks, typically fall outside this realm. In this paper, we propose a learning-based approach to designing codes that can handle non-linear computations. We present carefully designed neural network architectures and a training methodology for learning encoding and decoding functions that produce approximate reconstructions of unavailable computation results. We present extensive experimental results demonstrating the effectiveness of the proposed approach: our learned codes accurately reconstruct 64-98% of the unavailable predictions from neural-network-based image classifiers on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. To the best of our knowledge, this work proposes the first learning-based approach for designing codes, and also presents the first coding-theoretic solution that can provide resilience for any non-linear (differentiable) computation. Our results show that learning can be an effective technique for designing codes, and that learned codes are a highly promising approach for bringing the benefits of coding to non-linear computations.
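To make the encoder/decoder roles described above concrete, here is a minimal PyTorch sketch of training a learned code around a fixed base computation F. It is an illustrative assumption, not the paper's exact architecture: the names (encoder, decoder, base_model, train_step), the choice k=2 with one parity, and the MLP sizes are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's design) of a learned code:
# an encoder produces one redundant "parity" input for k data inputs,
# and a decoder approximately reconstructs a missing output of a fixed
# base model F from the outputs that remain available.
import torch
import torch.nn as nn

k = 2          # number of data inputs protected by one parity
d_in = 784     # input dimension (e.g., a flattened MNIST image)
d_out = 10     # output dimension of the base model (e.g., class scores)

# Fixed, pretrained base computation F whose outputs we want to protect.
# A stand-in MLP here; in the paper F is a neural-network image classifier.
base_model = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                           nn.Linear(128, d_out))
for p in base_model.parameters():
    p.requires_grad = False  # only the code is trained, not F

# Encoder: maps the k data inputs to one redundant parity input.
encoder = nn.Sequential(nn.Linear(k * d_in, 256), nn.ReLU(),
                        nn.Linear(256, d_in))

# Decoder: reconstructs the missing output from the k + 1 output slots,
# with the unavailable slot zeroed out to mimic an erasure.
decoder = nn.Sequential(nn.Linear((k + 1) * d_out, 256), nn.ReLU(),
                        nn.Linear(256, d_out))

opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()))

def train_step(x):                      # x: (batch, k, d_in)
    parity = encoder(x.flatten(1))      # encoded redundant input
    outs = [base_model(x[:, i]) for i in range(k)] + [base_model(parity)]
    loss = 0.0
    for miss in range(k):               # simulate each single unavailability
        avail = [torch.zeros_like(o) if i == miss else o
                 for i, o in enumerate(outs)]
        recon = decoder(torch.cat(avail, dim=1))
        loss = loss + nn.functional.mse_loss(recon, outs[miss])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because F is differentiable, gradients flow through base_model(parity) back into the encoder even though F's own parameters stay frozen, which is what lets the encoder and decoder be trained jointly end to end.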


