ApproxIFER: A Model-Agnostic Approach to Resilient and Robust Prediction Serving Systems

09/20/2021
by Mahdi Soleymani, et al.

Due to the surge of cloud-assisted AI services, the problem of designing resilient prediction serving systems that can effectively cope with stragglers and failures while minimizing response delays has attracted much interest. The common approach for tackling this problem is replication, which assigns the same prediction task to multiple workers. This approach, however, is very inefficient and incurs significant resource overheads. Hence, a learning-based approach known as the parity model (ParM) has recently been proposed, which learns models that can generate parities for a group of predictions in order to reconstruct the predictions of the slow or failed workers. While this learning-based approach is more resource-efficient than replication, it is tailored to the specific model hosted by the cloud, is suitable only for a small number of queries (typically fewer than four), and tolerates very few stragglers (mostly one). Moreover, ParM does not handle Byzantine adversarial workers. We propose a different approach, named Approximate Coded Inference (ApproxIFER), that does not require training any parity models; it is therefore agnostic to the model hosted by the cloud and can be readily applied to different data domains and model architectures. Compared with earlier works, ApproxIFER can handle a general number of stragglers and scales significantly better with the number of queries. Furthermore, ApproxIFER is robust against Byzantine workers. Our extensive experiments on a large number of datasets and model architectures also show significant accuracy improvements of up to 58% over the parity model approaches.
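The core idea behind approximate coded inference can be sketched without the paper itself: encode the queries (not the model) with a rational interpolation scheme, let every worker run the unmodified hosted model on one coded query, and decode the desired predictions from whichever workers respond. The toy sketch below uses Berrut's rational interpolant on Chebyshev nodes, in the spirit of the authors' approximate coded computing line of work; the model stand-in `f`, the node choices, the dimensions, and all helper names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def cheb1(n):
    """Chebyshev points of the first kind on [-1, 1]."""
    return np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

def cheb2(n):
    """Chebyshev points of the second kind on [-1, 1]."""
    return np.cos(np.arange(n) * np.pi / (n - 1))

def berrut(nodes, values, x):
    """Berrut's rational interpolant through (nodes, values), evaluated at scalar x:
    r(x) = sum_j (w_j/(x - z_j)) v_j / sum_j (w_j/(x - z_j)), with w_j = (-1)^j."""
    d = x - nodes
    hit = np.isclose(d, 0.0)
    if hit.any():                           # x lands exactly on an interpolation node
        return values[int(np.argmax(hit))]
    c = (-1.0) ** np.arange(len(nodes)) / d
    return np.tensordot(c, values, axes=1) / c.sum()

rng = np.random.default_rng(0)
K, N, dim = 4, 7, 3                         # K queries, N workers, N - K straggler slack
f = lambda x: np.tanh(x)                    # stand-in for the hosted model (assumption)
queries = rng.normal(size=(K, dim))

alpha, beta = cheb1(K), cheb2(N)            # nodes for true queries vs. coded queries

# Encode: pass a rational interpolant through (alpha_k, query_k) and read it off
# at the beta nodes; each coded query mixes all K queries, no parity model needed.
coded = np.stack([berrut(alpha, queries, b) for b in beta])

outputs = f(coded)                          # every worker runs the *unmodified* model

alive = np.array([0, 1, 3, 4, 6])           # workers 2 and 5 straggle and are ignored

# Decode: interpolate the surviving (beta_i, output_i) pairs back at the alpha nodes.
recovered = np.stack([berrut(beta[alive], outputs[alive], a) for a in alpha])

# Recovery is approximate by design, and accurate for smooth models.
print(np.abs(recovered - f(queries)).max())
```

Because the workers apply the hosted model as-is to coded inputs, nothing in the pipeline depends on the model's architecture, which is what makes the approach model-agnostic; deploying more than the minimum number of workers buys straggler slack, and further redundancy among responses is what the paper leverages for robustness against Byzantine workers.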

