Parity Models: A General Framework for Coding-Based Resilience in ML Inference

05/02/2019
by Jack Kosaian, et al.

Machine learning models are becoming the primary workhorses for many applications. Production services deploy models through prediction serving systems that take in queries and return predictions by performing inference on machine learning models. To scale to high query rates, prediction serving systems are run on many machines in cluster settings, and are thus prone to slowdowns and failures that inflate tail latency and cause violations of strict latency targets. Current approaches to reducing tail latency are inadequate for the latency targets of prediction serving, incur high resource overhead, or are inapplicable to the computations performed during inference. We present ParM, a novel, general framework that draws on ideas from erasure coding and machine learning to achieve low-latency, resource-efficient resilience to slowdowns and failures in prediction serving systems. ParM encodes multiple queries together into a single parity query and performs inference on the parity query using a parity model. A decoder then uses the output of the parity model to reconstruct approximations of unavailable predictions. ParM uses neural networks to learn parity models that enable simple, fast encoders and decoders to reconstruct unavailable predictions for a variety of inference tasks, such as image classification, speech recognition, and object localization. We build ParM atop an open-source prediction serving system and show through extensive evaluation that ParM improves overall accuracy in the face of unavailability, at low latency, while using 2-4× fewer additional resources than replication-based approaches. ParM reduces the gap between 99.9th-percentile and median latency by up to 3.5× compared to approaches that use an equal amount of resources, while maintaining the same median latency.
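
The encode/infer/decode pipeline described above can be made concrete with a small sketch. The following is a minimal illustration, assuming the simple additive encoder and subtractive decoder that ParM pairs with a learned parity model; the PyTorch framing and the names base_model, parity_model, and K are illustrative assumptions, not ParM's actual API.

import torch
import torch.nn.functional as F

K = 2  # number of queries combined into one parity query (assumed for illustration)

def encode(queries):
    # Encoder: sum K queries into a single parity query.
    return torch.stack(queries).sum(dim=0)

def decode(parity_output, available_outputs):
    # Decoder: approximate the one unavailable prediction by subtracting
    # the K-1 available predictions from the parity model's output.
    return parity_output - torch.stack(available_outputs).sum(dim=0)

def parity_loss(parity_model, base_model, queries):
    # Training objective for the learned parity model: its output on the
    # parity query should approximate the sum of the base model's outputs,
    # so that the subtractive decoder can recover any one missing prediction.
    with torch.no_grad():
        target = torch.stack([base_model(x) for x in queries]).sum(dim=0)
    return F.mse_loss(parity_model(encode(queries)), target)

For example, if the server handling query x0 is slow or has failed while x1's prediction has arrived, decode(parity_model(encode([x0, x1])), [base_model(x1)]) yields an approximation of the missing prediction without waiting for the straggler.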

