Perseus: Characterizing Performance and Cost of Multi-Tenant Serving for CNN Models

12/05/2019
by Matthew LeMay, et al.

Deep learning models are increasingly used in end-user applications, supporting both novel features, such as facial recognition, and traditional ones, such as web search. To accommodate high inference throughput, it is common to host a single pre-trained Convolutional Neural Network (CNN) on dedicated cloud-based servers with hardware accelerators such as Graphics Processing Units (GPUs). However, GPUs can be orders of magnitude more expensive than traditional Central Processing Unit (CPU) servers, and these resources can be under-utilized under dynamic workloads, inflating serving costs. One way to alleviate this problem is to allow hosted models to share the underlying resources, an approach we refer to as multi-tenant inference serving. A key challenge is maximizing resource efficiency for multi-tenant serving given hardware with diverse characteristics, models each with its own response-time Service Level Agreement (SLA), and dynamic inference workloads. In this paper, we present Perseus, a measurement framework that provides the basis for understanding the performance and cost trade-offs of multi-tenant model serving. We implemented Perseus in Python atop a popular cloud inference server, the Nvidia TensorRT Inference Server. Leveraging Perseus, we evaluated the inference throughput and cost of serving various models and demonstrated that multi-tenant model serving can lead to up to a 12% cost reduction.
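To make the measurement methodology concrete, below is a minimal Python sketch of the kind of throughput/latency/cost loop such a framework performs. It is not Perseus itself: the endpoint path follows the KServe-v2 HTTP protocol used by Triton Inference Server (the successor to the TensorRT Inference Server), and the server URL, model name, input name and shape, and the hourly instance price are all illustrative assumptions that would need to match a real deployment.

```python
"""
Minimal inference throughput/latency/cost measurement loop (a sketch,
not the Perseus implementation).

Assumptions (not from the paper): the server exposes Triton's KServe-v2
HTTP endpoint POST /v2/models/<name>/infer, and the model takes a single
FP32 input of shape [1, 3, 224, 224]. Adjust URL, input name, shape,
and instance price for your deployment.
"""
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests

SERVER_URL = "http://localhost:8000/v2/models/resnet50/infer"  # hypothetical
NUM_REQUESTS = 200
CONCURRENCY = 8
INSTANCE_PRICE_PER_HOUR = 3.06  # e.g., a GPU cloud instance; assumption

# One fixed request body, reused by every worker. The v2 protocol
# expects a flat "data" list matching the declared shape.
PAYLOAD = {
    "inputs": [{
        "name": "input__0",            # model-specific; assumption
        "shape": [1, 3, 224, 224],
        "datatype": "FP32",
        "data": [0.0] * (3 * 224 * 224),
    }]
}

def one_inference(_):
    """Send a single request and return its latency in seconds."""
    start = time.perf_counter()
    resp = requests.post(SERVER_URL, json=PAYLOAD, timeout=30)
    resp.raise_for_status()
    return time.perf_counter() - start

def main():
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(one_inference, range(NUM_REQUESTS)))
    wall = time.perf_counter() - wall_start

    throughput = NUM_REQUESTS / wall  # inferences per second
    # Cost per million inferences at the assumed hourly instance price.
    cost_per_million = INSTANCE_PRICE_PER_HOUR / (throughput * 3600) * 1e6

    print(f"throughput: {throughput:.1f} inf/s")
    print(f"p50 latency: {statistics.median(latencies) * 1e3:.1f} ms")
    print(f"p99 latency: {latencies[int(0.99 * len(latencies))] * 1e3:.1f} ms")
    print(f"cost: ${cost_per_million:.2f} per million inferences")

if __name__ == "__main__":
    main()
```

The multi-tenant question the paper studies would be answered by running loops like this against several models co-located on the same server and checking each model's tail latency against its SLA; the sketch above covers only the single-endpoint measurement.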


