
TensorFlow-Serving: Flexible, High-Performance ML Serving

12/17/2017
by   Christopher Olston, et al.
Google

We describe TensorFlow-Serving, a system for serving machine learning models inside Google that is also available in the cloud and as open source. It is extremely flexible in the types of ML platforms it supports and in the ways it integrates with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS^2.
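In the open-source release, the serving path described above is typically exercised through TensorFlow Serving's REST predict endpoint. The sketch below only constructs such a request; the host, port, and model name (`half_plus_two`, the toy model from the TensorFlow Serving tutorials) are placeholders, and no server is contacted:

```python
import json


def build_predict_request(host, port, model_name, instances, version=None):
    """Construct the URL and JSON body for a TensorFlow Serving REST
    predict call: POST /v1/models/<name>[/versions/<v>]:predict.

    `instances` follows the row format of the TF Serving REST API,
    e.g. a list of input rows.
    """
    path = f"/v1/models/{model_name}"
    if version is not None:
        # Pin a specific model version; otherwise the server picks
        # the highest version it has loaded.
        path += f"/versions/{version}"
    url = f"http://{host}:{port}{path}:predict"
    body = json.dumps({"instances": instances})
    return url, body


# 8501 is the default REST port of the open-source model server.
url, body = build_predict_request("localhost", 8501, "half_plus_two", [[1.0], [2.0]])
```

The resulting `url` and `body` can be POSTed with any HTTP client; the server replies with a JSON object whose `predictions` field holds one output per input row.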


03/04/2021

Serverless Model Serving for Data Science

Machine learning (ML) is an important part of modern data science applic...
02/29/2020

FlexServe: Deployment of PyTorch Models as Flexible REST Endpoints

The integration of artificial intelligence capabilities into modern soft...
10/26/2022

Desiderata for next generation of ML model serving

Inference is a significant part of ML software infrastructure. Despite t...
11/27/2018

DLHub: Model and Data Serving for Science

While the Machine Learning (ML) landscape is evolving rapidly, there has...
09/20/2021

Scaling TensorFlow to 300 million predictions per second

We present the process of transitioning machine learning models to the T...
10/23/2018

Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs

Google's Cloud TPUs are a promising new hardware architecture for machin...
05/10/2022

Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures

With the advent of ubiquitous deployment of smart devices and the Intern...