
TensorFlow-Serving: Flexible, High-Performance ML Serving

by Christopher Olston et al.

We describe TensorFlow-Serving, a system for serving machine learning models inside Google that is also available in the cloud and via open source. It is extremely flexible in the types of ML platforms it supports and in the ways it integrates with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS^2.
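For a sense of how clients talk to a deployed model, the open-source TensorFlow Serving release exposes a REST `:predict` endpoint of the form `/v1/models/<name>:predict` that accepts a JSON body with an `instances` list. The sketch below builds such a request; the host `localhost:8501` (the default REST port) and the model name `half_plus_two` are illustrative assumptions, and actually sending the request requires a running server.

```python
import json

def predict_request(host, model_name, instances, version=None):
    """Build the URL and JSON body for a TensorFlow Serving REST
    ``:predict`` call. ``host``, ``model_name``, and ``instances``
    are caller-supplied; ``version`` optionally pins a model version,
    otherwise the server picks the serveable version it is managing.
    """
    # An explicit version maps to /v1/models/<name>/versions/<v>:predict
    version_part = f"/versions/{version}" if version is not None else ""
    url = f"http://{host}/v1/models/{model_name}{version_part}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# Example: two single-feature instances for a toy regression model.
url, body = predict_request("localhost:8501", "half_plus_two", [[1.0], [2.0]])
print(url)   # http://localhost:8501/v1/models/half_plus_two:predict
print(body)  # {"instances": [[1.0], [2.0]]}
```

Sending `body` as an HTTP POST to `url` (e.g. with `urllib.request` or `requests`) would return a JSON object whose `predictions` field holds the model outputs, per the TF Serving REST API.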




Serverless Model Serving for Data Science

Machine learning (ML) is an important part of modern data science applic...

FlexServe: Deployment of PyTorch Models as Flexible REST Endpoints

The integration of artificial intelligence capabilities into modern soft...

Desiderata for next generation of ML model serving

Inference is a significant part of ML software infrastructure. Despite t...

DLHub: Model and Data Serving for Science

While the Machine Learning (ML) landscape is evolving rapidly, there has...

Scaling TensorFlow to 300 million predictions per second

We present the process of transitioning machine learning models to the T...

Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs

Google's Cloud TPUs are a promising new hardware architecture for machin...

Serving and Optimizing Machine Learning Workflows on Heterogeneous Infrastructures

With the advent of ubiquitous deployment of smart devices and the Intern...