TensorFlow-Serving: Flexible, High-Performance ML Serving

12/17/2017
by Christopher Olston, et al.

We describe TensorFlow-Serving, a system for serving machine learning models inside Google, which is also available in the cloud and via open source. It is extremely flexible in the types of ML platforms it supports and in the ways it integrates with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS^2.
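TensorFlow Serving exposes loaded models over a REST API (by default on port 8501, at `/v1/models/<name>:predict`, with an optional `/versions/<v>` path segment for pinning a specific version). As a minimal sketch, the helper below builds the URL and JSON body for such a predict call; the model name `half_plus_two` and host/port are illustrative assumptions, not part of the paper.

```python
import json

def predict_request(model_name, instances, host="localhost", port=8501, version=None):
    """Build the URL and JSON body for a TensorFlow Serving REST predict call.

    Serving's REST API accepts POST /v1/models/<name>[/versions/<v>]:predict
    with a JSON body of the form {"instances": [...]}. Host and port here are
    illustrative defaults (8501 is Serving's default REST port).
    """
    version_path = f"/versions/{version}" if version is not None else ""
    url = f"http://{host}:{port}/v1/models/{model_name}{version_path}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# Example: request predictions for three inputs from a hypothetical model.
url, body = predict_request("half_plus_two", [[1.0], [2.0], [5.0]])
# POST `body` to `url` with Content-Type: application/json,
# e.g. requests.post(url, data=body).
```

The versioned path is what lets a client keep hitting an older model version while a newer one is canaried, which is the kind of train-to-serve version handoff the abstract refers to.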


Related research

research 03/04/2021
Serverless Model Serving for Data Science
Machine learning (ML) is an important part of modern data science applic...

research 02/29/2020
FlexServe: Deployment of PyTorch Models as Flexible REST Endpoints
The integration of artificial intelligence capabilities into modern soft...

research 10/26/2022
Desiderata for next generation of ML model serving
Inference is a significant part of ML software infrastructure. Despite t...

research 11/27/2018
DLHub: Model and Data Serving for Science
While the Machine Learning (ML) landscape is evolving rapidly, there has...

research 09/20/2021
Scaling TensorFlow to 300 million predictions per second
We present the process of transitioning machine learning models to the T...

research 10/23/2018
Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs
Google's Cloud TPUs are a promising new hardware architecture for machin...

research 04/04/2023
TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings
In response to innovations in machine learning (ML) models, production w...
