Towards Inference Delivery Networks: Distributing Machine Learning with Optimality Guarantees

05/06/2021
by   Tareq Si Salem, et al.

We present the novel idea of inference delivery networks (IDNs): networks of computing nodes that coordinate to satisfy inference requests while achieving the best trade-off between latency and accuracy. IDNs bridge the dichotomy between device and cloud execution by integrating inference delivery across the tiers of the infrastructure continuum (access, edge, regional data center, cloud). We propose a distributed dynamic policy for ML model allocation in an IDN, by which each node periodically updates its local set of inference models based on requests observed during the recent past, plus limited information exchange with its neighbor nodes. Our policy offers strong performance guarantees in an adversarial setting and improves over greedy heuristics of similar complexity in realistic scenarios.
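To make the allocation policy concrete, here is a minimal toy sketch of a node that tracks request counts over an observation period and periodically re-ranks models using its own demand plus discounted demand reported by neighbors. The greedy top-k update rule, the `capacity` and `neighbor_weight` parameters, and the `IDNNode` class are all illustrative assumptions, not the paper's actual policy (which carries adversarial guarantees this sketch does not):

```python
from collections import Counter

class IDNNode:
    """Toy node in an inference delivery network.

    NOTE: the greedy "keep the most-requested models" rule below is a
    hypothetical stand-in for illustration, not the paper's policy.
    """

    def __init__(self, capacity, neighbor_weight=0.5):
        self.capacity = capacity             # number of models this node can host
        self.neighbor_weight = neighbor_weight  # discount on neighbor-reported demand
        self.local_counts = Counter()        # requests seen in the current period
        self.cached_models = set()

    def observe(self, model_id):
        """Record one inference request observed at this node."""
        self.local_counts[model_id] += 1

    def update_allocation(self, neighbor_counts):
        """Periodic update: score models by local demand plus discounted
        neighbor demand, keep the top `capacity`, and reset the window."""
        scores = Counter(self.local_counts)
        for counts in neighbor_counts:
            for model_id, c in counts.items():
                scores[model_id] += self.neighbor_weight * c
        self.cached_models = {m for m, _ in scores.most_common(self.capacity)}
        self.local_counts.clear()            # start a new observation period
        return self.cached_models

# Usage: one node with room for 2 models, one neighbor report.
node = IDNNode(capacity=2)
for m in ["resnet50", "resnet50", "bert", "yolo"]:
    node.observe(m)
cached = node.update_allocation(neighbor_counts=[{"bert": 3}])
print(sorted(cached))  # bert scores 1 + 0.5*3 = 2.5, resnet50 scores 2
```

The neighbor exchange here is just a list of request counters; in an actual IDN the exchanged information would be limited and the update rule designed for the adversarial setting the paper analyzes.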


