Model-driven Cluster Resource Management for AI Workloads in Edge Clouds

01/18/2022
by Qianlin Liang, et al.

Since emerging edge applications such as Internet of Things (IoT) analytics and augmented reality have tight latency constraints, hardware AI accelerators have recently been proposed to speed up the deep neural network (DNN) inference run by these applications. Resource-constrained edge servers and accelerators tend to be multiplexed across multiple IoT applications, introducing the potential for performance interference between latency-sensitive workloads. In this paper, we design analytic models that capture the performance of DNN inference workloads on shared edge accelerators, such as GPUs and edgeTPUs, under different multiplexing and concurrency behaviors. After validating our models using extensive experiments, we use them to design cluster resource management algorithms that intelligently place multiple applications on edge accelerators while respecting their latency constraints. We implement a prototype of our system in Kubernetes and show that it can host 2.3X more DNN applications in heterogeneous multi-tenant edge clusters with no latency violations compared to traditional knapsack-based hosting algorithms.
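To make the placement idea concrete, the sketch below pairs a deliberately simplified, hypothetical interference model, in which per-request latency grows linearly with the number of models sharing an accelerator, with a greedy admission policy that only places an application where neither it nor any existing tenant would miss its deadline. The `App` and `Accelerator` classes, the linear slope parameters, and the example workloads are all illustrative assumptions, not the paper's actual analytic models or algorithms.

```python
# A minimal, hypothetical sketch; NOT the paper's actual models or algorithms.
# It assumes a linear interference model, latency(n) = base + slope * (n - 1),
# for n models time-sharing one accelerator, and a greedy latency-aware
# admission policy built on top of it.
from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    base_latency_ms: float  # assumed isolated (single-tenant) inference latency
    deadline_ms: float      # the application's latency constraint

@dataclass
class Accelerator:
    name: str
    slope_ms: float         # assumed per-co-tenant latency penalty
    tenants: list = field(default_factory=list)

    def predicted_latency(self, app, extra_tenants=1):
        # Hypothetical analytic model: latency grows linearly with the number
        # of co-located models (extra_tenants=1 models "after admitting one more").
        n = len(self.tenants) + extra_tenants
        return app.base_latency_ms + self.slope_ms * (n - 1)

def place(apps, accelerators):
    """Greedily admit each app onto the first accelerator where neither it nor
    any already-placed tenant would exceed its deadline; otherwise reject."""
    rejected = []
    for app in apps:
        for acc in accelerators:
            fits_new = acc.predicted_latency(app) <= app.deadline_ms
            fits_old = all(acc.predicted_latency(t) <= t.deadline_ms
                           for t in acc.tenants)
            if fits_new and fits_old:
                acc.tenants.append(app)
                break
        else:
            rejected.append(app)  # no accelerator can host it safely
    return rejected

if __name__ == "__main__":
    apps = [App("detector", 8.0, 25.0),
            App("classifier", 5.0, 15.0),
            App("tracker", 12.0, 30.0)]
    cluster = [Accelerator("gpu-0", slope_ms=4.0),
               Accelerator("tpu-0", slope_ms=2.5)]
    rejected = place(apps, cluster)
    for acc in cluster:
        print(acc.name, "->", [t.name for t in acc.tenants])
    print("rejected:", [a.name for a in rejected])
```

A knapsack-style packer would instead maximize the number of hosted models against a capacity budget alone, without an interference-aware latency check; that is broadly the kind of baseline the 2.3X comparison above refers to.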

Related research

HERALD: Optimizing Heterogeneous DNN Accelerators for Edge Devices (09/13/2019)
Recent advances in deep neural networks (DNNs) have made DNNs the backbo...

Scheduling Inference Workloads on Distributed Edge Clusters with Reinforcement Learning (01/31/2023)
Many real-time applications (e.g., Augmented/Virtual Reality, cognitive ...

Real-time Hyper-Dimensional Reconfiguration at the Edge using Hardware Accelerators (06/10/2022)
In this paper we present Hyper-Dimensional Reconfigurable Analytics at t...

SensiX++: Bringing MLOps and Multi-tenant Model Serving to Sensory Edge Devices (09/08/2021)
We present SensiX++ - a multi-tenant runtime for adaptive model executio...

iGniter: Interference-Aware GPU Resource Provisioning for Predictable DNN Inference in the Cloud (11/03/2022)
GPUs are essential to accelerating the latency-sensitive deep neural net...

Optimising Resource Management for Embedded Machine Learning (05/08/2021)
Machine learning inference is increasingly being executed locally on mob...

LaSS: Running Latency Sensitive Serverless Computations at the Edge (04/29/2021)
Serverless computing has emerged as a new paradigm for running short-liv...
