Understanding the Benefits of Hardware-Accelerated Communication in Model-Serving Applications

05/04/2023
by Walid A. Hanafy, et al.

It is commonly assumed that the end-to-end networking performance of edge offloading is dictated purely by the network connectivity between end devices and edge computing facilities, where ongoing innovation in 5G/6G networking can help. However, with the growing complexity of edge-offloaded computation and dynamic load-balancing requirements, an offloaded task often traverses a multi-stage pipeline that spans multiple compute nodes and proxies interconnected by a dedicated network fabric within a given edge computing facility. As the latest hardware-accelerated transport technologies, such as RDMA and GPUDirect RDMA, are adopted to build such network fabrics, a thorough understanding is needed of their full potential in the context of computation offload, and of how factors such as GPU scheduling and the characteristics of the computation affect the net performance gain these technologies can deliver. This paper unveils detailed insights into the latency overhead in typical machine learning (ML)-based computation pipelines and analyzes the potential benefits of adopting hardware-accelerated communication. To this end, we build a model-serving framework that supports various communication mechanisms. Using the framework, we identify performance bottlenecks in state-of-the-art model-serving pipelines and show how hardware-accelerated communication can alleviate them. For example, we show that GPUDirect RDMA can save 15–50% of model-serving latency, which amounts to 70–160 ms.
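The core mechanism behind the GPUDirect RDMA savings described above is that the NIC reads and writes GPU memory directly, skipping the host bounce buffer that conventional transports require. Below is a minimal sketch, assuming a CUDA-capable host with libibverbs and the nvidia-peermem (or legacy nv_peer_mem) kernel module loaded, of registering a GPU buffer as an RDMA memory region; the device index, buffer size, and access flags are illustrative choices, not details taken from the paper's framework.

```c
/* Minimal sketch: registering a GPU buffer for GPUDirect RDMA with
 * libibverbs. Requires the nvidia-peermem kernel module so that
 * ibv_reg_mr() can pin device memory. Error handling is abbreviated. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

#define BUF_SIZE (1 << 20)  /* 1 MiB, e.g. an inference input tensor */

int main(void) {
    /* Open the first RDMA device and allocate a protection domain. */
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate the buffer directly in GPU memory. */
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, BUF_SIZE) != cudaSuccess) { return 1; }

    /* With GPUDirect RDMA, the device pointer is registered like host
     * memory; the NIC then moves data to/from GPU memory over PCIe
     * without staging through a host bounce buffer. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr(GPU buffer)"); return 1; }
    printf("registered GPU MR: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    /* ... exchange the rkey/address with the peer and post RDMA work
     * requests as in a conventional host-memory RDMA application ... */

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

Once the region is registered, the rkey and buffer address are exchanged with the peer and RDMA operations proceed exactly as with host memory, which is what lets a serving pipeline move tensors between stages without extra host-side copies.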


Related research

06/29/2022  Design and Optimization of Aerial-Aided Multi-Access Edge Computing towards 6G
Ubiquity in network coverage is one of the main features of 5G and is ex...

02/12/2019  Distributed and Application-aware Task Scheduling in Edge-clouds
Edge computing is an emerging technology which places computing at the e...

12/05/2018  InferLine: ML Inference Pipeline Composition Framework
The dominant cost in production machine learning workloads is not traini...

07/04/2022  Oakestra white paper: An Orchestrator for Edge Computing
Edge computing seeks to enable applications with strict latency requirem...

03/29/2021  How Far Can We Go in Compute-less Networking: Computation Correctness and Accuracy
Emerging applications such as augmented reality and the tactile Internet...

12/22/2021  SOLIS – The MLOps journey from data acquisition to actionable insights
Machine Learning operations is unarguably a very important and also one ...
