LMStream: When Distributed Micro-Batch Stream Processing Systems Meet GPU

11/08/2021
by   Suyeon Lee, et al.

This paper presents LMStream, which ensures bounded latency while maximizing throughput on GPU-enabled micro-batch streaming systems. The main ideas behind LMStream's design are two novel mechanisms: (1) dynamic batching and (2) dynamic operation-level query planning. By controlling the micro-batch size, LMStream significantly reduces per-dataset latency because it does not buffer data unconditionally just to improve GPU utilization; instead, it bounds latency to an optimal value determined by the characteristics of the window operation used in the streaming application. Dynamically mapping each query to an execution device, based on data size and a dynamic device preference, improves both throughput and latency as much as possible. In addition, LMStream proposes a low-overhead method for optimizing cost-model parameters online, without interrupting real-time stream processing. We implemented LMStream on Apache Spark, which supports micro-batch stream processing. Compared to a previous throughput-oriented method, LMStream improved average latency by up to 70.7% while improving average throughput by up to 1.74x.
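The two core mechanisms lend themselves to a compact illustration. The sketch below shows, in plain Scala, how a latency bound can cap the micro-batch size and how a cost model can pick between CPU and GPU per operation. The linear cost model, its constants, and all names (CostModel, boundedBatchSize, chooseDevice) are assumptions made for illustration only; they are not LMStream's actual implementation or Spark's API.

// Hypothetical sketch of LMStream-style dynamic batching and device selection.
// The linear cost model and its constants are illustrative assumptions.
object LMStreamSketch {
  sealed trait Device
  case object CPU extends Device
  case object GPU extends Device

  // Assumed cost model: estimated time = fixed overhead + per-row cost * rows.
  // LMStream tunes such parameters online; here they are fixed constants.
  final case class CostModel(fixedOverheadMs: Double, perRowCostMs: Double) {
    def estimateMs(rows: Long): Double = fixedOverheadMs + perRowCostMs * rows
  }

  val cpuModel = CostModel(fixedOverheadMs = 1.0, perRowCostMs = 0.010)
  val gpuModel = CostModel(fixedOverheadMs = 8.0, perRowCostMs = 0.001) // high launch cost, cheap per row

  // Operation-level planning: pick the device the cost model predicts is faster.
  // Small batches go to the CPU (GPU launch overhead dominates); large ones to the GPU.
  def chooseDevice(rows: Long): Device =
    if (gpuModel.estimateMs(rows) < cpuModel.estimateMs(rows)) GPU else CPU

  // Dynamic batching: cap the micro-batch at the largest size whose estimated
  // processing time on the chosen device still fits the per-batch latency bound.
  def boundedBatchSize(pendingRows: Long, latencyBoundMs: Double): Long = {
    val model = if (chooseDevice(pendingRows) == GPU) gpuModel else cpuModel
    val maxRows = ((latencyBoundMs - model.fixedOverheadMs) / model.perRowCostMs).toLong
    math.max(1L, math.min(pendingRows, maxRows))
  }

  def main(args: Array[String]): Unit = {
    for (pending <- Seq(100L, 5000L, 200000L)) {
      val batch = boundedBatchSize(pending, latencyBoundMs = 50.0)
      println(s"pending=$pending -> batch=$batch on ${chooseDevice(batch)}")
    }
  }
}

The intuition this captures: the GPU's launch overhead makes small micro-batches cheaper on the CPU, its low per-row cost wins for large batches, and the latency bound limits how much buffering dynamic batching will ever do, rather than buffering unconditionally to feed the GPU.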

Related research

06/11/2023 · Scheduling of Intermittent Query Processing
Stream processing is usually done either on a tuple-by-tuple basis or in...

10/06/2020 · Move Fast and Meet Deadlines: Fine-grained Real-time Stream Processing with Cameo
Resource provisioning in multi-tenant stream processing systems faces th...

04/27/2021 · VID-WIN: Fast Video Event Matching with Query-Aware Windowing at the Edge for the Internet of Multimedia Things
Efficient video processing is a critical component in many IoMT applicat...

07/08/2022 · Zero-Shot Cost Models for Distributed Stream Processing
This paper proposes a learned cost estimation model for Distributed Stre...

05/11/2020 · Performance Modeling and Vertical Autoscaling of Stream Joins
Streaming analysis is widely used in cloud as well as edge infrastructur...

04/08/2019 · Scaling Stream Processing with Transactional State Management on Multicores
Transactional state management relieves users from managing state consis...

08/31/2023 · SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills
Large Language Model (LLM) inference consists of two distinct phases - p...
