On the Performance and Convergence of Distributed Stream Processing via Approximate Fault Tolerance

11/12/2018
by Zhinan Cheng, et al.

Fault tolerance is critical for distributed stream processing systems, yet achieving error-free fault tolerance often incurs substantial performance overhead. We present AF-Stream, a distributed stream processing system that addresses the trade-off between performance and accuracy in fault tolerance. AF-Stream builds on a notion called approximate fault tolerance, which mitigates backup overhead by issuing backups adaptively while ensuring that the errors upon failures are bounded with theoretical guarantees. The AF-Stream design provides an extensible programming model for incorporating general streaming algorithms and exposes only a few threshold parameters for configuring approximate fault tolerance. Furthermore, we formally prove that AF-Stream preserves the algorithm-specific accuracy of streaming algorithms and, in particular, the convergence guarantees of online learning. Experiments show that AF-Stream maintains high performance (compared to no fault tolerance) and high accuracy after multiple failures (compared to no failures) under various streaming algorithms.
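The core mechanism described above, deferring state backups until the deviation from the last backup reaches a configured error threshold, can be illustrated with a minimal sketch. The sketch below is a hypothetical illustration and not AF-Stream's actual API: the class name, the L1 divergence measure, and the `theta` parameter are assumptions made for clarity, standing in for the threshold parameters the paper describes.

```python
# Hypothetical sketch of threshold-triggered state backup in the spirit of
# approximate fault tolerance. Names (ApproxBackupOperator, theta,
# backup_store) are illustrative, not AF-Stream's actual interface.

class ApproxBackupOperator:
    """Wraps a streaming operator's state and backs it up only when the
    accumulated divergence since the last backup exceeds a threshold."""

    def __init__(self, initial_state, theta, backup_store):
        self.state = dict(initial_state)        # in-memory operator state
        self.last_backup = dict(initial_state)  # state as of the last backup
        self.theta = theta                      # user-configured error threshold
        self.backup_store = backup_store        # durable store (here, a plain dict)
        self.backup_store['state'] = dict(initial_state)

    def divergence(self):
        # L1 distance between the current state and the last backed-up state;
        # this is the error that recovering from the backup could introduce.
        keys = set(self.state) | set(self.last_backup)
        return sum(abs(self.state.get(k, 0.0) - self.last_backup.get(k, 0.0))
                   for k in keys)

    def update(self, key, delta):
        # Apply an incoming item to the local state.
        self.state[key] = self.state.get(key, 0.0) + delta
        # Issue a backup only when the bounded-error budget is exhausted.
        if self.divergence() > self.theta:
            self.backup_store['state'] = dict(self.state)
            self.last_backup = dict(self.state)

    def recover(self):
        # Upon failure, restore from the last backup; after every completed
        # update, the backup deviates from the live state by at most theta.
        self.state = dict(self.backup_store['state'])
        self.last_backup = dict(self.state)


# Example: a streaming counter that tolerates at most theta = 5 units of error.
store = {}
op = ApproxBackupOperator({}, theta=5.0, backup_store=store)
for k, d in [("a", 2.0), ("b", 2.0), ("a", 2.0), ("b", 1.0)]:
    op.update(k, d)
op.recover()  # simulated recovery: restored state differs from the lost one by <= theta
```

Under this assumed scheme, a larger threshold trades recovery accuracy for fewer backups, which is the performance-accuracy trade-off the abstract refers to.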


research ∙ 02/11/2021
Chiron: Optimizing Fault Tolerance in QoS-aware Distributed Stream Processing Jobs
Fault tolerance is a property which needs deeper consideration when deal...

research ∙ 04/05/2021
ECRM: Efficient Fault Tolerance for Recommendation Model Training via Erasure Coding
Deep-learning-based recommendation models (DLRMs) are widely deployed to...

research ∙ 08/03/2020
A Survey on the Evolution of Stream Processing Systems
Stream processing has been an active research field for more than 20 yea...

research ∙ 07/14/2019
Delivery, consistency, and determinism: rethinking guarantees in distributed stream processing
Consistency requirements for state-of-the-art stream processing systems ...

research ∙ 09/15/2023
Oobleck: Resilient Distributed Training of Large Models Using Pipeline Templates
Oobleck enables resilient distributed training of large DNN models with ...

research ∙ 06/05/2023
Better Write Amplification for Streaming Data Processing
Many current applications have to perform data processing in a streaming...

research ∙ 10/17/2018
Fault Tolerance in Iterative-Convergent Machine Learning
Machine learning (ML) training algorithms often possess an inherent self...
