Rethinking Streaming Machine Learning Evaluation

05/23/2022
by Shreya Shankar, et al.

While most work on evaluating machine learning (ML) models focuses on computing accuracy on batches of data, tracking accuracy alone in a streaming setting (i.e., unbounded, timestamp-ordered datasets) fails to appropriately identify when models are performing unexpectedly. In this position paper, we discuss how the nature of streaming ML problems introduces new real-world challenges (e.g., delayed arrival of labels) and recommend additional metrics to assess streaming ML performance.
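To make the delayed-label problem concrete, here is a minimal sketch (not the paper's proposed metrics) of a rolling-window accuracy tracker that only scores a prediction once its ground-truth label actually arrives; the class name DelayedLabelAccuracy, its methods, and the window size are hypothetical choices for illustration.

```python
from collections import deque

class DelayedLabelAccuracy:
    """Rolling-window accuracy that scores a prediction only when its
    (possibly delayed) label arrives. Illustrates why instantaneous
    accuracy alone can be misleading in a streaming setting."""

    def __init__(self, window_size=1000):
        self.window = deque(maxlen=window_size)  # recent correct/incorrect outcomes
        self.pending = {}                        # example_id -> predicted label, awaiting its label

    def log_prediction(self, example_id, y_pred):
        # The prediction is made now; its true label may arrive much later.
        self.pending[example_id] = y_pred

    def log_label(self, example_id, y_true):
        # Score the example only when its delayed label shows up.
        y_pred = self.pending.pop(example_id, None)
        if y_pred is not None:
            self.window.append(y_pred == y_true)

    def accuracy(self):
        # Accuracy over labelled examples in the current window; it says
        # nothing about the still-unlabelled predictions.
        return sum(self.window) / len(self.window) if self.window else None

    def unlabelled_count(self):
        # A complementary signal: how much recent traffic remains unscored.
        return len(self.pending)


# Usage: predictions stream in first; labels arrive with arbitrary delay.
tracker = DelayedLabelAccuracy(window_size=100)
tracker.log_prediction("req-1", 1)
tracker.log_prediction("req-2", 0)
tracker.log_label("req-1", 1)        # label for req-1 arrives later
print(tracker.accuracy())            # 1.0, yet req-2 is still unscored
print(tracker.unlabelled_count())    # 1
```

Tracking the count of unscored predictions alongside windowed accuracy is one simple way to surface the gap that label delay opens between measured and actual performance.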
