AI Total: Analyzing Security ML Models with Imperfect Data in Production

10/13/2021
by Awalin Sopan, et al.

Development of new machine learning models is typically done on manually curated data sets, making those data sets unsuitable for evaluating the models' performance during operations, where the evaluation needs to be performed automatically on incoming streams of new data. Unfortunately, pure reliance on a fully automatic pipeline for monitoring model performance makes it difficult to understand whether any observed performance issues are due to model performance, pipeline issues, emerging data distribution biases, or some combination of the above. With this in mind, we developed a web-based visualization system that allows users to quickly gather headline performance numbers while maintaining confidence that the underlying data pipeline is functioning properly. It also enables users to immediately observe the root cause of an issue when something goes wrong. We introduce a novel way to analyze performance under data issues using a data coverage equalizer. We describe the various modifications and additional plots, filters, and drill-downs that we added on top of the standard evaluation metrics typically tracked in machine learning (ML) applications, and walk through some real-world examples that proved valuable for introspecting our models.
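The abstract does not detail how the system separates model regressions from pipeline or coverage problems, but the core idea of tracking headline metrics alongside data coverage can be sketched. Below is a minimal, hypothetical Python illustration (not the authors' implementation): for each time bucket of incoming evaluation data, it computes precision and recall and also compares the number of records actually received against the count the pipeline was expected to deliver, flagging buckets whose coverage is too low to trust the metrics.

```python
def evaluate_buckets(buckets, expected_counts, min_coverage=0.9):
    """Compute per-bucket headline metrics plus a data-coverage flag.

    buckets:         {bucket_id: [(y_true, y_pred), ...]} with binary labels
    expected_counts: {bucket_id: number of records the pipeline should have
                      delivered for that bucket}
    min_coverage:    fraction of expected records below which metrics for a
                     bucket are flagged as untrustworthy
    """
    report = {}
    for bucket_id, pairs in buckets.items():
        tp = sum(1 for t, p in pairs if t == 1 and p == 1)
        fp = sum(1 for t, p in pairs if t == 0 and p == 1)
        fn = sum(1 for t, p in pairs if t == 1 and p == 0)
        # None rather than 0.0 when a metric is undefined for the bucket.
        precision = tp / (tp + fp) if (tp + fp) else None
        recall = tp / (tp + fn) if (tp + fn) else None
        expected = expected_counts.get(bucket_id, 0)
        coverage = len(pairs) / expected if expected else 0.0
        report[bucket_id] = {
            "precision": precision,
            "recall": recall,
            "coverage": coverage,
            # A metric drop in a low-coverage bucket points at the pipeline
            # (missing data), not necessarily at the model.
            "low_coverage": coverage < min_coverage,
        }
    return report
```

A dashboard built on such a report could then render the low-coverage flag next to each metric, so a drop in precision in an under-delivered bucket is read as a pipeline issue rather than a model regression.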


