Monitoring and explainability of models in production

by Janis Klaise et al.

The machine learning lifecycle extends beyond the deployment stage. Monitoring deployed models is crucial for the continued provision of high-quality machine-learning-enabled services. Key areas include model performance and data monitoring, detecting outliers and data drift using statistical techniques, and providing explanations of historical predictions. We discuss the challenges to successful implementation of solutions in each of these areas, with recent examples of production-ready solutions using open-source tools.
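As a minimal illustrative sketch of the drift-detection theme above (not the paper's own method or library), a two-sample Kolmogorov-Smirnov test can compare a reference window of training-time data against incoming production data; the synthetic Gaussian samples and the 0.05 threshold here are assumptions for demonstration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference data collected at training time.
reference = rng.normal(loc=0.0, scale=1.0, size=1000)

# Incoming production data whose mean has shifted.
production = rng.normal(loc=0.5, scale=1.0, size=1000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution has drifted away from the reference.
statistic, p_value = ks_2samp(reference, production)
drift_detected = p_value < 0.05
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}, drift={drift_detected}")
```

In a multivariate setting this test would typically be applied per feature with a multiple-testing correction, or replaced with a dedicated drift detector.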



A monitoring framework for deployed machine learning models with supply chain examples

Actively monitoring machine learning models during production operations...

Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models

With the increasing adoption of machine learning (ML) models and systems...

Overton: A Data System for Monitoring and Improving Machine-Learned Products

We describe a system called Overton, whose main design goal is to suppor...

AI Enabled Data Quality Monitoring with Hydra

Data quality monitoring is critical to all experiments impacting the qua...

Concept for a Technical Infrastructure for Management of Predictive Models in Industrial Applications

With the increasing number of created and deployed prediction models and...

Farmer's Assistant: A Machine Learning Based Application for Agricultural Solutions

Farmers face several challenges when growing crops like uncertain irriga...
