Underspecification Presents Challenges for Credibility in Modern Machine Learning
ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain.
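To make the failure mode concrete, here is a minimal sketch of the kind of comparison the abstract describes: two pipelines that are identical except for their random seed, with equivalent held-out accuracy in the training domain, re-evaluated under a domain shift where their behavior can diverge. This is an illustration only; the synthetic data, the spurious second feature, and the use of scikit-learn's MLPClassifier are assumptions for the sketch, not the paper's actual experiments.

```python
# Sketch (illustrative assumptions, not the paper's setup): predictors that look
# equivalent on held-out data from the training domain may behave very
# differently once a correlation they relied on breaks at deployment time.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Training domain: two redundant features, both correlated with the label.
n = 4000
y = rng.integers(0, 2, n)
x1 = y + 0.3 * rng.normal(size=n)   # "core" feature
x2 = y + 0.3 * rng.normal(size=n)   # "spurious" feature, equally predictive here
X = np.column_stack([x1, x2])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two pipelines that differ only in the seed used to initialize the weights.
models = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=s).fit(X_tr, y_tr)
    for s in (1, 2)
]

# Equivalent performance in the training domain ...
for i, m in enumerate(models):
    print(f"model {i}: held-out accuracy = {m.score(X_te, y_te):.3f}")

# ... but in a shifted deployment domain the second feature is no longer
# informative, and the two predictors need not degrade in the same way.
y_shift = rng.integers(0, 2, n)
X_shift = np.column_stack([y_shift + 0.3 * rng.normal(size=n),
                           rng.normal(size=n)])
for i, m in enumerate(models):
    print(f"model {i}: shifted-domain accuracy = {m.score(X_shift, y_shift):.3f}")
```

Depending on initialization, the two predictors may load differently on the spurious feature, so their shifted-domain scores can differ even though their training-domain scores are indistinguishable; that ambiguity, invisible to standard held-out evaluation, is what the abstract calls underspecification.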
Comments
gshamir:
This work is very related, and actually proposes a method to reduce the effect of this problem:
https://arxiv.org/abs/2010.09931