Finding Label and Model Errors in Perception Data With Learned Observation Assertions

01/15/2022
by Daniel Kang, et al.

ML is being deployed in complex, real-world scenarios where errors have impactful consequences. In these systems, thorough testing of the ML pipelines is critical. A key component of ML deployment pipelines is the curation of labeled training data. Common practice in the ML literature assumes that labels are the ground truth. However, in our experience at a large autonomous vehicle development center, we have found that vendors often provide erroneous labels, which can lead to downstream safety risks in trained models. To address these issues, we propose a new abstraction, learned observation assertions, and implement it in a system called Fixy. Fixy leverages existing organizational resources, such as existing (possibly noisy) labeled datasets or previously trained ML models, to learn a probabilistic model for finding errors in human- or model-generated labels. Given user-provided features and these existing resources, Fixy learns feature distributions that specify likely and unlikely values (e.g., a speed of 30 mph is likely, but 300 mph is unlikely). It then uses these feature distributions to score labels for potential errors. We show that Fixy can automatically rank potential errors in real datasets with up to 2× higher precision compared to recent work on model assertions and standard techniques such as uncertainty sampling.
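
To make the scoring step concrete, below is a minimal Python sketch of the core idea: fit a distribution over a user-provided feature (speed, in this example) from an existing labeled dataset, then flag new labels whose feature values are unlikely under that distribution. The function names and the choice of a simple Gaussian feature model are illustrative assumptions, not Fixy's actual implementation, which learns richer probabilistic models from organizational resources.

```python
import numpy as np
from scipy import stats

def fit_feature_distribution(feature_values):
    """Learn a distribution over a scalar feature (e.g., object speed in mph)
    from an existing, possibly noisy, labeled dataset.
    (Illustrative: a Gaussian stands in for Fixy's learned feature models.)"""
    return stats.norm(loc=feature_values.mean(), scale=feature_values.std())

def score_labels(dist, candidate_features):
    """Score candidate labels by how unlikely their feature values are;
    higher scores mean more likely errors."""
    return -dist.logpdf(candidate_features)

# Speeds derived from previously curated labels ...
past_speeds = np.array([28.0, 31.5, 25.0, 33.0, 29.5, 30.0])
# ... and speeds derived from a new batch of vendor labels to audit.
new_speeds = np.array([30.0, 27.0, 300.0])  # 300 mph is implausible

dist = fit_feature_distribution(past_speeds)
scores = score_labels(dist, new_speeds)
for i in np.argsort(-scores):  # most suspicious labels first
    print(f"label {i}: speed={new_speeds[i]} mph, error score={scores[i]:.1f}")
```

Ranking labels by such a score is what lets the most suspicious labels be surfaced for human review first, which is how the abstract's precision comparison against model assertions and uncertainty sampling is framed.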

research
03/03/2020

Model Assertions for Monitoring and Improving ML Models

ML models are increasingly deployed in settings with real world interact...
research
08/14/2020

LiFT: A Scalable Framework for Measuring Fairness in ML Applications

Many internet applications are powered by machine learned models, which ...
research
11/06/2020

Underspecification Presents Challenges for Credibility in Modern Machine Learning

ML models often exhibit unexpectedly poor behavior when they are deploye...
research
09/05/2019

TFCheck: A TensorFlow Library for Detecting Training Issues in Neural Network Programs

The increasing inclusion of Machine Learning (ML) models in safety criti...
research
02/24/2022

"Is not the truth the truth?": Analyzing the Impact of User Validations for Bus In/Out Detection in Smartphone-based Surveys

Passenger flow allows the study of users' behavior through the public ne...
research
03/10/2023

Moving Fast With Broken Data

Machine learning (ML) models in production pipelines are frequently retr...
