FPR Estimation for Fraud Detection in the Presence of Class-Conditional Label Noise

08/04/2023
by Justin Tittelfitz, et al.

We consider the problem of estimating the false-positive rate (FPR) and true-positive rate (TPR) of a binary classification model when the validation set contains incorrect labels (label noise). Our motivating application is fraud prevention, where accurate FPR estimates are critical to preserving the experience of good customers, and where label noise is highly asymmetric. Existing methods seek to minimize the total error of the cleaning process: to avoid cleaning examples that are not noisy, and to ensure that examples that are noisy get cleaned. This is an important measure of accuracy, but it is insufficient to guarantee good estimates of a model's true FPR or TPR, and we show that using the model to directly clean its own validation data leads to underestimates even when total error is low. This points to a need for methods that not only reduce total cleaning error but also de-correlate cleaning error from model scores.
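The self-cleaning bias can be seen in a minimal synthetic simulation. Everything below (the score distributions, the 2% fraud rate, the 30% noise rate, and the 0.5/0.6 thresholds) is an illustrative assumption, not a quantity from the paper; the sketch only demonstrates the direction of the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Ground-truth labels: ~2% fraud (positive class).
y_true = rng.random(n) < 0.02

# Illustrative model scores: fraud scores high on average, legit scores low.
scores = np.where(y_true,
                  rng.normal(0.7, 0.15, n),
                  rng.normal(0.2, 0.15, n))

# Class-conditional (asymmetric) label noise: 30% of true fraud goes
# uncaught and is observed as legitimate; legit labels are assumed clean.
missed = y_true & (rng.random(n) < 0.30)
y_obs = y_true & ~missed

pred = scores >= 0.5  # model's operating threshold

def fpr(labels, pred):
    """Fraction of negatives (according to `labels`) that the model flags."""
    neg = ~labels
    return (pred & neg).sum() / neg.sum()

fpr_true = fpr(y_true, pred)   # the quantity we actually want
fpr_noisy = fpr(y_obs, pred)   # raw noisy labels: overestimates FPR

# "Self-cleaning": relabel observed negatives that the model itself scores
# highly (>= 0.6, a hypothetical cleaning threshold) as positive.
y_cleaned = y_obs | (scores >= 0.6)
fpr_cleaned = fpr(y_cleaned, pred)  # underestimates FPR

print(f"true FPR     {fpr_true:.4f}")
print(f"noisy FPR    {fpr_noisy:.4f}")
print(f"cleaned FPR  {fpr_cleaned:.4f}")
```

The cleaning step removes observed negatives precisely where the model scores high, i.e. exactly where the model's own false positives live, so the surviving "negatives" are ones the model rarely flags and the cleaned FPR lands below the true one. This is the score-correlated cleaning error the abstract warns about.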


Related research

10/19/2020: GANs for learning from very high class conditional noisy labels
We use Generative Adversarial Networks (GANs) to design a class conditio...

11/15/2021: Margin-Independent Online Multiclass Learning via Convex Geometry
We consider the problem of multi-class classification, where a stream of...

03/05/2013: Classification with Asymmetric Label Noise: Consistency and Maximal Denoising
In many real-world classification problems, the labels of training examp...

05/27/2021: Training Classifiers that are Universally Robust to All Label Noise Levels
For classification tasks, deep neural networks are prone to overfitting ...

01/08/2019: Cost Sensitive Learning in the Presence of Symmetric Label Noise
In binary classification framework, we are interested in making cost sen...

03/03/2021: Statistical Hypothesis Testing for Class-Conditional Label Noise
In this work we aim to provide machine learning practitioners with tools...

05/24/2019: Perturbed Model Validation: A New Framework to Validate Model Relevance
This paper introduces PMV (Perturbed Model Validation), a new technique ...
