A Proper Scoring Rule for Validation of Competing Risks Models

04/02/2021
by Zoe Guan, et al.

Scoring rules are used to evaluate the quality of predictions that take the form of probability distributions. A scoring rule is strictly proper if its expected value is uniquely minimized by the true probability distribution. One of the most well-known and widely used strictly proper scoring rules is the logarithmic scoring rule. We propose a version of the logarithmic scoring rule for competing risks data and show that it remains strictly proper under non-informative censoring.
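As a rough illustration of the idea (not the paper's construction), the sketch below computes the classical logarithmic score for a single prediction over competing-risks outcomes at a fixed time horizon: the score is the negative log of the probability assigned to the outcome that actually occurred. The event labels and probabilities are hypothetical, and the paper's censoring-adjusted version is not reproduced here.

```python
import numpy as np

def log_score(predicted_probs, observed_outcome):
    """Classical logarithmic score: the negative log of the probability
    the forecaster assigned to the outcome that actually occurred.
    Lower is better; in expectation it is uniquely minimized when the
    predicted distribution equals the true one (strict propriety)."""
    return -np.log(predicted_probs[observed_outcome])

# Hypothetical prediction over competing-risks outcomes by a fixed time:
# three mutually exclusive causes of failure plus remaining event-free.
predicted = {"cause_1": 0.5, "cause_2": 0.2, "cause_3": 0.1, "event_free": 0.2}

print(log_score(predicted, "cause_1"))   # ~0.69 if cause 1 is observed
print(log_score(predicted, "cause_3"))   # ~2.30: heavier penalty for a
                                         # low-probability outcome
```

The paper's contribution is to extend this score to right-censored competing risks data and to show that strict propriety is preserved under non-informative censoring; that machinery is not shown in the sketch above.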


Related research

12/16/2019
A new example for a proper scoring rule
We give a new example for a proper scoring rule motivated by the form of...

12/23/2020
Beyond Strictly Proper Scoring Rules: The Importance of Being Local
The evaluation of probabilistic forecasts plays a central role both in t...

10/16/2019
Multivariate Forecasting Evaluation: On Sensitive and Strictly Proper Scoring Rules
In recent years, probabilistic forecasting is an emerging topic, which i...

07/30/2013
Likelihood-ratio calibration using prior-weighted proper scoring rules
Prior-weighted logistic regression has become a standard tool for calibr...

04/09/2017
Strictly Proper Kernel Scoring Rules and Divergences with an Application to Kernel Two-Sample Hypothesis Testing
We study strictly proper scoring rules in the Reproducing Kernel Hilbert...

02/25/2020
Binary Scoring Rules that Incentivize Precision
All proper scoring rules incentivize an expert to predict accurately (re...

02/19/2019
Proper-Composite Loss Functions in Arbitrary Dimensions
The study of a machine learning problem is in many ways difficult to ...