
Proper-Composite Loss Functions in Arbitrary Dimensions

by   Zac Cranko, et al.

The study of a machine learning problem is in many ways difficult to separate from the study of the loss function being used. One line of inquiry examines these loss functions in terms of their properties as scoring rules via the proper-composite representation, in which predictions are mapped to probability distributions that are then scored by a proper scoring rule. However, research so far has primarily been concerned with analysing the (typically) finite-dimensional conditional risk problem on the output space, leaving aside the larger total risk minimisation. We generalise a number of these results to an infinite-dimensional setting, and in doing so we are able to exploit the familial resemblance of density and conditional density estimation to provide a simple characterisation of the canonical link.
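The proper-composite representation described above can be illustrated with a minimal binary-classification sketch: a real-valued prediction is first mapped to a probability through an inverse link, and the result is scored by a proper scoring rule. The function names below are illustrative, and the sigmoid/log-loss pairing is the standard finite-dimensional example (the canonical link for the log loss), not the paper's infinite-dimensional construction.

```python
import math

def sigmoid(v):
    # Inverse of the canonical link for the log loss: maps a
    # real-valued prediction v to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def log_loss(y, p):
    # Proper scoring rule on the binary simplex: penalises the
    # predicted positive-class probability p against the observed
    # label y in {0, 1}.
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def composite_loss(y, v):
    # Proper-composite loss: map the prediction through the inverse
    # link, then apply the proper loss. With the sigmoid link this
    # recovers the familiar logistic loss.
    return log_loss(y, sigmoid(v))
```

For example, a prediction of v = 0 corresponds to probability 0.5, so `composite_loss(1, 0.0)` equals log 2; properness means the conditional risk is minimised when the linked prediction matches the true class probability.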



