Convex Risk Minimization and Conditional Probability Estimation
This paper proves, in very general settings, that convex risk minimization is a procedure that selects a unique conditional probability model determined by the classification problem. Unlike most previous work, we give results general enough to include cases in which no minimizer exists, as typically occurs, for instance, with standard boosting algorithms. Concretely, we first show that any sequence of predictors minimizing convex risk over the source distribution converges to this unique model when the class of predictors is linear (but potentially infinite dimensional). Second, we show the same result holds for empirical risk minimization whenever the class of predictors is finite dimensional, where the essential technical contribution is a norm-free generalization bound.
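The connection between convex risk minimization and conditional probability estimation can be illustrated with a minimal sketch (not the paper's construction): for the logistic loss, minimizing empirical risk over a finite-dimensional linear class recovers P(Y = 1 | x) through the sigmoid link, when the model is well specified. All data and parameter choices below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the true log-odds are linear in x,
# so eta(x) = P(Y = 1 | x) = sigmoid(2*x - 1).
n = 20000
X = rng.uniform(-2, 2, size=(n, 1))
eta = 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.0)))
y = (rng.uniform(size=n) < eta).astype(float)

# Empirical risk minimization for the logistic loss by gradient descent
# over the finite-dimensional linear class f(x) = w*x + b.
Xb = np.hstack([X, np.ones((n, 1))])   # append intercept column
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))  # current probability estimates
    grad = Xb.T @ (p - y) / n          # gradient of the mean logistic loss
    w -= 1.0 * grad

# The risk minimizer's predicted probabilities approximate eta(x);
# w should be close to the true coefficients (2, -1) up to sampling error.
p_hat = 1.0 / (1.0 + np.exp(-Xb @ w))
mae = np.mean(np.abs(p_hat - eta))
print(w, mae)
```

The point of the sketch is that the convex risk, not a separate calibration step, determines the probability model: the unique minimizer of the logistic risk is the predictor whose sigmoid transform equals the conditional probability.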