Optimal solutions to the isotonic regression problem

04/09/2019
by Alexander I. Jordan, et al.

In general, the solution to a regression problem is the minimizer of a given loss criterion and, as such, depends on the specified loss function. The non-parametric isotonic regression problem is special in that optimal solutions can be found by specifying only a functional. These solutions are then minimizers under all loss functions simultaneously, as long as the loss functions have the requested functional as the Bayes act. The functional may be set-valued; the only requirement is that it can be defined via an identification function, with examples including the expectation, quantile, and expectile functionals. Generalizing classical results, we characterize the optimal solutions to the isotonic regression problem for such functionals in the case of totally and partially ordered explanatory variables. For total orders, we show that any solution resulting from the pool-adjacent-violators (PAV) algorithm is optimal. It is noteworthy that simultaneous optimality is unattainable in the unimodal regression problem, despite its close connection.
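For the expectation functional, the classical PAV algorithm reduces to pooling adjacent observations into weighted means whenever they violate the order constraint. The following is a minimal sketch of that least-squares case on a totally ordered explanatory variable; the function name and stack-of-blocks representation are illustrative choices, not taken from the paper:

```python
def pav(y, w=None):
    """Pool-adjacent-violators for nondecreasing least-squares
    isotonic regression (the mean-functional case)."""
    n = len(y)
    if w is None:
        w = [1.0] * n
    # Each block holds (weighted mean, total weight, point count).
    blocks = []
    for i in range(n):
        blocks.append((y[i], w[i], 1))
        # Merge the last two blocks while they violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append(((w1 * m1 + w2 * m2) / wt, wt, c1 + c2))
    # Expand blocks back to one fitted value per observation.
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

For example, `pav([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean, yielding `[1, 2.5, 2.5, 4]`. Replacing the weighted mean with another functional defined via an identification function (e.g. a quantile) yields the generalized solutions the abstract describes.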
