Making learning more transparent using conformalized performance prediction

07/09/2020
by   Matthew J. Holland, et al.

In this work, we study novel applications of conformal inference techniques to the problem of providing machine learning procedures with more transparent, accurate, and practical performance guarantees. We extend the traditional conformal prediction framework so that we can make valid and well-calibrated predictive statements about the future performance of arbitrary learning algorithms when passed an as-yet unseen training set. In addition, we include nascent empirical examples to illustrate potential applications.
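To illustrate the flavor of the idea, here is a minimal sketch of a split-conformal upper bound on an algorithm's future performance. All names here (`train_and_score`, the Gaussian score distribution, the sample sizes) are hypothetical stand-ins, not the paper's actual procedure: the real work treats each "calibration point" as the measured performance of the learner trained on an independent training set, whereas this sketch simulates those scores directly.

```python
import numpy as np

def train_and_score(seed):
    # Hypothetical stand-in for: draw a fresh training set, run the
    # learning algorithm on it, and record its test loss.
    rng = np.random.default_rng(seed)
    return float(rng.normal(loc=0.25, scale=0.05))

# Calibration: performance scores from n independent runs.
n = 200
scores = np.array([train_and_score(s) for s in range(n)])

# Split-conformal quantile at miscoverage level alpha: under
# exchangeability, a new run's score falls at or below q_hat with
# probability at least 1 - alpha.
alpha = 0.1
k = int(np.ceil((n + 1) * (1 - alpha)))  # conformal rank
q_hat = float(np.sort(scores)[k - 1])

# Empirical coverage check on fresh, independent runs.
new_scores = np.array([train_and_score(s) for s in range(n, 2 * n)])
coverage = float(np.mean(new_scores <= q_hat))
```

The key point is that the bound `q_hat` is distribution-free: it relies only on the exchangeability of the calibration scores with the new run's score, not on any parametric model of the learner's loss.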
