Making learning more transparent using conformalized performance prediction

07/09/2020 ∙ by Matthew J. Holland, et al.

In this work, we study novel applications of conformal inference techniques to the problem of providing machine learning procedures with more transparent, accurate, and practical performance guarantees. We provide a natural extension of the traditional conformal prediction framework, constructed so that we can make valid, well-calibrated predictive statements about the future performance of arbitrary learning algorithms when they are passed an as-yet unseen training set. In addition, we include preliminary empirical examples to illustrate potential applications.

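To make the idea concrete, below is a minimal sketch of one way such a guarantee can be computed: a one-sided split-conformal bound on the test risk of an algorithm trained on a fresh training set, given risks observed from m previous, exchangeable training runs. The function name `conformal_risk_bound` and the simulated gamma-distributed risks are illustrative assumptions, not the authors' construction; the paper's exact procedure is in the full text.

```python
import numpy as np

def conformal_risk_bound(risks, alpha=0.1):
    """One-sided conformal prediction bound on future performance.

    Given test risks from running a learning algorithm on m exchangeable
    training sets, returns a bound B such that the risk of the algorithm
    trained on a fresh (m+1)-th training set satisfies
    P(risk <= B) >= 1 - alpha, with no distributional assumptions.
    """
    risks = np.sort(np.asarray(risks, dtype=float))
    m = len(risks)
    # Conformal rank; the +1 accounts for the as-yet unseen run.
    k = int(np.ceil((1.0 - alpha) * (m + 1)))
    if k > m:
        return np.inf  # too few runs for this alpha; bound is vacuous
    return risks[k - 1]  # k-th order statistic of the observed risks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stand-in: each entry plays the role of the test risk
    # from one training run of the algorithm of interest.
    observed_risks = rng.gamma(shape=2.0, scale=0.05, size=50)
    print("90% conformal risk bound:",
          conformal_risk_bound(observed_risks, alpha=0.1))
```

Under exchangeability of the m+1 runs, the returned order statistic covers the next run's risk with probability at least 1 − alpha, regardless of the risk distribution, which is the well-calibrated, distribution-free flavor of guarantee the abstract describes.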